WIP: wip: switch to MicroOS template machines #1

Draft
prskr wants to merge 4 commits from feature/use-pre-built-image-for-k8s-nodes into main
No description provided.
prskr changed title from wip: switch to MicroOS template machines to WIP: wip: switch to MicroOS template machines 2025-06-03 21:22:42 +00:00
prskr force-pushed feature/use-pre-built-image-for-k8s-nodes from be93a7d6d3 to 3ea9046d62 2025-06-03 21:28:29 +00:00
prskr force-pushed feature/use-pre-built-image-for-k8s-nodes from 3ea9046d62 to 0cc8441d71 2025-06-03 21:38:42 +00:00 (some checks failed: 0/1 projects planned successfully)
prskr force-pushed feature/use-pre-built-image-for-k8s-nodes from 0cc8441d71 to cf9b11e025 2025-06-03 22:31:40 +00:00 (some checks failed: plan failed)
prskr force-pushed feature/use-pre-built-image-for-k8s-nodes from cf9b11e025 to b791eb2107 2025-06-03 22:33:03 +00:00 (all checks were successful: 0/0 projects policies checked successfully)
prskr (Author, Owner) commented:

atlantis plan

Collaborator commented:

Ran Plan for dir: . workspace: default

data.cloudinit_config.k8s_node_config["w4-cax11-hel1"]: Reading...
data.hcloud_image.forgejo_runner_snapshot_arm64: Reading...
data.hcloud_image.forgejo_runner_snapshot_amd64: Reading...
data.hcloud_image.k3s_node_arm64: Reading...
data.cloudinit_config.k8s_node_config["w4-cax11-hel1"]: Read complete after 1s [id=315400678]
data.hcloud_image.forgejo_runner_snapshot_amd64: Read complete after 0s
data.hcloud_image.forgejo_runner_snapshot_arm64: Read complete after 1s
data.hcloud_image.k3s_node_arm64: Read complete after 0s
data.azurerm_client_config.current: Reading...
data.azurerm_client_config.current: Read complete after 0s [id=Y2xpZW50Q29uZmlncy9jbGllbnRJZD1jYWZmMDAwZS01NmY1LTQ5M2MtODAxZS1jNjcwYTViMjUzNTQ7b2JqZWN0SWQ9ZmVmMmVkZjYtZGVlOC00NWFmLWEwNDctMmY3MDk4M2E5YmFkO3N1YnNjcmlwdGlvbklkPWY2ZGU3Yjk3LWJlZmEtNDNkOC04NDE2LWFhZDZjOTg4YzE2ODt0ZW5hbnRJZD1mNGU4MDExMS0xNTcxLTQ3N2EtYjU2ZC1jNWZlNTE3Njc2Yjc=]

OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
 <= read (data resources)

OpenTofu will perform the following actions:

  # data.azurerm_key_vault_secret.cloudflare_account_id will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "cloudflare_account_id" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "cloudflare-account-id"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.harbor_minion_token will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "harbor_minion_token" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "harbor-minion-token"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.harbor_minion_username will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "harbor_minion_username" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "harbor-minion-username"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_backup_access_key will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_backup_access_key" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-backup-access-key"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_backup_endpoint will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_backup_endpoint" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-backup-endpoint"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_backup_secret_key will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_backup_secret_key" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-backup-secret-key"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_token will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_token" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-token"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.runner_secret["ci-minion-bob"] will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "runner_secret" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "ci-minion-bob-runner-secret"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.runner_secret["ci-minion-stuart"] will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "runner_secret" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "ci-minion-stuart-runner-secret"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.cloudinit_config.runner_config["ci-minion-bob"] will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "runner_config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
        }
      + part {
          + content      = <<-EOT
                runcmd:
                  - |
                    set -e
                    systemctl daemon-reload
                    systemctl enable --now forgejo-runner.service
            EOT
          + content_type = "text/cloud-config"
        }
    }

  # data.cloudinit_config.runner_config["ci-minion-stuart"] will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "runner_config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
        }
      + part {
          + content      = <<-EOT
                runcmd:
                  - |
                    set -e
                    systemctl daemon-reload
                    systemctl enable --now forgejo-runner.service
            EOT
          + content_type = "text/cloud-config"
        }
    }
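For reference, a two-part cloud-init document like the one planned above is typically produced by a `cloudinit_config` data source along these lines. This is a hypothetical reconstruction: `local.runners`, the template path, and the template variables are assumptions, not taken from this repository; only the flags and the second part's literal content come from the plan output.

```hcl
# Hypothetical sketch; local.runners and the template path are assumed.
data "cloudinit_config" "runner_config" {
  for_each = local.runners

  gzip          = true
  base64_encode = true

  # First part: per-runner cloud-config, shown as "(known after apply)" in the
  # plan because it depends on the runner secret read from Key Vault at apply time.
  part {
    content_type = "text/cloud-config"
    content = templatefile("${path.module}/runner.yaml.tftpl", {
      runner_secret = data.azurerm_key_vault_secret.runner_secret[each.key].value
    })
  }

  # Second part: the literal runcmd block visible in the plan output.
  part {
    content_type = "text/cloud-config"
    content      = <<-EOT
      runcmd:
        - |
          set -e
          systemctl daemon-reload
          systemctl enable --now forgejo-runner.service
    EOT
  }
}
```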

  # data.ct_config.machine-ignitions["w1-cax11-hel1-gen7"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # data.ct_config.machine-ignitions["w2-cax11-hel1-gen7"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # data.ct_config.machine-ignitions["w3-cax11-hel1-gen7"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # data.ct_config.machine-ignitions-cp["cp1-cax11-hel1"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions-cp" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }
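The `ct_config` data sources above come from the Butane-to-Ignition transpiler provider: they render a Butane YAML document into an Ignition config for the machines. A minimal sketch of what produces the planned attributes might look like the following; the template path, variable names, and `local.machines` are assumptions, while `strict = true` and the sensitive `content` match the plan.

```hcl
# Hypothetical sketch; the Butane template and variable names are assumed.
data "ct_config" "machine-ignitions" {
  for_each = local.machines

  strict = true

  # Butane YAML rendered per machine; marked sensitive in the plan because it
  # embeds secrets (e.g. the k3s token) read from Key Vault at apply time.
  content = templatefile("${path.module}/machine.bu.tftpl", {
    name      = each.key
    k3s_token = data.azurerm_key_vault_secret.k3s_token.value
  })

  # Extra Butane snippets, also only known after apply.
  snippets = [local.common_snippet]
}
```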

  # azurerm_key_vault.forgejo_runners will be created
+ resource "azurerm_key_vault" "forgejo_runners" {
      + access_policy                 = (known after apply)
      + enable_rbac_authorization     = true
      + id                            = (known after apply)
      + location                      = "westeurope"
      + name                          = "Forgejo-Runners"
      + public_network_access_enabled = true
      + purge_protection_enabled      = false
      + resource_group_name           = "Forgejo-Runners"
      + sku_name                      = "standard"
      + soft_delete_retention_days    = 30
      + tenant_id                     = "f4e80111-1571-477a-b56d-c5fe517676b7"
      + vault_uri                     = (known after apply)

      + contact (known after apply)

      + network_acls (known after apply)
    }

  # azurerm_key_vault.hetzner will be created
+ resource "azurerm_key_vault" "hetzner" {
      + access_policy                 = (known after apply)
      + enable_rbac_authorization     = true
      + id                            = (known after apply)
      + location                      = "westeurope"
      + name                          = "Hetzner"
      + public_network_access_enabled = true
      + purge_protection_enabled      = false
      + resource_group_name           = "Infrastructure"
      + sku_name                      = "standard"
      + soft_delete_retention_days    = 30
      + tenant_id                     = "f4e80111-1571-477a-b56d-c5fe517676b7"
      + vault_uri                     = (known after apply)

      + contact (known after apply)

      + network_acls (known after apply)
    }

  # azurerm_resource_group.forgejo_runners will be created
+ resource "azurerm_resource_group" "forgejo_runners" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "Forgejo-Runners"
    }

  # azurerm_resource_group.infrastructure will be created
+ resource "azurerm_resource_group" "infrastructure" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "Infrastructure"
    }

  # cloudflare_dns_record.apple_proof will be created
+ resource "cloudflare_dns_record" "apple_proof" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "apple-domain=chwbVvzH8hWIgg1l"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "TXT"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.apple_sig_domainkey will be created
+ resource "cloudflare_dns_record" "apple_sig_domainkey" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "sig1.dkim.icb4dc0.de.at.icloudmailadmin.com"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "sig1._domainkey.icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "CNAME"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.apple_spf will be created
+ resource "cloudflare_dns_record" "apple_spf" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "\"v=spf1 include:icloud.com ~all\""
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "TXT"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.cp-host-ipv4["cp1-cax11-hel1"] will be created
+ resource "cloudflare_dns_record" "cp-host-ipv4" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = (known after apply)
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "cp1-cax11-hel1.k8s.icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "A"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.cp-host-ipv6["cp1-cax11-hel1"] will be created
+ resource "cloudflare_dns_record" "cp-host-ipv6" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = (known after apply)
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "cp1-cax11-hel1.k8s.icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "AAAA"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.keybase_proof will be created
+ resource "cloudflare_dns_record" "keybase_proof" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "keybase-site-verification=WDQoLtW22epD7eQnts6rPKJBGA0lD6jSI6m0bGMYWag"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "TXT"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.mx_primary will be created
+ resource "cloudflare_dns_record" "mx_primary" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "mx01.mail.icloud.com"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + priority            = 10
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "MX"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.mx_secondary will be created
+ resource "cloudflare_dns_record" "mx_secondary" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "mx02.mail.icloud.com"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + priority            = 10
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "MX"
      + zone_id             = (known after apply)
    }

  # cloudflare_zone.icb4dc0de will be created
+ resource "cloudflare_zone" "icb4dc0de" {
      + account               = {
          + id = (sensitive value)
        }
      + activated_on          = (known after apply)
      + created_on            = (known after apply)
      + development_mode      = (known after apply)
      + id                    = (known after apply)
      + meta                  = (known after apply)
      + modified_on           = (known after apply)
      + name                  = "icb4dc0.de"
      + name_servers          = (known after apply)
      + original_dnshost      = (known after apply)
      + original_name_servers = (known after apply)
      + original_registrar    = (known after apply)
      + owner                 = (known after apply)
      + paused                = false
      + status                = (known after apply)
      + type                  = "full"
      + verification_key      = (known after apply)
    }

  # hcloud_network.k8s_net will be created
+ resource "hcloud_network" "k8s_net" {
      + delete_protection        = false
      + expose_routes_to_vswitch = false
      + id                       = (known after apply)
      + ip_range                 = "172.16.0.0/12"
      + name                     = "k8s-net"
    }

  # hcloud_network_subnet.k8s_internal will be created
+ resource "hcloud_network_subnet" "k8s_internal" {
      + gateway      = (known after apply)
      + id           = (known after apply)
      + ip_range     = "172.23.2.0/23"
      + network_id   = (known after apply)
      + network_zone = "eu-central"
      + type         = "cloud"
    }
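The two network resources above correspond to HCL roughly like the following sketch; the values are taken directly from the plan, and the only constraint worth noting is that the subnet range must sit inside the parent network's range.

```hcl
resource "hcloud_network" "k8s_net" {
  name     = "k8s-net"
  ip_range = "172.16.0.0/12"
}

resource "hcloud_network_subnet" "k8s_internal" {
  network_id   = hcloud_network.k8s_net.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "172.23.2.0/23" # must be contained in the parent /12 range
}
```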

  # hcloud_placement_group.forgejo_runners will be created
+ resource "hcloud_placement_group" "forgejo_runners" {
      + id      = (known after apply)
      + labels  = {
          + "cluster" = "forgejo.icb4dc0.de"
        }
      + name    = "forgejo-runners"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_placement_group.k3s_machines will be created
+ resource "hcloud_placement_group" "k3s_machines" {
      + id      = (known after apply)
      + labels  = {
          + "cluster" = "icb4dc0.de"
        }
      + name    = "k3s-machines"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_server.control-plane["cp1-cax11-hel1"] will be created
+ resource "hcloud_server" "control-plane" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "control-plane"
        }
      + location                   = "hel1"
      + name                       = "cp1-cax11-hel1"
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax11"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "172.23.2.10"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.forgejo_runner["ci-minion-bob"] will be created
+ resource "hcloud_server" "forgejo_runner" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "228451454"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "forgejo.icb4dc0.de"
          + "node_type" = "forgejo_runner"
        }
      + location                   = "hel1"
      + name                       = "ci-minion-bob"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.30"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.forgejo_runner["ci-minion-stuart"] will be created
+ resource "hcloud_server" "forgejo_runner" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "228451463"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "forgejo.icb4dc0.de"
          + "node_type" = "forgejo_runner"
        }
      + location                   = "hel1"
      + name                       = "ci-minion-stuart"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cpx31"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.31"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }
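The literal image IDs in the runner servers above ("228451454" / "228451463") are the already-resolved results of the `hcloud_image` snapshot data sources read at the top of the plan. A hedged sketch of that wiring, assuming a label selector on the snapshots (the selector string and `local.runners` are assumptions; the amd64 variant is analogous):

```hcl
# Hedged sketch: how the snapshot data sources can feed the runner servers.
data "hcloud_image" "forgejo_runner_snapshot_arm64" {
  with_selector     = "purpose=forgejo-runner" # assumed snapshot label
  with_architecture = "arm"
  most_recent       = true
}

resource "hcloud_server" "forgejo_runner" {
  for_each = local.runners

  name        = each.key
  location    = "hel1"
  server_type = each.value.server_type # cax21 (arm) / cpx31 (x86) per the plan
  image       = data.hcloud_image.forgejo_runner_snapshot_arm64.id
  user_data   = data.cloudinit_config.runner_config[each.key].rendered
}
```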

  # hcloud_server.machine["w1-cax11-hel1-gen7"] will be created
+ resource "hcloud_server" "machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w1-cax11-hel1-gen7"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.20"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_server.machine["w2-cax11-hel1-gen7"] will be created
+ resource "hcloud_server" "machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w2-cax11-hel1-gen7"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.21"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_server.machine["w3-cax11-hel1-gen7"] will be created
+ resource "hcloud_server" "machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w3-cax11-hel1-gen7"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.22"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_server.microos_machine["w4-cax11-hel1"] will be created
+ resource "hcloud_server" "microos_machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "229900685"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w4-cax11-hel1"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "zlbbei8qUy+QwyKAUTeLg22M25A="

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_ssh_key.default will be created
+ resource "hcloud_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {}
      + name        = "Default Management"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfHZaI0F5GjAcrM8hjWqwMfULDkAZ2TOIBTQtRocg1F id_ed25519"
    }

  # hcloud_ssh_key.provisioning_key will be created
+ resource "hcloud_ssh_key" "provisioning_key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {}
      + name        = "Provisioning key for hcloud cluster"
      + public_key  = (known after apply)
    }

  # hcloud_ssh_key.yubikey will be created
+ resource "hcloud_ssh_key" "yubikey" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {}
      + name        = "Yubikey"
      + public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDQoNCLuHgcaDn4JTjCeQKJsIsYU0Jmub5PUMzIIZbUBb+TGMh6mCAY/UbYaq/n4jVnskXopzPGJbx4iPBG5HrNzqYZqMjkk8uIeeT0mdIcNv9bXxuCxCH1iHZF8LlzIZCmQ0w3X6VQ1izcJgvjrAYzbHN3gqCHOXtNkqIUkwaadIWCEjg33OVSlM4yrIDElr6+LHzv84VNh/PhemixCVVEMJ83GjhDtpApMg9WWW3es6rpJn4TlYEMV+aPNU4ZZEWFen/DFBKoX+ulkiJ8CwpY3eJxSzlBijs5ZKH89OOk/MXN1lnREElFqli+jE8EbZKQzi59Zmx8ZOb52qVNot8XZT0Un4EttAIEeE8cETqUC4jK+6RbUrsXtclVbU9i57LWRpl65LYSIJEFmkTxvYdkPXqGbvlW024IjgSo8kds121w95+Rpo6419cSYsQWowS8+aXfEv2Q8SE81QH7ObYfWFXsPBAmmNleQNN3E5HOoaxpWQjv3aTUGuxm4PCzKLdP0LsHmTfGJB7Priaj+9i8xLjDWe7zXDde2Gp9FmdedDr06uEkVSRFnS35Dwfd7M7xP6NsilfMOdWzJWWy/BAYxtnWcrEFxhaEr4vgs8Ub+KBtKhr740x3Mr8up+mythConAs4LOj37lWK4kJ8cI7TXjcSJi9nTIPd39us7tp3Aw=="
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

  # null_resource.control_plane_generation["cp1-cax11-hel1"] will be created
+ resource "null_resource" "control_plane_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "5"
        }
    }

  # null_resource.cp-config will be created
+ resource "null_resource" "cp-config" {
      + id       = (known after apply)
      + triggers = {
          + "version" = "v1.32.1+k3s1"
        }
    }

  # null_resource.machine_generation["w1-cax11-hel1-gen7"] will be created
+ resource "null_resource" "machine_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "1"
        }
    }

  # null_resource.machine_generation["w2-cax11-hel1-gen7"] will be created
+ resource "null_resource" "machine_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "1"
        }
    }

  # null_resource.machine_generation["w3-cax11-hel1-gen7"] will be created
+ resource "null_resource" "machine_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "1"
        }
    }

  # null_resource.runner-config will be created
+ resource "null_resource" "runner-config" {
      + id       = (known after apply)
      + triggers = {
          + "version" = "6.3.1"
        }
    }

  # null_resource.runner_generation["ci-minion-bob"] will be created
+ resource "null_resource" "runner_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "2"
        }
    }

  # null_resource.runner_generation["ci-minion-stuart"] will be created
+ resource "null_resource" "runner_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "2"
        }
    }

  # null_resource.worker-config will be created
+ resource "null_resource" "worker-config" {
      + id       = (known after apply)
      + triggers = {
          + "version" = "v1.32.1+k3s1"
        }
    }

  # tls_private_key.provisioning will be created
+ resource "tls_private_key" "provisioning" {
      + algorithm                     = "RSA"
      + ecdsa_curve                   = "P224"
      + id                            = (known after apply)
      + private_key_openssh           = (sensitive value)
      + private_key_pem               = (sensitive value)
      + private_key_pem_pkcs8         = (sensitive value)
      + public_key_fingerprint_md5    = (known after apply)
      + public_key_fingerprint_sha256 = (known after apply)
      + public_key_openssh            = (known after apply)
      + public_key_pem                = (known after apply)
      + rsa_bits                      = 4096
    }

Plan: 39 to add, 0 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -d .
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -d .
    

Plan: 39 to add, 0 to change, 0 to destroy.


  • To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "worker" } + location = "hel1" + name = "w3-cax11-hel1-gen7" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + rescue = "linux64" + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + network { + alias_ips = (known after apply) + ip = "172.23.2.22" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = false } } # hcloud_server.microos_machine["w4-cax11-hel1"] will be created + resource "hcloud_server" "microos_machine" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "229900685" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "worker" } + location = "hel1" + name = "w4-cax11-hel1" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + user_data = "zlbbei8qUy+QwyKAUTeLg22M25A=" + network { + alias_ips = (known after apply) + ip = "172.23.2.24" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = false } } # hcloud_ssh_key.default will be created + resource "hcloud_ssh_key" "default" { + 
fingerprint = (known after apply) + id = (known after apply) + labels = {} + name = "Default Management" + public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfHZaI0F5GjAcrM8hjWqwMfULDkAZ2TOIBTQtRocg1F id_ed25519" } # hcloud_ssh_key.provisioning_key will be created + resource "hcloud_ssh_key" "provisioning_key" { + fingerprint = (known after apply) + id = (known after apply) + labels = {} + name = "Provisioning key for hcloud cluster" + public_key = (known after apply) } # hcloud_ssh_key.yubikey will be created + resource "hcloud_ssh_key" "yubikey" { + fingerprint = (known after apply) + id = (known after apply) + labels = {} + name = "Yubikey" + public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDQoNCLuHgcaDn4JTjCeQKJsIsYU0Jmub5PUMzIIZbUBb+TGMh6mCAY/UbYaq/n4jVnskXopzPGJbx4iPBG5HrNzqYZqMjkk8uIeeT0mdIcNv9bXxuCxCH1iHZF8LlzIZCmQ0w3X6VQ1izcJgvjrAYzbHN3gqCHOXtNkqIUkwaadIWCEjg33OVSlM4yrIDElr6+LHzv84VNh/PhemixCVVEMJ83GjhDtpApMg9WWW3es6rpJn4TlYEMV+aPNU4ZZEWFen/DFBKoX+ulkiJ8CwpY3eJxSzlBijs5ZKH89OOk/MXN1lnREElFqli+jE8EbZKQzi59Zmx8ZOb52qVNot8XZT0Un4EttAIEeE8cETqUC4jK+6RbUrsXtclVbU9i57LWRpl65LYSIJEFmkTxvYdkPXqGbvlW024IjgSo8kds121w95+Rpo6419cSYsQWowS8+aXfEv2Q8SE81QH7ObYfWFXsPBAmmNleQNN3E5HOoaxpWQjv3aTUGuxm4PCzKLdP0LsHmTfGJB7Priaj+9i8xLjDWe7zXDde2Gp9FmdedDr06uEkVSRFnS35Dwfd7M7xP6NsilfMOdWzJWWy/BAYxtnWcrEFxhaEr4vgs8Ub+KBtKhr740x3Mr8up+mythConAs4LOj37lWK4kJ8cI7TXjcSJi9nTIPd39us7tp3Aw==" } # local_file.provisioning_key will be created + resource "local_file" "provisioning_key" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0700" + file_permission = "0400" + filename = "./.ssh/provisioning_private_key.pem" + id = (known after apply) } # local_file.provisioning_key_pub will be created + resource "local_file" "provisioning_key_pub" { + 
content = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0700" + file_permission = "0440" + filename = "./.ssh/provisioning_key.pub" + id = (known after apply) } # null_resource.control_plane_generation["cp1-cax11-hel1"] will be created + resource "null_resource" "control_plane_generation" { + id = (known after apply) + triggers = { + "timestamp" = "5" } } # null_resource.cp-config will be created + resource "null_resource" "cp-config" { + id = (known after apply) + triggers = { + "version" = "v1.32.1+k3s1" } } # null_resource.machine_generation["w1-cax11-hel1-gen7"] will be created + resource "null_resource" "machine_generation" { + id = (known after apply) + triggers = { + "timestamp" = "1" } } # null_resource.machine_generation["w2-cax11-hel1-gen7"] will be created + resource "null_resource" "machine_generation" { + id = (known after apply) + triggers = { + "timestamp" = "1" } } # null_resource.machine_generation["w3-cax11-hel1-gen7"] will be created + resource "null_resource" "machine_generation" { + id = (known after apply) + triggers = { + "timestamp" = "1" } } # null_resource.runner-config will be created + resource "null_resource" "runner-config" { + id = (known after apply) + triggers = { + "version" = "6.3.1" } } # null_resource.runner_generation["ci-minion-bob"] will be created + resource "null_resource" "runner_generation" { + id = (known after apply) + triggers = { + "timestamp" = "2" } } # null_resource.runner_generation["ci-minion-stuart"] will be created + resource "null_resource" "runner_generation" { + id = (known after apply) + triggers = { + "timestamp" = "2" } } # null_resource.worker-config will be created + resource "null_resource" "worker-config" { + id = (known after apply) + triggers = { + "version" = 
"v1.32.1+k3s1" } } # tls_private_key.provisioning will be created + resource "tls_private_key" "provisioning" { + algorithm = "RSA" + ecdsa_curve = "P224" + id = (known after apply) + private_key_openssh = (sensitive value) + private_key_pem = (sensitive value) + private_key_pem_pkcs8 = (sensitive value) + public_key_fingerprint_md5 = (known after apply) + public_key_fingerprint_sha256 = (known after apply) + public_key_openssh = (known after apply) + public_key_pem = (known after apply) + rsa_bits = 4096 } Plan: 39 to add, 0 to change, 0 to destroy. ``` </details> * :arrow_forward: To **apply** this plan, comment: ```shell atlantis apply -d . ``` * :put_litter_in_its_place: To **delete** this plan and lock, click [here](http://atlantis-0:4141/lock?id=infrastructure%252Fcluster%252F.%252Fdefault) * :repeat: To **plan** this project again, comment: ```shell atlantis plan -d . ``` Plan: 39 to add, 0 to change, 0 to destroy. --- * :fast_forward: To **apply** all unapplied plans from this Pull Request, comment: ```shell atlantis apply ``` * :put_litter_in_its_place: To **delete** all plans and locks from this Pull Request, comment: ```shell atlantis unlock ```
Author
Owner

atlantis plan -w hcloud-infra

Collaborator

Plan Error

the default workspace at path . is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again
Collaborator

Plan Error

the default workspace at path . is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again
Collaborator

Ran Plan for dir: . workspace: hcloud-infra

Show Output
OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place

OpenTofu will perform the following actions:

  # hcloud_server.microos_machine["w4-cax11-hel1"] will be created
+ resource "hcloud_server" "microos_machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "229900685"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w4-cax11-hel1"
      + placement_group_id         = 338479
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "9685856",
          + "20640130",
          + "7606267",
        ]
      + status                     = (known after apply)
      + user_data                  = "zlbbei8qUy+QwyKAUTeLg22M25A="

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = 1982434
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_ssh_key.yubikey will be updated in-place
~ resource "hcloud_ssh_key" "yubikey" {
        id          = "20640130"
      ~ labels      = {
          - "kind" = "yubikey" -> null
        }
        name        = "Yubikey"
        # (2 unchanged attributes hidden)
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfBF8TbFMM8Vhq0XxodlsVNlQ0HezGsUf+AOKbfVo/4HQ+b57DTUxRq7guuoN3NpQzcMwma56UkOK9sYFtYUY0+LhqtemLFM15cp93z+iBXG8SzBxk8KmfLqBmoNjK0HU1e79RFvBBO++4ivM5K2DMw/3LYQz7MvuX5+JCq110GfHkVmKYoj66uhn+7h1GxQlVbMtRXGnEC16h7kFXbSvNKWOcbRgtvhFdUxOsuURICytQ78yrFc/FS9+nQIQfYIrisy1r4JioisKMWiB9vsYXIoCr7z0cG0TAFINexPHYNkse0+n1YgZG0QQ72anRv7pE98qvG4ymaYJaw3mHQ0WR/qKoGhs1X6S+z02du7toAl10aIINwxh6i8EoSH4uY0gSMXmFzg3PeoFkiYU6KkkF4JiL0kyILFny7n31GLMdYPGtTW18t5T+2FlSWqy7I/F4xOeh+8+iUypWkqg7H4gnl+KwT2yM+ZX/Q3MEz+byFdFsLg4NBrA7S1FhCdDTeCXMSNfZv71mCosg8yHmwiqrO8aHsSRk7ZPyfXm1PWLRTso02NTFB/1m1sA4q/9N29CCFm1PZeF+845uY5FvYzJS62UGpdJYGI4oIctbp8QMbJHhhOXl9pPgEiy+urZAaiaS3Gct5e1sN4XeaGcTqzCelTYyw2awoRO+iS7JUTMLiQ==
        EOT
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

Plan: 3 to add, 1 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -w hcloud-infra
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -w hcloud-infra
    

Plan: 3 to add, 1 to change, 0 to destroy.


  • ⏩ To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
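The new `hcloud_server.microos_machine` resource in the plan above boots `w4-cax11-hel1` directly from a pre-built MicroOS snapshot (image `229900685`) instead of the rescue-system install path the older `machine` workers use. A rough HCL sketch consistent with the planned attributes — variable names, the `for_each` shape, and the placement-group/SSH-key references are assumptions, not the actual module source:

```hcl
resource "hcloud_server" "microos_machine" {
  for_each = var.microos_machines # assumed input variable

  name        = each.key                             # "w4-cax11-hel1"
  image       = data.hcloud_image.k3s_node_arm64.id  # pre-built MicroOS snapshot (229900685)
  server_type = each.value.server_type               # "cax21"
  location    = "hel1"
  user_data   = data.cloudinit_config.k8s_node_config[each.key].rendered

  placement_group_id = var.placement_group_id # assumed; plan shows 338479
  ssh_keys           = var.ssh_key_ids        # assumed; plan shows three key IDs

  labels = {
    cluster   = "icb4dc0.de"
    node_type = "worker"
  }

  network {
    network_id = var.network_id        # assumed; plan shows 1982434
    ip         = each.value.private_ip # "172.23.2.24"
  }

  public_net {
    ipv4_enabled = true
    ipv6_enabled = false
  }
}
```

Note that the provider reports `user_data` as a hash (`zlbbei8qUy+QwyKAUTeLg22M25A=`) rather than the rendered cloud-init payload itself.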
Collaborator

Ran Plan for dir: . workspace: default

Show Output
data.hcloud_image.k3s_node_arm64: Reading...
data.cloudinit_config.k8s_node_config["w4-cax11-hel1"]: Reading...
data.cloudinit_config.k8s_node_config["w4-cax11-hel1"]: Read complete after 0s [id=315400678]
data.hcloud_image.forgejo_runner_snapshot_amd64: Reading...
data.hcloud_image.forgejo_runner_snapshot_arm64: Reading...
data.hcloud_image.k3s_node_arm64: Read complete after 0s
data.hcloud_image.forgejo_runner_snapshot_arm64: Read complete after 1s
data.hcloud_image.forgejo_runner_snapshot_amd64: Read complete after 1s
data.azurerm_client_config.current: Reading...
data.azurerm_client_config.current: Read complete after 0s [id=Y2xpZW50Q29uZmlncy9jbGllbnRJZD1jYWZmMDAwZS01NmY1LTQ5M2MtODAxZS1jNjcwYTViMjUzNTQ7b2JqZWN0SWQ9ZmVmMmVkZjYtZGVlOC00NWFmLWEwNDctMmY3MDk4M2E5YmFkO3N1YnNjcmlwdGlvbklkPWY2ZGU3Yjk3LWJlZmEtNDNkOC04NDE2LWFhZDZjOTg4YzE2ODt0ZW5hbnRJZD1mNGU4MDExMS0xNTcxLTQ3N2EtYjU2ZC1jNWZlNTE3Njc2Yjc=]

OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
 <= read (data resources)

OpenTofu will perform the following actions:

  # data.azurerm_key_vault_secret.cloudflare_account_id will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "cloudflare_account_id" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "cloudflare-account-id"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.harbor_minion_token will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "harbor_minion_token" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "harbor-minion-token"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.harbor_minion_username will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "harbor_minion_username" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "harbor-minion-username"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_backup_access_key will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_backup_access_key" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-backup-access-key"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_backup_endpoint will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_backup_endpoint" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-backup-endpoint"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_backup_secret_key will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_backup_secret_key" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-backup-secret-key"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.k3s_token will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "k3s_token" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "k3s-token"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.runner_secret["ci-minion-bob"] will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "runner_secret" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "ci-minion-bob-runner-secret"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.azurerm_key_vault_secret.runner_secret["ci-minion-stuart"] will be read during apply
  # (config refers to values not yet known)
 <= data "azurerm_key_vault_secret" "runner_secret" {
      + content_type            = (known after apply)
      + expiration_date         = (known after apply)
      + id                      = (known after apply)
      + key_vault_id            = (known after apply)
      + name                    = "ci-minion-stuart-runner-secret"
      + not_before_date         = (known after apply)
      + resource_id             = (known after apply)
      + resource_versionless_id = (known after apply)
      + tags                    = (known after apply)
      + value                   = (sensitive value)
      + versionless_id          = (known after apply)
    }

  # data.cloudinit_config.runner_config["ci-minion-bob"] will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "runner_config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
        }
      + part {
          + content      = <<-EOT
                runcmd:
                  - |
                    set -e
                    systemctl daemon-reload
                    systemctl enable --now forgejo-runner.service
            EOT
          + content_type = "text/cloud-config"
        }
    }

  # data.cloudinit_config.runner_config["ci-minion-stuart"] will be read during apply
  # (config refers to values not yet known)
 <= data "cloudinit_config" "runner_config" {
      + base64_encode = true
      + boundary      = (known after apply)
      + gzip          = true
      + id            = (known after apply)
      + rendered      = (known after apply)

      + part {
          + content      = (known after apply)
          + content_type = "text/cloud-config"
        }
      + part {
          + content      = <<-EOT
                runcmd:
                  - |
                    set -e
                    systemctl daemon-reload
                    systemctl enable --now forgejo-runner.service
            EOT
          + content_type = "text/cloud-config"
        }
    }

  # data.ct_config.machine-ignitions["w1-cax11-hel1-gen7"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # data.ct_config.machine-ignitions["w2-cax11-hel1-gen7"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # data.ct_config.machine-ignitions["w3-cax11-hel1-gen7"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # data.ct_config.machine-ignitions-cp["cp1-cax11-hel1"] will be read during apply
  # (config refers to values not yet known)
 <= data "ct_config" "machine-ignitions-cp" {
      + content  = (sensitive value)
      + id       = (known after apply)
      + rendered = (known after apply)
      + snippets = [
          + (known after apply),
        ]
      + strict   = true
    }

  # azurerm_key_vault.forgejo_runners will be created
+ resource "azurerm_key_vault" "forgejo_runners" {
      + access_policy                 = (known after apply)
      + enable_rbac_authorization     = true
      + id                            = (known after apply)
      + location                      = "westeurope"
      + name                          = "Forgejo-Runners"
      + public_network_access_enabled = true
      + purge_protection_enabled      = false
      + resource_group_name           = "Forgejo-Runners"
      + sku_name                      = "standard"
      + soft_delete_retention_days    = 30
      + tenant_id                     = "f4e80111-1571-477a-b56d-c5fe517676b7"
      + vault_uri                     = (known after apply)

      + contact (known after apply)

      + network_acls (known after apply)
    }

  # azurerm_key_vault.hetzner will be created
+ resource "azurerm_key_vault" "hetzner" {
      + access_policy                 = (known after apply)
      + enable_rbac_authorization     = true
      + id                            = (known after apply)
      + location                      = "westeurope"
      + name                          = "Hetzner"
      + public_network_access_enabled = true
      + purge_protection_enabled      = false
      + resource_group_name           = "Infrastructure"
      + sku_name                      = "standard"
      + soft_delete_retention_days    = 30
      + tenant_id                     = "f4e80111-1571-477a-b56d-c5fe517676b7"
      + vault_uri                     = (known after apply)

      + contact (known after apply)

      + network_acls (known after apply)
    }

  # azurerm_resource_group.forgejo_runners will be created
+ resource "azurerm_resource_group" "forgejo_runners" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "Forgejo-Runners"
    }

  # azurerm_resource_group.infrastructure will be created
+ resource "azurerm_resource_group" "infrastructure" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "Infrastructure"
    }

  # cloudflare_dns_record.apple_proof will be created
+ resource "cloudflare_dns_record" "apple_proof" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "apple-domain=chwbVvzH8hWIgg1l"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "TXT"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.apple_sig_domainkey will be created
+ resource "cloudflare_dns_record" "apple_sig_domainkey" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "sig1.dkim.icb4dc0.de.at.icloudmailadmin.com"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "sig1._domainkey.icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "CNAME"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.apple_spf will be created
+ resource "cloudflare_dns_record" "apple_spf" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "\"v=spf1 include:icloud.com ~all\""
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "TXT"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.cp-host-ipv4["cp1-cax11-hel1"] will be created
+ resource "cloudflare_dns_record" "cp-host-ipv4" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = (known after apply)
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "cp1-cax11-hel1.k8s.icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "A"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.cp-host-ipv6["cp1-cax11-hel1"] will be created
+ resource "cloudflare_dns_record" "cp-host-ipv6" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = (known after apply)
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "cp1-cax11-hel1.k8s.icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "AAAA"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.keybase_proof will be created
+ resource "cloudflare_dns_record" "keybase_proof" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "keybase-site-verification=WDQoLtW22epD7eQnts6rPKJBGA0lD6jSI6m0bGMYWag"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "TXT"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.mx_primary will be created
+ resource "cloudflare_dns_record" "mx_primary" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "mx01.mail.icloud.com"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + priority            = 10
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "MX"
      + zone_id             = (known after apply)
    }

  # cloudflare_dns_record.mx_secondary will be created
+ resource "cloudflare_dns_record" "mx_secondary" {
      + comment             = (known after apply)
      + comment_modified_on = (known after apply)
      + content             = "mx02.mail.icloud.com"
      + created_on          = (known after apply)
      + data                = (known after apply)
      + id                  = (known after apply)
      + meta                = (known after apply)
      + modified_on         = (known after apply)
      + name                = "icb4dc0.de"
      + priority            = 10
      + proxiable           = (known after apply)
      + proxied             = false
      + settings            = (known after apply)
      + tags                = []
      + tags_modified_on    = (known after apply)
      + ttl                 = 1
      + type                = "MX"
      + zone_id             = (known after apply)
    }

  # cloudflare_zone.icb4dc0de will be created
+ resource "cloudflare_zone" "icb4dc0de" {
      + account               = {
          + id = (sensitive value)
        }
      + activated_on          = (known after apply)
      + created_on            = (known after apply)
      + development_mode      = (known after apply)
      + id                    = (known after apply)
      + meta                  = (known after apply)
      + modified_on           = (known after apply)
      + name                  = "icb4dc0.de"
      + name_servers          = (known after apply)
      + original_dnshost      = (known after apply)
      + original_name_servers = (known after apply)
      + original_registrar    = (known after apply)
      + owner                 = (known after apply)
      + paused                = false
      + status                = (known after apply)
      + type                  = "full"
      + verification_key      = (known after apply)
    }

  # hcloud_network.k8s_net will be created
+ resource "hcloud_network" "k8s_net" {
      + delete_protection        = false
      + expose_routes_to_vswitch = false
      + id                       = (known after apply)
      + ip_range                 = "172.16.0.0/12"
      + name                     = "k8s-net"
    }

  # hcloud_network_subnet.k8s_internal will be created
+ resource "hcloud_network_subnet" "k8s_internal" {
      + gateway      = (known after apply)
      + id           = (known after apply)
      + ip_range     = "172.23.2.0/23"
      + network_id   = (known after apply)
      + network_zone = "eu-central"
      + type         = "cloud"
    }

  # hcloud_placement_group.forgejo_runners will be created
+ resource "hcloud_placement_group" "forgejo_runners" {
      + id      = (known after apply)
      + labels  = {
          + "cluster" = "forgejo.icb4dc0.de"
        }
      + name    = "forgejo-runners"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_placement_group.k3s_machines will be created
+ resource "hcloud_placement_group" "k3s_machines" {
      + id      = (known after apply)
      + labels  = {
          + "cluster" = "icb4dc0.de"
        }
      + name    = "k3s-machines"
      + servers = (known after apply)
      + type    = "spread"
    }

  # hcloud_server.control-plane["cp1-cax11-hel1"] will be created
+ resource "hcloud_server" "control-plane" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "control-plane"
        }
      + location                   = "hel1"
      + name                       = "cp1-cax11-hel1"
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax11"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = []
          + ip          = "172.23.2.10"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.forgejo_runner["ci-minion-bob"] will be created
+ resource "hcloud_server" "forgejo_runner" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "228451454"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "forgejo.icb4dc0.de"
          + "node_type" = "forgejo_runner"
        }
      + location                   = "hel1"
      + name                       = "ci-minion-bob"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.30"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.forgejo_runner["ci-minion-stuart"] will be created
+ resource "hcloud_server" "forgejo_runner" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "228451463"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "forgejo.icb4dc0.de"
          + "node_type" = "forgejo_runner"
        }
      + location                   = "hel1"
      + name                       = "ci-minion-stuart"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cpx31"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.31"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = true
        }
    }

  # hcloud_server.machine["w1-cax11-hel1-gen7"] will be created
+ resource "hcloud_server" "machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w1-cax11-hel1-gen7"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.20"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_server.machine["w2-cax11-hel1-gen7"] will be created
+ resource "hcloud_server" "machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w2-cax11-hel1-gen7"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.21"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_server.machine["w3-cax11-hel1-gen7"] will be created
+ resource "hcloud_server" "machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "ubuntu-22.04"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w3-cax11-hel1-gen7"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + rescue                     = "linux64"
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.22"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_server.microos_machine["w4-cax11-hel1"] will be created
+ resource "hcloud_server" "microos_machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "229900685"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w4-cax11-hel1"
      + placement_group_id         = (known after apply)
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = (known after apply)
      + status                     = (known after apply)
      + user_data                  = "zlbbei8qUy+QwyKAUTeLg22M25A="

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = (known after apply)
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_ssh_key.default will be created
+ resource "hcloud_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {}
      + name        = "Default Management"
      + public_key  = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfHZaI0F5GjAcrM8hjWqwMfULDkAZ2TOIBTQtRocg1F id_ed25519"
    }

  # hcloud_ssh_key.provisioning_key will be created
+ resource "hcloud_ssh_key" "provisioning_key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {}
      + name        = "Provisioning key for hcloud cluster"
      + public_key  = (known after apply)
    }

  # hcloud_ssh_key.yubikey will be created
+ resource "hcloud_ssh_key" "yubikey" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + labels      = {}
      + name        = "Yubikey"
      + public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDQoNCLuHgcaDn4JTjCeQKJsIsYU0Jmub5PUMzIIZbUBb+TGMh6mCAY/UbYaq/n4jVnskXopzPGJbx4iPBG5HrNzqYZqMjkk8uIeeT0mdIcNv9bXxuCxCH1iHZF8LlzIZCmQ0w3X6VQ1izcJgvjrAYzbHN3gqCHOXtNkqIUkwaadIWCEjg33OVSlM4yrIDElr6+LHzv84VNh/PhemixCVVEMJ83GjhDtpApMg9WWW3es6rpJn4TlYEMV+aPNU4ZZEWFen/DFBKoX+ulkiJ8CwpY3eJxSzlBijs5ZKH89OOk/MXN1lnREElFqli+jE8EbZKQzi59Zmx8ZOb52qVNot8XZT0Un4EttAIEeE8cETqUC4jK+6RbUrsXtclVbU9i57LWRpl65LYSIJEFmkTxvYdkPXqGbvlW024IjgSo8kds121w95+Rpo6419cSYsQWowS8+aXfEv2Q8SE81QH7ObYfWFXsPBAmmNleQNN3E5HOoaxpWQjv3aTUGuxm4PCzKLdP0LsHmTfGJB7Priaj+9i8xLjDWe7zXDde2Gp9FmdedDr06uEkVSRFnS35Dwfd7M7xP6NsilfMOdWzJWWy/BAYxtnWcrEFxhaEr4vgs8Ub+KBtKhr740x3Mr8up+mythConAs4LOj37lWK4kJ8cI7TXjcSJi9nTIPd39us7tp3Aw=="
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = (known after apply)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

  # null_resource.control_plane_generation["cp1-cax11-hel1"] will be created
+ resource "null_resource" "control_plane_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "5"
        }
    }

  # null_resource.cp-config will be created
+ resource "null_resource" "cp-config" {
      + id       = (known after apply)
      + triggers = {
          + "version" = "v1.32.1+k3s1"
        }
    }

  # null_resource.machine_generation["w1-cax11-hel1-gen7"] will be created
+ resource "null_resource" "machine_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "1"
        }
    }

  # null_resource.machine_generation["w2-cax11-hel1-gen7"] will be created
+ resource "null_resource" "machine_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "1"
        }
    }

  # null_resource.machine_generation["w3-cax11-hel1-gen7"] will be created
+ resource "null_resource" "machine_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "1"
        }
    }

  # null_resource.runner-config will be created
+ resource "null_resource" "runner-config" {
      + id       = (known after apply)
      + triggers = {
          + "version" = "6.3.1"
        }
    }

  # null_resource.runner_generation["ci-minion-bob"] will be created
+ resource "null_resource" "runner_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "2"
        }
    }

  # null_resource.runner_generation["ci-minion-stuart"] will be created
+ resource "null_resource" "runner_generation" {
      + id       = (known after apply)
      + triggers = {
          + "timestamp" = "2"
        }
    }

  # null_resource.worker-config will be created
+ resource "null_resource" "worker-config" {
      + id       = (known after apply)
      + triggers = {
          + "version" = "v1.32.1+k3s1"
        }
    }

  # tls_private_key.provisioning will be created
+ resource "tls_private_key" "provisioning" {
      + algorithm                     = "RSA"
      + ecdsa_curve                   = "P224"
      + id                            = (known after apply)
      + private_key_openssh           = (sensitive value)
      + private_key_pem               = (sensitive value)
      + private_key_pem_pkcs8         = (sensitive value)
      + public_key_fingerprint_md5    = (known after apply)
      + public_key_fingerprint_sha256 = (known after apply)
      + public_key_openssh            = (known after apply)
      + public_key_pem                = (known after apply)
      + rsa_bits                      = 4096
    }

Plan: 39 to add, 0 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -d .
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -d .
    

Plan: 39 to add, 0 to change, 0 to destroy.


  • To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
data "cloudinit_config" "runner_config" { + base64_encode = true + boundary = (known after apply) + gzip = true + id = (known after apply) + rendered = (known after apply) + part { + content = (known after apply) + content_type = "text/cloud-config" } + part { + content = <<-EOT runcmd: - | set -e systemctl daemon-reload systemctl enable --now forgejo-runner.service EOT + content_type = "text/cloud-config" } } # data.ct_config.machine-ignitions["w1-cax11-hel1-gen7"] will be read during apply # (config refers to values not yet known) <= data "ct_config" "machine-ignitions" { + content = (sensitive value) + id = (known after apply) + rendered = (known after apply) + snippets = [ + (known after apply), ] + strict = true } # data.ct_config.machine-ignitions["w2-cax11-hel1-gen7"] will be read during apply # (config refers to values not yet known) <= data "ct_config" "machine-ignitions" { + content = (sensitive value) + id = (known after apply) + rendered = (known after apply) + snippets = [ + (known after apply), ] + strict = true } # data.ct_config.machine-ignitions["w3-cax11-hel1-gen7"] will be read during apply # (config refers to values not yet known) <= data "ct_config" "machine-ignitions" { + content = (sensitive value) + id = (known after apply) + rendered = (known after apply) + snippets = [ + (known after apply), ] + strict = true } # data.ct_config.machine-ignitions-cp["cp1-cax11-hel1"] will be read during apply # (config refers to values not yet known) <= data "ct_config" "machine-ignitions-cp" { + content = (sensitive value) + id = (known after apply) + rendered = (known after apply) + snippets = [ + (known after apply), ] + strict = true } # azurerm_key_vault.forgejo_runners will be created + resource "azurerm_key_vault" "forgejo_runners" { + access_policy = (known after apply) + enable_rbac_authorization = true + id = (known after apply) + location = "westeurope" + name = "Forgejo-Runners" + public_network_access_enabled = true + purge_protection_enabled = 
false + resource_group_name = "Forgejo-Runners" + sku_name = "standard" + soft_delete_retention_days = 30 + tenant_id = "f4e80111-1571-477a-b56d-c5fe517676b7" + vault_uri = (known after apply) + contact (known after apply) + network_acls (known after apply) } # azurerm_key_vault.hetzner will be created + resource "azurerm_key_vault" "hetzner" { + access_policy = (known after apply) + enable_rbac_authorization = true + id = (known after apply) + location = "westeurope" + name = "Hetzner" + public_network_access_enabled = true + purge_protection_enabled = false + resource_group_name = "Infrastructure" + sku_name = "standard" + soft_delete_retention_days = 30 + tenant_id = "f4e80111-1571-477a-b56d-c5fe517676b7" + vault_uri = (known after apply) + contact (known after apply) + network_acls (known after apply) } # azurerm_resource_group.forgejo_runners will be created + resource "azurerm_resource_group" "forgejo_runners" { + id = (known after apply) + location = "westeurope" + name = "Forgejo-Runners" } # azurerm_resource_group.infrastructure will be created + resource "azurerm_resource_group" "infrastructure" { + id = (known after apply) + location = "westeurope" + name = "Infrastructure" } # cloudflare_dns_record.apple_proof will be created + resource "cloudflare_dns_record" "apple_proof" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = "apple-domain=chwbVvzH8hWIgg1l" + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "icb4dc0.de" + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "TXT" + zone_id = (known after apply) } # cloudflare_dns_record.apple_sig_domainkey will be created + resource "cloudflare_dns_record" "apple_sig_domainkey" { + comment = (known after apply) + comment_modified_on = (known after apply) 
+ content = "sig1.dkim.icb4dc0.de.at.icloudmailadmin.com" + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "sig1._domainkey.icb4dc0.de" + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "CNAME" + zone_id = (known after apply) } # cloudflare_dns_record.apple_spf will be created + resource "cloudflare_dns_record" "apple_spf" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = "\"v=spf1 include:icloud.com ~all\"" + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "icb4dc0.de" + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "TXT" + zone_id = (known after apply) } # cloudflare_dns_record.cp-host-ipv4["cp1-cax11-hel1"] will be created + resource "cloudflare_dns_record" "cp-host-ipv4" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = (known after apply) + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "cp1-cax11-hel1.k8s.icb4dc0.de" + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "A" + zone_id = (known after apply) } # cloudflare_dns_record.cp-host-ipv6["cp1-cax11-hel1"] will be created + resource "cloudflare_dns_record" "cp-host-ipv6" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = (known after apply) + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + 
meta = (known after apply) + modified_on = (known after apply) + name = "cp1-cax11-hel1.k8s.icb4dc0.de" + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "AAAA" + zone_id = (known after apply) } # cloudflare_dns_record.keybase_proof will be created + resource "cloudflare_dns_record" "keybase_proof" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = "keybase-site-verification=WDQoLtW22epD7eQnts6rPKJBGA0lD6jSI6m0bGMYWag" + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "icb4dc0.de" + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "TXT" + zone_id = (known after apply) } # cloudflare_dns_record.mx_primary will be created + resource "cloudflare_dns_record" "mx_primary" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = "mx01.mail.icloud.com" + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "icb4dc0.de" + priority = 10 + proxiable = (known after apply) + proxied = false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "MX" + zone_id = (known after apply) } # cloudflare_dns_record.mx_secondary will be created + resource "cloudflare_dns_record" "mx_secondary" { + comment = (known after apply) + comment_modified_on = (known after apply) + content = "mx02.mail.icloud.com" + created_on = (known after apply) + data = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "icb4dc0.de" + priority = 10 + proxiable = (known after apply) + proxied 
= false + settings = (known after apply) + tags = [] + tags_modified_on = (known after apply) + ttl = 1 + type = "MX" + zone_id = (known after apply) } # cloudflare_zone.icb4dc0de will be created + resource "cloudflare_zone" "icb4dc0de" { + account = { + id = (sensitive value) } + activated_on = (known after apply) + created_on = (known after apply) + development_mode = (known after apply) + id = (known after apply) + meta = (known after apply) + modified_on = (known after apply) + name = "icb4dc0.de" + name_servers = (known after apply) + original_dnshost = (known after apply) + original_name_servers = (known after apply) + original_registrar = (known after apply) + owner = (known after apply) + paused = false + status = (known after apply) + type = "full" + verification_key = (known after apply) } # hcloud_network.k8s_net will be created + resource "hcloud_network" "k8s_net" { + delete_protection = false + expose_routes_to_vswitch = false + id = (known after apply) + ip_range = "172.16.0.0/12" + name = "k8s-net" } # hcloud_network_subnet.k8s_internal will be created + resource "hcloud_network_subnet" "k8s_internal" { + gateway = (known after apply) + id = (known after apply) + ip_range = "172.23.2.0/23" + network_id = (known after apply) + network_zone = "eu-central" + type = "cloud" } # hcloud_placement_group.forgejo_runners will be created + resource "hcloud_placement_group" "forgejo_runners" { + id = (known after apply) + labels = { + "cluster" = "forgejo.icb4dc0.de" } + name = "forgejo-runners" + servers = (known after apply) + type = "spread" } # hcloud_placement_group.k3s_machines will be created + resource "hcloud_placement_group" "k3s_machines" { + id = (known after apply) + labels = { + "cluster" = "icb4dc0.de" } + name = "k3s-machines" + servers = (known after apply) + type = "spread" } # hcloud_server.control-plane["cp1-cax11-hel1"] will be created + resource "hcloud_server" "control-plane" { + allow_deprecated_images = false + backup_window = (known 
after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "ubuntu-22.04" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "control-plane" } + location = "hel1" + name = "cp1-cax11-hel1" + primary_disk_size = (known after apply) + rebuild_protection = false + rescue = "linux64" + server_type = "cax11" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + network { + alias_ips = [] + ip = "172.23.2.10" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = true } } # hcloud_server.forgejo_runner["ci-minion-bob"] will be created + resource "hcloud_server" "forgejo_runner" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "228451454" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "forgejo.icb4dc0.de" + "node_type" = "forgejo_runner" } + location = "hel1" + name = "ci-minion-bob" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + user_data = (known after apply) + network { + alias_ips = (known after apply) + ip = "172.23.2.30" + mac_address = (known after apply) + network_id = (known after apply) } + public_net 
{ + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = true } } # hcloud_server.forgejo_runner["ci-minion-stuart"] will be created + resource "hcloud_server" "forgejo_runner" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "228451463" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "forgejo.icb4dc0.de" + "node_type" = "forgejo_runner" } + location = "hel1" + name = "ci-minion-stuart" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + server_type = "cpx31" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + user_data = (known after apply) + network { + alias_ips = (known after apply) + ip = "172.23.2.31" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = true } } # hcloud_server.machine["w1-cax11-hel1-gen7"] will be created + resource "hcloud_server" "machine" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "ubuntu-22.04" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "worker" } + location = "hel1" + name = "w1-cax11-hel1-gen7" + placement_group_id = (known after apply) + primary_disk_size = (known after 
apply) + rebuild_protection = false + rescue = "linux64" + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + network { + alias_ips = (known after apply) + ip = "172.23.2.20" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = false } } # hcloud_server.machine["w2-cax11-hel1-gen7"] will be created + resource "hcloud_server" "machine" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "ubuntu-22.04" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "worker" } + location = "hel1" + name = "w2-cax11-hel1-gen7" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + rescue = "linux64" + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + network { + alias_ips = (known after apply) + ip = "172.23.2.21" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = false } } # hcloud_server.machine["w3-cax11-hel1-gen7"] will be created + resource "hcloud_server" "machine" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "ubuntu-22.04" + ipv4_address = (known after 
apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "worker" } + location = "hel1" + name = "w3-cax11-hel1-gen7" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + rescue = "linux64" + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + network { + alias_ips = (known after apply) + ip = "172.23.2.22" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = false } } # hcloud_server.microos_machine["w4-cax11-hel1"] will be created + resource "hcloud_server" "microos_machine" { + allow_deprecated_images = false + backup_window = (known after apply) + backups = false + datacenter = (known after apply) + delete_protection = false + firewall_ids = (known after apply) + id = (known after apply) + ignore_remote_firewall_ids = false + image = "229900685" + ipv4_address = (known after apply) + ipv6_address = (known after apply) + ipv6_network = (known after apply) + keep_disk = false + labels = { + "cluster" = "icb4dc0.de" + "node_type" = "worker" } + location = "hel1" + name = "w4-cax11-hel1" + placement_group_id = (known after apply) + primary_disk_size = (known after apply) + rebuild_protection = false + server_type = "cax21" + shutdown_before_deletion = false + ssh_keys = (known after apply) + status = (known after apply) + user_data = "zlbbei8qUy+QwyKAUTeLg22M25A=" + network { + alias_ips = (known after apply) + ip = "172.23.2.24" + mac_address = (known after apply) + network_id = (known after apply) } + public_net { + ipv4 = (known after apply) + ipv4_enabled = true + ipv6 = (known after apply) + ipv6_enabled = false } } # hcloud_ssh_key.default will be created + resource "hcloud_ssh_key" "default" { + 
fingerprint = (known after apply) + id = (known after apply) + labels = {} + name = "Default Management" + public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKfHZaI0F5GjAcrM8hjWqwMfULDkAZ2TOIBTQtRocg1F id_ed25519" } # hcloud_ssh_key.provisioning_key will be created + resource "hcloud_ssh_key" "provisioning_key" { + fingerprint = (known after apply) + id = (known after apply) + labels = {} + name = "Provisioning key for hcloud cluster" + public_key = (known after apply) } # hcloud_ssh_key.yubikey will be created + resource "hcloud_ssh_key" "yubikey" { + fingerprint = (known after apply) + id = (known after apply) + labels = {} + name = "Yubikey" + public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDQoNCLuHgcaDn4JTjCeQKJsIsYU0Jmub5PUMzIIZbUBb+TGMh6mCAY/UbYaq/n4jVnskXopzPGJbx4iPBG5HrNzqYZqMjkk8uIeeT0mdIcNv9bXxuCxCH1iHZF8LlzIZCmQ0w3X6VQ1izcJgvjrAYzbHN3gqCHOXtNkqIUkwaadIWCEjg33OVSlM4yrIDElr6+LHzv84VNh/PhemixCVVEMJ83GjhDtpApMg9WWW3es6rpJn4TlYEMV+aPNU4ZZEWFen/DFBKoX+ulkiJ8CwpY3eJxSzlBijs5ZKH89OOk/MXN1lnREElFqli+jE8EbZKQzi59Zmx8ZOb52qVNot8XZT0Un4EttAIEeE8cETqUC4jK+6RbUrsXtclVbU9i57LWRpl65LYSIJEFmkTxvYdkPXqGbvlW024IjgSo8kds121w95+Rpo6419cSYsQWowS8+aXfEv2Q8SE81QH7ObYfWFXsPBAmmNleQNN3E5HOoaxpWQjv3aTUGuxm4PCzKLdP0LsHmTfGJB7Priaj+9i8xLjDWe7zXDde2Gp9FmdedDr06uEkVSRFnS35Dwfd7M7xP6NsilfMOdWzJWWy/BAYxtnWcrEFxhaEr4vgs8Ub+KBtKhr740x3Mr8up+mythConAs4LOj37lWK4kJ8cI7TXjcSJi9nTIPd39us7tp3Aw==" } # local_file.provisioning_key will be created + resource "local_file" "provisioning_key" { + content = (sensitive value) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0700" + file_permission = "0400" + filename = "./.ssh/provisioning_private_key.pem" + id = (known after apply) } # local_file.provisioning_key_pub will be created + resource "local_file" "provisioning_key_pub" { + 
content = (known after apply) + content_base64sha256 = (known after apply) + content_base64sha512 = (known after apply) + content_md5 = (known after apply) + content_sha1 = (known after apply) + content_sha256 = (known after apply) + content_sha512 = (known after apply) + directory_permission = "0700" + file_permission = "0440" + filename = "./.ssh/provisioning_key.pub" + id = (known after apply) } # null_resource.control_plane_generation["cp1-cax11-hel1"] will be created + resource "null_resource" "control_plane_generation" { + id = (known after apply) + triggers = { + "timestamp" = "5" } } # null_resource.cp-config will be created + resource "null_resource" "cp-config" { + id = (known after apply) + triggers = { + "version" = "v1.32.1+k3s1" } } # null_resource.machine_generation["w1-cax11-hel1-gen7"] will be created + resource "null_resource" "machine_generation" { + id = (known after apply) + triggers = { + "timestamp" = "1" } } # null_resource.machine_generation["w2-cax11-hel1-gen7"] will be created + resource "null_resource" "machine_generation" { + id = (known after apply) + triggers = { + "timestamp" = "1" } } # null_resource.machine_generation["w3-cax11-hel1-gen7"] will be created + resource "null_resource" "machine_generation" { + id = (known after apply) + triggers = { + "timestamp" = "1" } } # null_resource.runner-config will be created + resource "null_resource" "runner-config" { + id = (known after apply) + triggers = { + "version" = "6.3.1" } } # null_resource.runner_generation["ci-minion-bob"] will be created + resource "null_resource" "runner_generation" { + id = (known after apply) + triggers = { + "timestamp" = "2" } } # null_resource.runner_generation["ci-minion-stuart"] will be created + resource "null_resource" "runner_generation" { + id = (known after apply) + triggers = { + "timestamp" = "2" } } # null_resource.worker-config will be created + resource "null_resource" "worker-config" { + id = (known after apply) + triggers = { + "version" = 
"v1.32.1+k3s1" } } # tls_private_key.provisioning will be created + resource "tls_private_key" "provisioning" { + algorithm = "RSA" + ecdsa_curve = "P224" + id = (known after apply) + private_key_openssh = (sensitive value) + private_key_pem = (sensitive value) + private_key_pem_pkcs8 = (sensitive value) + public_key_fingerprint_md5 = (known after apply) + public_key_fingerprint_sha256 = (known after apply) + public_key_openssh = (known after apply) + public_key_pem = (known after apply) + rsa_bits = 4096 } Plan: 39 to add, 0 to change, 0 to destroy. ``` </details> * :arrow_forward: To **apply** this plan, comment: ```shell atlantis apply -d . ``` * :put_litter_in_its_place: To **delete** this plan and lock, click [here](http://atlantis-0:4141/lock?id=infrastructure%252Fcluster%252F.%252Fdefault) * :repeat: To **plan** this project again, comment: ```shell atlantis plan -d . ``` Plan: 39 to add, 0 to change, 0 to destroy. --- * :fast_forward: To **apply** all unapplied plans from this Pull Request, comment: ```shell atlantis apply ``` * :put_litter_in_its_place: To **delete** all plans and locks from this Pull Request, comment: ```shell atlantis unlock ```
prskr force-pushed feature/use-pre-built-image-for-k8s-nodes from b791eb2107 to ca7410009d
All checks were successful
0/0 projects policies checked successfully.
2025-06-03 23:02:06 +00:00
Compare
Author
Owner

atlantis plan

Collaborator

Did you mean to use `atlantis` instead of `@atlantis`?
Collaborator

Ran Plan for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place

OpenTofu will perform the following actions:

  # hcloud_server.microos_machine["w4-cax11-hel1"] will be created
+ resource "hcloud_server" "microos_machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "229900685"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w4-cax11-hel1"
      + placement_group_id         = 338479
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "9685856",
          + "20640130",
          + "7606267",
        ]
      + status                     = (known after apply)
      + user_data                  = "zlbbei8qUy+QwyKAUTeLg22M25A="

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = 1982434
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_ssh_key.yubikey will be updated in-place
~ resource "hcloud_ssh_key" "yubikey" {
        id          = "20640130"
      ~ labels      = {
          - "kind" = "yubikey" -> null
        }
        name        = "Yubikey"
        # (2 unchanged attributes hidden)
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfBF8TbFMM8Vhq0XxodlsVNlQ0HezGsUf+AOKbfVo/4HQ+b57DTUxRq7guuoN3NpQzcMwma56UkOK9sYFtYUY0+LhqtemLFM15cp93z+iBXG8SzBxk8KmfLqBmoNjK0HU1e79RFvBBO++4ivM5K2DMw/3LYQz7MvuX5+JCq110GfHkVmKYoj66uhn+7h1GxQlVbMtRXGnEC16h7kFXbSvNKWOcbRgtvhFdUxOsuURICytQ78yrFc/FS9+nQIQfYIrisy1r4JioisKMWiB9vsYXIoCr7z0cG0TAFINexPHYNkse0+n1YgZG0QQ72anRv7pE98qvG4ymaYJaw3mHQ0WR/qKoGhs1X6S+z02du7toAl10aIINwxh6i8EoSH4uY0gSMXmFzg3PeoFkiYU6KkkF4JiL0kyILFny7n31GLMdYPGtTW18t5T+2FlSWqy7I/F4xOeh+8+iUypWkqg7H4gnl+KwT2yM+ZX/Q3MEz+byFdFsLg4NBrA7S1FhCdDTeCXMSNfZv71mCosg8yHmwiqrO8aHsSRk7ZPyfXm1PWLRTso02NTFB/1m1sA4q/9N29CCFm1PZeF+845uY5FvYzJS62UGpdJYGI4oIctbp8QMbJHhhOXl9pPgEiy+urZAaiaS3Gct5e1sN4XeaGcTqzCelTYyw2awoRO+iS7JUTMLiQ==
        EOT
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

Plan: 3 to add, 1 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -p hcloud-infra
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -p hcloud-infra
    

Plan: 3 to add, 1 to change, 0 to destroy.


  • To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
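The two `local_file` resources in the plan above write a generated provisioning key pair to disk with the permissions shown (`0400` for the private key, `0440` for the public key). A minimal sketch of HCL that could produce this shape; the `tls_private_key` resource and its attribute names are assumptions, not taken from this repository:

```hcl
# Hypothetical sketch: generate an RSA provisioning key pair and write both
# halves under ./.ssh/, matching the permissions visible in the plan output.
resource "tls_private_key" "provisioning" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "provisioning_key" {
  content              = tls_private_key.provisioning.private_key_pem
  filename             = "./.ssh/provisioning_private_key.pem"
  directory_permission = "0700"
  file_permission      = "0400"
}

resource "local_file" "provisioning_key_pub" {
  content              = tls_private_key.provisioning.public_key_openssh
  filename             = "./.ssh/provisioning_key.pub"
  directory_permission = "0700"
  file_permission      = "0440"
}
```

Note that `content` on the private-key file shows as `(sensitive value)` in the plan because `private_key_pem` is marked sensitive by the `tls` provider, while the public key is printed in full.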
chore: update MicroOS machine template
All checks were successful
1/1 projects applied successfully.
367968c715
Author
Owner

atlantis plan
Collaborator

Ran Plan for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
~ update in-place

OpenTofu will perform the following actions:

  # hcloud_server.microos_machine["w4-cax11-hel1"] will be created
+ resource "hcloud_server" "microos_machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "248717639"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w4-cax11-hel1"
      + placement_group_id         = 338479
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "9685856",
          + "20640130",
          + "7606267",
        ]
      + status                     = (known after apply)
      + user_data                  = "zlbbei8qUy+QwyKAUTeLg22M25A="

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = 1982434
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # hcloud_ssh_key.yubikey will be updated in-place
~ resource "hcloud_ssh_key" "yubikey" {
        id          = "20640130"
      ~ labels      = {
          - "kind" = "yubikey" -> null
        }
        name        = "Yubikey"
        # (2 unchanged attributes hidden)
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfBF8TbFMM8Vhq0XxodlsVNlQ0HezGsUf+AOKbfVo/4HQ+b57DTUxRq7guuoN3NpQzcMwma56UkOK9sYFtYUY0+LhqtemLFM15cp93z+iBXG8SzBxk8KmfLqBmoNjK0HU1e79RFvBBO++4ivM5K2DMw/3LYQz7MvuX5+JCq110GfHkVmKYoj66uhn+7h1GxQlVbMtRXGnEC16h7kFXbSvNKWOcbRgtvhFdUxOsuURICytQ78yrFc/FS9+nQIQfYIrisy1r4JioisKMWiB9vsYXIoCr7z0cG0TAFINexPHYNkse0+n1YgZG0QQ72anRv7pE98qvG4ymaYJaw3mHQ0WR/qKoGhs1X6S+z02du7toAl10aIINwxh6i8EoSH4uY0gSMXmFzg3PeoFkiYU6KkkF4JiL0kyILFny7n31GLMdYPGtTW18t5T+2FlSWqy7I/F4xOeh+8+iUypWkqg7H4gnl+KwT2yM+ZX/Q3MEz+byFdFsLg4NBrA7S1FhCdDTeCXMSNfZv71mCosg8yHmwiqrO8aHsSRk7ZPyfXm1PWLRTso02NTFB/1m1sA4q/9N29CCFm1PZeF+845uY5FvYzJS62UGpdJYGI4oIctbp8QMbJHhhOXl9pPgEiy+urZAaiaS3Gct5e1sN4XeaGcTqzCelTYyw2awoRO+iS7JUTMLiQ==
        EOT
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

Plan: 3 to add, 1 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -p hcloud-infra
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -p hcloud-infra
    

Plan: 3 to add, 1 to change, 0 to destroy.


  • To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
Author
Owner

atlantis apply -p hcloud-infra
Collaborator

Ran Apply for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
hcloud_ssh_key.yubikey: Modifying... [id=20640130]
local_file.provisioning_key: Creating...
local_file.provisioning_key_pub: Creating...
local_file.provisioning_key_pub: Creation complete after 1s [id=046a4085226502a0d86bd73e65e9edf75dd96866]
local_file.provisioning_key: Creation complete after 1s [id=f47379b55a45bbdb708f417eca94f2b6c00a5ce4]
hcloud_ssh_key.yubikey: Modifications complete after 1s [id=20640130]
hcloud_server.microos_machine["w4-cax11-hel1"]: Creating...
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Provisioning with 'remote-exec'...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connected!
hcloud_server.microos_machine["w4-cax11-hel1"]: Creation complete after 2m15s [id=66629450]

Apply complete! Resources: 3 added, 1 changed, 0 destroyed.
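The apply log above shows the `remote-exec` provisioner connecting as `root` with a private key and no host-key checking (Terraform's SSH connection skips host-key verification unless `host_key` is set). A hedged sketch of the connection block implied by that log; the `tls_private_key.provisioning` reference and the `inline` command are assumptions for illustration:

```hcl
# Hypothetical sketch of the provisioner settings implied by the apply log:
# root login over SSH using the generated provisioning key.
resource "hcloud_server" "microos_machine" {
  # ... server arguments as shown in the plan output ...

  connection {
    type        = "ssh"
    host        = self.ipv4_address
    user        = "root"
    private_key = tls_private_key.provisioning.private_key_pem
  }

  provisioner "remote-exec" {
    # Placeholder command; the real provisioning steps are not in this log.
    inline = ["systemctl is-system-running --wait || true"]
  }
}
```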
feat: install k3s-install & ensure sshd is running
All checks were successful
1/1 projects applied successfully.
7ed7c246b5
Author
Owner

atlantis plan
Collaborator

Ran Plan for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement

OpenTofu will perform the following actions:

  # hcloud_server.microos_machine["w4-cax11-hel1"] must be replaced
-/+ resource "hcloud_server" "microos_machine" {
      + backup_window              = (known after apply)
      ~ datacenter                 = "hel1-dc2" -> (known after apply)
      ~ firewall_ids               = [] -> (known after apply)
      ~ id                         = "66629450" -> (known after apply)
      ~ image                      = "248717639" -> "248717908" # forces replacement
      ~ ipv4_address               = "37.27.38.71" -> (known after apply)
      + ipv6_address               = (known after apply)
      ~ ipv6_network               = "<nil>" -> (known after apply)
        name                       = "w4-cax11-hel1"
      ~ primary_disk_size          = 80 -> (known after apply)
      ~ status                     = "running" -> (known after apply)
      ~ user_data                  = "zlbbei8qUy+QwyKAUTeLg22M25A=" -> "rqm50dk43gw9CcCP/J0UmYlU1d4=" # forces replacement
        # (12 unchanged attributes hidden)

      - network {
          - alias_ips   = [] -> null
          - ip          = "172.23.2.24" -> null
          - mac_address = "86:00:00:b4:a8:ac" -> null
          - network_id  = 1982434 -> null
        }
      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = 1982434
        }

      - public_net {
          - ipv4         = 0 -> null
          - ipv4_enabled = true -> null
          - ipv6         = 0 -> null
          - ipv6_enabled = false -> null
        }
      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfBF8TbFMM8Vhq0XxodlsVNlQ0HezGsUf+AOKbfVo/4HQ+b57DTUxRq7guuoN3NpQzcMwma56UkOK9sYFtYUY0+LhqtemLFM15cp93z+iBXG8SzBxk8KmfLqBmoNjK0HU1e79RFvBBO++4ivM5K2DMw/3LYQz7MvuX5+JCq110GfHkVmKYoj66uhn+7h1GxQlVbMtRXGnEC16h7kFXbSvNKWOcbRgtvhFdUxOsuURICytQ78yrFc/FS9+nQIQfYIrisy1r4JioisKMWiB9vsYXIoCr7z0cG0TAFINexPHYNkse0+n1YgZG0QQ72anRv7pE98qvG4ymaYJaw3mHQ0WR/qKoGhs1X6S+z02du7toAl10aIINwxh6i8EoSH4uY0gSMXmFzg3PeoFkiYU6KkkF4JiL0kyILFny7n31GLMdYPGtTW18t5T+2FlSWqy7I/F4xOeh+8+iUypWkqg7H4gnl+KwT2yM+ZX/Q3MEz+byFdFsLg4NBrA7S1FhCdDTeCXMSNfZv71mCosg8yHmwiqrO8aHsSRk7ZPyfXm1PWLRTso02NTFB/1m1sA4q/9N29CCFm1PZeF+845uY5FvYzJS62UGpdJYGI4oIctbp8QMbJHhhOXl9pPgEiy+urZAaiaS3Gct5e1sN4XeaGcTqzCelTYyw2awoRO+iS7JUTMLiQ==
        EOT
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

Plan: 3 to add, 0 to change, 1 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -p hcloud-infra
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -p hcloud-infra
    

Plan: 3 to add, 0 to change, 1 to destroy.


  • To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
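Both attributes marked `# forces replacement` in the plan above (`image` and `user_data`) are immutable on Hetzner Cloud servers, so a rebuilt MicroOS snapshot or changed cloud-init payload recreates the machine rather than updating it in place. A sketch of how the image ID could be resolved from the template snapshot via a data source; the label selector and cloud-init wiring are assumptions:

```hcl
# Hypothetical sketch: look up the newest MicroOS template snapshot by label.
# A new snapshot yields a new image ID, which forces server replacement.
data "hcloud_image" "k3s_node_arm64" {
  with_selector     = "os=microos"
  with_architecture = "arm"
  most_recent       = true
}

resource "hcloud_server" "microos_machine" {
  # ...
  image     = data.hcloud_image.k3s_node_arm64.id
  user_data = data.cloudinit_config.k8s_node_config["w4-cax11-hel1"].rendered
}
```

This also explains why `user_data` appears as a short hash (`zlbbei8qUy+...`) in the diff: the hcloud provider stores a digest of the rendered cloud-init data rather than the full payload.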
Author
Owner

atlantis apply -p hcloud-infra
Collaborator

Ran Apply for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
hcloud_server.microos_machine["w4-cax11-hel1"]: Destroying... [id=66629450]
local_file.provisioning_key: Creating...
local_file.provisioning_key_pub: Creating...
local_file.provisioning_key: Creation complete after 0s [id=f47379b55a45bbdb708f417eca94f2b6c00a5ce4]
local_file.provisioning_key_pub: Creation complete after 0s [id=046a4085226502a0d86bd73e65e9edf75dd96866]
hcloud_server.microos_machine["w4-cax11-hel1"]: Destruction complete after 8s
hcloud_server.microos_machine["w4-cax11-hel1"]: Creating...
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Provisioning with 'remote-exec'...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connected!
hcloud_server.microos_machine["w4-cax11-hel1"]: Creation complete after 2m16s [id=66631348]

Apply complete! Resources: 3 added, 0 changed, 1 destroyed.
feat: install K3s agent on MicroOS machine
Some checks failed
0/1 projects applied successfully.
f9f4f951e4
Author
Owner

atlantis plan
Collaborator

Ran Plan for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create

OpenTofu will perform the following actions:

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfBF8TbFMM8Vhq0XxodlsVNlQ0HezGsUf+AOKbfVo/4HQ+b57DTUxRq7guuoN3NpQzcMwma56UkOK9sYFtYUY0+LhqtemLFM15cp93z+iBXG8SzBxk8KmfLqBmoNjK0HU1e79RFvBBO++4ivM5K2DMw/3LYQz7MvuX5+JCq110GfHkVmKYoj66uhn+7h1GxQlVbMtRXGnEC16h7kFXbSvNKWOcbRgtvhFdUxOsuURICytQ78yrFc/FS9+nQIQfYIrisy1r4JioisKMWiB9vsYXIoCr7z0cG0TAFINexPHYNkse0+n1YgZG0QQ72anRv7pE98qvG4ymaYJaw3mHQ0WR/qKoGhs1X6S+z02du7toAl10aIINwxh6i8EoSH4uY0gSMXmFzg3PeoFkiYU6KkkF4JiL0kyILFny7n31GLMdYPGtTW18t5T+2FlSWqy7I/F4xOeh+8+iUypWkqg7H4gnl+KwT2yM+ZX/Q3MEz+byFdFsLg4NBrA7S1FhCdDTeCXMSNfZv71mCosg8yHmwiqrO8aHsSRk7ZPyfXm1PWLRTso02NTFB/1m1sA4q/9N29CCFm1PZeF+845uY5FvYzJS62UGpdJYGI4oIctbp8QMbJHhhOXl9pPgEiy+urZAaiaS3Gct5e1sN4XeaGcTqzCelTYyw2awoRO+iS7JUTMLiQ==
        EOT
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -p hcloud-infra
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -p hcloud-infra
    

Plan: 2 to add, 0 to change, 0 to destroy.


  • ⏩ To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
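The two `local_file` resources in the plan above write a freshly generated SSH provisioning key pair to disk with restrictive permissions (`0400` for the private key, `0440` for the public key). A configuration of roughly the following shape would produce that plan; the resource names and file attributes match the plan output, but the `tls_private_key` generator and its parameters are an assumption for illustration, not taken from this repository:

```hcl
# Hypothetical sketch of how the provisioning key pair in the plan above
# could be generated. Resource names follow the plan output; the
# tls_private_key source and its settings are assumed.
resource "tls_private_key" "provisioning" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_file" "provisioning_key" {
  content         = tls_private_key.provisioning.private_key_pem
  filename        = "./.ssh/provisioning_private_key.pem"
  file_permission = "0400"
}

resource "local_file" "provisioning_key_pub" {
  content         = tls_private_key.provisioning.public_key_openssh
  filename        = "./.ssh/provisioning_key.pub"
  file_permission = "0440"
}
```

Because the private key content is marked `(sensitive value)`, only its hashes show up as `(known after apply)` in the plan.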
Author
Owner

atlantis plan
Collaborator

Ran Plan for project: hcloud-infra dir: . workspace: hcloud-infra

Show Output
OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create

OpenTofu will perform the following actions:

  # hcloud_server.microos_machine["w4-cax11-hel1"] will be created
+ resource "hcloud_server" "microos_machine" {
      + allow_deprecated_images    = false
      + backup_window              = (known after apply)
      + backups                    = false
      + datacenter                 = (known after apply)
      + delete_protection          = false
      + firewall_ids               = (known after apply)
      + id                         = (known after apply)
      + ignore_remote_firewall_ids = false
      + image                      = "248717908"
      + ipv4_address               = (known after apply)
      + ipv6_address               = (known after apply)
      + ipv6_network               = (known after apply)
      + keep_disk                  = false
      + labels                     = {
          + "cluster"   = "icb4dc0.de"
          + "node_type" = "worker"
        }
      + location                   = "hel1"
      + name                       = "w4-cax11-hel1"
      + placement_group_id         = 338479
      + primary_disk_size          = (known after apply)
      + rebuild_protection         = false
      + server_type                = "cax21"
      + shutdown_before_deletion   = false
      + ssh_keys                   = [
          + "9685856",
          + "20640130",
          + "7606267",
        ]
      + status                     = (known after apply)
      + user_data                  = "rqm50dk43gw9CcCP/J0UmYlU1d4="

      + network {
          + alias_ips   = (known after apply)
          + ip          = "172.23.2.24"
          + mac_address = (known after apply)
          + network_id  = 1982434
        }

      + public_net {
          + ipv4         = (known after apply)
          + ipv4_enabled = true
          + ipv6         = (known after apply)
          + ipv6_enabled = false
        }
    }

  # local_file.provisioning_key will be created
+ resource "local_file" "provisioning_key" {
      + content              = (sensitive value)
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0400"
      + filename             = "./.ssh/provisioning_private_key.pem"
      + id                   = (known after apply)
    }

  # local_file.provisioning_key_pub will be created
+ resource "local_file" "provisioning_key_pub" {
      + content              = <<-EOT
            ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDfBF8TbFMM8Vhq0XxodlsVNlQ0HezGsUf+AOKbfVo/4HQ+b57DTUxRq7guuoN3NpQzcMwma56UkOK9sYFtYUY0+LhqtemLFM15cp93z+iBXG8SzBxk8KmfLqBmoNjK0HU1e79RFvBBO++4ivM5K2DMw/3LYQz7MvuX5+JCq110GfHkVmKYoj66uhn+7h1GxQlVbMtRXGnEC16h7kFXbSvNKWOcbRgtvhFdUxOsuURICytQ78yrFc/FS9+nQIQfYIrisy1r4JioisKMWiB9vsYXIoCr7z0cG0TAFINexPHYNkse0+n1YgZG0QQ72anRv7pE98qvG4ymaYJaw3mHQ0WR/qKoGhs1X6S+z02du7toAl10aIINwxh6i8EoSH4uY0gSMXmFzg3PeoFkiYU6KkkF4JiL0kyILFny7n31GLMdYPGtTW18t5T+2FlSWqy7I/F4xOeh+8+iUypWkqg7H4gnl+KwT2yM+ZX/Q3MEz+byFdFsLg4NBrA7S1FhCdDTeCXMSNfZv71mCosg8yHmwiqrO8aHsSRk7ZPyfXm1PWLRTso02NTFB/1m1sA4q/9N29CCFm1PZeF+845uY5FvYzJS62UGpdJYGI4oIctbp8QMbJHhhOXl9pPgEiy+urZAaiaS3Gct5e1sN4XeaGcTqzCelTYyw2awoRO+iS7JUTMLiQ==
        EOT
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0700"
      + file_permission      = "0440"
      + filename             = "./.ssh/provisioning_key.pub"
      + id                   = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
  • ▶️ To apply this plan, comment:
    atlantis apply -p hcloud-infra
    
  • 🚮 To delete this plan and lock, click here
  • 🔁 To plan this project again, comment:
    atlantis plan -p hcloud-infra
    

Plan: 3 to add, 0 to change, 0 to destroy.


  • ⏩ To apply all unapplied plans from this Pull Request, comment:
    atlantis apply
    
  • 🚮 To delete all plans and locks from this Pull Request, comment:
    atlantis unlock
    
Author
Owner

atlantis apply -p hcloud-infra
Collaborator

Ran Apply for project: hcloud-infra dir: . workspace: hcloud-infra

Apply Error

the hcloud-infra workspace at path . is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again
Author
Owner

atlantis unlock
Collaborator

All Atlantis locks for this PR have been unlocked and plans discarded
Author
Owner

atlantis plan
Collaborator

Ran Plan for dir: . workspace: hcloud-infra

Plan Error

the hcloud-infra workspace at path . is currently locked by another command that is running for this pull request.
Wait until the previous command is complete and try again
Collaborator

Ran Apply for project: hcloud-infra dir: . workspace: hcloud-infra

Apply Error

Show Output
running 'sh -c' '/usr/local/bin/tofu apply -input=false "/atlantis/repos/infrastructure/cluster/1/hcloud-infra/hcloud-infra-hcloud-infra.tfplan"' in '/atlantis/repos/infrastructure/cluster/1/hcloud-infra': exit status 1
local_file.provisioning_key_pub: Creating...
local_file.provisioning_key: Creating...
local_file.provisioning_key_pub: Creation complete after 0s [id=046a4085226502a0d86bd73e65e9edf75dd96866]
local_file.provisioning_key: Creation complete after 0s [id=f47379b55a45bbdb708f417eca94f2b6c00a5ce4]
hcloud_server.microos_machine["w4-cax11-hel1"]: Creating...
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [1m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Provisioning with 'remote-exec'...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [2m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [3m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [3m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [3m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [3m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [3m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [3m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m10s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m20s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m30s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m40s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m50s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH...
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Host: 37.27.38.71
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   User: root
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Password: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Private key: true
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Certificate: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   SSH Agent: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Checking Host Key: false
hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec):   Target Platform: unix
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [7m0s elapsed]
hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [7m10s elapsed]
╷
│ Error: remote-exec provisioner error
│ 
│   with hcloud_server.microos_machine["w4-cax11-hel1"],
│   on k8s_microos_machines.tf line 46, in resource "hcloud_server" "microos_machine":
│   46:   provisioner "remote-exec" {
│ 
│ timeout - last error: dial tcp 37.27.38.71:22: i/o timeout
╵
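The `i/o timeout` on `37.27.38.71:22` means the `remote-exec` provisioner never completed a TCP connection to sshd within its window, typically because the MicroOS machine is still rebooting after first-boot configuration or a firewall is blocking port 22. One way to make the provisioner more tolerant is an explicit `connection` block with a longer timeout. This is a hedged sketch using standard OpenTofu/Terraform connection settings, not the actual configuration from `k8s_microos_machines.tf`; the inline command and timeout value are illustrative:

```hcl
# Sketch only: a connection block with a generous timeout for hosts
# that reboot during first boot. Attribute names are standard
# OpenTofu/Terraform connection settings; values are illustrative.
provisioner "remote-exec" {
  inline = ["echo 'node reachable'"]

  connection {
    type        = "ssh"
    host        = self.ipv4_address
    user        = "root"
    private_key = file("./.ssh/provisioning_private_key.pem")
    timeout     = "10m" # default is 5m; a first boot with a reboot may exceed it
  }
}
```

If the address is simply unreachable (firewall, missing public IPv4), raising the timeout will not help; connecting over the private network IP or a bastion would be the alternative.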

[4m30s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH... hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Host: 37.27.38.71 hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): User: root hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Password: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Private key: true hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Certificate: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): SSH Agent: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Checking Host Key: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Target Platform: unix hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m40s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [4m50s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m0s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m10s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH... hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Host: 37.27.38.71 hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): User: root hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Password: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Private key: true hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Certificate: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): SSH Agent: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Checking Host Key: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Target Platform: unix hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m20s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... 
[5m30s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m40s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH... hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Host: 37.27.38.71 hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): User: root hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Password: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Private key: true hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Certificate: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): SSH Agent: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Checking Host Key: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Target Platform: unix hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [5m50s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m0s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m10s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m20s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH... hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Host: 37.27.38.71 hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): User: root hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Password: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Private key: true hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Certificate: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): SSH Agent: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Checking Host Key: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Target Platform: unix hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... 
[6m30s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m40s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [6m50s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Connecting to remote host via SSH... hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Host: 37.27.38.71 hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): User: root hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Password: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Private key: true hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Certificate: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): SSH Agent: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Checking Host Key: false hcloud_server.microos_machine["w4-cax11-hel1"] (remote-exec): Target Platform: unix hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [7m0s elapsed] hcloud_server.microos_machine["w4-cax11-hel1"]: Still creating... [7m10s elapsed] ╷ │ Error: remote-exec provisioner error │ │ with hcloud_server.microos_machine["w4-cax11-hel1"], │ on k8s_microos_machines.tf line 46, in resource "hcloud_server" "microos_machine": │ 46: provisioner "remote-exec" { │ │ timeout - last error: dial tcp 37.27.38.71:22: i/o timeout ╵ ``` </details>
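The timeout above means the `remote-exec` provisioner (declared at `k8s_microos_machines.tf` line 46) retried for roughly five minutes without ever opening a TCP connection to port 22. A minimal sketch of a connection block with a longer timeout — the `inline` command, the key path, and the 10-minute value are illustrative assumptions, not the repository's actual code:

```hcl
provisioner "remote-exec" {
  inline = ["echo 'node is reachable'"]

  connection {
    type        = "ssh"
    host        = self.ipv4_address          # public IPv4 of the hcloud_server
    user        = "root"
    private_key = file("provisioning_key")   # hypothetical key path
    timeout     = "10m"                      # default connection timeout is 5m
  }
}
```

If the MicroOS snapshot reboots during its first boot, the node may simply not be listening on port 22 yet, so a longer timeout (or verifying that a Hetzner firewall is not filtering port 22) is usually the first thing to check.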
Some checks failed
0/1 projects applied successfully.
This pull request is marked as a work in progress.
View command line instructions

Checkout

From your project repository, check out a new branch and test the changes.
git fetch -u origin feature/use-pre-built-image-for-k8s-nodes:feature/use-pre-built-image-for-k8s-nodes
git switch feature/use-pre-built-image-for-k8s-nodes

Merge

Merge the changes and update on Forgejo.

Warning: The "Autodetect manual merge" setting is not enabled for this repository, you will have to mark this pull request as manually merged afterwards.

Pick the sequence that matches the desired merge style, then push.

Create a merge commit:

git switch main
git merge --no-ff feature/use-pre-built-image-for-k8s-nodes

Rebase then fast-forward:

git switch feature/use-pre-built-image-for-k8s-nodes
git rebase main
git switch main
git merge --ff-only feature/use-pre-built-image-for-k8s-nodes

Rebase then create a merge commit:

git switch feature/use-pre-built-image-for-k8s-nodes
git rebase main
git switch main
git merge --no-ff feature/use-pre-built-image-for-k8s-nodes

Squash and merge:

git switch main
git merge --squash feature/use-pre-built-image-for-k8s-nodes

Fast-forward only:

git switch main
git merge --ff-only feature/use-pre-built-image-for-k8s-nodes

Plain merge:

git switch main
git merge feature/use-pre-built-image-for-k8s-nodes

Finally, push the result:

git push origin main

Reference
infrastructure/cluster!1