Support for s3 object storage #1005
Hey @KiaraGrouwstra, right now we do not plan to add support for the Object Storage S3 API in this terraform provider. You can use any S3-compatible provider like the Minio provider instead. If using other providers does not work for you, could you explain the issues you have with them and the benefits you see with adding the APIs to this provider?
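For reference, a minimal sketch of pointing the Minio provider at the Hetzner Object Storage endpoint (assuming the aminueza/minio provider; the endpoint, region, bucket name and credentials are placeholders, and argument names can differ between provider versions):

terraform {
  required_providers {
    minio = {
      source = "aminueza/minio"
    }
  }
}

# Placeholder endpoint and credentials; the access key pair is created
# manually in the Hetzner Console.
provider "minio" {
  minio_server   = "fsn1.your-objectstorage.com"
  minio_region   = "fsn1"
  minio_user     = "<YOUR-ACCESS-KEY>"
  minio_password = "<YOUR-SECRET-KEY>"
  minio_ssl      = true
}

resource "minio_s3_bucket" "example" {
  bucket = "my-example-bucket"
  acl    = "private"
}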
i'll try that one - thank you for your response!
i would have preferred to have everything from a single provider. it would also feel strange if i could create servers, firewalls, ... via the hetzner cli but not the object storage. i would vote for reopening the issue
As I understood it, creating buckets and access keys must be done using the Hetzner API. We already successfully configured existing buckets (object lifecycle rules) using other existing providers, but it would be nice to be able to create the buckets using this provider as well (it saves a manual step in the UI).
Hello all 👋 All our integrations rely on the Hetzner Cloud public API, which is available with a certain level of stability. Since the features you are requesting are not in the public API, we cannot implement them. Therefore, for the time being, we do not plan to support:
Note that only a subset of the Amazon S3 features are currently supported. We will leave this ticket open to increase its visibility. If you have questions, reach out to us using the Support Center.
please correct me if i am wrong, as i assume that the hcloud cli code is the core of the terraform provider, excuse the crosspost: let us vote for hetznercloud/cli#918. maybe these awesome hetzner developers ❤️ get a bigger budget if we vote for the issue, which i see as voting for them (the hetzner developers). cheers
@apricote Just to let you know, a bunch of resources are not supported by the minio terraform provider in combination with hetzner object storage, e.g. setting a public acl on a bucket or creating a lifecycle rule.
Do you have a code example to show your use case? Have you tried the aws terraform provider?
@jooola thanks for your response. This (at least) does not work right now with Hetzner:
Creating a public bucket with the terraform example fails - Reddit
All the IAM stuff from minio doesn't work either.
This leads me right now to do something like this 😢
The problem starts with the IAM stuff in MinIO. It's not possible to create a user in the first place, e.g.:

resource "minio_iam_user" "some-user" {
  name = "some-custom-name"
}

It's not necessary for Hetzner to duplicate functionality into the hcloud Terraform provider. However, functionality that is distinct and cannot be achieved with third-party providers should be implemented. In a comment above it was mentioned that other tools can be used for different use cases, but no other way of creating users (IAM in general) was given.
Another limitation is that you can not delete
It would be great to have at least a list of which features are supported.
@3deep5me The following configuration should get you started using the aws terraform provider:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  skip_region_validation      = true

  endpoints {
    s3 = "https://fsn1.your-objectstorage.com"
  }

  region = "fsn1"

  # Please check the docs on how to store these credentials safely.
  access_key = "<YOUR-ACCESS-KEY>"
  secret_key = "<YOUR-SECRET-KEY>"
}

resource "aws_s3_bucket" "main" {
  bucket = "my-bucket-a9c8ae4e"
}

resource "aws_s3_bucket_acl" "main" {
  bucket = aws_s3_bucket.main.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "main" {
  bucket = aws_s3_bucket.main.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id
  rule {
    id     = "expire-7d"
    status = "Enabled"
    expiration {
      days = 7
    }
  }
}

resource "aws_s3_bucket_policy" "main" {
  bucket = aws_s3_bucket.main.id
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect    = "Allow",
        Principal = "*",
        Action    = ["s3:GetObject"],
        Resource  = ["arn:aws:s3:::${aws_s3_bucket.main.bucket}/*"]
      }
    ]
  })
}
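As a small usage example on top of the configuration above, uploading a single object into that bucket might look like the following (the key and content are placeholders, not part of the original comment):

resource "aws_s3_object" "example" {
  bucket       = aws_s3_bucket.main.id
  key          = "index.html"
  content      = "<html><body>Hello from Hetzner Object Storage</body></html>"
  content_type = "text/html"
}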
I find this pretty unprofessional. Clearly, the official terraform provider for hcloud should cover all hcloud products. This should not even be a discussion. It is really irrelevant to the user why it is not in the provider; it should be. I cannot classify using another provider as anything but a hack. That this even has to be said makes me very wary of using hcloud. As a freelancer with many AWS customers that would love to migrate away from AWS to hcloud: it is exactly friction points like this that make them go "oh, I see" and not migrate. AWS bends over backwards to make sure the user has a seamless experience, while it seems that when you tell hcloud something is not working as expected, the response is an explanation of why it's not working instead of an effort to make it work.
Creating and destroying S3-compatible storage via this provider should be a no-brainer. I am surprised that the team is saying they won't support it and instead directs us to third-party providers. If that's the case then surely it's easy for the team to add support for it. If Hetzner currently supports only a subset of S3, that is all the more reason to provide your own terraform resource to prevent users from shooting themselves in the foot.
I share the same sentiments as @mzhaase on this one.
@jooola Thanks for the config.
I tried the AWS provider - it's much better! Thanks again @jooola. Here is my configuration with object lock, versioning, lifecycle policy and bucket policy.
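The commenter's full configuration is not preserved in this thread. As a rough illustration of the object-lock part only (bucket name and retention values are made up, and object-lock behaviour against the Hetzner endpoint should be verified):

resource "aws_s3_bucket" "locked" {
  bucket = "my-locked-bucket-a9c8ae4e"

  # Object lock must be enabled at bucket creation time.
  object_lock_enabled = true
}

resource "aws_s3_bucket_versioning" "locked" {
  bucket = aws_s3_bucket.locked.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_object_lock_configuration" "locked" {
  bucket = aws_s3_bucket.locked.id
  rule {
    default_retention {
      mode = "COMPLIANCE"
      days = 30
    }
  }
}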
Referring to an external provider seems fine. What we're really missing is a stabilization of the _object_storage_credentials endpoint used below. Right now we resorted to using the internal API with a token from the web console.

provider "restapi" {
  alias                = "hcloud_v1"
  uri                  = "https://api.hetzner.cloud/v1"
  write_returns_object = true
  debug                = true

  headers = {
    "Authorization" = "Bearer ${var.hcloud_token}"
    "Content-Type"  = "application/json"
  }
}

# The API is still private and only works with SPA tokens from the HCloud Console.
# 1. Log in to https://console.hetzner.cloud/projects/735113
# 2. Open the browser developer console and record an API request
# 3. Find the token in the Authorization header
# 4. Export it as TF_VAR_hcloud_token
resource "restapi_object" "object_storage_credentials" {
  for_each = local.projects
  provider = restapi.hcloud_v1

  path         = "/_object_storage_credentials"
  data         = jsonencode({ description = each.key })
  id_attribute = "object_storage_credential/id"
}
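A possible follow-up for consuming the returned credentials, assuming the undocumented response wraps them in an object_storage_credential object; that wrapper name is a guess based on the id_attribute above and may not match the actual payload:

# api_response holds the raw JSON returned by the private endpoint; the
# object_storage_credential wrapper is an assumption.
output "object_storage_credentials" {
  sensitive = true
  value = {
    for name, cred in restapi_object.object_storage_credentials :
    name => try(jsondecode(cred.api_response)["object_storage_credential"], null)
  }
}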
What would you like to see?
hetzner recently introduced their S3-compatible object storage, offering immutable storage cheaper than their regular shared volumes.
it would be cool if this provider could facilitate configuring hetzner object storage as well, though since it's in beta there currently still seems to be a manual step involved to request access.