This repository has been archived by the owner on Feb 10, 2022. It is now read-only.
Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

When trying to configure the Kubernetes cluster with an intermediate certificate authority, kube-apiserver fails to deploy. From post-start.stdout.log:

```
Waited for 60s, but kubernetes api is still not healthy
```

This is likely because the curl command fails:
```
# curl -X GET --fail ${apiserver}/healthz --header "Authorization: Bearer ${token}" --cacert ${cert} -w "\n"
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"
 of Certificate Authority (CA) public keys (CA certs). If the default
 bundle file isn't adequate, you can specify an alternate file
 using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
 the bundle, the certificate verification probably failed due to a
 problem with the certificate (it might be expired, or the name might
 not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
 the -k (or --insecure) option.
```
If you add the `-k` option, you get the correct response:

```
# curl -X GET --fail ${apiserver}/healthz --header "Authorization: Bearer ${token}" --cacert ${cert} -w "\n" -k
ok
```
From kube-apiserver.stderr.log (10.0.16.7 is the master node):
Presumably the Root CA needs to be added to the list of trusted roots.
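That hypothesis can be checked with a throwaway root → intermediate → leaf chain built with openssl (all file names below are illustrative, not the deployment's real paths). The `--cacert` bundle only works if it lets the verifier build a path all the way to a self-signed trust anchor, so the root must be in the bundle alongside any intermediates:

```shell
#!/bin/sh
# Sketch only: demonstrate that a CA bundle needs the root, using a
# throwaway chain. All file names here are illustrative.
set -eu
cd "$(mktemp -d)"

# Self-signed root CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout root.key -out root.crt -subj "/CN=demo-root"

# Intermediate CA signed by the root (CA:TRUE so it may issue leaves).
printf 'basicConstraints=critical,CA:TRUE\n' > ca.ext
openssl req -newkey rsa:2048 -nodes \
  -keyout intermediate.key -out intermediate.csr -subj "/CN=demo-intermediate"
openssl x509 -req -days 1 -in intermediate.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -extfile ca.ext -out intermediate.crt

# Leaf certificate signed by the intermediate.
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=kube-apiserver"
openssl x509 -req -days 1 -in server.csr -CA intermediate.crt \
  -CAkey intermediate.key -CAcreateserial -out server.crt

# The bundle passed via --cacert must include the root, not just the
# intermediate, so the verifier can reach a self-signed trust anchor.
cat intermediate.crt root.crt > ca-bundle.crt
openssl verify -CAfile ca-bundle.crt server.crt   # prints "server.crt: OK"
```

The same bundle file would then be handed to curl as `--cacert ca-bundle.crt` instead of the single intermediate cert.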
What you expected to happen:
I should be able to create the cluster using an intermediate authority from any depth of the chain. This is useful for establishing a proper trust chain that can be shared amongst all of the deployments in my environment. In the case of a tool like PKS, this would mean having a trust hierarchy that chains all the way back to a single known root for each of my clusters.
How to reproduce it (as minimally and precisely as possible):

As an example the manifest might look like:

Environment:

Deployment (bosh -d <deployment> deployment): This is a development pre-release of what will go into 0.18.0.

Environment Info (bosh -e <environment> environment):
```
Name      bosh-thall-cfcr
UUID      55f6a6ad-6dc7-4290-b5ef-3f963f8ca2a2
Version   264.7.0 (00000000)
CPI       google_cpi
Features  compiled_package_cache: disabled
          config_server: enabled
          dns: disabled
          snapshots: disabled
User      admin
```
Kubernetes version (kubectl version): 1.10.4
Cloud provider (e.g. aws, gcp, vsphere): All.
The above example was, at least in part, the result of a bug in how Credhub generates intermediate and leaf certificates. That was fixed by cloudfoundry/credhub@35e66ff.
That said, kubo-release still needs the ability to construct the various cert files as chains. For example, kube-apiserver would need to use a server cert that includes every intermediate in the chain except the root.
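As a sketch of that chain construction (file names are illustrative, not kubo-release's actual paths): the serving bundle is the leaf followed by each intermediate, with the root omitted, and a client holding only the root can still verify it:

```shell
#!/bin/sh
# Illustrative sketch only; these are not kubo-release's real file names.
set -eu
cd "$(mktemp -d)"

# Throwaway root -> intermediate -> leaf chain.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout root.key -out root.crt -subj "/CN=demo-root"
printf 'basicConstraints=critical,CA:TRUE\n' > ca.ext
openssl req -newkey rsa:2048 -nodes \
  -keyout intermediate.key -out intermediate.csr -subj "/CN=demo-intermediate"
openssl x509 -req -days 1 -in intermediate.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -extfile ca.ext -out intermediate.crt
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=kube-apiserver"
openssl x509 -req -days 1 -in server.csr -CA intermediate.crt \
  -CAkey intermediate.key -CAcreateserial -out server.crt

# Serving bundle: the leaf first, then every intermediate, but NOT the root.
cat server.crt intermediate.crt > server-chain.pem

# A client that trusts only the root can verify the presented chain.
openssl verify -CAfile root.crt -untrusted server-chain.pem server.crt
```

The final `openssl verify` stands in for what a TLS client does: it trusts only the root and fills the gap with the intermediates supplied by the server's chain file.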