Add an OpenShift Deployment for an EMDT Traceability Traction Tenant Controller for use with the VC Traceability Test Suite #170
In order to determine the most appropriate home and access controls for the new controller, I'll need some more information about the TTTC itself, along with how it interacts with an agent and the TTS. Things like:
- For monitoring: are there any health endpoints on the TTTC we could utilize for health/uptime monitoring? What conditions need to be met in order to consider the service operational?
- EMDT has their own set of namespaces in OCP. Will their agent be hosted there, or is the plan to have them use a Traction tenant hosted in the main Traction environment?
Answers:
Let’s have a call about this.
Thanks @swcurran for setting up this issue. TTTC is an implementation of the w3c-ccg traceability specification which leverages Traction/ACA-Py as the backend service. Here are some answers to @WadeBarnes's questions:
- The TTTC will interact with the agent by requesting a Traction multi-tenant token and sending authenticated requests to the API.
- The TTTC will use the following endpoints from the Traction API:
- When deploying TTTC, you pass in a client_id/api_key as environment variables.
- Per tenant.
- The TTS is run every 24 hours through GitHub Actions (a sketch of such a workflow follows this comment).
- Here's the official procedure.
- I've added a
- As long as the Traction instance doesn't reset periodically; we do not want to have to resubmit the information to the TTS maintainers.
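For reference, a daily scheduled run like the one described above might look like this in a GitHub Actions workflow; the workflow name, runner, and commands are placeholders, not the actual TTS configuration:

```yaml
# Placeholder workflow illustrating a daily scheduled run; the repo layout,
# runner, and commands are assumptions, not the actual TTS configuration.
name: traceability-test-suite
on:
  schedule:
    - cron: "0 6 * * *"   # once every 24 hours, at 06:00 UTC
  workflow_dispatch: {}    # also allow manual runs
jobs:
  run-test-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "18"
      - run: npm ci
      - run: npm test   # assumed entry point for the conformance tests
```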
+1 to have a call, as I also would like to understand the deployment models we will aim for. I can also do a quick demonstration of how the app will interact with Traction. Here's the project; I have a branch for this specific deployment. From my understanding, the charts will live in a different repo and the deployment will be done mostly through GitHub Actions? A few questions that come to mind:
The deployment itself is a simple FastAPI server with a connection to a PostgreSQL database. Until v0.12.0 is available in a Traction instance, I also deploy an agent for verification (this agent does not need to be exposed).
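If the FastAPI server exposes a health endpoint, it could also answer the monitoring question above by backing OpenShift probes; a minimal sketch for the controller's container spec, assuming a hypothetical /healthz route on port 8080 (neither is confirmed against the TTTC code):

```yaml
# Fragment of the controller's container spec; the /healthz path and
# port 8080 are assumptions, not confirmed against the TTTC code.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /healthz   # should only return 200 once the DB connection is ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
```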
Reading back on this question, I think it would make the most sense if the TTTC instance is deployed in an EMDT namespace and uses a Traction tenant from the main Traction environment.
I don’t think this is an either/or. The plan is for them to use a Traction Tenant in Traction Dev (or Test or Prod as you see fit). The TTTC itself will need an OCP namespace and it should go wherever is easiest. Who manages the EMDT namespaces? What I don’t think we want is to use an OCP namespace managed by EMLI folks that know nothing about Digital Trust.
Typically the charts live with the application, provided they are somewhat generic (deploys to a K8S environment), and the values file (the environment specific settings) would be contained in a separate repo. What we try to avoid is imposing our specific infrastructure (BC Gov OCP platform specifics) on others. Traditionally our OCP templates have been contained in a separate repo because they are tailored to the BC Gov OCP environments. As we move to Helm charts we're consciously making them more generic.
What we have done for our projects is have a
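To illustrate that split: an environment-specific values file in the config repo might look roughly like the following, while the chart itself stays generic; every key and hostname here is a hypothetical placeholder, not the actual chart schema:

```yaml
# Hypothetical values-dev.yaml kept in the separate config repo;
# all keys and the hostname are placeholders, not the actual chart schema.
controller:
  image:
    tag: "0.1.0"
  route:
    host: tttc-dev.apps.example-cluster.gov.bc.ca   # environment-specific
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
agent:
  enabled: true   # per the comment above, only needed until Traction v0.12.0
```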
What is the expected lifespan for this project? If it is a prototype/demo and is expected to stay as such, we may want to use our demo namespaces in OCP rather than requesting a new set that will add to the list and be only partially used. I'm unsure about using the EMDT namespaces, as I don't know how much they're involved technically, especially if this is a prototype that may or may not evolve and be there long term. @PatStLouis is the service stateless, or do we need to provision storage in the form of a database? If so, we will want to connect it to a backup instance so we have the data if we ever need to move and restore it elsewhere.
Thanks for the information @WadeBarnes @esune. What you described is what I currently have. I usually template my
The lifespan of it will be however long BCGov wants to be published as a w3c-ccg traceability spec implementor. This instance will only be used for conformity/interoperability testing and demonstration. The demo namespace might be the way to go for the time being.
I have a PostgreSQL DB service in my architecture. This can be deployed along with the application, or we can just provide a connection URL to the application if you use an external DB service. We will need permanent storage for DID documents, OAuth client information, and status list credentials.
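Since the service is stateful, the chart will need a persistent volume for the database, ideally on a backup-capable storage class per the comment above; a generic sketch, where the name, size, and storage class are all assumptions to confirm for the target BC Gov cluster:

```yaml
# Generic PVC sketch for the database volume; the name, size, and storage
# class are assumptions to confirm for the target BC Gov cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tttc-postgresql-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: netapp-file-backup   # assumed backup-capable class
```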
@PatStLouis check out the
The values files for those can be found in a separate repo, here: https://github.com/bcgov/trust-over-ip-configurations/tree/main/helm-values
@esune @WadeBarnes Here's the current state of the charts. I based them on the Traction deployment and simplified some components. I also copied the chart release files. I will need some clarification on how we will manage TLS certificates to complete the ingress configuration. Only the controller service will need an ingress and be exposed publicly. The controller will communicate with the agent/db through their respective internal service endpoints. The agent does not need a connection to a ledger or a db; it's only used to verify proofs on JSON-LD credentials. Issuance will be made through the Traction instance. Let me know if network policies/service accounts are needed. The controller has a secret resource with all required environment variables which is injected into the deployment (sketched below); these just need to be populated when we are ready to deploy. For the domain, I think a traceability.interop.vonx.io
What are the suggested next steps? I'm available Friday for a session where we could proceed with the repo transfer/deployment.
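For context, a secret injected as environment variables typically looks like the following; the secret name, keys, and values are illustrative placeholders rather than the chart's actual contents:

```yaml
# Placeholder Secret; the name, keys, and values are illustrative only.
apiVersion: v1
kind: Secret
metadata:
  name: tttc-controller-env
type: Opaque
stringData:
  TRACTION_API_URL: "https://traction-api.example.com"      # placeholder
  TRACTION_TENANT_ID: "<tenant-id>"
  TRACTION_API_KEY: "<api-key>"
  POSTGRES_URI: "postgresql://user:pass@tttc-db:5432/tttc"  # placeholder
---
# Wired into the controller container roughly like so
# (Deployment trimmed to the relevant fields):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tttc-controller
spec:
  template:
    spec:
      containers:
        - name: controller
          envFrom:
            - secretRef:
                name: tttc-controller-env
```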
With the certs, we typically use BCDevOps/certbot to manage Let's Encrypt certificates on our Routes in OpenShift. Any Route labeled with
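As a sketch of what that opt-in looks like, a Route carrying a certbot label might be defined like this; the exact label key was elided above, so the one shown is an assumption to verify against BCDevOps/certbot (the host comes from the domain discussed earlier):

```yaml
# Sketch only: the label key below is an assumption to verify against
# BCDevOps/certbot; the host comes from the domain discussed earlier.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: tttc-controller
  labels:
    certbot-managed: "true"   # assumed opt-in label for certbot renewal
spec:
  host: traceability.interop.vonx.io
  to:
    kind: Service
    name: tttc-controller
  port:
    targetPort: http
  tls:
    termination: edge
```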
For network policies, one will need to be defined to allow ingress to the pod exposing the public endpoint, and one each for the inter-pod communications. Examples:
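Along those lines, the two policies might look roughly like this; the pod labels and the admin port are placeholders, not the actual chart values:

```yaml
# Sketch of the two policies described above; pod labels and the
# admin port are placeholders, not the actual chart values.
# 1) Allow ingress from the OpenShift router to the public controller pod.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-controller
spec:
  podSelector:
    matchLabels:
      app: tttc-controller
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              network.openshift.io/policy-group: ingress
---
# 2) Allow the controller pod to reach the agent's admin endpoint in-namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-controller-to-agent
spec:
  podSelector:
    matchLabels:
      app: tttc-agent
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: tttc-controller
      ports:
        - protocol: TCP
          port: 8031   # assumed ACA-Py admin port
```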
I'll add those; there was a comment claiming that if ingress is enabled, OpenShift configuration isn't required. From your last comment, will we need to deactivate the ingress and use OpenShift Routes instead to enable TLS?
I believe if you leave the
@i5okie please correct me if I have the process incorrect.
My vote is for
@WadeBarnes @esune @i5okie
@swcurran the controller is available at https://traceability.interop.vonx.io. It's currently pointing to a sandbox tenant, and I will run the test suites again tonight to make sure all is still running smoothly. In the meantime I'll submit a ticket for a dev/test tenant to use when we are ready to submit the implementation to the w3c-ccg.
Awesome — nice work!
@esune I managed to pass most of the conformance tests, but I'm getting gateway timeouts when verifying credentials and resolving DIDs. These are the two operations which rely on the deployed agent, so I suspect the controller is unable to talk with the agent admin endpoint. It might have to do with the network policy. Can we have a look?
@PatStLouis, it's an issue with the Network Policy. The controller pod is missing the expected
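For illustration, the fix amounts to making the controller's pod template carry the label the policy's podSelector expects; the actual label key was elided above, so this uses a placeholder:

```yaml
# Illustration only; the actual label key was elided above.
# The policy admits traffic from pods carrying a given label, so the
# controller's Deployment pod template must carry the same one:
spec:
  template:
    metadata:
      labels:
        app: tttc-controller   # placeholder for the expected label
```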
@WadeBarnes I confirm that it's working, thanks!
Thanks @WadeBarnes - you got to it before I did.
WTZ FTW 😁
Tagging folks: @WadeBarnes @esune @PatStLouis @krobinsonca
Patrick has developed a Traceability Traction Tenant Controller (TTTC - github reference to be added) that interacts with the Traceability Test Suite (TTS - link to be added). The intention of the TTS is that each participant stand up an active web component that can be used to test conformance with any other participant of the TTS. The purpose of this issue is to request cooperation in standing up an OpenShift workspace to host the TTTC such that it can be used by other TTS participants.
My thoughts on what needs to be done follow. I expect that others, especially Wade, Emiliano, and Patrick, will extend/update/correct this list to make it accurate and to enable the work to be done over the next short while. As I understand it, there is not a lot to do here, so the hope is that this can be done with a meeting or two, Discord discussions as needed, and everyone doing a little bit off the side of their desks. If this is a bigger thing, we'll determine that quickly.
Tasks:
Thoughts on other things to be done?
An interesting question is if Patrick needs access to the OpenShift instance. Ideally not, but will that just create an undue burden on others?