[DPE-6575] Add microk8s testing #21
Conversation
…ly calculate number of brokers
Just a small question I'd like to double-check. The rest looks fine. Thanks!
@pytest.fixture(scope="module")
async def microk8s(ops_test: OpsTest) -> SimpleNamespace:
praise: that's really nice! Well done!
Yeah this is excellent, I'll definitely co-opt it at some point.
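For context, a minimal sketch of what a module-scoped microk8s bootstrap fixture along these lines could look like; the fixture body is not part of this diff, and the snap/juju commands, the MICROK8S_CLOUD_NAME value, and the returned fields are assumptions rather than the PR's actual implementation:

# Hypothetical sketch only; not the fixture shipped in this PR.
import subprocess
from types import SimpleNamespace

import pytest
from pytest_operator.plugin import OpsTest

MICROK8S_CLOUD_NAME = "uk8s"  # assumed value; the real constant is defined in the test suite


@pytest.fixture(scope="module")
async def microk8s(ops_test: OpsTest) -> SimpleNamespace:
    # Install microk8s, wait for it to come up, and register it with the
    # current Juju controller as a k8s cloud.
    subprocess.run(["sudo", "snap", "install", "microk8s", "--classic"], check=True)
    subprocess.run(["sudo", "microk8s", "status", "--wait-ready"], check=True)
    kubeconfig = subprocess.run(
        ["sudo", "microk8s", "config"], check=True, capture_output=True, text=True
    ).stdout
    subprocess.run(
        ["juju", "add-k8s", MICROK8S_CLOUD_NAME, "--controller", ops_test.controller_name],
        input=kubeconfig,
        text=True,
        check=True,
    )
    # Hand the cloud name back so tests can add k8s models against it.
    return SimpleNamespace(cloud_name=MICROK8S_CLOUD_NAME)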
src/charm.py (outdated)
# We may be sitting behind k8s, in this case, we need to do a more careful calculation
# of the host count, and hence, the replication factor
if client and client.describe_cluster():
    replication_factor = len(client.describe_cluster().get("brokers", 1)) - 1
question: I would maybe cap it at a maximum of 3 (as long as we have capacity). If we have a deployment of 7 brokers, having a replication factor of 7 seems wild. Or am I missing something here?
Actually, we do have that limit, but later in the game. I can add it here instead; it makes more sense.
Looks good! Thanks!
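As an illustrative sketch of the cap agreed on above (not the final code in the PR; `client` and the "brokers" key follow the snippet in this diff, and 3 is the suggested maximum):

# Sketch only: still derive the broker count from cluster metadata,
# but never let the replication factor exceed 3.
cluster_info = client.describe_cluster() if client else {}
broker_count = len(cluster_info.get("brokers", []))
if broker_count:
    replication_factor = min(broker_count, 3)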
tests/integration/test_charm.py (outdated)
model_db = None


DEPLOY_MARKS = [
    (
        pytest.param(
            use_tls,
            cloud,
            id=str(use_tls) + f"-{cloud}",
            marks=pytest.mark.group(str(use_tls) + f"-{cloud}"),
        )
    )
    for use_tls in [True, False]
    for cloud in ["vm", MICROK8S_CLOUD_NAME]
]

K8S_MARKS = [
    (
        pytest.param(
            use_tls,
            cloud,
            id=str(use_tls) + f"-{cloud}",
            marks=pytest.mark.group(str(use_tls) + f"-{cloud}"),
        )
    )
    for use_tls in [True, False]
    for cloud in [MICROK8S_CLOUD_NAME]
]

VM_MARKS = [
    (
        pytest.param(
            use_tls,
            cloud,
            id=str(use_tls) + f"-{cloud}",
            marks=pytest.mark.group(str(use_tls) + f"-{cloud}"),
        )
    )
    for use_tls in [True, False]
    for cloud in ["vm"]
]
todo/nitpick: please could we move these to conftest so they're not clogging up the test file?
I am moving them to helpers.py (5087773).
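For illustration, once the lists live in helpers.py they could be consumed like this (the import path and test name are assumptions, not taken from the PR):

# Illustrative usage only; import path and test name are assumed.
import pytest

from .helpers import DEPLOY_MARKS


@pytest.mark.parametrize("use_tls,cloud", DEPLOY_MARKS)
async def test_deploy(ops_test, use_tls, cloud):
    # Each pytest.param above carries (use_tls, cloud) plus its group marker,
    # so one test body covers every TLS/cloud combination.
    ...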
This PR extends the integration tests to also cover the Kafka K8s charm, with and without TLS.
It also fixes an issue when a VM-based benchmark targets a k8s-based broker: we get an incorrect broker count because we only see a single endpoint, the NodePort one.
To address that, the PR extends the Kafka client lib to expose the brokers' metadata; with that metadata we can recover the correct number of brokers in both VM- and k8s-based deployments.
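A rough sketch of the metadata-based counting described here, assuming the client lib wraps kafka-python's KafkaAdminClient (the bootstrap endpoint is a placeholder):

# Sketch only, assuming kafka-python; the client lib extended in this PR may differ.
from kafka import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="<bootstrap-endpoint>:9092")  # placeholder
# describe_cluster() returns per-broker metadata, so the count does not depend
# on how many endpoints (e.g. a single NodePort) the benchmark can see.
broker_count = len(admin.describe_cluster().get("brokers", []))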