
Created Tests for Prompt Model #117

Open

wants to merge 2 commits into main

Conversation

isabellahoie

Overview

Created tests for the prompt model and modified the view to accept only JSON for POST requests. The tests are written with the goal of later reusing a similar structure for tests of the other models on Pear.

Changes Made

Modified prompt/tests.py to include tests for prompt model views and view controllers.
Modified prompt/views.py to require a JSON request.

Test Coverage

The changes themselves are test cases, so to exercise the prompt model, run the test file with ./manage.py test prompt.tests.

@chalo2000 chalo2000 left a comment (Member)

Great job on your first PR! Happy that you got 100% code coverage 😄 but now to fix some things up 😮. A higher-level comment I wanted to make is that your tests should be structured in the Arrange-Act-Assert format https://automationpanda.com/2020/07/07/arrange-act-assert-a-pattern-for-writing-good-tests/. It would be great to see each test case's body separated by two blank lines: the first separating the arrange and act steps, and the second separating the act and assert steps.

Once you've taken care of my other comments, most of your arrange step should live in the setUp function (the exception being getting the URL with Django's reverse function), the act step should be getting the response from calling the endpoint with the client, and the assert step should be the assertions you're making or a call to your assert helper (since you have a lot of common checks that are bloating up your test file).
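For illustration, a minimal sketch of one test laid out in that Arrange-Act-Assert shape, assuming a DRF-style APITestCase; the "prompts" URL name and the credentials call come from the diff, while the token value here is just a stand-in:

from django.urls import reverse
from rest_framework.test import APITestCase


class PromptTests(APITestCase):
    def setUp(self):
        # Shared arrange steps run before every test; how the token is obtained
        # is assumed here, not taken from the PR.
        self.user_token = "test-token"
        self.client.credentials(HTTP_AUTHORIZATION="Token " + self.user_token)

    def test_get_all_prompts(self):
        # Arrange: only per-test setup stays in the test body.
        url = reverse("prompts")

        # Act: call the endpoint through the test client.
        response = self.client.get(url, format="json")

        # Assert: spell out exactly what this test guarantees.
        self.assertEqual(response.status_code, 200)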

self.client.credentials(HTTP_AUTHORIZATION="Token " + self.user_token)

def populate(self):
"""populates the database"""

Capitalize first word

content = json.loads(response.content)
self.assertTrue(content.get("success"))
self.assertEqual(response.status_code, 201)
self.assertEqual(Prompt.objects.count(), 27)

We don't want to make a test checking for exactly 27 prompts since we'd have to change this case every time we add a new prompt. What we really care about is that we get more than 0 prompts since we're assuming that there is at least one prompt on Pear.
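A hedged sketch of the looser assertion being suggested, so the test keeps passing as new prompts are added:

# Assert only that populating created at least one prompt, not an exact count.
self.assertGreater(Prompt.objects.count(), 0)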

self.assertEqual(Prompt.objects.count(), 27)

def test_repopulate(self):
"""test repopulate"""

TODO implement

"""test repopulate"""

def test_get_all_prompts(self):
"""tests getting all the promts"""

misspelled prompts

self.assertEqual(response.status_code, 200)

def test_get_prompt_by_id(self):
"""tests getting a promt by a valid id"""

ditto misspell

data = json.loads(request.body)
except json.JSONDecodeError:
data = request.data
data = json.loads(request.body)

This is inconsistent with what you passed into CreatePromptController; make the code symmetric by pulling the json.loads(request.body) you passed directly into the controller out into its own data variable.
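For illustration, a sketch of the symmetric shape being asked for; the controller's argument list here is a placeholder, not the PR's actual signature:

# Parse the body once into its own variable, matching the other call site...
data = json.loads(request.body)
# ...and pass that variable into the controller rather than calling
# json.loads(request.body) inline.
return CreatePromptController(data).process()  # argument list assumed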

def test_populate_prompts(self):
"""tests that the code populates the 27 prompts in pear_prompts.txt"""

response = self.populate()

I see you use self.populate() a lot; why don't you put this in the setUp function which runs before each test?
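A minimal sketch of moving that repeated call into setUp, assuming the populate helper shown earlier in the diff:

def setUp(self):
    # Runs before every test, so each test starts from a populated database.
    self.populate()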

url = reverse("prompt", args=[1000])
response = self.client.get(url, format='json')
content = json.loads(response.content)
self.assertFalse(content.get("sucess"))

success is misspelled - how did this pass?

url = reverse("prompt", args=[1])
response = self.client.get(url, format='json')
content = json.loads(response.content)
self.assertTrue(content.get("success"))

I see this logic quite a lot; have you considered making a helper function that asserts the contents of content?

poulate = self.populate()
url = reverse("prompts")

data = {"question_name": "My favorite animal is ...",

This constant is redefined in a couple of functions; make it a constant at the beginning of the class and refer to it in your tests.
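A sketch of hoisting the repeated payload to a single class-level constant; the field shown is copied from the snippet above and the rest is assumed:

class PromptTests(APITestCase):
    # Defined once at the top of the class instead of inside each test.
    PROMPT_DATA = {
        "question_name": "My favorite animal is ...",
        # ...remaining fields from the original dict...
    }

Each test can then refer to self.PROMPT_DATA instead of redefining the dict.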

@chalo2000 chalo2000 left a comment (Member)

You definitely factored a lot of code out into helper functions, but you did your job too well. While this file is much shorter, it is also much harder to see how you're testing, because all of the test logic lives in a separate helper. Tests in the Arrange-Act-Assert format are very easy to understand: I can see what you're setting up, the straightforward act step, and what you're asserting about the result. It is fine for the arrange step to be handled by a helper when it is repeated (like logging in and populating the database), but the actual act and assert steps should be visible in the test itself. For example, when I look at test_get_all_prompts, the act is hidden behind the helper self.prompts_response("get"), and the assert step takes that along with a 200 status and two booleans whose meaning I have to look up in the helper.

Specifically, you should not have an assert_helper, prompt_response, or prompts_response helper. Take a look at the difference between DRY (don't repeat yourself), which is what you did, and DAMP (descriptive and meaningful phrases), which is preferable for tests (readability+++): https://enterprisecraftsmanship.com/posts/dry-damp-unit-tests/#:~:text=The%20DRY%20principle%20stands%20for,t%20duplicate%20the%20domain%20knowledge.
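In contrast to routing everything through prompts_response and assert_helper, a hedged sketch of the DAMP version of that test, with the act and assert steps written out in the body (the success field is taken from the snippets above):

def test_get_all_prompts(self):
    # Arrange
    url = reverse("prompts")

    # Act: the request happens right here, not inside a prompts_response helper.
    response = self.client.get(url, format="json")

    # Assert: each expectation is spelled out, so the test reads as documentation.
    self.assertEqual(response.status_code, 200)
    content = json.loads(response.content)
    self.assertTrue(content.get("success"))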

chalo2000 commented Nov 16, 2022

I noticed I said that you should make a helper for the asserts; the repeated step I do see in lots of your tests is checking that id, question_name, and question_placeholder are in the response. If you would like to factor that out into a function (which is fine), make sure the assert has a descriptive name. For example, you can make a helper called assertIns and pass it a list of the fields along with the response. I believe you can use assertContains on the response directly, so you don't need to get the data - maybe a call to a helper you define like assertContainsAll(["id", "question_name", "question_placeholder"]). You'll note in the article that DRY and DAMP don't have to be mutually exclusive; the point of tests is that they're readable and act as a source of documentation as well :)
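A hedged sketch of such a descriptively named helper; the name assertContainsAll comes from the comment above, and its body is an assumption built on Django's assertContains:

def assertContainsAll(self, response, fields):
    # Check that every expected field name appears in the response body.
    for field in fields:
        self.assertContains(response, field)

A test could then call self.assertContainsAll(response, ["id", "question_name", "question_placeholder"]) and still read clearly.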
