
chore: added article creation e2e test #1139

Conversation

JohnAllenTech
Contributor

✨ Codu Pull Request 💻

Fixes #(issue)

Adds to #468


Pull Request details

  • Authenticated E2E user writes a basic article and ensures that it can be published on mobile and desktop
  • Checks for bookmark and like icon on article page
  • Checks author is correct

Any Breaking changes

  • None

Associated Screenshots

[Image: Generated article]

@JohnAllenTech JohnAllenTech requested a review from a team as a code owner October 17, 2024 00:37

vercel bot commented Oct 17, 2024

@JohnAllenTech is attempting to deploy a commit to the Codú Team on Vercel.

A member of the Team first needs to authorize it.

Contributor

coderabbitai bot commented Oct 17, 2024

Walkthrough

The pull request introduces a new test case to the "Authenticated Articles Page" test suite in the e2e/articles.spec.ts file. This test, titled "Should write and publish an article," simulates user interactions for creating and publishing an article. Key actions include navigating to the main page, checking the visibility of the "New Post" link, and verifying the published article's details. Additionally, the playwright.config.ts file is updated to increase the number of worker processes in a Continuous Integration (CI) environment. Existing tests remain unchanged, maintaining coverage of other functionalities.

Changes

  • e2e/articles.spec.ts: Added a new test case for writing and publishing an article.
  • playwright.config.ts: Updated the workers property from process.env.CI ? 2 : undefined to process.env.CI ? 3 : undefined.
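Based on that summary, the relevant portion of playwright.config.ts would now read roughly as follows (a sketch of the excerpt, not the full configuration file):

```typescript
import { defineConfig } from "@playwright/test";

export default defineConfig({
  // Use 3 parallel workers on CI; locally, let Playwright pick a default.
  workers: process.env.CI ? 3 : undefined,
});
```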

Possibly related PRs

Poem

In the land of code, where rabbits play,
A new test hops in, brightening the day.
Writing and publishing, a joyful cheer,
Articles flourish, as we draw near.
With every click, our dreams take flight,
In the world of tests, everything feels right! 🐇✨



Contributor

coderabbitai bot left a comment

Actionable comments posted: 4

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 2dc9bb8 and ba784d9.

📒 Files selected for processing (1)
  • e2e/articles.spec.ts (1 hunks)
🧰 Additional context used
🔇 Additional comments (2)
e2e/articles.spec.ts (2)

165-176: Verify that the article content assertions are accurate

The test checks for specific text in the article, such as "Lorem ipsum dolor sit amet," which might be sensitive to changes in the content or formatting.

Ensure that the content you're asserting matches the actual rendered content. If the content includes dynamic elements or formatting, consider using more robust selectors or text matching strategies.


128-129: ⚠️ Potential issue

Ensure the user is authenticated before testing article publication

The test assumes that the user is authenticated but does not include any authentication steps. Ensure that the test logs in the user or that authentication is handled in a setup step.

To verify whether user authentication is handled elsewhere in the test suite, the review suggests running a script (collapsed in this view) that searches for authentication-related code within the test suite.
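A hedged sketch of what such a search might look like (the directory layout and patterns are assumptions, not the actual collapsed script; a stand-in file keeps the example self-contained):

```shell
# Create a stand-in spec file so the example is self-contained;
# in the real repo you would grep the e2e/ directory directly.
mkdir -p /tmp/e2e-auth-demo
printf 'test.use({ storageState: "playwright/.auth/user.json" });\n' \
  > /tmp/e2e-auth-demo/articles.spec.ts

# List spec files that reference common Playwright auth patterns.
grep -rln "storageState\|signIn\|login" /tmp/e2e-auth-demo
```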

Contributor

NiallJoeMaher left a comment

🌮🌮🌮 LGTM!


vercel bot commented Oct 17, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

codu: ✅ Ready (updated Oct 17, 2024 4:28pm UTC)

Contributor

coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
e2e/articles.spec.ts (3)

133-148: Consider adding an explicit wait after navigation

The navigation logic looks good and correctly handles both mobile and desktop scenarios. However, to improve test reliability, consider adding an explicit wait for an element on the article creation page after navigation.

Apply this diff to add an explicit wait:

 await page.waitForURL("http://localhost:3000/create");
+await page.waitForSelector('input[placeholder="Article title"]');

This ensures that the article creation form is fully loaded before proceeding with the test.


150-161: Add content validation after filling the form

The article creation process looks good overall. To enhance the test's robustness, consider adding validation steps after filling in the title and content.

Apply this diff to add content validation:

 await page.getByPlaceholder("Article title").fill(articleTitle);
+await expect(page.getByPlaceholder("Article title")).toHaveValue(articleTitle);

 await page
   .getByPlaceholder("Enter your content here 💖")
   .fill(articleContent);
+await expect(page.getByPlaceholder("Enter your content here 💖")).toHaveValue(articleContent);

This ensures that the content is correctly entered before proceeding with publication.


166-178: Consider verifying the full article content

The verification of the published article is thorough and covers all important aspects. To further enhance the test, consider adding a check for the full article content.

Apply this diff to add full content verification:

 await expect(page.getByText("Lorem ipsum dolor sit amet,")).toBeVisible();
+await expect(page.locator('article')).toContainText(articleContent);

This ensures that the entire article content is correctly published and displayed.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between ba784d9 and 37497e0.

📒 Files selected for processing (2)
  • e2e/articles.spec.ts (1 hunks)
  • playwright.config.ts (1 hunks)
🧰 Additional context used
📓 Learnings (1)
e2e/articles.spec.ts (1)
Learnt from: JohnAllenTech
PR: codu-code/codu#1139
File: e2e/articles.spec.ts:133-147
Timestamp: 2024-10-17T01:03:59.199Z
Learning: In `e2e/articles.spec.ts` (TypeScript Playwright tests), avoid refactoring duplicated code into helper functions unless the code is reused elsewhere.
🔇 Additional comments (4)
playwright.config.ts (1)

21-21: Approved: Increased workers in CI, but clarification needed.

The change to increase the number of workers from 2 to 3 in the CI environment can potentially improve test execution speed. However, I have a few points to consider:

  1. Could you please clarify the reason for this specific change? Is it necessary for the new article creation e2e tests, or is it a general performance improvement?

  2. Have you verified that the CI environment has sufficient resources to handle the additional worker without issues?

  3. Consider adding a comment in the code to explain why 3 workers are used in CI. This can help future maintainers understand the reasoning behind this configuration.

To ensure this change doesn't negatively impact the CI pipeline, please run the following command in your CI environment and verify the results:

This will help confirm that the CI environment can handle the increased number of workers.
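The command itself is collapsed in this view; one rough heuristic (an assumption, not the reviewer's actual command) is to check that the runner has at least as many CPUs as requested workers:

```shell
# Compare available CPUs to the requested Playwright worker count.
WORKERS=3
CPUS=$(getconf _NPROCESSORS_ONLN)
if [ "$CPUS" -ge "$WORKERS" ]; then
  echo "ok: $CPUS CPUs available for $WORKERS workers"
else
  echo "warning: only $CPUS CPUs for $WORKERS workers"
fi
```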

e2e/articles.spec.ts (3)

128-132: LGTM: Test setup and content definition

The test setup and content definition look good. Using constants for the article content and title improves maintainability.


162-164: LGTM: URL verification after publication

The URL verification after article publication looks good. Using a regex to match the dynamic article URL is an appropriate approach.


128-178: Overall assessment: Well-implemented E2E test for article creation and publication

This new test case is a valuable addition to the test suite, covering the entire process of writing and publishing an article for both mobile and desktop scenarios. The test is well-structured and includes appropriate verifications at each step.

Key strengths:

  1. Handles both mobile and desktop navigation paths.
  2. Verifies important elements of the published article.
  3. Uses regex for dynamic URL matching.

Suggested improvements:

  1. Add an explicit wait after navigation to the article creation page.
  2. Validate entered content in the article form.
  3. Verify the full article content after publication.

These enhancements will further improve the test's reliability and thoroughness.
