feat: Introduce ecosystem tests for popular plugins #127
Conversation
I also note that the selection criteria are similar to those outlined in the suggested…
My apologies, I started a review last week, left a bunch of comments, and then forgot to hit "Submit Review".
I think this outlines a good infrastructure, just looking for some clarifications.
Looks great to me! This is an excellent starting point. We can proceed to run it and see how it performs. 🚀🚀🚀
> A `test/ecosystem` directory will be created with a directory for each plugin.
> The `test:ecosystem` script will copy the contents of the provided `--plugin` directory into a clean `test/${plugin}-scratch` directory.
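A minimal sketch of that copy step, under assumed names (`example-plugin` and the fixture contents are illustrative, not from the RFC):

```shell
# Hypothetical sketch of the test:ecosystem copy step; plugin name is assumed.
set -e
cd "$(mktemp -d)"
plugin="example-plugin"
# Fixture directory checked into test/ecosystem/<plugin>:
mkdir -p "test/ecosystem/$plugin"
echo '{"private": true}' > "test/ecosystem/$plugin/package.json"
# Copy the fixture into a clean test/<plugin>-scratch directory:
rm -rf "test/$plugin-scratch"
mkdir -p "test/$plugin-scratch"
cp -R "test/ecosystem/$plugin/." "test/$plugin-scratch/"
```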
I just realized we're not really defining what "breakage" is anywhere. Are we just running `npm test` on each package with the local ESLint changes?
I wasn't planning on suggesting including source code from plugins in this proposal. To start I was thinking of just verifying that `npm lint` doesn't error out, regardless of rule reports. Meanwhile, the instigating issues mentioned in the RFC were all runtime crashes that would be caught by it.
Relying on tests makes me a little nervous. It'd be a lot slower (especially if the plugins have build steps) and we'd need to make sure none of them have tests that rely on specifics of rule reports.
My vote would be to just use the plugins as end-users until we have a breakage that would have been caught by source code level checks.
> To start I was thinking of just verifying that `npm lint` doesn't error out, regardless of rule reports.
Can you explain how that would work? Are you running `npm lint` in the ESLint repo? If so, does that mean we can only test plugins that ESLint itself uses?
That's what I intend by this:
rfcs/designs/2024-repo-ecosystem-plugin-tests/README.md
Lines 30 to 35 in 8fd5da4
1. Create a new directory containing:
   - `package.json`
   - `eslint.config.js` with the closest equivalent to an _"enable all rules"_ preset from the plugin
   - A small set of files known to be parsed and not cause lint reports with the plugin
2. Run a lint command (i.e. `npx eslint .`) in that directory
3. Assert that the lint command passed with 0 lint reports.
Is that not clear? Is there a different phrasing you'd suggest?
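A rough shell sketch of those three steps, with a stubbed `lint` function standing in for `npx eslint .` so the sketch is self-contained (the stub and file names are illustrative, not the RFC's tooling):

```shell
set -e
cd "$(mktemp -d)"
# 1. Create the fixture directory contents (stub config; a real fixture would
#    enable the plugin's closest "all rules" preset in eslint.config.js):
echo '{"private": true}' > package.json
printf 'export default [];\n' > eslint.config.js
printf 'const answer = 42;\n' > known-clean.js
# 2. Run a lint command; the real flow would run: npx eslint .
lint() { true; }
# 3. Assert the lint command passed with 0 lint reports (exit code 0):
if lint; then
    echo "ecosystem check passed"
else
    echo "ecosystem check failed" >&2
    exit 1
fi
```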
Got it now.
I'm a bit skeptical that this approach will give us the data that we want for a few reasons:

- We can't guarantee that code we'll come up with will properly exercise each rule.
- It's possible that rules could work fine when there are no violations but actually crash when there is a violation. If we're not testing all the code paths, we can't have high confidence that the rules actually work.
- Not all plugins have an `all` config that enables most of the rules.
- Even if we enable all rules, we won't have coverage of all possible options, which also leaves it possible that linting will pass but a different option that we aren't using might fail.
That's why I think running `npm test` for each plugin is a better approach. The tests for each rule are (mostly) guaranteed to exercise the important paths for both valid and invalid code.
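Under that approach, the runner would amount to a loop over cloned plugin checkouts, roughly like this sketch (the directory layout and plugin name are assumptions; the clone and link steps are shown only as comments since they need network access):

```shell
set -e
cd "$(mktemp -d)"
mkdir -p plugins/eslint-plugin-example
# A real runner would first clone each plugin and link the local ESLint build:
#   git clone <plugin repo> plugins/<plugin>
#   (cd plugins/<plugin> && npm install && npm link eslint)
for plugin in plugins/*/; do
    # The real runner would execute: (cd "$plugin" && npm test)
    echo "would test: $plugin"
done
```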
Co-authored-by: Milos Djermanovic <[email protected]>
> 2. On the `main` branch only
> 3. On all PRs targeting the `main` branch, alongside existing CI jobs
>
> Starting with a job separately from `main` ensures that unexpectedly high frequencies of breakages are caught early, without blocking `main` branch builds.
I'm a bit confused by this process. Is the intent that we end up with all three at the end? Or just number 3?
And I'm unclear on the value of number 2. Presumably, this is meant to run only after a PR is merged, but how will we be notified if the job fails?
> intent

My intent was that each step would replace the previous. As in, at the end, we end up with just 3, which is a superset of 2.

> how will we be notified

I was thinking the ❌ failing commit on the `main` branch.
I clarified the intent in c767f0f. Does that make sense?
> I was thinking the ❌ failing commit on the `main` branch.
The problem with this is that it's passive -- it requires someone looking at the GitHub repo Code tab. (And then it requires that someone care enough to click on it to see what's going on.) I feel like this will be easy to miss. Is there some way to have a notification of some sort?
The GitHub user that merged the PR gets a notification if CI checks fail on the `main` branch, so perhaps that user could take further steps and/or notify the rest of the team in the team channel.
> ## Detailed Design
>
> ### CI Job
Should we utilize eslint-remote-tester for testing against other repositories? If not, can we mention it under "alternatives considered" or at least mention it as prior art?
I know a number of popular plugins use it like:
Thanks for the ping @bmish. Here's a quick summary of what `eslint-remote-tester` does:

- In `eslint-remote-tester.config.js/ts` you configure the ESLint config (identical to ESLint's `overrideConfig`) and the repositories that should be cloned onto the file system.
- When run, it spins up x amount of `node:worker_threads` that each handle a single repository in parallel.
- A single thread clones the assigned repository and runs ESLint using the Node API against it. Results are reported back to the main thread.
- The main thread constructs markdown or plain-text files and writes them to the file system or outputs them to the CLI.

There are some examples of bugs it can find automatically listed here: AriPerkkio/eslint-remote-tester#3. I used to run it against the most popular community plugins for a while a couple of years ago.
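Based on that description, a config file might look like the following sketch (the repository list, rule set, and thread count are illustrative assumptions; the option names follow eslint-remote-tester's documented configuration):

```javascript
// eslint-remote-tester.config.js -- illustrative sketch, not the RFC's setup.
module.exports = {
    // GitHub "owner/repo" slugs to clone onto the file system and lint:
    repositories: ["eslint/eslint"],
    extensions: ["js"],
    // Number of node:worker_threads, each handling one repository at a time:
    concurrentTasks: 5,
    // Same shape as ESLint's overrideConfig:
    eslintConfig: {
        rules: { "no-empty": "error" },
    },
};
```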
> I know a number of popular plugins use it like:

Also worth mentioning: `eslint-plugin-unicorn`, `eslint-plugin-jest`, `eslint-plugin-testing-library` and `eslint-plugin-vitest`.
For the other ecosystem CI setups I would recommend checking how Vite and Vitest do this. There have also been some thoughts about making a generic ecosystem-ci that all JavaScript ecosystem packages could utilize. It would not be strictly tied to the Vite ecosystem like the current ones are.
TIL, thanks for the reference! I think it would actually make a lot of sense to use `eslint-remote-tester`. It does roughly what we're suggesting, but with much better parallelization and reporting. Nice! 🔥
I put up a PoC branch here: https://github.com/JoshuaKGoldberg/eslint/tree/eslint-remote-tester-poc/tests/remote.
I'm in favor, but am hesitant to change the RFC until the TSC weighs in.
Co-authored-by: Nicholas C. Zakas <[email protected]>
Summary
Adding a CI job to the `eslint/eslint` repo that checks changes against a small selection of third-party plugins.

Related Issues
eslint/eslint#19139