Magic Inspector


Your Open-Source AI Web Testing Agent. Auto Inspector is an autonomous AI agent that tests your website and reports the results based on your user stories.

Auto Inspector is made by the Magic Inspector team to change the way web testing is done.

Focus on planning your tests; we run them for you.

License version Docker Image CI


🌟 Give us some love by starring this repository! 🌟


Open-Source Web Testing AI Agent

Auto Inspector is fully open source (Apache 2.0), and Magic Inspector offers cloud hosting and dedicated enterprise-grade support.

Demo

GUI VERSION

demo-taia-ui2.mp4

CLI VERSION

User story: I can log in to my account with '[email protected]' and 'demopassword' and create a new test inside the Default Project. Once the test has been created, I can see the test editor.

auto-inspector-demo.mp4

How it works

(diagram illustrating how the agent works; image sourced from agentlabs.dev)

Getting Started

ℹ️ Note: Auto Inspector is currently in development and not yet ready for self-hosting. If you're looking for an enterprise-grade testing solution, check out our Cloud Version.

Auto Inspector is available as a CLI utility and as a web application.

  • The GUI web version is the easiest way to get started if you just want to play with the agent.
  • The CLI is better suited if you want to improve the agent and add new features to the core.

GUI Version

Prerequisites

Before you begin, ensure you have the following installed on your machine:

  • Docker
  • Docker Compose

You can download Docker from https://docs.docker.com/get-docker/ and Docker Compose from https://docs.docker.com/compose/install/.

Clone the repository

git clone https://github.com/magic-inspector/auto-inspector.git
cd auto-inspector

Add your OpenAI API key in your .env file

echo OPENAI_API_KEY="<replace-with-your-key>" >> .env
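
This should leave a .env file containing a single line like the one below. The value shown is a placeholder; OpenAI API keys start with sk-.

OPENAI_API_KEY="sk-your-key-here"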

Run the web application

make up

or to run in detached mode

make upd
make logs

This command will start the web application at http://localhost.
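
To check that the stack is up before opening a browser, you can send a quick request to the exposed port (assuming curl is available on your machine):

curl -I http://localhost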

CLI Version

Prerequisites

ℹ️ Note: Auto Inspector requires Node.js version 20 or higher.

Clone the repository and go to the backend folder

git clone https://github.com/magic-inspector/auto-inspector.git
cd auto-inspector/backend

npm install

Add your OpenAI API key

echo OPENAI_API_KEY="<replace-with-your-key>" >> .env

Run an example test case

npm run example:voyager

Run your own test case

npm run scenario -- --url="start-url" --user-story="As a user, I can <replace-with-your-user-story>"
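
For example, a concrete run could look like the following; the URL and user story here are placeholders for illustration, not values from the repository:

npm run scenario -- --url="https://www.wikipedia.org" --user-story="As a user, I can search for 'web testing' and see a list of results"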

Roadmap for a stable release

We're committed to improving the project; feel free to open an issue if you have any suggestions or feedback.

Component Status Features
Alpha release ✅
  • Release a first minimal version that is able to run a test
Add support for variables and secrets ✅
  • The agent can take variables and secrets from the user story (see the sketch below this roadmap)
  • Secrets are not displayed in the logs or sent to the LLM
Run multiple cases from a test file ✅
  • Check the npm run example:file command for more information
Interrupt actions when the DOM changes ✅
  • Interrupt the current action if the interactive elements change after an action
Wait for the page to stabilize before evaluation ✅
  • Wait for the domContentLoaded event to fire
  • Wait a minimal amount of time to make sure the page is stable
Manage completion at the action level ✅
  • Manage completion at the action level instead of the task level so the agent does not restart filling inputs over and over
Update the UI version to display steps in real time 🏗️
  • Update the UI to show the steps generated by the agent in real time
Add unit tests 🏗️
  • Add vitest to test the business logic
Manage multiple tabs 🏗️
  • Listen to tab events and manage the open tabs
Persist voyager results in a file 🏗️
  • Persist screenshots and results in a file for every test run
Refine user inputs 🏗️
  • Make sure the Manager Agent and the Evaluation Agent get distinct inputs so the Manager Agent does not try to adapt its behavior to the expected result
Provide a GUI 🏗️
  • Add the Docker configuration
  • Add a simple UI to create a test
Build a serious benchmark framework 🏗️
  • The only reliable way to improve the agent is to build a benchmark dedicated to web testing
Add an OpenAPI YAML spec and generate the frontend SDK dynamically 🏗️
  • Automatically add the OpenAPI YAML specification
  • Generate the frontend SDK dynamically based on the specification
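
To make the variables-and-secrets item above more concrete, here is a minimal TypeScript sketch of the idea. The helper names (interpolateUserStory, maskSecrets) and data shapes are hypothetical illustrations, not Auto Inspector's actual API: secret values are substituted into the user story so the agent can drive the browser, while anything that is logged or sent to the LLM only ever sees placeholders.

// Hypothetical sketch; names and shapes are illustrative, not Auto Inspector's actual API.
type Variables = Record<string, string>;
type Secrets = Record<string, string>;

// Substitute {{placeholders}} in the user story with real values before acting on the page.
function interpolateUserStory(story: string, vars: Variables, secrets: Secrets): string {
  return story.replace(/\{\{(\w+)\}\}/g, (_, key: string) => vars[key] ?? secrets[key] ?? `{{${key}}}`);
}

// Replace secret values with placeholders before logging or sending text to the LLM.
function maskSecrets(text: string, secrets: Secrets): string {
  return Object.entries(secrets).reduce(
    (masked, [name, value]) => masked.split(value).join(`<secret:${name}>`),
    text,
  );
}

const story = "I can log in with {{email}} and {{password}} and create a test in the Default Project.";
const variables: Variables = { email: "demo@example.com" };
const secrets: Secrets = { password: "demopassword" };

const forBrowser = interpolateUserStory(story, variables, secrets); // real values, used to drive the browser
const forLlm = maskSecrets(forBrowser, secrets);                    // placeholders only, safe to log
console.log(forLlm);
// => "I can log in with demo@example.com and <secret:password> and create a test in the Default Project."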


🌟 Give us some love by starring this repository! 🌟
