Merge branch 'main' into add-search-for-github
0xRaduan authored Dec 3, 2024
2 parents c6a2597 + f2e8f2e commit fe2a5ea
Showing 30 changed files with 2,307 additions and 149 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -88,12 +88,12 @@ Documentation improvements are always welcome:

## Community

- Participate in [GitHub Discussions](https://github.com/modelcontextprotocol/servers/discussions)
- Participate in [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions)
- Follow the [Code of Conduct](CODE_OF_CONDUCT.md)

## Questions?

- Check the [documentation](https://modelcontextprotocol.io)
- Ask in GitHub Discussions

Thank you for contributing to MCP Servers!
Thank you for contributing to MCP Servers!
5 changes: 5 additions & 0 deletions README.md
@@ -21,6 +21,11 @@ Each MCP server is implemented with either the [Typescript MCP SDK](https://gith
- **[Google Maps](src/google-maps)** - Location services, directions, and place details
- **[Fetch](src/fetch)** - Web content fetching and conversion for efficient LLM usage

## 🌎 Community Servers

- **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy, configure & interrogate your resources on the Cloudflare developer platform (e.g. Workers/KV/R2/D1)
- **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Interact with your crash reporting and real user monitoring data on your Raygun account

## 🚀 Getting Started

### Using MCP Servers in this Repository
4 changes: 3 additions & 1 deletion package.json
@@ -26,6 +26,8 @@
"@modelcontextprotocol/server-slack": "*",
"@modelcontextprotocol/server-brave-search": "*",
"@modelcontextprotocol/server-memory": "*",
"@modelcontextprotocol/server-filesystem": "*"
"@modelcontextprotocol/server-filesystem": "*",
"@modelcontextprotocol/server-everart": "*",
"@modelcontextprotocol/server-sequentialthinking": "*"
}
}
73 changes: 73 additions & 0 deletions src/everart/README.md
@@ -0,0 +1,73 @@
# EverArt MCP Server

Image generation server for Claude Desktop using EverArt's API.

## Install
```bash
npm install
export EVERART_API_KEY=your_key_here
```

## Config
Add to Claude Desktop config:
```json
{
"mcpServers": {
"everart": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-everart"],
"env": {
"EVERART_API_KEY": "your_key_here"
}
}
}
}
```

## Tools

### generate_image
Generates images with multiple model options. Opens result in browser and returns URL.

Parameters:
```typescript
{
prompt: string, // Image description
model?: string, // Model ID (default: "207910310772879360")
image_count?: number // Number of images (default: 1)
}
```

Models:
- 5000: FLUX1.1 (standard)
- 9000: FLUX1.1-ultra
- 6000: SD3.5
- 7000: Recraft-Real
- 8000: Recraft-Vector

All images generated at 1024x1024.
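On the client side, the short model IDs above can be kept in a small lookup table (illustrative only; the labels mirror the list above):

```javascript
// Illustrative lookup of the short EverArt model IDs listed above.
const EVERART_MODELS = {
  "5000": "FLUX1.1 (standard)",
  "9000": "FLUX1.1-ultra",
  "6000": "SD3.5",
  "7000": "Recraft-Real",
  "8000": "Recraft-Vector",
};

// e.g. label a generation request before sending it
function describeModel(id) {
  return EVERART_MODELS[id] ?? `unknown model (${id})`;
}
```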

Sample usage:
```javascript
const result = await client.callTool({
name: "generate_image",
arguments: {
prompt: "A cat sitting elegantly",
model: "7000",
image_count: 1
}
});
```

Response format:
```
Image generated successfully!
The image has been opened in your default browser.
Generation details:
- Model: 7000
- Prompt: "A cat sitting elegantly"
- Image URL: https://storage.googleapis.com/...
You can also click the URL above to view the image again.
```
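If a client needs the bare URL rather than the full message, it can be pulled out of the text response. This is a hypothetical helper, not part of the server; the regex assumes the `Image URL:` line format shown above:

```javascript
// Hypothetical helper: extract the image URL from the generate_image
// text response. Assumes the "Image URL: <url>" line format shown above.
function extractImageUrl(text) {
  const match = text.match(/Image URL:\s*(\S+)/);
  return match ? match[1] : null;
}
```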
160 changes: 160 additions & 0 deletions src/everart/index.ts
@@ -0,0 +1,160 @@
#!/usr/bin/env node
import EverArt from "everart";
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListToolsRequestSchema,
ListResourcesRequestSchema,
ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import fetch from "node-fetch";
import open from "open";

const server = new Server(
{
name: "example-servers/everart",
version: "0.2.0",
},
{
capabilities: {
tools: {},
resources: {}, // Required for image resources
},
},
);

if (!process.env.EVERART_API_KEY) {
console.error("EVERART_API_KEY environment variable is not set");
process.exit(1);
}

const client = new EverArt.default(process.env.EVERART_API_KEY);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
{
name: "generate_image",
description:
"Generate images using EverArt Models and returns a clickable link to view the generated image. " +
"The tool will return a URL that can be clicked to view the image in a browser. " +
"Available models:\n" +
"- 5000:FLUX1.1: Standard quality\n" +
"- 9000:FLUX1.1-ultra: Ultra high quality\n" +
"- 6000:SD3.5: Stable Diffusion 3.5\n" +
"- 7000:Recraft-Real: Photorealistic style\n" +
"- 8000:Recraft-Vector: Vector art style\n" +
"\nThe response will contain a direct link to view the generated image.",
inputSchema: {
type: "object",
properties: {
prompt: {
type: "string",
description: "Text description of desired image",
},
model: {
type: "string",
description:
"Model ID (5000:FLUX1.1, 9000:FLUX1.1-ultra, 6000:SD3.5, 7000:Recraft-Real, 8000:Recraft-Vector)",
default: "5000",
},
image_count: {
type: "number",
description: "Number of images to generate",
default: 1,
},
},
required: ["prompt"],
},
},
],
}));

server.setRequestHandler(ListResourcesRequestSchema, async () => {
return {
resources: [
{
uri: "everart://images",
mimeType: "image/png",
name: "Generated Images",
},
],
};
});

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
if (request.params.uri === "everart://images") {
return {
contents: [
{
uri: "everart://images",
mimeType: "image/png",
blob: "", // Empty since this is just for listing
},
],
};
}
throw new Error("Resource not found");
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
if (request.params.name === "generate_image") {
try {
const {
prompt,
model = "207910310772879360",
image_count = 1,
} = request.params.arguments as any;

// Use correct EverArt API method
const generation = await client.v1.generations.create(
model,
prompt,
"txt2img",
{
imageCount: image_count,
height: 1024,
width: 1024,
},
);

// Wait for generation to complete
const completedGen = await client.v1.generations.fetchWithPolling(
generation[0].id,
);

const imgUrl = completedGen.image_url;
if (!imgUrl) throw new Error("No image URL");

// Automatically open the image URL in the default browser
await open(imgUrl);

// Return a formatted message with the clickable link
return {
content: [
{
type: "text",
text: `Image generated successfully!\nThe image has been opened in your default browser.\n\nGeneration details:\n- Model: ${model}\n- Prompt: "${prompt}"\n- Image URL: ${imgUrl}\n\nYou can also click the URL above to view the image again.`,
},
],
};
} catch (error: unknown) {
console.error("Detailed error:", error);
const errorMessage =
error instanceof Error ? error.message : "Unknown error";
return {
content: [{ type: "text", text: `Error: ${errorMessage}` }],
isError: true,
};
}
}
throw new Error(`Unknown tool: ${request.params.name}`);
});

async function runServer() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("EverArt MCP Server running on stdio");
}

runServer().catch(console.error);
32 changes: 32 additions & 0 deletions src/everart/package.json
@@ -0,0 +1,32 @@
{
"name": "@modelcontextprotocol/server-everart",
"version": "0.1.0",
"description": "MCP server for EverArt API integration",
"license": "MIT",
"author": "Anthropic, PBC (https://anthropic.com)",
"homepage": "https://modelcontextprotocol.io",
"bugs": "https://github.com/modelcontextprotocol/servers/issues",
"type": "module",
"bin": {
"mcp-server-everart": "dist/index.js"
},
"files": [
"dist"
],
"scripts": {
"build": "tsc && shx chmod +x dist/*.js",
"prepare": "npm run build",
"watch": "tsc --watch"
},
"dependencies": {
"@modelcontextprotocol/sdk": "0.5.0",
"everart": "^1.0.0",
"node-fetch": "^3.3.2",
"open": "^9.1.0"
},
"devDependencies": {
"@types/node": "^20.11.0",
"shx": "^0.3.4",
"typescript": "^5.3.3"
}
}
10 changes: 10 additions & 0 deletions src/everart/tsconfig.json
@@ -0,0 +1,10 @@
{
"extends": "../../tsconfig.json",
"compilerOptions": {
"outDir": "./dist",
"rootDir": "."
},
"include": [
"./**/*.ts"
]
}
43 changes: 10 additions & 33 deletions src/fetch/README.md
@@ -2,20 +2,27 @@

A Model Context Protocol server that provides web content fetching capabilities. This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.

Presently the server only supports fetching HTML content.
The fetch tool will truncate the response, but by using the `start_index` argument, you can specify where to start the content extraction. This lets models read a webpage in chunks, until they find the information they need.

### Available Tools

- `fetch` - Fetches a URL from the internet and extracts its contents as markdown.
- `url` (string, required): URL to fetch
- `max_length` (integer, optional): Maximum number of characters to return (default: 5000)
- `start_index` (integer, optional): Start content from this character index (default: 0)
- `raw` (boolean, optional): Get raw content without markdown conversion (default: false)
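The chunked-reading flow described above can be sketched as a client-side loop. This is a hypothetical helper: `client` stands for any MCP client exposing `callTool`, and advancing `start_index` by the chunk length assumes the returned text length matches the slice taken from the page:

```javascript
// Sketch: read a long page in chunks via repeated fetch calls,
// advancing start_index until a short (final) chunk is returned.
async function fetchAll(client, url, maxLength = 5000) {
  let startIndex = 0;
  const chunks = [];
  for (;;) {
    const result = await client.callTool({
      name: "fetch",
      arguments: { url, max_length: maxLength, start_index: startIndex },
    });
    const text = result.content?.[0]?.text ?? "";
    if (!text) break; // nothing left to read
    chunks.push(text);
    if (text.length < maxLength) break; // short chunk => end of page
    startIndex += maxLength; // advance past what we just read
  }
  return chunks.join("");
}
```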

### Prompts

- **fetch**
- Fetch a URL and extract its contents as markdown
- Argument: `url` (string, required): URL to fetch
- Arguments:
- `url` (string, required): URL to fetch

## Installation

Optionally, install Node.js; this will cause the fetch server to use a different, more robust HTML simplifier.

### Using uv (recommended)

When using [`uv`](https://docs.astral.sh/uv/) no specific installation is needed. We will
@@ -67,36 +74,6 @@ Add to your Claude settings:
```
</details>

### Configure for Zed

Add to your Zed settings.json:

<details>
<summary>Using uvx</summary>

```json
"context_servers": [
"mcp-server-fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
],
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"context_servers": {
"mcp-server-fetch": {
"command": "python",
"args": ["-m", "mcp_server_fetch"]
}
},
```
</details>

### Customization - robots.txt

By default, the server will obey a website's robots.txt file if the request came from the model (via a tool), but not if
@@ -105,7 +82,7 @@ the request was user initiated (via a prompt). This can be disabled by adding th

### Customization - User-agent

By default, depending on if the request came from the model (via a tool), or was user initiated (via a prompt), the
By default, depending on if the request came from the model (via a tool), or was user initiated (via a prompt), the
server will use either the user-agent
```
ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)