Add the recognition process and README
Areedd committed Dec 11, 2024
1 parent 7b5f826 commit a4eccaa
Showing 2 changed files with 97 additions and 9 deletions.
60 changes: 60 additions & 0 deletions Plugin_for_Chrome/README.md
@@ -0,0 +1,60 @@
# Plugin_for_Chrome

## Overview

`Plugin_for_Chrome` is a Chrome extension project for detecting phishing websites. When the user presses the configured keyboard shortcut or clicks the extension button, the plugin automatically captures the current page's URL and a screenshot and sends them to the server for phishing detection. The server is built with Flask; it loads the Phishpedia model to perform recognition and returns the detection result.

## Directory Structure

```
Plugin_for_Chrome/
├── client/
│   ├── background.js    # Background logic for the extension, including the keyboard shortcut and button-click handling.
│   ├── manifest.json    # Chrome extension configuration file.
│   └── popup/
│       ├── popup.html   # HTML for the extension popup page.
│       ├── popup.js     # JavaScript for the extension popup page.
│       └── popup.css    # CSS for the extension popup page.
└── server/
    └── app.py           # Main Flask server program; handles client requests and calls the Phishpedia model for recognition.
```

## Installation and Usage

### Front End

1. Open Chrome and go to `chrome://extensions/`.
2. Enable Developer mode.
3. Click "Load unpacked" and select the `Plugin_for_Chrome/client` directory (the folder that contains `manifest.json`).

### Back End

1. Change into the `server` directory:
   ```sh
   cd Plugin_for_Chrome/server
   ```
2. Install the required dependencies:
   ```sh
   pip install flask flask_cors
   ```
3. Start the Flask service (a quick check of the endpoint is sketched after these steps):
   ```sh
   python app.py
   ```
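Once the service is running, you can optionally exercise the `/analyze` endpoint by hand. The sketch below is illustrative only: it assumes the server is listening on `http://localhost:5000`, that the `requests` package is installed (`pip install requests`), and that `test.png` is any screenshot file you have locally; the request and response field names follow `server/app.py`.

```python
# Illustrative smoke test for the /analyze endpoint (not part of the extension).
# Assumptions: server running on localhost:5000, `requests` installed,
# and a local screenshot file named test.png.
import base64
import requests

with open("test.png", "rb") as f:
    # app.py strips everything before the first ',' so send a data-URL prefix.
    screenshot = "data:image/png;base64," + base64.b64encode(f.read()).decode("ascii")

payload = {"url": "https://example.com/login", "screenshot": screenshot}
resp = requests.post("http://localhost:5000/analyze", json=payload, timeout=120)
print(resp.status_code)
print(resp.json())  # expected keys: isPhishing, legitUrl, confidence
```

A successful call returns JSON with `isPhishing`, `legitUrl`, and `confidence`.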

### Using the Extension

1. In Chrome, press the shortcut `Ctrl+Shift+Y` or click the extension's toolbar button.
2. The extension automatically captures the current page's URL and a screenshot and sends them to the server for detection.
3. The server returns the detection result, and the extension shows whether the page is a phishing site and, if it is, the corresponding legitimate website.


## Notes

- Make sure the server is running locally and listening on the default port 5000.
- The extension and the server must run in the same network environment (see the sketch below for exposing the Flask server beyond localhost).
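If the browser running the extension and the Flask server are on different machines, the server cannot bind only to localhost. The actual startup line of `server/app.py` is cut off in the diff below, so the following is only a generic Flask sketch (not the project's code) of listening on all interfaces while keeping the default port 5000:

```python
# Generic Flask startup sketch (NOT the project's actual code; the real
# app.run(...) line in server/app.py is truncated in the diff below).
from flask import Flask

app = Flask(__name__)

@app.route("/ping")
def ping():
    # Trivial route so this sketch is runnable and testable on its own.
    return "pong"

if __name__ == "__main__":
    # host="0.0.0.0" listens on all interfaces so a browser on another machine
    # on the same network can reach it; port 5000 is what the extension expects.
    app.run(host="0.0.0.0", port=5000)
```

Flask's built-in development server is fine for local testing; anything exposed beyond a trusted LAN should run behind a proper WSGI server.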

## Contributing

Feel free to submit issues and contribute code!
46 changes: 37 additions & 9 deletions Plugin_for_Chrome/server/app.py
@@ -3,42 +3,70 @@
import base64
from io import BytesIO
from PIL import Image
from datetime import datetime
import os
import sys

current_dir = os.path.dirname(os.path.realpath(__file__))
root_dir = os.path.abspath(os.path.join(current_dir, os.pardir))
root_dir = os.path.abspath(os.path.join(root_dir, os.pardir))
sys.path.append(root_dir)

from phishpedia import PhishpediaWrapper
from phishpedia import result_file_write

app = Flask(__name__)
CORS(app)

-# Model-loading code to be added here later
-def load_model():
-    # TODO: load the recognition model
-    pass

# Initialize the model when the application is created
with app.app_context():
-    load_model()
    log_dir = os.path.join(current_dir, 'logs')
    os.makedirs(log_dir, exist_ok=True)
    global phishpedia_cls
    phishpedia_cls = PhishpediaWrapper()

@app.route('/analyze', methods=['POST'])
def analyze():
    try:
        print('Request received')
        data = request.get_json()
        url = data.get('url')
        screenshot_data = data.get('screenshot')

        # Decode the Base64 image data and save it as a temporary screenshot file
        image_data = base64.b64decode(screenshot_data.split(',')[1])
        image = Image.open(BytesIO(image_data))
        screenshot_path = 'temp_screenshot.png'
        image.save(screenshot_path, format='PNG')

        # Call the Phishpedia model to run recognition on the URL and screenshot
        phish_category, pred_target, matched_domain, \
            plotvis, siamese_conf, pred_boxes, \
            logo_recog_time, logo_match_time = phishpedia_cls.test_orig_phishpedia(url, screenshot_path, None)

-        # TODO: add the recognition logic here
        # Append the detection result to a per-day log file
        today = datetime.now().strftime('%Y%m%d')
        log_file_path = os.path.join(log_dir, f'{today}_results.txt')

        try:
            with open(log_file_path, "a+", encoding='ISO-8859-1') as f:
                result_file_write(f, current_dir, url, phish_category, pred_target, matched_domain, siamese_conf,
                                  logo_recog_time, logo_match_time)
        except UnicodeError:
            with open(log_file_path, "a+", encoding='utf-8') as f:
                result_file_write(f, current_dir, url, phish_category, pred_target, matched_domain, siamese_conf,
                                  logo_recog_time, logo_match_time)
-        # For now, return example data
        # Map the model outputs to the response fields
        result = {
-            "isPhishing": False,
-            "legitUrl": None,
-            "confidence": 0.95
            "isPhishing": bool(phish_category),
            "legitUrl": pred_target,
            "confidence": float(siamese_conf)
        }

        return jsonify(result)

    except Exception as e:
        print(e)
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':