|Module Name|falsr_c|
|---|---|
|Category|Image editing|
|Network|falsr_c|
|Dataset|DIV2K|
|Fine-tuning supported or not|No|
|Module Size|4.4MB|
|Data indicators|PSNR 37.66|
|Latest update date|2021-02-26|

## I. Basic Information

### Module Introduction

Falsr_c is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model treats super resolution as a multi-objective problem and uses an elastic search strategy based on a hybrid controller to improve performance. It performs super resolution with a scale factor of x2.

For more information, please refer to: falsr_c

## II. Installation

### 1. Environmental Dependence

- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
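
As a quick sanity check, the minimal sketch below prints the installed versions; it assumes both packages expose a `__version__` attribute.

```python
import paddle
import paddlehub

# Print the installed versions to confirm they meet the requirements above.
print('paddlepaddle:', paddle.__version__)
print('paddlehub   :', paddlehub.__version__)
```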

### 2. Installation

```shell
$ hub install falsr_c
```

In case of any problems during installation, please refer to: Windows_Quickstart | Linux_Quickstart | Mac_Quickstart

## III. Module API Prediction

### 1. Command line Prediction

```shell
$ hub run falsr_c --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: PaddleHub Command Line Instruction

### 2. Prediction Code Example

```python
import cv2
import paddlehub as hub

sr_model = hub.Module(name='falsr_c')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=True)
print(res[0]['data'])
```
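
Building on the example above, the sketch below shows one way to inspect and save the returned array. It assumes `res[0]['data']` is a BGR image in the 0-255 range whose height and width are twice those of the input, so it is clipped and cast to uint8 before writing with OpenCV.

```python
import cv2
import numpy as np
import paddlehub as hub

sr_model = hub.Module(name='falsr_c')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=False)

sr = res[0]['data']
# The model upscales by a factor of 2, so the output should be
# roughly twice the input height and width.
print('input :', im.shape)
print('output:', sr.shape)

# Clip to the valid pixel range and convert to uint8 before saving;
# this assumes the returned array is a BGR image in the 0-255 range.
sr_uint8 = np.clip(sr, 0, 255).astype('uint8')
cv2.imwrite('falsr_c_result.png', sr_uint8)
```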

### 3. API

```python
def reconstruct(images=None, paths=None, use_gpu=False, visualization=False, output_dir="falsr_c_output")
```

Prediction API.

**Parameters**

- images (list[numpy.ndarray]): Image data, ndarray.shape is in the format [H, W, C], BGR.
- paths (list[str]): Image path.
- use_gpu (bool): Whether to use GPU. Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.
- visualization (bool): Whether to save the recognition results as picture files.
- output_dir (str): Save path of images, "falsr_c_output" by default.

**Return**

- res (list[dict]): The list of model results, where each element is a dict with the following fields:
  - save_path (str, optional): Save path of the result; save_path is '' if no image is saved.
  - data (numpy.ndarray): Result of super resolution.
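
A short usage sketch based on the signature above, with an illustrative image path: it runs prediction from `paths` and reads the documented `save_path` and `data` fields from each result.

```python
import paddlehub as hub

sr_model = hub.Module(name='falsr_c')

# Run prediction from image paths instead of in-memory arrays and let the
# module write the visualized results into a custom output directory.
res = sr_model.reconstruct(paths=['/PATH/TO/IMAGE'],
                           visualization=True,
                           output_dir='falsr_c_output')

for item in res:
    # save_path is '' when no image is saved.
    print(item.get('save_path', ''), item['data'].shape)
```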

```python
def save_inference_model(dirname)
```

Save the model to the specified path.

**Parameters**

- dirname: Model save path.
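
A minimal export sketch follows; the directory name is only a placeholder.

```python
import paddlehub as hub

sr_model = hub.Module(name='falsr_c')
# Export the inference model to a local directory (placeholder path).
sr_model.save_inference_model(dirname='./falsr_c_inference_model')
```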

## IV. Server Deployment

PaddleHub Serving can deploy an online service of super resolution.

### Step 1: Start PaddleHub Serving

Run the startup command:
```shell
$ hub serving start -m falsr_c
```

The service API is now deployed; the default port number is 8866.

**NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.

### Step 2: Send a predictive request

With a configured server, use the following lines of code to send the prediction request and obtain the result:

```python
import base64
import json

import cv2
import numpy as np
import requests


def cv2_to_base64(image):
    # Encode an OpenCV image as JPEG, then base64 for the JSON payload.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


def base64_to_cv2(b64str):
    # Decode the base64 string returned by the service back into an image.
    data = base64.b64decode(b64str.encode('utf8'))
    data = np.frombuffer(data, np.uint8)
    data = cv2.imdecode(data, cv2.IMREAD_COLOR)
    return data


org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/falsr_c"
r = requests.post(url=url, headers=headers, data=json.dumps(data))

sr = base64_to_cv2(r.json()["results"][0]['data'])
cv2.imwrite('falsr_c_X2.png', sr)
print("save image as falsr_c_X2.png")
```
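
Continuing from the client code above (same helper functions, headers, and url), the sketch below sends several images in one request. It assumes the service returns one entry in `results` per input image, in the same order, mirroring the single-image case.

```python
# Hypothetical multi-image request, reusing cv2_to_base64, base64_to_cv2,
# headers and url from the snippet above; assumes one result per input image.
paths = ['/PATH/TO/IMAGE1', '/PATH/TO/IMAGE2']
data = {'images': [cv2_to_base64(cv2.imread(p)) for p in paths]}
r = requests.post(url=url, headers=headers, data=json.dumps(data))
for i, item in enumerate(r.json()["results"]):
    cv2.imwrite('falsr_c_X2_%d.png' % i, base64_to_cv2(item['data']))
```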

## V. Gradio APP

Starting with PaddleHub 2.3.1, the Gradio APP for falsr_c can be accessed in the browser at http://127.0.0.1:8866/gradio/falsr_c.

## VI. Release Note

* 1.0.0

  First release

* 1.1.0

  Remove Fluid API

* 1.2.0

  Add Gradio APP support.

  ```shell
  $ hub install falsr_c==1.2.0
  ```