diff --git a/modules/image/Image_editing/colorization/deoldify/README_en.md b/modules/image/Image_editing/colorization/deoldify/README_en.md index 159a7f293..cbfcd6078 100644 --- a/modules/image/Image_editing/colorization/deoldify/README_en.md +++ b/modules/image/Image_editing/colorization/deoldify/README_en.md @@ -2,7 +2,7 @@ | Module Name |deoldify| | :--- | :---: | -|Category|image editing| +|Category|Image editing| |Network |NoGAN| |Dataset|ILSVRC 2012| |Fine-tuning supported or not |No| @@ -22,7 +22,7 @@ - ### Module Introduction - - deoldify is a color rendering model for images and videos, which can restore color for black and white photos and videos. + - Deoldify is a color rendering model for images and videos, which can restore color for black and white photos and videos. - For more information, please refer to: [deoldify](https://github.com/jantic/DeOldify) @@ -36,9 +36,9 @@ - NOTE: This Module relies on ffmpeg, Please install ffmpeg before using this Module. - ```shell - $ conda install x264=='1!152.20180717' ffmpeg=4.0.2 -c conda-forge - ``` + ```shell + $ conda install x264=='1!152.20180717' ffmpeg=4.0.2 -c conda-forge + ``` - ### 2、Installation - ```shell $ hub install deoldify ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction - ### 1、Prediction Code Example - ```python - import paddlehub as hub + - ```python + import paddlehub as hub - model = hub.Module(name='deoldify') - model.predict('/PATH/TO/IMAGE/OR/VIDEO') - ``` + model = hub.Module(name='deoldify') + model.predict('/PATH/TO/IMAGE/OR/VIDEO') + ``` - ### 2、API @@ -69,32 +69,32 @@ - Prediction API. - - **Parameter** + - **Parameter** - - input (str): image path. + - input (str): Image path. - - **Return** + - **Return** - - If input is image path, the output is: - - pred_img(np.ndarray): image data, ndarray.shape is in the format [H, W, C], BGR; - - out_path(str): save path of images. + - If input is image path, the output is: + - pred_img(np.ndarray): Image data, ndarray.shape is in the format [H, W, C], BGR. + - out_path(str): Save path of images. - - If input is video path, the output is : - - frame_pattern_combined(str): save path of frames from output video; - - vid_out_path(str): save path of output video. + - If input is video path, the output is: + - frame_pattern_combined(str): Save path of frames from output video. + - vid_out_path(str): Save path of output video. - ```python def run_image(self, img): ``` - Prediction API for image. - - **Parameter** + - **Parameter** - - img (str|np.ndarray): image data, str or ndarray. ndarray.shape is in the format [H, W, C], BGR. + - img (str|np.ndarray): Image data, str or ndarray. ndarray.shape is in the format [H, W, C], BGR. - - **Return** + - **Return** - - pred_img(np.ndarray): ndarray.shape is in the format [H, W, C], BGR. + - pred_img(np.ndarray): Prediction result, ndarray.shape is in the format [H, W, C], BGR.
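- As a quick orientation, `run_image` can be dropped into an OpenCV pipeline like this (a minimal sketch based only on the API documented above; the file paths are placeholders):

  - ```python
    import cv2
    import paddlehub as hub

    model = hub.Module(name='deoldify')

    # run_image accepts an image path or a BGR ndarray of shape [H, W, C]
    gray_im = cv2.imread('/PATH/TO/GRAY/IMAGE')
    pred_img = model.run_image(gray_im)

    # the returned colorized image uses the same [H, W, C] BGR layout
    cv2.imwrite('/PATH/TO/SAVE/IMAGE', pred_img)
    ```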
- ```python def run_video(self, video): @@ -103,12 +103,12 @@ - **Parameter** - - video(str): video path. + - video(str): Video path. - **Return** - - frame_pattern_combined(str): save path of frames from output video; - - vid_out_path(str): save path of output video. + - frame_pattern_combined(str): Save path of frames from output video. + - vid_out_path(str): Save path of output video. ## IV. Server Deployment @@ -120,44 +120,44 @@ - Run the startup command: - - ```shell - $ hub serving start -m deoldify - ``` + - ```shell + $ hub serving start -m deoldify + ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. - ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result. - - ```python - import requests - import json - import base64 - - import cv2 - import numpy as np - - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - # Send an HTTP request - org_im = cv2.imread('/PATH/TO/ORIGIN/IMAGE') - data = {'images':cv2_to_base64(org_im)} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/deoldify" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) - img = base64_to_cv2(r.json()["results"]) - cv2.imwrite('/PATH/TO/SAVE/IMAGE', img) - ``` + - ```python + import requests + import json + import base64 + + import cv2 + import numpy as np + + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') + def base64_to_cv2(b64str): + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + # Send an HTTP request + org_im = cv2.imread('/PATH/TO/ORIGIN/IMAGE') + data = {'images':cv2_to_base64(org_im)} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/deoldify" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) + img = base64_to_cv2(r.json()["results"]) + cv2.imwrite('/PATH/TO/SAVE/IMAGE', img) + ``` ## V. 
Release Note diff --git a/modules/image/Image_editing/colorization/photo_restoration/README_en.md b/modules/image/Image_editing/colorization/photo_restoration/README_en.md index 0f807f6cf..1ff585bdd 100644 --- a/modules/image/Image_editing/colorization/photo_restoration/README_en.md +++ b/modules/image/Image_editing/colorization/photo_restoration/README_en.md @@ -2,7 +2,7 @@ |Module Name|photo_restoration| | :--- | :---: | -|Category|image editing| +|Category|Image editing| |Network|deoldify and realsr| |Fine-tuning supported or not|No| |Module Size |64MB+834MB| @@ -47,8 +47,8 @@ $ hub install photo_restoration ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -56,7 +56,7 @@ - ### 1、Prediction Code Example - ```python + - ```python import cv2 import paddlehub as hub @@ -68,7 +68,7 @@ - ### 2、API - ```python + - ```python def run_image(self, input, model_select= ['Colorization', 'SuperResolution'], @@ -79,16 +79,16 @@ - **Parameter** - - input (numpy.ndarray|str): image data,numpy.ndarray or str. ndarray.shape is in the format [H, W, C], BGR; + - input (numpy.ndarray|str): Image data, numpy.ndarray or str. ndarray.shape is in the format [H, W, C], BGR. - model_select (list\[str\]): Mode selection,\['Colorization'\] only colorize the input image, \['SuperResolution'\] only increase the image resolution; default is \['Colorization', 'SuperResolution'\]。 - - save_path (str): save path, default is 'photo_restoration'. + - save_path (str): Save path, default is 'photo_restoration'. - **Return** - - output (numpy.ndarray): restoration result,ndarray.shape is in the format [H, W, C], BGR. + - output (numpy.ndarray): Restoration result, ndarray.shape is in the format [H, W, C], BGR. ## IV. Server Deployment @@ -103,15 +103,15 @@ $ hub serving start -m photo_restoration ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
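- Before moving on to the request example in Step 2, note that the `model_select` argument documented in the API section above can also be exercised locally — a brief sketch (paths are placeholders; the argument values follow the parameter description above):

  - ```python
    import cv2
    import paddlehub as hub

    model = hub.Module(name='photo_restoration')
    im = cv2.imread('/PATH/TO/IMAGE')

    # colorize only
    color_only = model.run_image(input=im, model_select=['Colorization'])

    # increase resolution only
    sr_only = model.run_image(input=im, model_select=['SuperResolution'])

    # default: colorize, then super-resolve
    restored = model.run_image(input=im)
    ```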
- ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import base64 diff --git a/modules/image/Image_editing/colorization/user_guided_colorization/README_en.md b/modules/image/Image_editing/colorization/user_guided_colorization/README_en.md index b968f4008..8e17592c8 100644 --- a/modules/image/Image_editing/colorization/user_guided_colorization/README_en.md +++ b/modules/image/Image_editing/colorization/user_guided_colorization/README_en.md @@ -24,7 +24,7 @@ - ### Module Introduction - - user_guided_colorization is a colorization model based on "Real-Time User-Guided Image Colorization with Learned Deep Priors",this model uses pre-supplied coloring blocks to color the gray image. + - User_guided_colorization is a colorization model based on "Real-Time User-Guided Image Colorization with Learned Deep Priors", this model uses pre-supplied coloring blocks to color the grayscale image. ## II. Installation @@ -40,8 +40,8 @@ $ hub install user_guided_colorization ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -50,6 +50,8 @@ ```shell $ hub run user_guided_colorization --input_path "/PATH/TO/IMAGE" ``` + + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example ```python @@ -69,6 +71,7 @@ - Steps: - Step1: Define the data preprocessing method + - ```python import paddlehub.vision.transforms as T @@ -77,7 +80,7 @@ T.RGB2LAB()], to_rgb=True) ``` - - `transforms` The data enhancement module defines lots of data preprocessing methods. Users can replace the data preprocessing methods according to their needs. + - `transforms`: The data augmentation module defines lots of data preprocessing methods. Users can replace the data preprocessing methods according to their needs. - Step2: Download the dataset - ```python @@ -86,9 +89,9 @@ color_set = Canvas(transform=transform, mode='train') ``` - * `transforms`: data preprocessing methods. + * `transforms`: Data preprocessing methods. * `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`. - * `hub.datasets.Canvas()` The dataset will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory under the user directory. + * `hub.datasets.Canvas()`: The dataset will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory under the user directory. - Step3: Load the pre-trained model - ```python @@ -97,7 +100,7 @@ model = hub.Module(name='user_guided_colorization', load_checkpoint=None) model.set_config(classification=True, prob=1) ``` - * `name`: model name. + * `name`: Model name. * `load_checkpoint`: Whether to load the self-trained model, if it is None, load the provided parameters.
* `classification`: The model is trained by two mode. At the beginning, `classification` is set to True, which is used for shallow network training. In the later stage of training, set `classification` to False, which is used to train the output layer of the network. * `prob`: The probability that a priori color block is not added to each input image, the default is 1, that is, no prior color block is added. For example, when `prob` is set to 0.9, the probability that there are two a priori color blocks on a picture is(1-0.9)*(1-0.9)*0.9=0.009. @@ -115,20 +118,20 @@ - `Trainer` mainly control the training of Fine-tune, including the following controllable parameters: - * `model`: Optimized model; - * `optimizer`: Optimizer selection; - * `use_vdl`: Whether to use vdl to visualize the training process; - * `checkpoint_dir`: The storage address of the model parameters; - * `compare_metrics`: The measurement index of the optimal model; + * `model`: Optimized model. + * `optimizer`: Optimizer selection. + * `use_vdl`: Whether to use vdl to visualize the training process. + * `checkpoint_dir`: The storage address of the model parameters. + * `compare_metrics`: The measurement index of the optimal model. - `trainer.train` mainly control the specific training process, including the following controllable parameters: - * `train_dataset`: Training dataset; - * `epochs`: Epochs of training process; - * `batch_size`: Batch size; + * `train_dataset`: Training dataset. + * `epochs`: Epochs of training process. + * `batch_size`: Batch size. * `num_workers`: Number of workers. - * `eval_dataset`: Validation dataset; - * `log_interval`:The interval for printing logs; + * `eval_dataset`: Validation dataset. + * `log_interval`: The interval for printing logs. * `save_interval`: The interval for saving model parameters. - Model prediction @@ -156,9 +159,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m user_guided_colorization - ``` + - ```shell + $ hub serving start -m user_guided_colorization + ``` - The servitization API is now deployed and the default port number is 8866.
@@ -167,33 +170,32 @@ - ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - - ```python - import requests - import json - import cv2 - import base64 - import numpy as np - - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - - def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - # Send an HTTP request - org_im = cv2.imread('/PATH/TO/IMAGE') - data = {'images':[cv2_to_base64(org_im)]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/user_guided_colorization" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) - data = base64_to_cv2(r.json()["results"]['data'][0]['fake_reg']) - cv2.imwrite('color.png', data) - ``` + - ```python + import requests + import json + import cv2 + import base64 + import numpy as np + + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') + + def base64_to_cv2(b64str): + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + # Send an HTTP request + org_im = cv2.imread('/PATH/TO/IMAGE') + data = {'images':[cv2_to_base64(org_im)]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/user_guided_colorization" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) + data = base64_to_cv2(r.json()["results"]['data'][0]['fake_reg']) + cv2.imwrite('color.png', data) + ``` ## V. Release Note diff --git a/modules/image/Image_editing/super_resolution/dcscn/README_en.md b/modules/image/Image_editing/super_resolution/dcscn/README_en.md index e6b844c6e..098d03657 100644 --- a/modules/image/Image_editing/super_resolution/dcscn/README_en.md +++ b/modules/image/Image_editing/super_resolution/dcscn/README_en.md @@ -1,6 +1,5 @@ # dcscn - |Module Name|dcscn| | :--- | :---: | |Category |Image editing| @@ -8,7 +7,7 @@ |Dataset|DIV2k| |Fine-tuning supported or not|No| |Module Size|260KB| -|指标|PSNR37.63| +|Data indicators|PSNR37.63| |Data indicators |2021-02-26| @@ -43,8 +42,8 @@ $ hub install dcscn ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. 
Module API Prediction @@ -53,18 +52,20 @@ - ``` $ hub run dcscn --input_path "/PATH/TO/IMAGE" ``` -- ### 2、Prediction Code Example - ```python - import cv2 - import paddlehub as hub + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) +- ### 2、Prediction Code Example - sr_model = hub.Module(name='dcscn') - im = cv2.imread('/PATH/TO/IMAGE').astype('float32') - res = sr_model.reconstruct(images=[im], visualization=True) - print(res[0]['data']) - sr_model.save_inference_model() - ``` + - ```python + import cv2 + import paddlehub as hub + + sr_model = hub.Module(name='dcscn') + im = cv2.imread('/PATH/TO/IMAGE').astype('float32') + res = sr_model.reconstruct(images=[im], visualization=True) + print(res[0]['data']) + sr_model.save_inference_model() + ``` - ### 3、API @@ -81,16 +82,16 @@ - **Parameter** - * images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR; - * paths (list\[str\]): image path; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**; - * visualization (bool): Whether to save the recognition results as picture files; - * output\_dir (str): save path of images, "dcscn_output" by default. + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR. + * paths (list\[str\]): Image path. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**. + * visualization (bool): Whether to save the recognition results as picture files. + * output\_dir (str): Save path of images, "dcscn_output" by default. - **Return** * res (list\[dict\]): The list of model results, where each element is dict and each field is: * save\_path (str, optional): Save path of the result, save_path is '' if no image is saved. - * data (numpy.ndarray): result of super resolution. + * data (numpy.ndarray): Result of super resolution. - ```python def save_inference_model(self, @@ -105,8 +106,8 @@ - **Parameters** * dirname: Save path. - * model\_filename: model file name,defalt is \_\_model\_\_ - * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True) + * model\_filename: Model file name, default is \_\_model\_\_. + * params\_filename: Parameter file name, default is \_\_params\_\_ (Only takes effect when `combined` is True). * combined: Whether to save the parameters to a unified file. @@ -123,46 +124,46 @@ $ hub serving start -m dcscn ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
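- The Step 2 client below treats the data returned by dcscn as a single luma channel and recombines it with the chroma channels of the original image. The same post-processing can follow a local `reconstruct` call — a sketch under that assumption, with placeholder paths:

  - ```python
    import cv2
    import numpy as np
    import paddlehub as hub

    sr_model = hub.Module(name='dcscn')
    org_im = cv2.imread('/PATH/TO/IMAGE')

    res = sr_model.reconstruct(images=[org_im.astype('float32')])
    sr = np.expand_dims(res[0]['data'], axis=2).astype(np.uint8)  # H x W x 1 luma

    # upscale the chroma of the original to match, then merge back to BGR
    yuv = cv2.cvtColor(org_im, cv2.COLOR_BGR2YUV)
    uv = cv2.resize(yuv[..., 1:], (sr.shape[1], sr.shape[0]), interpolation=cv2.INTER_CUBIC)
    combined = cv2.cvtColor(np.concatenate((sr, uv), axis=2), cv2.COLOR_YUV2BGR)
    cv2.imwrite('dcscn_X2.png', combined)
    ```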
- ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python - import requests - import json - import base64 - - import cv2 - import numpy as np - - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - - org_im = cv2.imread('/PATH/TO/IMAGE') - data = {'images':[cv2_to_base64(org_im)]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/dcscn" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) - - sr = np.expand_dims(cv2.cvtColor(base64_to_cv2(r.json()["results"][0]['data']), cv2.COLOR_BGR2GRAY), axis=2) - shape =sr.shape - org_im = cv2.cvtColor(org_im, cv2.COLOR_BGR2YUV) - uv = cv2.resize(org_im[...,1:], (shape[1], shape[0]), interpolation=cv2.INTER_CUBIC) - combine_im = cv2.cvtColor(np.concatenate((sr, uv), axis=2), cv2.COLOR_YUV2BGR) - cv2.imwrite('dcscn_X2.png', combine_im) - print("save image as dcscn_X2.png") - ``` + - ```python + import requests + import json + import base64 + + import cv2 + import numpy as np + + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') + def base64_to_cv2(b64str): + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + + org_im = cv2.imread('/PATH/TO/IMAGE') + data = {'images':[cv2_to_base64(org_im)]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/dcscn" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) + + sr = np.expand_dims(cv2.cvtColor(base64_to_cv2(r.json()["results"][0]['data']), cv2.COLOR_BGR2GRAY), axis=2) + shape = sr.shape + org_im = cv2.cvtColor(org_im, cv2.COLOR_BGR2YUV) + uv = cv2.resize(org_im[...,1:], (shape[1], shape[0]), interpolation=cv2.INTER_CUBIC) + combine_im = cv2.cvtColor(np.concatenate((sr, uv), axis=2), cv2.COLOR_YUV2BGR) + cv2.imwrite('dcscn_X2.png', combine_im) + print("save image as dcscn_X2.png") + ``` ## V. Release Note diff --git a/modules/image/Image_editing/super_resolution/falsr_a/README_en.md b/modules/image/Image_editing/super_resolution/falsr_a/README_en.md index 24c07e9d8..aa677c6d5 100644 --- a/modules/image/Image_editing/super_resolution/falsr_a/README_en.md +++ b/modules/image/Image_editing/super_resolution/falsr_a/README_en.md @@ -23,9 +23,9 @@ - ### Module Introduction - - falsr_a is a lightweight super-resolution model based on `Accurate and Lightweight Super-Resolution with Neural Architecture Search`. The model uses a multi-objective approach to deal with the over-segmentation problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super resolution result with scale factor x2. + - Falsr_a is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to deal with the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super resolution result with scale factor x2.
- - For more information, please refer to:[falsr_a](https://github.com/xiaomi-automl/FALSR) + - For more information, please refer to: [falsr_a](https://github.com/xiaomi-automl/FALSR) ## II. Installation @@ -42,8 +42,8 @@ $ hub install falsr_a ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -53,19 +53,20 @@ - ``` $ hub run falsr_a --input_path "/PATH/TO/IMAGE" ``` + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example - ```python - import cv2 - import paddlehub as hub - - sr_model = hub.Module(name='falsr_a') - im = cv2.imread('/PATH/TO/IMAGE').astype('float32') - res = sr_model.reconstruct(images=[im], visualization=True) - print(res[0]['data']) - sr_model.save_inference_model() - ``` + - ```python + import cv2 + import paddlehub as hub + + sr_model = hub.Module(name='falsr_a') + im = cv2.imread('/PATH/TO/IMAGE').astype('float32') + res = sr_model.reconstruct(images=[im], visualization=True) + print(res[0]['data']) + sr_model.save_inference_model() + ``` - ### 3、API @@ -82,10 +83,10 @@ - **Parameter** - * images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR; - * paths (list\[str\]): image path; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**; - * visualization (bool): Whether to save the recognition results as picture files; + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR. + * paths (list\[str\]): Image path. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**. + * visualization (bool): Whether to save the recognition results as picture files. * output\_dir (str): save path of images, "dcscn_output" by default. - **Return** @@ -126,15 +127,15 @@ $ hub serving start -m falsr_a ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
- ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import base64 diff --git a/modules/image/Image_editing/super_resolution/falsr_b/README_en.md b/modules/image/Image_editing/super_resolution/falsr_b/README_en.md index f9ff6a346..5507b2ac6 100644 --- a/modules/image/Image_editing/super_resolution/falsr_b/README_en.md +++ b/modules/image/Image_editing/super_resolution/falsr_b/README_en.md @@ -23,7 +23,7 @@ - ### Module Introduction - - falsr_b is a lightweight super-resolution model based on `Accurate and Lightweight Super-Resolution with Neural Architecture Search`. The model uses a multi-objective approach to deal with the over-segmentation problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super resolution result with scale factor x2. + - Falsr_b is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to deal with the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super resolution result with scale factor x2. - For more information, please refer to:[falsr_b](https://github.com/xiaomi-automl/FALSR) @@ -42,8 +42,8 @@ $ hub install falsr_b ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -53,6 +53,7 @@ - ``` $ hub run falsr_b --input_path "/PATH/TO/IMAGE" ``` + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example @@ -82,16 +83,16 @@ - **Parameter** - * images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR; - * paths (list\[str\]): image path; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**; - * visualization (bool): Whether to save the recognition results as picture files; - * output\_dir (str): save path of images, "dcscn_output" by default. + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR. + * paths (list\[str\]): Image path. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**. + * visualization (bool): Whether to save the recognition results as picture files. + * output\_dir (str): Save path of images, "dcscn_output" by default. - **Return** * res (list\[dict\]): The list of model results, where each element is dict and each field is: * save\_path (str, optional): Save path of the result, save_path is '' if no image is saved. - * data (numpy.ndarray): result of super resolution.
+ * data (numpy.ndarray): Result of super resolution. - ```python def save_inference_model(self, @@ -106,8 +107,8 @@ - **Parameters** * dirname: Save path. - * model\_filename: model file name,defalt is \_\_model\_\_ - * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True) + * model\_filename: Model file name, default is \_\_model\_\_. + * params\_filename: Parameter file name, default is \_\_params\_\_ (Only takes effect when `combined` is True). * combined: Whether to save the parameters to a unified file. @@ -126,15 +127,15 @@ $ hub serving start -m falsr_b ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. - ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import base64 diff --git a/modules/image/Image_editing/super_resolution/falsr_c/README_en.md b/modules/image/Image_editing/super_resolution/falsr_c/README_en.md index 6225933ef..5e651a7ea 100644 --- a/modules/image/Image_editing/super_resolution/falsr_c/README_en.md +++ b/modules/image/Image_editing/super_resolution/falsr_c/README_en.md @@ -23,7 +23,7 @@ - ### Module Introduction - - falsr_c is a lightweight super-resolution model based on `Accurate and Lightweight Super-Resolution with Neural Architecture Search`. The model uses a multi-objective approach to deal with the over-segmentation problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super resolution result with scale factor x2. + - Falsr_c is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to deal with the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides super resolution result with scale factor x2. - For more information, please refer to:[falsr_c](https://github.com/xiaomi-automl/FALSR) @@ -42,8 +42,8 @@ $ hub install falsr_c ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III.
Module API Prediction @@ -53,6 +53,7 @@ - ``` $ hub run falsr_c --input_path "/PATH/TO/IMAGE" ``` + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example @@ -82,16 +83,16 @@ - **Parameter** - * images (list\[numpy.ndarray\]): image data,ndarray.shape is in the format \[H, W, C\],BGR; - * paths (list\[str\]): image path; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**; - * visualization (bool): Whether to save the recognition results as picture files; - * output\_dir (str): save path of images, "dcscn_output" by default. + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR. + * paths (list\[str\]): Image path. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**. + * visualization (bool): Whether to save the recognition results as picture files. + * output\_dir (str): Save path of images, "dcscn_output" by default. - **Return** * res (list\[dict\]): The list of model results, where each element is dict and each field is: * save\_path (str, optional): Save path of the result, save_path is '' if no image is saved. - * data (numpy.ndarray): result of super resolution. + * data (numpy.ndarray): Result of super resolution. - ```python def save_inference_model(self, @@ -106,8 +107,8 @@ - **Parameters** * dirname: Save path. - * model\_filename: model file name,defalt is \_\_model\_\_ - * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True) + * model\_filename: Model file name, default is \_\_model\_\_. + * params\_filename: Parameter file name, default is \_\_params\_\_ (Only takes effect when `combined` is True). * combined: Whether to save the parameters to a unified file. @@ -126,15 +127,15 @@ $ hub serving start -m falsr_c ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. - ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import base64 diff --git a/modules/image/Image_editing/super_resolution/realsr/README_en.md b/modules/image/Image_editing/super_resolution/realsr/README_en.md index f05588356..4e3eafba8 100644 --- a/modules/image/Image_editing/super_resolution/realsr/README_en.md +++ b/modules/image/Image_editing/super_resolution/realsr/README_en.md @@ -2,7 +2,7 @@ |Module Name |reasr| | :--- | :---: | -|Category |image editing| +|Category |Image editing| |Network|LP-KPN| |Dataset |RealSR dataset| |Fine-tuning supported or not|No| @@ -23,7 +23,7 @@ - ### Module Introduction - - realsr is a super resolution model for image and video based on "Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Mode". This model provides super resolution result with scale factor x4.
+ - Realsr is a super resolution model for image and video based on "Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model". This model provides super resolution result with scale factor x4. - For more information, please refer to: [realsr](https://github.com/csjcai/RealSR) @@ -47,8 +47,8 @@ $ hub install realsr ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) @@ -56,60 +56,60 @@ - ### 1、Prediction Code Example - ```python - import paddlehub as hub + - ```python + import paddlehub as hub - model = hub.Module(name='realsr') - model.predict('/PATH/TO/IMAGE/OR/VIDEO') - ``` + model = hub.Module(name='realsr') + model.predict('/PATH/TO/IMAGE/OR/VIDEO') + ``` - ### 2、API - ```python def predict(self, input): ``` - - Prediction API. + - Prediction API. - - **Parameter** + - **Parameter** - - input (str): image path. + - input (str): Image path. - - **Return** + - **Return** - - If input is image path, the output is: - - pred_img(np.ndarray): image data, ndarray.shape is in the format [H, W, C], BGR; - - out_path(str): save path of images. + - If input is image path, the output is: + - pred_img(np.ndarray): Image data, ndarray.shape is in the format [H, W, C], BGR. + - out_path(str): Save path of images. - - If input is video path, the output is : - - frame_pattern_combined(str): save path of frames from output video; - - vid_out_path(str): save path of output video. + - If input is video path, the output is: + - frame_pattern_combined(str): Save path of frames from output video. + - vid_out_path(str): Save path of output video. - ```python def run_image(self, img): ``` - Prediction API for images. - - **Parameter** + - **Parameter** - - img (str|np.ndarray): image data, str or ndarray. ndarray.shape is in the format [H, W, C], BGR. + - img (str|np.ndarray): Image data, str or ndarray. ndarray.shape is in the format [H, W, C], BGR. - - **Return** + - **Return** - - pred_img(np.ndarray): ndarray.shape is in the format [H, W, C], BGR. + - pred_img(np.ndarray): Prediction result, ndarray.shape is in the format [H, W, C], BGR. - ```python def run_video(self, video): ``` - Prediction API for video. - - **Parameter** + - **Parameter** - - video(str): video path. + - video(str): Video path. - - **Return** + - **Return** - - frame_pattern_combined(str): save path of frames from output video; - - vid_out_path(str): save path of output video. + - frame_pattern_combined(str): Save path of frames from output video. + - vid_out_path(str): Save path of output video. ## IV. Server Deployment @@ -124,41 +124,41 @@ $ hub serving start -m realsr ``` - - The servitization API is now deployed and the default port number is 8866. + - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set.
+ - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. - ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - - ```python - import requests - import json - import base64 - - import cv2 - import numpy as np - - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - # 发送HTTP请求 - org_im = cv2.imread('/PATH/TO/IMAGE') - data = {'images':cv2_to_base64(org_im)} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/realsr" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) - img = base64_to_cv2(r.json()["results"]) - cv2.imwrite('/PATH/TO/SAVE/IMAGE', img) - - ``` + - ```python + import requests + import json + import base64 + + import cv2 + import numpy as np + + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') + def base64_to_cv2(b64str): + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + + org_im = cv2.imread('/PATH/TO/IMAGE') + data = {'images':cv2_to_base64(org_im)} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/realsr" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) + img = base64_to_cv2(r.json()["results"]) + cv2.imwrite('/PATH/TO/SAVE/IMAGE', img) + + ``` ## V. Release Note diff --git a/modules/image/Image_gan/attgan_celeba/README_en.md b/modules/image/Image_gan/attgan_celeba/README_en.md index 66a92020e..488084753 100644 --- a/modules/image/Image_gan/attgan_celeba/README_en.md +++ b/modules/image/Image_gan/attgan_celeba/README_en.md @@ -33,7 +33,7 @@ - paddlepaddle >= 1.5.2 - - paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst) + - paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst) - ### 2、Installation @@ -41,8 +41,8 @@ $ hub install attgan_celeba==1.0.0 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md). + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md). @@ -56,10 +56,13 @@ - **Parameters** - - image: image path + - image: Input image path. - style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options. 
+ - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) + + - ### 2、Prediction Code Example @@ -89,7 +92,7 @@ - **Parameter** - - data(list[dict]): each element in the list is dict and each field is: + - data(list[dict]): Each element in the list is dict and each field is: - image (list\[str\]): Each element in the list is the path of the image to be converted. - style (list\[str\]): Each element in the list is a string, fill in the face attributes to be converted. diff --git a/modules/image/Image_gan/cyclegan_cityscapes/README_en.md b/modules/image/Image_gan/cyclegan_cityscapes/README_en.md index 92db68a51..dc310e8f1 100644 --- a/modules/image/Image_gan/cyclegan_cityscapes/README_en.md +++ b/modules/image/Image_gan/cyclegan_cityscapes/README_en.md @@ -2,7 +2,7 @@ |Module Name|cyclegan_cityscapes| | :--- | :---: | -|Category |image generation| +|Category |Image generation| |Network |CycleGAN| |Dataset|Cityscapes| |Fine-tuning supported or not |No| @@ -32,7 +32,6 @@ - ### Module Introduction - - CycleGAN是生成对抗网络(Generative Adversarial Networks )的一种，与传统的GAN只能单向生成图片不同，CycleGAN可以同时完成两个domain的图片进行相互转换。该PaddleHub Module使用Cityscapes数据集训练完成，支持图片从实景图转换为语义分割结果，也支持从语义分割结果转换为实景图。 - CycleGAN belongs to Generative Adversarial Networks(GANs). Unlike traditional GANs that can only generate pictures in one direction, CycleGAN can simultaneously complete the style transfer of two domains. The PaddleHub Module is trained by Cityscapes dataset, and supports the conversion from real images to semantic segmentation results, and also supports conversion from semantic segmentation results to real images. @@ -42,15 +41,15 @@ - paddlepaddle >= 1.4.0 - - paddlehub >= 1.1.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst) + - paddlehub >= 1.1.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst) - ### 2、Installation - ```shell $ hub install cyclegan_cityscapes==1.0.0 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -59,9 +58,11 @@ - ```shell $ hub run cyclegan_cityscapes --input_path "/PATH/TO/IMAGE" ``` - - **Parameters** + + - **Parameters** - - input_path: image path + - input_path: Image path + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example @@ -90,13 +91,13 @@ - **Parameters** - - data(list[dict]): each element in the list is dict and each field is: - - image (list\[str\]): image path. + - data(list[dict]): Each element in the list is dict and each field is: + - image (list\[str\]): Image path. - **Return** - res (list\[str\]): The list of style transfer results, where each element is dict and each field is: - - origin: original input path. - - generated: save path of images. + - origin: Original input path. + - generated: Save path of images.
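- Putting the cyclegan_cityscapes pieces above together, a minimal sketch of assembling the `data` structure and reading back the results (it assumes the module exposes a `generate` method as in the prediction code example referenced above; paths are placeholders — adjust if the actual signature differs):

  - ```python
    import paddlehub as hub

    cyclegan = hub.Module(name='cyclegan_cityscapes')

    # each dict in the list carries a list of input image paths
    test_data = [{'image': ['/PATH/TO/STREET/SCENE/IMAGE']}]
    results = cyclegan.generate(data=test_data)

    # each result reports the original input path and the save path of the output
    for item in results:
        print(item['origin'], '->', item['generated'])
    ```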
diff --git a/modules/image/Image_gan/stargan_celeba/README_en.md b/modules/image/Image_gan/stargan_celeba/README_en.md index 6961fc5bb..a79a091aa 100644 --- a/modules/image/Image_gan/stargan_celeba/README_en.md +++ b/modules/image/Image_gan/stargan_celeba/README_en.md @@ -5,7 +5,7 @@ |Category|image generation| |Network|STGAN| |Dataset|Celeba| -|Fine-tuning supported or not|否| +|Fine-tuning supported or not|No| |Module Size |33MB| |Latest update date|2021-02-26| |Data indicators|-| @@ -24,7 +24,7 @@ - ### Module Introduction - - STGAN takes the difference between the original attribute and the target attribute as input, and proposes STUs (Selective transfer units) to select and modify features of the encoder. The PaddleHub Module is trained one Celeba dataset and currently supports attributes of "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". + - STGAN takes the original attribute and the target attribute as input, and proposes STUs (Selective transfer units) to select and modify features of the encoder. The PaddleHub Module is trained on the Celeba dataset and currently supports attributes of "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". ## II. Installation @@ -40,8 +40,8 @@ - ```shell $ hub install stargan_celeba==1.0.0 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -54,9 +54,11 @@ - **Parameters** - - image: image path + - image: Image path - - style: Specify the attributes to be converted. The options are "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". You can choose one of the options. + - style: Specify the attributes to be converted. The options are "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". You can choose one of the options. + + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example diff --git a/modules/image/Image_gan/stgan_celeba/README_en.md b/modules/image/Image_gan/stgan_celeba/README_en.md index d0412f6e9..c48718c79 100644 --- a/modules/image/Image_gan/stgan_celeba/README_en.md +++ b/modules/image/Image_gan/stgan_celeba/README_en.md @@ -5,7 +5,7 @@ |Category|image generation| |Network|STGAN| |Dataset|Celeba| -|Fine-tuning supported or not|否| +|Fine-tuning supported or not|No| |Module Size |287MB| |Latest update date|2021-02-26| |Data indicators|-| @@ -24,7 +24,7 @@ - ### Module Introduction - - STGAN takes the difference between the original attribute and the target attribute as input, and proposes STUs (Selective transfer units) to select and modify features of the encoder. The PaddleHub Module is trained one Celeba dataset and currently supports attributes of "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged".
+ - STGAN takes the difference between the original attributes and the target attributes as input, and proposes STUs (Selective transfer units) to select and modify features of the encoder. The PaddleHub Module is trained on the Celeba dataset and currently supports attributes of "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". ## II. Installation @@ -40,8 +40,8 @@ - ```shell $ hub install stgan_celeba==1.0.0 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -53,11 +53,12 @@ ``` - **Parameters** - - image: image path + - image: Image path - - info: attributes of original image, must fill in gender( "Male" or "Female").The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". For example, the input picture is a girl with black hair, then fill in as "Female,Black_Hair". + - info: Attributes of original image, must fill in gender ("Male" or "Female"). The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". For example, the input picture is a girl with black hair, then fill in as "Female,Black_Hair". - - style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options. + - style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options. + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example @@ -88,7 +89,7 @@ - **Parameter** - - data(list[dict]): each element in the list is dict and each field is: + - data(list[dict]): Each element in the list is dict and each field is: - image (list\[str\]): Each element in the list is the path of the image to be converted. - style (list\[str\]): Each element in the list is a string, fill in the face attributes to be converted. - info (list\[str\]): Represents the face attributes of the original image. Different attributes are separated by commas.
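- For reference, a minimal sketch of assembling the stgan_celeba `data` structure described above (it assumes the module exposes a `generate` method as in the prediction code example referenced above; paths and attribute values are placeholders):

  - ```python
    import paddlehub as hub

    stgan = hub.Module(name='stgan_celeba')

    # 'info' lists the attributes of the original photo (gender is mandatory),
    # 'style' lists the attribute to convert to
    test_data = [{
        'image': ['/PATH/TO/FACE/IMAGE'],
        'info': ['Female,Black_Hair'],
        'style': ['Blond_Hair'],
    }]
    results = stgan.generate(data=test_data)
    print(results)
    ```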
diff --git a/modules/image/Image_gan/style_transfer/ID_Photo_GEN/README_en.md b/modules/image/Image_gan/style_transfer/ID_Photo_GEN/README_en.md index 2da0ea16a..ba06c5e7b 100644 --- a/modules/image/Image_gan/style_transfer/ID_Photo_GEN/README_en.md +++ b/modules/image/Image_gan/style_transfer/ID_Photo_GEN/README_en.md @@ -2,7 +2,7 @@ |Module Name |ID_Photo_GEN| | :--- | :---: | -|Category|image generation| +|Category|Image generation| |Network|HRNet_W18| |Dataset |-| |Fine-tuning supported or not |No| @@ -39,8 +39,8 @@ $ hub install ID_Photo_GEN ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -77,12 +77,12 @@ - Prediction API, generating ID photos. - **Parameter** - * images (list[np.ndarray]): image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list[str]): image path - * batch_size (int): batch size - * output_dir (str): save path of images, output by default. - * visualization (bool): Whether to save the recognition results as picture files; - * use_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * images (list[np.ndarray]): Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list[str]): Image path. + * batch_size (int): Batch size. + * output_dir (str): Save path of images, output by default. + * visualization (bool): Whether to save the recognition results as picture files. + * use_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** **NOTE:** Choose one of `paths` and `images` to provide input data. diff --git a/modules/image/Image_gan/style_transfer/UGATIT_83w/README_en.md b/modules/image/Image_gan/style_transfer/UGATIT_83w/README_en.md index f030c364f..b4afce178 100644 --- a/modules/image/Image_gan/style_transfer/UGATIT_83w/README_en.md +++ b/modules/image/Image_gan/style_transfer/UGATIT_83w/README_en.md @@ -40,8 +40,8 @@ $ hub install UGATIT_83w ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -73,17 +73,17 @@ - Style transfer API, convert the input face image into anime style. - **Parameters** - * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): image path,default is None; - * batch\_size (int): batch size, default is 1; - * visualization (bool): Whether to save the recognition results as picture files, default is False; - * output\_dir (str): save path of images, `output` by default. + * batch\_size (int): Batch size, default is 1; + * visualization (bool): Whether to save the recognition results as picture files, default is False. + * output\_dir (str): Save path of images, `output` by default. **NOTE:** Choose one of `paths` and `images` to provide data. - **Return** - - res (list\[numpy.ndarray\]): result, ndarray.shape is in the format [H, W, C]. + - res (list\[numpy.ndarray\]): Result, ndarray.shape is in the format [H, W, C]. ## IV. Server Deployment @@ -93,9 +93,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m UGATIT_83w - ``` + - ```shell + $ hub serving start -m UGATIT_83w + ``` - The servitization API is now deployed and the default port number is 8866. @@ -105,27 +105,27 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - - ```python - import requests - import json - import cv2 - import base64 + - ```python + import requests + import json + import cv2 + import base64 - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') - # Send an HTTP request - data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/UGATIT_83w" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) + # Send an HTTP request + data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/UGATIT_83w" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) - # print prediction results - print(r.json()["results"]) - ``` + # print prediction results + print(r.json()["results"]) + ``` ## V. Release Note diff --git a/modules/image/Image_gan/style_transfer/UGATIT_92w/README_en.md b/modules/image/Image_gan/style_transfer/UGATIT_92w/README_en.md index 5ac2e5c52..ef7a22a49 100644 --- a/modules/image/Image_gan/style_transfer/UGATIT_92w/README_en.md +++ b/modules/image/Image_gan/style_transfer/UGATIT_92w/README_en.md @@ -2,7 +2,7 @@ |Module Name|UGATIT_92w| | :--- | :---: | -|Category|image editing| +|Category|Image editing| |Network |U-GAT-IT| |Dataset|selfie2anime| |Fine-tuning supported or not|No| @@ -40,8 +40,8 @@ $ hub install UGATIT_92w ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -73,17 +73,17 @@ - Style transfer API, convert the input face image into anime style. 
- **Parameters** - * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list\[str\]): image path,default is None; - * batch\_size (int): batch size, default is 1; - * visualization (bool): Whether to save the recognition results as picture files, default is False; + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list\[str\]): Image path, default is None. + * batch\_size (int): Batch size, default is 1. + * visualization (bool): Whether to save the recognition results as picture files, default is False. * output\_dir (str): save path of images, `output` by default. **NOTE:** Choose one of `paths` and `images` to provide input data. - **Return** - - res (list\[numpy.ndarray\]): result, ndarray.shape is in the format [H, W, C]. + - res (list\[numpy.ndarray\]): Style transfer result, ndarray.shape is in the format [H, W, C]. ## IV. Server Deployment @@ -93,9 +93,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m UGATIT_92w - ``` + - ```shell + $ hub serving start -m UGATIT_92w + ``` - The servitization API is now deployed and the default port number is 8866. @@ -105,27 +105,27 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - - ```python - import requests - import json - import cv2 - import base64 + - ```python + import requests + import json + import cv2 + import base64 - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') - # Send an HTTP request - data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/UGATIT_92w" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) + # Send an HTTP request + data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/UGATIT_92w" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) - # print prediction results - print(r.json()["results"]) - ``` + # print prediction results + print(r.json()["results"]) + ``` ## V. Release Note diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_paprika_54/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_54/README_en.md index 5a6bb349e..77d724986 100644 --- a/modules/image/Image_gan/style_transfer/animegan_v2_paprika_54/README_en.md +++ b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_54/README_en.md @@ -2,7 +2,7 @@ |Module Name |animegan_v2_paprika_54| | :--- | :---: | -|Category |image generation| +|Category |Image generation| |Network|AnimeGAN| |Dataset|Paprika| |Fine-tuning supported or not|No| @@ -19,11 +19,11 @@


- 输入图像 + Input image

- 输出图像 + Output image

@@ -40,7 +40,7 @@ - paddlepaddle >= 1.8.0 - - paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst) + - paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst) - ### 2、Installation @@ -48,8 +48,8 @@ $ hub install animegan_v2_paprika_54 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -81,12 +81,12 @@ - **Parameters** - - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - - paths (list\[str\]): image path; - - output\_dir (str): save path of images, `output` by default; - - visualization (bool): Whether to save the results as picture files; - - min\_size (int): minimum size, default is 32; - - max\_size (int): maximum size, default is 1024. + - images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + - paths (list\[str\]): Image path. + - output\_dir (str): Save path of images, `output` by default. + - visualization (bool): Whether to save the results as picture files. + - min\_size (int): Minimum size, default is 32. + - max\_size (int): Maximum size, default is 1024. **NOTE:** Choose one of `paths` and `images` to provide input data. @@ -102,9 +102,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m animegan_v2_paprika_54 - ``` + - ```shell + $ hub serving start -m animegan_v2_paprika_54 + ``` - The servitization API is now deployed and the default port number is 8866. @@ -114,26 +114,26 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - - ```python - import requests - import json - import cv2 - import base64 + - ```python + import requests + import json + import cv2 + import base64 - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') - # Send an HTTP request - data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_54" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) + # Send an HTTP request + data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_54" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) - # print prediction results - print(r.json()["results"]) - ``` + # print prediction results + print(r.json()["results"]) + ``` ## V. 
Release Note diff --git a/modules/image/Image_gan/style_transfer/animegan_v2_paprika_97/README_en.md b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_97/README_en.md index 3004a8403..fa2a8953a 100644 --- a/modules/image/Image_gan/style_transfer/animegan_v2_paprika_97/README_en.md +++ b/modules/image/Image_gan/style_transfer/animegan_v2_paprika_97/README_en.md @@ -2,7 +2,7 @@ |Module Name |animegan_v2_paprika_97| | :--- | :---: | -|Category |image generation| +|Category |Image generation| |Network|AnimeGAN| |Dataset|Paprika| |Fine-tuning supported or not|No| @@ -48,8 +48,8 @@ $ hub install animegan_v2_paprika_97 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -81,12 +81,12 @@ - **Parameters** - - images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - - paths (list\[str\]): image path; - - output\_dir (str): save path of images, `output` by default; - - visualization (bool): Whether to save the results as picture files; - - min\_size (int): minimum size, default is 32; - - max\_size (int): maximum size, default is 1024. + - images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + - paths (list\[str\]): Image path. + - output\_dir (str): Save path of images, `output` by default. + - visualization (bool): Whether to save the results as picture files. + - min\_size (int): Minimum size, default is 32. + - max\_size (int): Maximum size, default is 1024. **NOTE:** Choose one of `paths` and `images` to provide input data. @@ -102,9 +102,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m animegan_v2_paprika_97 - ``` + - ```shell + $ hub serving start -m animegan_v2_paprika_97 + ``` - The servitization API is now deployed and the default port number is 8866. 
@@ -114,26 +114,26 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - - ```python - import requests - import json - import cv2 - import base64 + - ```python + import requests + import json + import cv2 + import base64 - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') - # Send an HTTP request - data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_97" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) + # Send an HTTP request + data = {'images':[cv2_to_base64(cv2.imread("/PATH/TO/IMAGE"))]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/animegan_v2_paprika_97" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) - # print prediction results - print(r.json()["results"]) - ``` + # print prediction results + print(r.json()["results"]) + ``` ## V. Release Note diff --git a/modules/image/Image_gan/style_transfer/msgnet/README_en.md b/modules/image/Image_gan/style_transfer/msgnet/README_en.md index 1aeac4aa3..30d978b85 100644 --- a/modules/image/Image_gan/style_transfer/msgnet/README_en.md +++ b/modules/image/Image_gan/style_transfer/msgnet/README_en.md @@ -2,7 +2,7 @@ |Module Name|msgnet| | :--- | :---: | -|Category|image editing| +|Category|Image editing| |Network|msgnet| |Dataset|COCO2014| |Fine-tuning supported or not|Yes| @@ -38,30 +38,30 @@ $ hub install msgnet ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. 
Module API Prediction - ### 1、Command line Prediction -``` -$ hub run msgnet --input_path "/PATH/TO/ORIGIN/IMAGE" --style_path "/PATH/TO/STYLE/IMAGE" -``` + - ``` + $ hub run msgnet --input_path "/PATH/TO/ORIGIN/IMAGE" --style_path "/PATH/TO/STYLE/IMAGE" + ``` + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) -- ### 2、Prediction Code Example - -```python -import paddle -import paddlehub as hub -if __name__ == '__main__': - model = hub.Module(name='msgnet') - result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path ="/PATH/TO/SAVE/IMAGE") -``` +- ### 2、Prediction Code Example + - ```python + import paddle + import paddlehub as hub + if __name__ == '__main__': + model = hub.Module(name='msgnet') + result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path ="/PATH/TO/SAVE/IMAGE") + ``` - ### 3.Fine-tune and Encapsulation @@ -111,20 +111,20 @@ if __name__ == '__main__': - Model prediction - When Fine-tune is completed, the model with the best performance on the verification set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows: - ```python - import paddle - import paddlehub as hub + - ```python + import paddle + import paddlehub as hub - if __name__ == '__main__': - model = hub.Module(name='msgnet', load_checkpoint="/PATH/TO/CHECKPOINT") - result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path ="/PATH/TO/SAVE/IMAGE") - ``` + if __name__ == '__main__': + model = hub.Module(name='msgnet', load_checkpoint="/PATH/TO/CHECKPOINT") + result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path ="/PATH/TO/SAVE/IMAGE") + ``` - - **Args** - * `origin`: Image path or ndarray data with format [H, W, C], BGR; - * `style`: Style image path; - * `visualization`: Whether to save the recognition results as picture files; - * `save_path`: Save path of the result, default is 'style_tranfer'. + - **Parameters** + * `origin`: Image path or ndarray data with format [H, W, C], BGR. + * `style`: Style image path. + * `visualization`: Whether to save the recognition results as picture files. + * `save_path`: Save path of the result, default is 'style_tranfer'. ## IV. Server Deployment @@ -135,9 +135,9 @@ if __name__ == '__main__': - Run the startup command: - - ```shell - $ hub serving start -m msgnet - ``` + - ```shell + $ hub serving start -m msgnet + ``` - The servitization API is now deployed and the default port number is 8866. 
@@ -148,35 +148,35 @@ if __name__ == '__main__': - With a configured server, use the following lines of code to send the prediction request and obtain the result: - ```python - import requests - import json - import cv2 - import base64 - - import numpy as np - - - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - - def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - # Send an HTTP request - org_im = cv2.imread('/PATH/TO/ORIGIN/IMAGE') - style_im = cv2.imread('/PATH/TO/STYLE/IMAGE') - data = {'images':[[cv2_to_base64(org_im)], cv2_to_base64(style_im)]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/msgnet" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) - data = base64_to_cv2(r.json()["results"]['data'][0]) - cv2.imwrite('style.png', data) - ``` + - ```python + import requests + import json + import cv2 + import base64 + + import numpy as np + + + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') + + def base64_to_cv2(b64str): + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + # Send an HTTP request + org_im = cv2.imread('/PATH/TO/ORIGIN/IMAGE') + style_im = cv2.imread('/PATH/TO/STYLE/IMAGE') + data = {'images':[[cv2_to_base64(org_im)], cv2_to_base64(style_im)]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/msgnet" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) + data = base64_to_cv2(r.json()["results"]['data'][0]) + cv2.imwrite('style.png', data) + ``` ## V. Release Note diff --git a/modules/image/classification/resnet50_vd_animals/README_en.md b/modules/image/classification/resnet50_vd_animals/README_en.md index 000b62e13..031f469fc 100644 --- a/modules/image/classification/resnet50_vd_animals/README_en.md +++ b/modules/image/classification/resnet50_vd_animals/README_en.md @@ -2,7 +2,7 @@ |Module Name|resnet50_vd_animals| | :--- | :---: | -|Category |image classification| +|Category |Image classification| |Network|ResNet50_vd| |Dataset|Baidu self-built dataset| |Fine-tuning supported or not|No| @@ -33,8 +33,8 @@ - ```shell $ hub install resnet50_vd_animals ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. 
Module API Prediction @@ -44,7 +44,7 @@ - ```shell $ hub run resnet50_vd_animals --input_path "/PATH/TO/IMAGE" ``` - - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst) + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example @@ -135,14 +135,14 @@ $ hub serving start -m resnet50_vd_animals ``` - - The servitization API is now deployed and the default port number is 8866. - - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. + - The servitization API is now deployed and the default port number is 8866. + - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. - ### Step 2: Send a predictive request - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import cv2 diff --git a/modules/image/classification/resnet50_vd_imagenet_ssld/README_en.md b/modules/image/classification/resnet50_vd_imagenet_ssld/README_en.md index 473004156..9cf41b043 100644 --- a/modules/image/classification/resnet50_vd_imagenet_ssld/README_en.md +++ b/modules/image/classification/resnet50_vd_imagenet_ssld/README_en.md @@ -2,7 +2,7 @@ |Module Name|resnet50_vd_imagenet_ssld| | :--- | :---: | -|Category |image classification| +|Category |Image classification| |Network|ResNet_vd| |Dataset|ImageNet-2012| |Fine-tuning supported or notFine-tuning|Yes| @@ -32,8 +32,8 @@ $ hub install resnet50_vd_imagenet_ssld ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -69,7 +69,7 @@ to_rgb=True) ``` - - `transforms` The data enhancement module defines lots of data preprocessing methods. Users can replace the data preprocessing methods according to their needs. + - `transforms`: The data enhancement module defines lots of data preprocessing methods. Users can replace the data preprocessing methods according to their needs. - Step2: Download the dataset @@ -108,20 +108,20 @@ - `Trainer` mainly control the training of Fine-tune, including the following controllable parameters: - * `model`: Optimized model; - * `optimizer`: Optimizer selection; - * `use_vdl`: Whether to use vdl to visualize the training process; - * `checkpoint_dir`: The storage address of the model parameters; - * `compare_metrics`: The measurement index of the optimal model; + * `model`: Optimized model. + * `optimizer`: Optimizer selection. + * `use_vdl`: Whether to use vdl to visualize the training process. + * `checkpoint_dir`: The storage address of the model parameters. + * `compare_metrics`: The measurement index of the optimal model. 
- `trainer.train` mainly control the specific training process, including the following controllable parameters: - * `train_dataset`: Training dataset; - * `epochs`: Epochs of training process; - * `batch_size`: Batch size; + * `train_dataset`: Training dataset. + * `epochs`: Epochs of training process. + * `batch_size`: Batch size. * `num_workers`: Number of workers. - * `eval_dataset`: Validation dataset; - * `log_interval`:The interval for printing logs; + * `eval_dataset`: Validation dataset. + * `log_interval`: The interval for printing logs. * `save_interval`: The interval for saving model parameters. @@ -147,9 +147,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m resnet50_vd_imagenet_ssld - ``` + - ```shell + $ hub serving start -m resnet50_vd_imagenet_ssld + ``` - The servitization API is now deployed and the default port number is 8866. @@ -159,7 +159,7 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import cv2 @@ -195,4 +195,4 @@ * 1.1.0 - Upgrade to dynamic version. + Upgrade to dynamic version. diff --git a/modules/image/classification/resnet_v2_50_imagenet/README_en.md b/modules/image/classification/resnet_v2_50_imagenet/README_en.md index f45dc9a8e..76e7dfd87 100644 --- a/modules/image/classification/resnet_v2_50_imagenet/README_en.md +++ b/modules/image/classification/resnet_v2_50_imagenet/README_en.md @@ -2,7 +2,7 @@ |Module Name|resnet_v2_50_imagenet| | :--- | :---: | -|Category |image classification| +|Category |Image classification| |Network|ResNet V2| |Dataset|ImageNet-2012| |Fine-tuning supported or not|No| @@ -23,7 +23,7 @@ - paddlepaddle >= 1.4.0 - - paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_ch/get_start/installation.rst) + - paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst) - ### 2、Installation @@ -31,8 +31,8 @@ - ```shell $ hub install resnet_v2_50_imagenet ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -64,10 +64,10 @@ - Prediction API for classification. - **Parameter** - - data (dict): key is 'image',value is the list of image path. + - data (dict): Key is 'image', value is the list of image paths. - **Return** - - result (list[dict]): the list of classification results,key is the prediction label, value is the corresponding confidence. + - result (list[dict]): The list of classification results; key is the prediction label, value is the corresponding confidence.
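Since the dict-in, list-of-dicts-out flow above is easy to misread, here is a short smoke-test sketch. The `classification` method name is an assumption based on the surrounding text ("Prediction API for classification"), and the path is a placeholder.

```python
import paddlehub as hub

# Load the classifier and call the dict-based prediction API described above.
classifier = hub.Module(name="resnet_v2_50_imagenet")
result = classifier.classification(data={"image": ["/PATH/TO/IMAGE"]})

# Each element of `result` maps predicted labels to confidences.
print(result)
```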
@@ -79,6 +79,7 @@ First release - 1.0.1 + Fix encoding problem in Python 2 - ```shell diff --git a/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP/README_en.md b/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP/README_en.md index 74aa6de84..397441dfd 100644 --- a/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP/README_en.md +++ b/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP/README_en.md @@ -2,7 +2,7 @@ |Module Name|Pneumonia_CT_LKM_PP| | :--- | :---: | -|Category|image segmentation| +|Category|Image segmentation| |Network |-| |Dataset|-| |Fine-tuning supported or not|No| @@ -32,8 +32,8 @@ $ hub install Pneumonia_CT_LKM_PP==1.0.0 ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -64,24 +64,24 @@ - ### 2、API - ```python - def segmentation(data) - ``` + - ```python + def segmentation(data) + ``` - - Prediction API, used for CT analysis of pneumonia. + - Prediction API, used for CT analysis of pneumonia. - - **Parameter** + - **Parameter** - * data (dict): key is "image_np_path", value is the list of results which contains lesion and lung segmentation masks. - + * data (dict): Key is "image_np_path", value is the list of results which contains lesion and lung segmentation masks. + - - **Return** + - **Return** - * result (list\[dict\]): the list of recognition results, where each element is dict and each field is: - * input_lesion_np_path: input path of lesion; - * output_lesion_np: segmentation result path of lesion; - * input_lung_np_path: input path of lung; - * output_lung_np:segmentation result path of lung. + * result (list\[dict\]): The list of recognition results, where each element is dict and each field is: + * input_lesion_np_path: Input path of lesion. + * output_lesion_np: Segmentation result path of lesion. + * input_lung_np_path: Input path of lung. + * output_lung_np: Segmentation result path of lung. ## IV. Release Note diff --git a/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP_lung/README_en.md b/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP_lung/README_en.md index 32bb76489..067ab57f3 100644 --- a/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP_lung/README_en.md +++ b/modules/image/semantic_segmentation/Pneumonia_CT_LKM_PP_lung/README_en.md @@ -2,7 +2,7 @@ |Module Name|Pneumonia_CT_LKM_PP_lung| | :--- | :---: | -|Category|image segmentation| +|Category|Image segmentation| |Network |-| |Dataset|-| |Fine-tuning supported or not|No| @@ -64,7 +64,7 @@ - ### 2、API - ```python + - ```python def segmentation(data) ``` @@ -72,16 +72,16 @@ - **Parameter** - * data (dict): key is "image_np_path", value is the list of results which contains lesion and lung segmentation masks. + * data (dict): Key is "image_np_path", value is the list of results which contains lesion and lung segmentation masks.
- **Return** - * result (list\[dict\]): the list of recognition results, where each element is dict and each field is: - * input_lesion_np_path: input path of lesion; - * output_lesion_np: segmentation result path of lesion; - * input_lung_np_path: input path of lung; - * output_lung_np:segmentation result path of lung. + * result (list\[dict\]): The list of recognition results, where each element is dict and each field is: + * input_lesion_np_path: Input path of lesion. + * output_lesion_np: Segmentation result path of lesion. + * input_lung_np_path: Input path of lung. + * output_lung_np: Segmentation result path of lung. ## IV. Release Note diff --git a/modules/image/semantic_segmentation/U2Net/README_en.md b/modules/image/semantic_segmentation/U2Net/README_en.md index 68eb2daa2..4cea82d05 100644 --- a/modules/image/semantic_segmentation/U2Net/README_en.md +++ b/modules/image/semantic_segmentation/U2Net/README_en.md @@ -2,7 +2,7 @@ |Module Name |U2Net| | :--- | :---: | -|Category |image segmentation| +|Category |Image segmentation| |Network |U^2Net| |Dataset|-| |Fine-tuning supported or not|No| @@ -44,8 +44,8 @@ $ hub install U2Net ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -79,12 +79,12 @@ - Prediction API, obtaining segmentation result. - **Parameter** - * images (list[np.ndarray]) : image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list[str]) : image path; - * batch_size (int) : batch size; - * input_size (int) : input image size, default is 320; - * output_dir (str) : save path of images, 'output' by default; - * visualization (bool) : whether to save the results as picture files. + * images (list[np.ndarray]) : Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list[str]) : Image path. + * batch_size (int) : Batch size. + * input_size (int) : Input image size, default is 320. + * output_dir (str) : Save path of images, 'output' by default. + * visualization (bool) : Whether to save the results as picture files. - **Return** * results (list[np.ndarray]): The list of segmentation results. 
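As the same parameter list recurs across the two U^2Net modules, a compact usage sketch may help. The capitalized `Segmentation` method name is an assumption matching the parameter list documented above, and the path is a placeholder.

```python
import cv2
import paddlehub as hub

# Run salient-object segmentation with U2Net on one local image.
model = hub.Module(name="U2Net")
results = model.Segmentation(
    images=[cv2.imread("/PATH/TO/IMAGE")],  # BGR ndarrays, shape [H, W, C]
    batch_size=1,
    input_size=320,        # default documented above
    output_dir="output",   # where visualized results are written
    visualization=True)

print(len(results))  # one result per input image
```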
diff --git a/modules/image/semantic_segmentation/U2Netp/README_en.md b/modules/image/semantic_segmentation/U2Netp/README_en.md index f47ba5a0d..ffb0bac24 100644 --- a/modules/image/semantic_segmentation/U2Netp/README_en.md +++ b/modules/image/semantic_segmentation/U2Netp/README_en.md @@ -2,7 +2,7 @@ |Module Name |U2Netp| | :--- | :---: | -|Category |image segmentation| +|Category |Image segmentation| |Network |U^2Net| |Dataset|-| |Fine-tuning supported or not|No| @@ -44,8 +44,8 @@ $ hub install U2Netp ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction @@ -79,15 +79,15 @@ - Prediction API, obtaining segmentation result. - **Parameter** - * images (list[np.ndarray]) : image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list[str]) : image path; - * batch_size (int) : batch size; - * input_size (int) : input image size, default is 320; - * output_dir (str) : save path of images, 'output' by default; - * visualization (bool) : whether to save the results as picture files. + * images (list[np.ndarray]) : Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list[str]) : Image path. + * batch_size (int) : Batch size. + * input_size (int) : Input image size, default is 320. + * output_dir (str) : Save path of images, 'output' by default. + * visualization (bool) : Whether to save the results as picture files. - **Return** - * results (list[np.ndarray]): the list of segmentation results. + * results (list[np.ndarray]): The list of segmentation results. ## IV. Release Note diff --git a/modules/image/semantic_segmentation/ace2p/README_en.md b/modules/image/semantic_segmentation/ace2p/README_en.md index 2b9313ff3..3fa0c273e 100644 --- a/modules/image/semantic_segmentation/ace2p/README_en.md +++ b/modules/image/semantic_segmentation/ace2p/README_en.md @@ -2,7 +2,7 @@ |Module Name|ace2p| | :--- | :---: | -|Category|image segmentation| +|Category|Image segmentation| |Network|ACE2P| |Dataset|LIP| |Fine-tuning supported or not|No| @@ -50,21 +50,24 @@ - ```shell $ hub install ace2p ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. 
Module API Prediction - ### 1、Command line Prediction - ```shell - $ hub run ace2p --input_path "/PATH/TO/IMAGE" - ``` + - ```shell + $ hub run ace2p --input_path "/PATH/TO/IMAGE" + ``` + + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) + - ### 2、Prediction Code Example - ```python + - ```python import paddlehub as hub import cv2 @@ -72,48 +75,48 @@ result = human_parser.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')]) ``` - - ### 3、API - - ```python - def segmentation(images=None, - paths=None, - batch_size=1, - use_gpu=False, - output_dir='ace2p_output', - visualization=False): - ``` +- ### 3、API + + - ```python + def segmentation(images=None, + paths=None, + batch_size=1, + use_gpu=False, + output_dir='ace2p_output', + visualization=False): + ``` - - Prediction API, used for human parsing. + - Prediction API, used for human parsing. - **Parameter** - * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list\[str\]): image path; - * batch\_size (int): batch size; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - * output\_dir (str): save path of output, default is 'ace2p_output'; + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list\[str\]): Image path. + * batch\_size (int): Batch size. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * output\_dir (str): Save path of output, default is 'ace2p_output'. * visualization (bool): Whether to save the recognition results as picture files. - **Return** * res (list\[dict\]): The list of recognition results, where each element is dict and each field is: - * save\_path (str, optional): Save path of the result; + * save\_path (str, optional): Save path of the result. * data (numpy.ndarray): The result of portrait segmentation. - ```python - def save_inference_model(dirname, - model_filename=None, - params_filename=None, - combined=True) - ``` + - ```python + def save_inference_model(dirname, + model_filename=None, + params_filename=None, + combined=True) + ``` - Save the model to the specified path. - **Parameters** * dirname: Save path. - * model\_filename: model file name,defalt is \_\_model\_\_ - * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True) + * model\_filename: Model file name, default is \_\_model\_\_ + * params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True) * combined: Whether to save the parameters to a unified file. @@ -125,9 +128,9 @@ - Run the startup command: - ```shell - $ hub serving start -m ace2p - ``` + - ```shell + $ hub serving start -m ace2p + ``` - The servitization API is now deployed and the default port number is 8866.
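Because `save_inference_model` is documented above without a usage line, here is a one-call sketch; the output directory name is a placeholder, and the remaining arguments follow the defaults in the signature shown above.

```python
import paddlehub as hub

# Export ace2p for inference-only deployment using the API documented above.
human_parser = hub.Module(name="ace2p")
human_parser.save_inference_model(dirname="ace2p_inference",  # placeholder path
                                  combined=True)  # single unified parameter file
```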
@@ -138,7 +141,7 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import cv2 diff --git a/modules/image/semantic_segmentation/deeplabv3p_xception65_humanseg/README_en.md b/modules/image/semantic_segmentation/deeplabv3p_xception65_humanseg/README_en.md index 0852edb43..1afa20b09 100644 --- a/modules/image/semantic_segmentation/deeplabv3p_xception65_humanseg/README_en.md +++ b/modules/image/semantic_segmentation/deeplabv3p_xception65_humanseg/README_en.md @@ -2,7 +2,7 @@ |Module Name |deeplabv3p_xception65_humanseg| | :--- | :---: | -|Category|image segmentation| +|Category|Image segmentation| |Network|deeplabv3p| |Dataset|Baidu self-built dataset| |Fine-tuning supported or not|No| @@ -41,72 +41,72 @@ - ```shell $ hub install deeplabv3p_xception65_humanseg ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. Module API Prediction - ### 1、Command line Prediction - ```shell - hub run deeplabv3p_xception65_humanseg --input_path "/PATH/TO/IMAGE" - ``` + - ```shell + hub run deeplabv3p_xception65_humanseg --input_path "/PATH/TO/IMAGE" + ``` + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) - ### 2、Prediction Code Example - ```python - import paddlehub as hub - import cv2 - - human_seg = hub.Module(name="deeplabv3p_xception65_humanseg") - result = human_seg.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')]) + - ```python + import paddlehub as hub + import cv2 - ``` + human_seg = hub.Module(name="deeplabv3p_xception65_humanseg") + result = human_seg.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')]) + ``` - ### 3.API - ```python - def segmentation(images=None, - paths=None, - batch_size=1, - use_gpu=False, - visualization=False, - output_dir='humanseg_output') - ``` + - ```python + def segmentation(images=None, + paths=None, + batch_size=1, + use_gpu=False, + visualization=False, + output_dir='humanseg_output') + ``` - - Prediction API, generating segmentation result. + - Prediction API, generating segmentation result. - - **Parameter** - * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list\[str\]): image path; - * batch\_size (int): batch size; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - * visualization (bool): Whether to save the recognition results as picture files; - * output\_dir (str): save path of images. + - **Parameter** + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list\[str\]): Image path. + * batch\_size (int): Batch size. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * visualization (bool): Whether to save the recognition results as picture files. 
+ * output\_dir (str): Save path of images. - - **Return** + - **Return** - * res (list\[dict\]): The list of recognition results, where each element is dict and each field is: - * save\_path (str, optional): Save path of the result; - * data (numpy.ndarray): The result of portrait segmentation. + * res (list\[dict\]): The list of recognition results, where each element is dict and each field is: + * save\_path (str, optional): Save path of the result. + * data (numpy.ndarray): The result of portrait segmentation. - ```python - def save_inference_model(dirname, - model_filename=None, - params_filename=None, - combined=True) - ``` + - ```python + def save_inference_model(dirname, + model_filename=None, + params_filename=None, + combined=True) + ``` - - Save the model to the specified path. + - Save the model to the specified path. - - **Parameters** - * dirname: Save path. - * model\_filename: model file name,defalt is \_\_model\_\_ - * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True) - * combined: Whether to save the parameters to a unified file. + - **Parameters** + * dirname: Save path. + * model\_filename: Model file name, default is \_\_model\_\_ + * params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True) + * combined: Whether to save the parameters to a unified file. ## IV. Server Deployment @@ -117,9 +117,9 @@ - Run the startup command: - - ```shell - $ hub serving start -m deeplabv3p_xception65_humanseg - ``` + - ```shell + $ hub serving start -m deeplabv3p_xception65_humanseg + ``` - **NOTE:** If GPU is used for prediction, set CUDA_VISIBLE_DEVICES environment variable before the service, otherwise it need not be set. @@ -128,7 +128,7 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python + - ```python import requests import json import cv2 diff --git a/modules/image/semantic_segmentation/humanseg_lite/README_en.md b/modules/image/semantic_segmentation/humanseg_lite/README_en.md index 1cfb7fb1e..e37ba0123 100644 --- a/modules/image/semantic_segmentation/humanseg_lite/README_en.md +++ b/modules/image/semantic_segmentation/humanseg_lite/README_en.md @@ -2,7 +2,7 @@ |Module Name |humanseg_lite| | :--- | :---: | -|Category |image segmentation| +|Category |Image segmentation| |Network|shufflenet| |Dataset|Baidu self-built dataset| |Fine-tuning supported or not|No| @@ -40,20 +40,23 @@ $ hub install humanseg_lite ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III.
Module API Prediction - ### 1、Command line Prediction - ``` - hub run humanseg_lite --input_path "/PATH/TO/IMAGE" - - ``` + - ``` + hub run humanseg_lite --input_path "/PATH/TO/IMAGE" + + ``` + + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) + - ### 2、Prediction Code Example - Image segmentation and video segmentation example: - ```python + - ```python import cv2 import paddlehub as hub @@ -67,7 +70,7 @@ ``` - Video prediction example: - ```python + - ```python import cv2 import numpy as np import paddlehub as hub @@ -99,91 +102,90 @@ - ### 3、API - ```python - def segment(images=None, + - ```python + def segment(images=None, paths=None, batch_size=1, use_gpu=False, visualization=False, output_dir='humanseg_lite_output') - ``` + ``` - - Prediction API, generating segmentation result. + - Prediction API, generating segmentation result. - - **Parameter** + - **Parameter** - * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list\[str\]): image path; - * batch\_size (int): batch size; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - * visualization (bool): Whether to save the results as picture files; - * output\_dir (str): save path of images, humanseg_lite_output by default. + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list\[str\]): Image path. + * batch\_size (int): Batch size. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * visualization (bool): Whether to save the results as picture files. + * output\_dir (str): Save path of images, humanseg_lite_output by default. - - **Return** + - **Return** - * res (list\[dict\]): The list of recognition results, where each element is dict and each field is: - * save\_path (str, optional): Save path of the result; - * data (numpy.ndarray): The result of portrait segmentation. + * res (list\[dict\]): The list of recognition results, where each element is dict and each field is: + * save\_path (str, optional): Save path of the result. + * data (numpy.ndarray): The result of portrait segmentation. - ```python - def video_stream_segment(self, + - ```python + def video_stream_segment(self, frame_org, frame_id, prev_gray, prev_cfd, use_gpu=False): - ``` - - - Prediction API, used to segment video portraits frame by frame. + ``` - Prediction API, used to segment video portraits frame by frame. - - **Parameter** + - **Parameter** - * frame_org (numpy.ndarray): single frame for prediction,ndarray.shape is in the format [H, W, C], BGR; - * frame_id (int): The number of the current frame; - * prev_gray (numpy.ndarray): Grayscale image of the previous network input; - * prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from previous frame. - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * frame_org (numpy.ndarray): Single frame for prediction, ndarray.shape is in the format [H, W, C], BGR. + * frame_id (int): The number of the current frame. + * prev_gray (numpy.ndarray): Grayscale image of the previous network input. + * prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from previous frame. + * use\_gpu (bool): Use GPU or not.
**set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - - **Return** + - **Return** - * img_matting (numpy.ndarray): The result of portrait segmentation; - * cur_gray (numpy.ndarray): Grayscale image of the current network input; - * optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from current frame. + * img_matting (numpy.ndarray): The result of portrait segmentation. + * cur_gray (numpy.ndarray): Grayscale image of the current network input. + * optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from current frame. - ```python - def video_segment(self, - video_path=None, - use_gpu=False, - save_dir='humanseg_lite_video_result'): - ``` + - ```python + def video_segment(self, + video_path=None, + use_gpu=False, + save_dir='humanseg_lite_video_result'): + ``` - - Prediction API to produce video segmentation result. + - Prediction API to produce video segmentation result. - - **Parameter** + - **Parameter** - * video\_path (str): Video path for segmentation。If None, the video will be obtained from the local camera, and a window will display the online segmentation result. - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - * save\_dir (str): save path of video. + * video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * save\_dir (str): Save path of video. - ```python - def save_inference_model(dirname='humanseg_lite_model', - model_filename=None, - params_filename=None, - combined=True) - ``` + - ```python + def save_inference_model(dirname='humanseg_lite_model', + model_filename=None, + params_filename=None, + combined=True) + ``` - - Save the model to the specified path. + - Save the model to the specified path. - - **Parameters** + - **Parameters** - * dirname: Save path. - * model\_filename: model file name,defalt is \_\_model\_\_ - * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True) - * combined: Whether to save the parameters to a unified file. + * dirname: Save path. + * model\_filename: Model file name, default is \_\_model\_\_ + * params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True) + * combined: Whether to save the parameters to a unified file. @@ -193,11 +195,11 @@ - ### Step 1: Start PaddleHub Serving - - Run the startup command: + - Run the startup command: - ```shell - $ hub serving start -m humanseg_lite - ``` + - ```shell + $ hub serving start -m humanseg_lite + ``` - The servitization API is now deployed and the default port number is 8866.
@@ -207,34 +209,34 @@ - With a configured server, use the following lines of code to send the prediction request and obtain the result - ```python - import requests - import json - import base64 + - ```python + import requests + import json + import base64 - import cv2 - import numpy as np + import cv2 + import numpy as np - def cv2_to_base64(image): - data = cv2.imencode('.jpg', image)[1] - return base64.b64encode(data.tostring()).decode('utf8') - def base64_to_cv2(b64str): - data = base64.b64decode(b64str.encode('utf8')) - data = np.fromstring(data, np.uint8) - data = cv2.imdecode(data, cv2.IMREAD_COLOR) - return data - - # Send an HTTP request - org_im = cv2.imread('/PATH/TO/IMAGE') - data = {'images':[cv2_to_base64(org_im)]} - headers = {"Content-type": "application/json"} - url = "http://127.0.0.1:8866/predict/humanseg_lite" - r = requests.post(url=url, headers=headers, data=json.dumps(data)) - - mask =cv2.cvtColor(base64_to_cv2(r.json()["results"][0]['data']), cv2.COLOR_BGR2GRAY) - rgba = np.concatenate((org_im, np.expand_dims(mask, axis=2)), axis=2) - cv2.imwrite("segment_human_lite.png", rgba) - ``` + def cv2_to_base64(image): + data = cv2.imencode('.jpg', image)[1] + return base64.b64encode(data.tostring()).decode('utf8') + def base64_to_cv2(b64str): + data = base64.b64decode(b64str.encode('utf8')) + data = np.fromstring(data, np.uint8) + data = cv2.imdecode(data, cv2.IMREAD_COLOR) + return data + + # Send an HTTP request + org_im = cv2.imread('/PATH/TO/IMAGE') + data = {'images':[cv2_to_base64(org_im)]} + headers = {"Content-type": "application/json"} + url = "http://127.0.0.1:8866/predict/humanseg_lite" + r = requests.post(url=url, headers=headers, data=json.dumps(data)) + + mask =cv2.cvtColor(base64_to_cv2(r.json()["results"][0]['data']), cv2.COLOR_BGR2GRAY) + rgba = np.concatenate((org_im, np.expand_dims(mask, axis=2)), axis=2) + cv2.imwrite("segment_human_lite.png", rgba) + ``` ## V. Release Note @@ -245,7 +247,7 @@ - 1.1.0 - Added video portrait split interface + Added video portrait segmentation interface Added video stream portrait segmentation interface * 1.1.1 diff --git a/modules/image/semantic_segmentation/humanseg_mobile/README_en.md b/modules/image/semantic_segmentation/humanseg_mobile/README_en.md index 7dffb4f43..7af902ced 100644 --- a/modules/image/semantic_segmentation/humanseg_mobile/README_en.md +++ b/modules/image/semantic_segmentation/humanseg_mobile/README_en.md @@ -2,7 +2,7 @@ |Module Name |humanseg_mobile| | :--- | :---: | -|Category |image segmentation| +|Category |Image segmentation| |Network|hrnet| |Dataset|Baidu self-built dataset| |Fine-tuning supported or not|No| @@ -40,17 +40,20 @@ $ hub install humanseg_mobile ``` - - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md) - | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md) + - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md) + | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md) ## III. 
Module API Prediction - ### 1、Command line Prediction - ``` - hub run humanseg_mobile --input_path "/PATH/TO/IMAGE" + - ``` + hub run humanseg_mobile --input_path "/PATH/TO/IMAGE" + + ``` + - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst) + - ``` - ### 2、Prediction Code Example - Image segmentation and video segmentation example: ```python @@ -112,17 +115,17 @@ - **Parameter** - * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR; - * paths (list\[str\]): image path; - * batch\_size (int): batch size; - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - * visualization (bool): Whether to save the results as picture files; + * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR. + * paths (list\[str\]): Image path. + * batch\_size (int): Batch size. + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * visualization (bool): Whether to save the results as picture files. * output\_dir (str): save path of images, humanseg_mobile_output by default. - **Return** * res (list\[dict\]): The list of recognition results, where each element is dict and each field is: - * save\_path (str, optional): Save path of the result; + * save\_path (str, optional): Save path of the result. * data (numpy.ndarray): The result of portrait segmentation. ```python @@ -138,17 +141,17 @@ - **Parameter** - * frame_org (numpy.ndarray): single frame for prediction,ndarray.shape is in the format [H, W, C], BGR; - * frame_id (int): The number of the current frame; - * prev_gray (numpy.ndarray): Grayscale image of the previous network input; + * frame_org (numpy.ndarray): Single frame for prediction, ndarray.shape is in the format [H, W, C], BGR. + * frame_id (int): The number of the current frame. + * prev_gray (numpy.ndarray): Grayscale image of the previous network input. * prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from previous frame. - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** - **Return** - * img_matting (numpy.ndarray): The result of portrait segmentation; - * cur_gray (numpy.ndarray): Grayscale image of the current network input; + * img_matting (numpy.ndarray): The result of portrait segmentation. + * cur_gray (numpy.ndarray): Grayscale image of the current network input. * optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from current frame. @@ -164,7 +167,7 @@ - **Parameter** * video\_path (str): Video path for segmentation。If None, the video will be obtained from the local camera, and a window will display the online segmentation result. - * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** + * use\_gpu (bool): Use GPU or not. **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU** * save\_dir (str): save path of video. @@ -181,8 +184,8 @@ - **Parameters** * dirname: Save path.

@@ -164,7 +167,7 @@

   - **Parameter**

-    * video\_path (str): Video path for segmentation。If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
-    * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
-    * save\_dir (str): save path of video.
+    * video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
+    * use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
+    * save\_dir (str): Save path of video.

     ```python
@@ -181,8 +184,8 @@

   - **Parameters**

     * dirname: Save path.
-    * model\_filename: model file name,defalt is \_\_model\_\_
-    * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True)
+    * model\_filename: Model file name, default is \_\_model\_\_.
+    * params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
     * combined: Whether to save the parameters to a unified file.
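+
+  - For example, a minimal export call based on the parameters documented above (the method name `save_inference_model` is not shown in this hunk and is assumed from PaddleHub's usual module interface; the output path is a placeholder):
+
+    ```python
+    import paddlehub as hub
+
+    model = hub.Module(name='humanseg_mobile')
+    # With combined=True this should write __model__ and a unified __params__
+    # file into ./inference_model (both names are the documented defaults)
+    model.save_inference_model(dirname='./inference_model', combined=True)
+    ```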

@@ -193,11 +196,11 @@

 - ### Step 1: Start PaddleHub Serving

-  - Run the startup command:
+    - Run the startup command:

-    ```shell
-    $ hub serving start -m humanseg_mobile
-    ```
+    - ```shell
+      $ hub serving start -m humanseg_mobile
+      ```

   - The servitization API is now deployed and the default port number is 8866.
diff --git a/modules/image/semantic_segmentation/humanseg_server/README_en.md b/modules/image/semantic_segmentation/humanseg_server/README_en.md
index 6ed70ac64..052b37e2a 100644
--- a/modules/image/semantic_segmentation/humanseg_server/README_en.md
+++ b/modules/image/semantic_segmentation/humanseg_server/README_en.md
@@ -2,7 +2,7 @@

 |Module Name |humanseg_server|
 | :--- | :---: |
-|Category |image segmentation|
+|Category |Image segmentation|
 |Network|hrnet|
 |Dataset|Baidu self-built dataset|
 |Fine-tuning supported or not|No|
@@ -40,17 +40,18 @@
     $ hub install humanseg_server
     ```

-  - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
-    | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
+  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
+    | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)

 ## III. Module API Prediction

 - ### 1、Command line Prediction

-    ```
-    hub run humanseg_server --input_path "/PATH/TO/IMAGE"
-
-    ```
+  - ```
+    hub run humanseg_server --input_path "/PATH/TO/IMAGE"
+    ```
+  - If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)

 - ### 2、Prediction Code Example

   - Image segmentation and video segmentation example:

     ```python
@@ -112,17 +113,17 @@

   - **Parameter**

-    * images (list\[numpy.ndarray\]): image data, ndarray.shape is in the format [H, W, C], BGR;
-    * paths (list\[str\]): image path;
-    * batch\_size (int): batch size;
-    * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
-    * visualization (bool): Whether to save the results as picture files;
-    * output\_dir (str): save path of images, humanseg_server_output by default.
+    * images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
+    * paths (list\[str\]): Image path.
+    * batch\_size (int): Batch size.
+    * use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
+    * visualization (bool): Whether to save the results as picture files.
+    * output\_dir (str): Save path of images, humanseg_server_output by default.

   - **Return**

-    * res (list\[dict\]): The list of recognition results, where each element is dict and each field is:
-      * save\_path (str, optional): Save path of the result;
+    * res (list\[dict\]): The list of recognition results, where each element is a dict and each field is:
+      * save\_path (str, optional): Save path of the result.
       * data (numpy.ndarray): The result of portrait segmentation.

     ```python
@@ -138,17 +139,17 @@

   - **Parameter**

-    * frame_org (numpy.ndarray): single frame for prediction,ndarray.shape is in the format [H, W, C], BGR;
-    * frame_id (int): The number of the current frame;
-    * prev_gray (numpy.ndarray): Grayscale image of the previous network input;
+    * frame_org (numpy.ndarray): Single frame for prediction, ndarray.shape is in the format [H, W, C], BGR.
+    * frame_id (int): The number of the current frame.
+    * prev_gray (numpy.ndarray): Grayscale image of the previous network input.
     * prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from previous frame.
-    * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
+    * use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**

   - **Return**

-    * img_matting (numpy.ndarray): The result of portrait segmentation;
-    * cur_gray (numpy.ndarray): Grayscale image of the current network input;
+    * img_matting (numpy.ndarray): The result of portrait segmentation.
+    * cur_gray (numpy.ndarray): Grayscale image of the current network input.
     * optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from current frame.

@@ -164,8 +165,8 @@

   - **Parameter**

-    * video\_path (str): Video path for segmentation。If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
-    * use\_gpu (bool): use GPU or not; **set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU**
-    * save\_dir (str): save path of video.
+    * video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
+    * use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
+    * save\_dir (str): Save path of video.

     ```python
@@ -181,8 +182,8 @@

   - **Parameters**

     * dirname: Save path.
-    * model\_filename: model file name,defalt is \_\_model\_\_
-    * params\_filename: parameter file name,defalt is \_\_params\_\_(Only takes effect when `combined` is True)
+    * model\_filename: Model file name, default is \_\_model\_\_.
+    * params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
     * combined: Whether to save the parameters to a unified file.

@@ -193,11 +194,11 @@

 - ### Step 1: Start PaddleHub Serving

-  - Run the startup command:
+    - Run the startup command:

-    ```shell
-    $ hub serving start -m humanseg_server
-    ```
+    - ```shell
+      $ hub serving start -m humanseg_server
+      ```

   - The servitization API is now deployed and the default port number is 8866.
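+
+    - For example, to serve from a GPU machine, expose a device before starting (using device index 0 is an assumption for a single-GPU host; skip the export to run on CPU):
+
+      ```shell
+      $ export CUDA_VISIBLE_DEVICES=0
+      $ hub serving start -m humanseg_server
+      ```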
@@ -207,7 +208,7 @@

   - With a configured server, use the following lines of code to send the prediction request and obtain the result

-    ```python
+  - ```python
     import requests
     import json
     import base64
@@ -245,7 +246,7 @@

 - 1.1.0

-  Added video portrait split interface
+  Added video portrait segmentation interface

   Added video stream portrait segmentation interface

diff --git a/modules/video/Video_editing/SkyAR/README_en.md b/modules/video/Video_editing/SkyAR/README_en.md
index 14989bbe7..1b122baa1 100644
--- a/modules/video/Video_editing/SkyAR/README_en.md
+++ b/modules/video/Video_editing/SkyAR/README_en.md
@@ -2,7 +2,7 @@

 |Module Name|SkyAR|
 | :--- | :---: |
-|Category|video editing|
+|Category|Video editing|
 |Network|UNet|
 |Dataset|-|
 |Fine-tuning supported or not|No|
@@ -63,11 +63,11 @@

 - ### 2、Installation

-    ```shell
-    $hub install SkyAR
-    ```
-  - In case of any problems during installation, please refer to:[Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
-    | [Linux_Quickstart](../../../../docs/docs_ch/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_ch/get_start/mac_quickstart.md)
+  - ```shell
+    $ hub install SkyAR
+    ```
+  - In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
+    | [Linux_Quickstart](../../../../docs/docs_en/get_start/linux_quickstart.md) | [Mac_Quickstart](../../../../docs/docs_en/get_start/mac_quickstart.md)

 ## III. Module API Prediction
@@ -99,13 +99,10 @@

-    * video_path(str):input video path.
-    * save_path(str):save videp path.
-    * config(str): SkyBox configuration, all preset configurations are as follows, if you use a custom SkyBox, please set it to None:
-    ```
-    [
-      'cloudy', 'district9ship', 'floatingcastle', 'galaxy', 'jupiter',
-      'rainy', 'sunny', 'sunset', 'supermoon', 'thunderstorm'
-    ]
-    ```
-    * skybox_img(str):custom SkyBox image path
-    * skybox_video(str):custom SkyBox video path
-    * is_video_sky(bool):customize whether SkyBox is a video
+    * video_path(str): Input video path.
+    * save_path(str): Save video path.
+    * config(str): SkyBox configuration. All preset configurations are as follows: `['cloudy', 'district9ship', 'floatingcastle', 'galaxy', 'jupiter', 'rainy', 'sunny', 'sunset', 'supermoon', 'thunderstorm']`. If you use a custom SkyBox, set it to None.
+    * skybox_img(str): Custom SkyBox image path.
+    * skybox_video(str): Custom SkyBox video path.
+    * is_video_sky(bool): Whether the custom SkyBox is a video.