Evaluating a model requires general metrics that can measure and compare the accuracy of different models. Each framework ships its own metric module, with different features and APIs. LPOT Metrics supports code-free configuration through a yaml file: with the built-in metrics, LPOT can measure accuracy and performance without requiring code changes from the user. For special cases, users can also register their own metric classes with LPOT.
Users can specify an LPOT built-in metric as shown below:
```yaml
evaluation:
  accuracy:
    metric:
      topk: 1
```
Users can also register their own metric as follows:
```python
class Metric(object):
    def __init__(self):
        # initialize storage for predictions and labels
        self.pred_list = []
        self.label_list = []

    def update(self, preds, labels):
        # add preds and labels of the current batch to storage
        self.pred_list.extend(preds)
        self.label_list.extend(labels)

    def reset(self):
        # clear preds and labels storage
        self.pred_list, self.label_list = [], []

    def result(self):
        # calculate accuracy over all stored predictions and labels
        correct = sum(int(p == l) for p, l in zip(self.pred_list, self.label_list))
        return correct / len(self.label_list)
```
The result() function returns a higher-is-better scalar to reflect model accuracy on an evaluation dataset.
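As a concrete illustration, a user-defined metric can also simply wrap an existing library. The sketch below is only an example: the class name and the use of NumPy and scikit-learn are illustrative choices, not part of LPOT.

```python
import numpy as np
from sklearn.metrics import f1_score   # illustrative dependency, not required by LPOT

class BinaryF1(object):
    """Hypothetical user-defined metric following the interface above."""
    def __init__(self):
        self.pred_list = []
        self.label_list = []

    def update(self, preds, labels):
        # flatten and store the current batch of 0/1 predictions and labels
        self.pred_list.extend(np.asarray(preds).ravel().tolist())
        self.label_list.extend(np.asarray(labels).ravel().tolist())

    def reset(self):
        self.pred_list, self.label_list = [], []

    def result(self):
        # higher-is-better scalar, as LPOT expects
        return f1_score(self.label_list, self.pred_list)
```

A class like this would then be passed to the registration step shown below.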
After defining the metric class, register it with LPOT by passing the metric class together with a user-defined metric name:
```python
from lpot.quantization import Quantization, common

quantizer = Quantization(yaml_file)
quantizer.model = common.Model(graph)
quantizer.metric = common.Metric(NewMetric, 'metric_name')
quantizer.calib_dataloader = dataloader
q_model = quantizer()
```
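During tuning, the registered metric is exercised through the reset → update → result lifecycle described above. The following is a simplified sketch of that contract only; it is not LPOT's actual internal evaluation code, and `model_predict` and `eval_dataloader` are placeholders:

```python
def evaluate(model_predict, eval_dataloader, metric):
    # Illustration of the metric lifecycle: reset once, update per batch,
    # then read back a single higher-is-better scalar.
    metric.reset()
    for inputs, labels in eval_dataloader:
        preds = model_predict(inputs)     # run the (quantized) model on one batch
        metric.update(preds, labels)      # accumulate this batch's results
    return metric.result()
```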
LPOT provides built-in metrics that are popularly used in industry; the tables below list them for each supported framework.
Refer to this HelloWorld example for how to configure a built-in metric.
**TensorFlow**

Metric | Parameters | Inputs | Comments | Usage (in yaml file) |
---|---|---|---|---|
topk(k) | k (int, default=1): number of top elements to consider when computing accuracy | preds, labels | Computes top-k prediction accuracy. | metric: topk: k: 1 |
Accuracy() | None | preds, labels | Computes the accuracy classification score. | metric: Accuracy: {} |
Loss() | None | preds, labels | A dummy metric for directly printing loss; it calculates the average of the predictions. Refer to the MXNet docs for details. | metric: Loss: {} |
MAE() | None | preds, labels | Computes Mean Absolute Error (MAE) loss. | metric: MAE: {} |
RMSE() | None | preds, labels | Computes Root Mean Squared Error (RMSE) loss. | metric: RMSE: {} |
MSE() | None | preds, labels | Computes Mean Squared Error (MSE) loss. | metric: MSE: {} |
F1() | None | preds, labels | Computes the F1 score of a binary classification problem. | metric: F1: {} |
COCOmAP(anno_path) | anno_path (str, default=None): annotation path | preds, labels | preds is a tuple of length 3 or 4: with length 3 it contains boxes, scores, and classes in turn; with length 4 it contains target_boxes_num, boxes, scores, and classes in turn. labels is a tuple containing bbox, str_label, int_label, and image_id in turn; one of str_label and int_label may have length 0. | metric: COCOmAP: anno_path: /path/to/annotation (if anno_path is not set, the metric uses the built-in coco_label_map) |
BLEU() | None | preds, labels | Computes the BLEU score between labels and predictions. This is an approximate BLEU scoring method since it does not glue word pieces or decode the ids and tokenize the output; by default it uses an ngram order of 4 and applies a brevity penalty, and it does not use beam search. | metric: BLEU: {} |
SquadF1() | None | preds, labels | Evaluates v1.1 of the SQuAD dataset. | metric: SquadF1: {} |
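To make the COCOmAP input layout above concrete, the sketch below builds one preds/labels pair in the length-3 form described in the table. All shapes and values are made up purely for illustration and depend in practice on the model and dataloader:

```python
import numpy as np

# preds as a length-3 tuple: (boxes, scores, classes) for a batch of one image
preds = (
    np.array([[[0.12, 0.20, 0.55, 0.61]]]),   # boxes detected in the image
    np.array([[0.93]]),                        # confidence score per box
    np.array([[1.0]]),                         # class id per box
)

# labels as a tuple: (bbox, str_label, int_label, image_id);
# one of str_label / int_label may be empty
labels = (
    np.array([[[0.10, 0.18, 0.56, 0.63]]]),   # ground-truth boxes
    [['person']],                              # str_label
    [],                                        # int_label (empty here)
    ['000000397133'],                          # image_id
)
```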
**PyTorch**

Metric | Parameters | Inputs | Comments | Usage (in yaml file) |
---|---|---|---|---|
topk(k) | k (int, default=1): number of top elements to consider when computing accuracy | preds, labels | Calculates the top-k categorical accuracy. | metric: topk: k: 1 |
Accuracy() | None | preds, labels | Calculates the accuracy for binary, multiclass, and multilabel data. Refer to the PyTorch docs for details. | metric: Accuracy: {} |
Loss() | None | preds, labels | A dummy metric for directly printing loss; it calculates the average of the predictions. Refer to the MXNet docs for details. | metric: Loss: {} |
MAE() | None | preds, labels | Calculates the mean absolute error. Refer to the PyTorch docs for details. | metric: MAE: {} |
RMSE() | None | preds, labels | Calculates the root mean squared error. Refer to the PyTorch docs for details. | metric: RMSE: {} |
MSE() | None | preds, labels | Calculates the mean squared error. Refer to the PyTorch docs for details. | metric: MSE: {} |
F1() | None | preds, labels | Computes the F1 score of a binary classification problem. | metric: F1: {} |
**MXNet**

Metric | Parameters | Inputs | Comments | Usage (in yaml file) |
---|---|---|---|---|
topk(k) | k (int, default=1): number of top elements to consider when computing accuracy | preds, labels | Computes top-k prediction accuracy. | metric: topk: k: 1 |
Accuracy() | None | preds, labels | Computes the accuracy classification score. Refer to the MXNet docs for details. | metric: Accuracy: {} |
Loss() | None | preds, labels | A dummy metric for directly printing loss; it calculates the average of the predictions. Refer to the MXNet docs for details. | metric: Loss: {} |
MAE() | None | preds, labels | Computes Mean Absolute Error (MAE) loss. Refer to the MXNet docs for details. | metric: MAE: {} |
RMSE() | None | preds, labels | Computes Root Mean Squared Error (RMSE) loss. Refer to the MXNet docs for details. | metric: RMSE: {} |
MSE() | None | preds, labels | Computes Mean Squared Error (MSE) loss. Refer to the MXNet docs for details. | metric: MSE: {} |
F1() | None | preds, labels | Computes the F1 score of a binary classification problem. Refer to the MXNet docs for details. | metric: F1: {} |
**ONNX Runtime**

Metric | Parameters | Inputs | Comments | Usage (in yaml file) |
---|---|---|---|---|
topk(k) | k (int, default=1): number of top elements to consider when computing accuracy | preds, labels | Computes top-k prediction accuracy. | metric: topk: k: 1 |
Accuracy() | None | preds, labels | Computes the accuracy classification score. | metric: Accuracy: {} |
Loss() | None | preds, labels | A dummy metric for directly printing loss; it calculates the average of the predictions. Refer to the MXNet docs for details. | metric: Loss: {} |
MAE() | None | preds, labels | Computes Mean Absolute Error (MAE) loss. | metric: MAE: {} |
RMSE() | None | preds, labels | Computes Root Mean Squared Error (RMSE) loss. | metric: RMSE: {} |
MSE() | None | preds, labels | Computes Mean Squared Error (MSE) loss. | metric: MSE: {} |
F1() | None | preds, labels | Computes the F1 score of a binary classification problem. | metric: F1: {} |