From 95d9217c2879f9046a3ed0ce37d7d7e4f17b9cff Mon Sep 17 00:00:00 2001
From: Sunish Sheth
Date: Thu, 19 Oct 2023 22:00:07 -0700
Subject: [PATCH] Cleaning up custom metrics from docs strings (#10016)

Signed-off-by: Sunish Sheth
---
 docs/source/models.rst           | 4 ++--
 examples/evaluation/README.md    | 4 ++--
 mlflow/models/evaluation/base.py | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/source/models.rst b/docs/source/models.rst
index d94dc8f6bbae8..d3d7897daebf3 100644
--- a/docs/source/models.rst
+++ b/docs/source/models.rst
@@ -3644,10 +3644,10 @@ each model:
 For additional examples demonstrating the use of ``mlflow.evaluate()`` with LLMs, check out
 the `MLflow LLMs example repository `_.
 
-Evaluating with Custom Metrics
+Evaluating with Extra Metrics
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-If the default set of metrics is insufficient, you can supply ``custom_metrics`` and ``custom_artifacts``
+If the default set of metrics is insufficient, you can supply ``extra_metrics`` and ``custom_artifacts``
 to :py:func:`mlflow.evaluate()` to produce custom metrics and artifacts for the model(s) that you're
 evaluating. The following `short example from the MLflow GitHub Repository
 `_
diff --git a/examples/evaluation/README.md b/examples/evaluation/README.md
index 8dcd8fe7a6f50..eaec69df35b88 100644
--- a/examples/evaluation/README.md
+++ b/examples/evaluation/README.md
@@ -2,7 +2,7 @@
 
 The examples in this directory demonstrate how to use the `mlflow.evaluate()` API. Specifically,
 they show how to evaluate a PyFunc model on a specified dataset using the builtin default evaluator
-and specified custom metrics, where the resulting metrics & artifacts are logged to MLflow Tracking.
+and specified extra metrics, where the resulting metrics & artifacts are logged to MLflow Tracking.
 They also show how to specify validation thresholds for the resulting metrics to validate the quality
 of your model. See full list of examples below:
 
@@ -18,7 +18,7 @@ of your model. See full list of examples below:
   with a comprehensive list of custom metric functions on dataset loaded by
   `sklearn.datasets.fetch_california_housing`
 - Example `evaluate_with_model_validation.py` trains both a candidate xgboost `XGBClassifier` model
   and a baseline `DummyClassifier` model on dataset loaded by `shap.datasets.adult`. Then, it validates
-  the candidate model against specified thresholds on both builtin and custom metrics and the dummy model.
+  the candidate model against specified thresholds on both builtin and extra metrics and the dummy model.
 
 #### Prerequisites
diff --git a/mlflow/models/evaluation/base.py b/mlflow/models/evaluation/base.py
index 9a32d2af0b4d7..7f9a92d8c6d7f 100644
--- a/mlflow/models/evaluation/base.py
+++ b/mlflow/models/evaluation/base.py
@@ -1512,10 +1512,10 @@ def model(inputs):
     :param extra_metrics:
         (Optional) A list of :py:class:`EvaluationMetric ` objects. See
         the `mlflow.metrics` module for more information about the
-        builtin metrics and how to define custom metrics
+        builtin metrics and how to define extra metrics
 
         .. code-block:: python
-            :caption: Example usage of custom metrics
+            :caption: Example usage of extra metrics
 
             import mlflow
             import numpy as np
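
Note (not part of the patch): for readers unfamiliar with the renamed argument, below is a minimal sketch of how ``extra_metrics`` is passed to ``mlflow.evaluate()``. The metric definition, column names, and toy data are illustrative assumptions, not taken from this diff or the MLflow docs; it assumes an MLflow 2.x release where ``extra_metrics`` has replaced ``custom_metrics`` and where static-dataset evaluation (no model argument) is supported.

    # Illustrative sketch only; names and data are hypothetical.
    import mlflow
    import numpy as np
    import pandas as pd


    def root_mean_squared_error_plus_one(eval_df, _builtin_metrics):
        # eval_df carries "prediction" and "target" columns built by the default evaluator.
        return np.sqrt(((eval_df["prediction"] - eval_df["target"]) ** 2).mean()) + 1


    # Wrap the function as an EvaluationMetric; the metric name is inferred from the function name.
    rmse_plus_one = mlflow.models.make_metric(
        eval_fn=root_mean_squared_error_plus_one,
        greater_is_better=False,
    )

    # Toy static dataset that already contains model predictions.
    eval_data = pd.DataFrame(
        {
            "label": [1.0, 2.0, 3.0],
            "pred": [1.1, 1.9, 3.2],
        }
    )

    with mlflow.start_run():
        results = mlflow.evaluate(
            data=eval_data,
            targets="label",
            predictions="pred",
            model_type="regressor",
            extra_metrics=[rmse_plus_one],
        )
        print(results.metrics)

The same ``extra_metrics`` list can be supplied alongside a logged model URI instead of a static dataset; only the keyword name changes in this PR, not the calling pattern.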