Cleaning up custom metrics from docs strings (mlflow#10016)
Signed-off-by: Sunish Sheth <[email protected]>
sunishsheth2009 authored Oct 20, 2023
1 parent cd2b3fe commit 95d9217
Showing 3 changed files with 6 additions and 6 deletions.
4 changes: 2 additions & 2 deletions docs/source/models.rst
@@ -3644,10 +3644,10 @@ each model:
For additional examples demonstrating the use of ``mlflow.evaluate()`` with LLMs, check out the
`MLflow LLMs example repository <https://github.com/mlflow/mlflow/tree/master/examples/llms>`_.

- Evaluating with Custom Metrics
+ Evaluating with Extra Metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- If the default set of metrics is insufficient, you can supply ``custom_metrics`` and ``custom_artifacts``
+ If the default set of metrics is insufficient, you can supply ``extra_metrics`` and ``custom_artifacts``
to :py:func:`mlflow.evaluate()` to produce custom metrics and artifacts for the model(s) that you're evaluating.
The following `short example from the MLflow GitHub Repository
<https://github.com/mlflow/mlflow/blob/master/examples/evaluation/evaluate_with_custom_metrics.py>`_
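For context on the rename above: ``extra_metrics`` accepts the same kind of metric objects that ``custom_metrics`` did. Below is a minimal sketch, not part of this commit; the toy data and function names are placeholders, and it assumes ``mlflow.models.make_metric`` wrapping an ``eval_fn(eval_df, builtin_metrics)`` callable and a ``custom_artifacts`` callable taking ``(eval_df, builtin_metrics, artifacts_dir)``, as in the linked example.

```python
import mlflow
import numpy as np
import pandas as pd
from mlflow.models import make_metric
from sklearn.linear_model import LinearRegression

# Toy data; "target" holds the ground-truth values.
eval_data = pd.DataFrame(
    {"feature": [1.0, 2.0, 3.0, 4.0], "target": [1.1, 1.9, 3.2, 4.1]}
)


def squared_diff_plus_one(eval_df, _builtin_metrics):
    # The evaluator passes a DataFrame with "prediction" and "target" columns.
    return np.sum(np.abs(eval_df["prediction"] - eval_df["target"] + 1) ** 2)


def prediction_target_scatter(eval_df, _builtin_metrics, artifacts_dir):
    # Custom artifact callables return a dict of artifact name -> saved file.
    import matplotlib.pyplot as plt

    plt.scatter(eval_df["prediction"], eval_df["target"])
    path = f"{artifacts_dir}/pred_vs_target.png"
    plt.savefig(path)
    return {"pred_vs_target": path}


with mlflow.start_run():
    model_info = mlflow.sklearn.log_model(
        LinearRegression().fit(eval_data[["feature"]], eval_data["target"]), "model"
    )
    mlflow.evaluate(
        model_info.model_uri,
        eval_data,
        targets="target",
        model_type="regressor",
        extra_metrics=[make_metric(eval_fn=squared_diff_plus_one, greater_is_better=False)],
        custom_artifacts=[prediction_target_scatter],
    )
```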
4 changes: 2 additions & 2 deletions examples/evaluation/README.md
@@ -2,7 +2,7 @@

The examples in this directory demonstrate how to use the `mlflow.evaluate()` API. Specifically,
they show how to evaluate a PyFunc model on a specified dataset using the builtin default evaluator
- and specified custom metrics, where the resulting metrics & artifacts are logged to MLflow Tracking.
+ and specified extra metrics, where the resulting metrics & artifacts are logged to MLflow Tracking.
They also show how to specify validation thresholds for the resulting metrics to validate the quality
of your model. See full list of examples below:

@@ -18,7 +18,7 @@ of your model. See full list of examples below:
with a comprehensive list of custom metric functions on dataset loaded by `sklearn.datasets.fetch_california_housing`
- Example `evaluate_with_model_validation.py` trains both a candidate xgboost `XGBClassifier` model
and a baseline `DummyClassifier` model on dataset loaded by `shap.datasets.adult`. Then, it validates
- the candidate model against specified thresholds on both builtin and custom metrics and the dummy model.
+ the candidate model against specified thresholds on both builtin and extra metrics and the dummy model.

#### Prerequisites

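For context on the validation item mentioned in the README hunk above, here is a hedged sketch of threshold-based validation. It is illustrative only: the threshold values are arbitrary, the ``accuracy_score`` metric key is an assumption about the default classifier evaluator's output, and it assumes ``MetricThreshold``, ``validation_thresholds``, and ``baseline_model`` behave as described in the README.

```python
import mlflow
import shap
import xgboost
from mlflow.models import MetricThreshold
from sklearn.dummy import DummyClassifier

# Adult census dataset, as referenced in the README item above.
X, y = shap.datasets.adult()
y = y.astype(int)

with mlflow.start_run():
    candidate_uri = mlflow.sklearn.log_model(
        xgboost.XGBClassifier().fit(X, y), "candidate"
    ).model_uri
    baseline_uri = mlflow.sklearn.log_model(
        DummyClassifier(strategy="uniform").fit(X, y), "baseline"
    ).model_uri

    # Require a minimum accuracy plus a margin over the baseline model;
    # mlflow.evaluate raises if the candidate fails any threshold.
    thresholds = {
        "accuracy_score": MetricThreshold(
            threshold=0.8,
            min_absolute_change=0.05,
            min_relative_change=0.05,
            greater_is_better=True,
        )
    }

    mlflow.evaluate(
        candidate_uri,
        X.assign(label=y),
        targets="label",
        model_type="classifier",
        validation_thresholds=thresholds,
        baseline_model=baseline_uri,
    )
```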
4 changes: 2 additions & 2 deletions mlflow/models/evaluation/base.py
@@ -1512,10 +1512,10 @@ def model(inputs):
:param extra_metrics:
(Optional) A list of :py:class:`EvaluationMetric <mlflow.models.EvaluationMetric>` objects.
See the `mlflow.metrics` module for more information about the
- builtin metrics and how to define custom metrics
+ builtin metrics and how to define extra metrics
.. code-block:: python
- :caption: Example usage of custom metrics
+ :caption: Example usage of extra metrics
import mlflow
import numpy as np
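The docstring's own code example is truncated by the diff view above (it continues past ``import numpy as np``). As a rough, unofficial sketch of an ``EvaluationMetric`` that could be passed via ``extra_metrics`` (the weighting scheme, metric name, and ``eval_fn(eval_df, builtin_metrics)`` signature are illustrative assumptions, not a reconstruction of the real docstring example):

```python
import numpy as np
from mlflow.models import make_metric


def weighted_mean_squared_error(eval_df, _builtin_metrics):
    # eval_df provides "prediction" and "target" columns for each evaluated row.
    return np.average(
        (eval_df["prediction"] - eval_df["target"]) ** 2,
        weights=np.abs(eval_df["target"]) + 1,
    )


# Wrap the callable into the EvaluationMetric object that ``extra_metrics``
# expects; the metric name defaults to the function name.
wmse = make_metric(eval_fn=weighted_mean_squared_error, greater_is_better=False)
```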
