diff --git a/0.5.10/cloud/billing/bills/index.html b/0.5.10/cloud/billing/bills/index.html index 5f4125f03..64987049c 100644 --- a/0.5.10/cloud/billing/bills/index.html +++ b/0.5.10/cloud/billing/bills/index.html @@ -10,13 +10,13 @@ - - + +
- - + + \ No newline at end of file diff --git a/0.5.10/cloud/billing/index.html b/0.5.10/cloud/billing/index.html index 401e2a4f6..aa4c4ee68 100644 --- a/0.5.10/cloud/billing/index.html +++ b/0.5.10/cloud/billing/index.html @@ -10,13 +10,13 @@ - - + +
- - + + \ No newline at end of file diff --git a/0.5.10/cloud/billing/recharge/index.html b/0.5.10/cloud/billing/recharge/index.html index 356da4b6c..6011d3f96 100644 --- a/0.5.10/cloud/billing/recharge/index.html +++ b/0.5.10/cloud/billing/recharge/index.html @@ -10,13 +10,13 @@ - - + +
- - + + \ No newline at end of file diff --git a/0.5.10/cloud/billing/refund/index.html b/0.5.10/cloud/billing/refund/index.html index 1a24e69b1..c96afc148 100644 --- a/0.5.10/cloud/billing/refund/index.html +++ b/0.5.10/cloud/billing/refund/index.html @@ -10,13 +10,13 @@ - - + +
- - + + \ No newline at end of file diff --git a/0.5.10/cloud/billing/voucher/index.html b/0.5.10/cloud/billing/voucher/index.html index 000a0471e..007349f25 100644 --- a/0.5.10/cloud/billing/voucher/index.html +++ b/0.5.10/cloud/billing/voucher/index.html @@ -10,13 +10,13 @@ - - + +
- - + + \ No newline at end of file diff --git a/0.5.10/cloud/index.html b/0.5.10/cloud/index.html index 2f2f64436..d96cc0d13 100644 --- a/0.5.10/cloud/index.html +++ b/0.5.10/cloud/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Starwhale Cloud User Guide

Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is https://cloud.starwhale.cn.

- - + + \ No newline at end of file diff --git a/0.5.10/community/contribute/index.html b/0.5.10/community/contribute/index.html index a10c67f74..fca71b226 100644 --- a/0.5.10/community/contribute/index.html +++ b/0.5.10/community/contribute/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Contribute to Starwhale

Getting Involved/Contributing

We welcome and encourage all contributions to Starwhale, including but not limited to:

  • Describe the problems encountered during use.
  • Submit feature requests.
  • Discuss in Slack and GitHub Issues.
  • Code review.
  • Improve docs, tutorials and examples.
  • Fix bugs.
  • Add test cases.
  • Improve code readability and code comments.
  • Develop new features.
  • Write enhancement proposals.

You can get involved, get updates and contact Starwhale developers in the following ways:

Starwhale Resources

Code Structure

  • client: swcli and Python SDK with Pure Python3, which includes all Standalone Instance features.
    • api: Python SDK.
    • cli: Command Line Interface entrypoint.
    • base: Python base abstract.
    • core: Starwhale core concepts, including Dataset, Model, Runtime, Project, Job, Evaluation, etc.
    • utils: Python utilities lib.
  • console: frontend with React + TypeScript.
  • server: Starwhale Controller written in Java, which includes all Starwhale Cloud Instance backend APIs.
  • docker: Helm charts and Dockerfiles.
  • docs: Starwhale official documentation.
  • example: Example code.
  • scripts: Bash and Python scripts for E2E testing, software releases, etc.

Fork and clone the repository

You will need to fork the Starwhale repository and clone it to your local machine.

  • Fork the Starwhale repository: Fork Starwhale GitHub Repo. For more usage details, please refer to: Fork a repo

  • Install Git-LFS: Git-LFS

     git lfs install
  • Clone code to local machine

    git clone https://github.com/${your username}/starwhale.git

Development environment for Standalone Instance

Standalone Instance is written in Python3. When you want to modify swcli and the SDK, you need to set up the development environment.

Standalone development environment prerequisites

  • OS: Linux or macOS
  • Python: 3.7~3.11
  • Docker: >=19.03(optional)
  • Python isolated env tools: Python venv, virtualenv or conda, etc.

Building from source code

Following the previous step, the code has been cloned into the local starwhale directory; enter the client subdirectory:

cd starwhale/client

Create an isolated python environment with conda:

conda create -n starwhale-dev python=3.8 -y
conda activate starwhale-dev

Install client package and python dependencies into the starwhale-dev environment:

make install-sw
make install-dev-req

Validate with the swcli --version command. In the development environment, the version is 0.0.0.dev0:

❯ swcli --version
swcli, version 0.0.0.dev0

❯ which swcli
/home/username/anaconda3/envs/starwhale-dev/bin/swcli

Modifying the code

When you modify the code, you do not need to install the python package (i.e., run the make install-sw command) again. The .editorconfig file will be picked up by most IDEs and code editors, which helps maintain consistent coding styles across developers.

Lint and Test

Run unit tests, E2E tests, mypy lint, flake8 lint and isort checks in the starwhale directory.

make client-all-check

Development environment for Cloud Instance

Cloud Instance is written in Java(backend) and React+TypeScript(frontend).

Development environment for Console

Development environment for Server

  • Language: Java
  • Build tool: Maven
  • Development framework: Spring Boot + MyBatis
  • Unit test framework: JUnit 5
    • Mockito used for mocking
    • Hamcrest used for assertion
    • Testcontainers used for providing lightweight, throwaway instances of common databases and Selenium web browsers that can run in a Docker container.
  • Code style check tool: maven-checkstyle-plugin

Server development environment prerequisites

  • OS: Linux, macOS or Windows
  • Docker: >=19.03
  • JDK: >=11
  • Maven: >=3.8.1
  • MySQL: >=8.0.29
  • Minio
  • Kubernetes cluster/Minikube (if you don't have a k8s cluster, you can use Minikube as an alternative for development and debugging)

Modify the code and add unit tests

Now you can enter the corresponding module to modify and adjust the code on the server side. The main business code directory is src/main/java, and the unit test directory is src/test/java.

Execute code check and run unit tests

cd starwhale/server
mvn clean test

Deploy the server at local machine

  • Dependent services that need to be deployed

    • Minikube (optional; it can be used when there is no k8s cluster. Installation doc: Minikube)

      minikube start
      minikube addons enable ingress
      minikube addons enable ingress-dns
    • Mysql

      docker run --name sw-mysql -d \
      -p 3306:3306 \
      -e MYSQL_ROOT_PASSWORD=starwhale \
      -e MYSQL_USER=starwhale \
      -e MYSQL_PASSWORD=starwhale \
      -e MYSQL_DATABASE=starwhale \
      mysql:latest
    • Minio

      docker run --name minio -d \
      -p 9000:9000 --publish 9001:9001 \
      -e MINIO_DEFAULT_BUCKETS='starwhale' \
      -e MINIO_ROOT_USER="minioadmin" \
      -e MINIO_ROOT_PASSWORD="minioadmin" \
      bitnami/minio:latest
  • Package server program

    If you need to deploy the front-end together with the server, run the front-end build first and then execute mvn clean package; the compiled front-end files will be packaged automatically.

    Use the following command to package the program

      cd starwhale/server
    mvn clean package
  • Specify the environment required for server startup

    # Minio env
    export SW_STORAGE_ENDPOINT=http://${Minio IP,default is:127.0.0.1}:9000
    export SW_STORAGE_BUCKET=${Minio bucket,default is:starwhale}
    export SW_STORAGE_ACCESSKEY=${Minio accessKey,default is:starwhale}
    export SW_STORAGE_SECRETKEY=${Minio secretKey,default is:starwhale}
    export SW_STORAGE_REGION=${Minio region,default is:local}
    # kubernetes env
    export KUBECONFIG=${the '.kube' file path}/.kube/config

    export SW_INSTANCE_URI=http://${Server IP}:8082
    export SW_METADATA_STORAGE_IP=${Mysql IP,default: 127.0.0.1}
    export SW_METADATA_STORAGE_PORT=${Mysql port,default: 3306}
    export SW_METADATA_STORAGE_DB=${Mysql dbname,default: starwhale}
    export SW_METADATA_STORAGE_USER=${Mysql user,default: starwhale}
    export SW_METADATA_STORAGE_PASSWORD=${user password,default: starwhale}
  • Deploy server service

    You can use the IDE or the command to deploy.

    java -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar
  • Debug

    There are two ways to debug the modified function:

    • Use swagger-ui for interface debugging: visit /swagger-ui/index.html to find the corresponding API.
    • Debug the corresponding function directly in the UI (provided that the front-end code has been built in advance according to the packaging instructions).
- - + + \ No newline at end of file diff --git a/0.5.10/concepts/index.html b/0.5.10/concepts/index.html index 786b9a4c6..ad9e73261 100644 --- a/0.5.10/concepts/index.html +++ b/0.5.10/concepts/index.html @@ -10,13 +10,13 @@ - - + +
- - + + \ No newline at end of file diff --git a/0.5.10/concepts/names/index.html b/0.5.10/concepts/names/index.html index 773dbc9fd..4ef4b938b 100644 --- a/0.5.10/concepts/names/index.html +++ b/0.5.10/concepts/names/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Names in Starwhale

Names mean project names, model names, dataset names, runtime names, and tag names.

Names Limitation

  • Names are case-insensitive.
  • A name MUST only consist of letters A-Z a-z, digits 0-9, the hyphen character -, the dot character ., and the underscore character _.
  • A name should always start with a letter or the _ character.
  • The maximum length of a name is 80 characters.
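
These rules can be checked with a single pattern. Below is a minimal sketch based only on the limits listed above; the actual validation logic used by Starwhale may differ.

import re

# start with a letter or "_", then letters, digits, ".", "-" or "_"; at most 80 characters in total
NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]{0,79}$")

def is_valid_name(name: str) -> bool:
    return bool(NAME_RE.match(name))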

Names uniqueness requirement

  • The resource name should be a unique string within its owner. For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project.
  • The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.
  • Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.
  • Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".
  • Garbage-collected resources' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".
- - + + \ No newline at end of file diff --git a/0.5.10/concepts/project/index.html b/0.5.10/concepts/project/index.html index bd633158b..c3fa0dcda 100644 --- a/0.5.10/concepts/project/index.html +++ b/0.5.10/concepts/project/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Project in Starwhale

"Project" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.

Starwhale Server/Cloud projects are grouped by accounts. Starwhale Standalone does not have accounts, so you will not see any account name prefix in Starwhale Standalone projects. Starwhale Server/Cloud projects can be either "public" or "private". A public project means that all users on the same instance are assigned a "guest" role to the project by default. For more information about roles, see Roles and permissions in Starwhale.

A self project is created automatically and configured as the default project in Starwhale Standalone.

- - + + \ No newline at end of file diff --git a/0.5.10/concepts/roles-permissions/index.html b/0.5.10/concepts/roles-permissions/index.html index 610e5ea94..8b00aeb06 100644 --- a/0.5.10/concepts/roles-permissions/index.html +++ b/0.5.10/concepts/roles-permissions/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Roles and permissions in Starwhale

Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions; Starwhale Standalone does not. The Administrator role is automatically created and assigned to the user "admin". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.

Projects have three roles:

  • Admin - Project administrators can read and write project data and assign project roles to users.
  • Maintainer - Project maintainers can read and write project data.
  • Guest - Project guests can only read project data.
Action                   Admin   Maintainer   Guest
Manage project members   Yes     -            -
Edit project             Yes     Yes          -
View project             Yes     Yes          Yes
Create evaluations       Yes     Yes          -
Remove evaluations       Yes     Yes          -
View evaluations         Yes     Yes          Yes
Create datasets          Yes     Yes          -
Update datasets          Yes     Yes          -
Remove datasets          Yes     Yes          -
View datasets            Yes     Yes          Yes
Create models            Yes     Yes          -
Update models            Yes     Yes          -
Remove models            Yes     Yes          -
View models              Yes     Yes          Yes
Create runtimes          Yes     Yes          -
Update runtimes          Yes     Yes          -
Remove runtimes          Yes     Yes          -
View runtimes            Yes     Yes          Yes

The user who creates a project becomes the first project administrator. They can assign roles to other users later.

- - + + \ No newline at end of file diff --git a/0.5.10/concepts/versioning/index.html b/0.5.10/concepts/versioning/index.html index 850bb28e3..477ad4bff 100644 --- a/0.5.10/concepts/versioning/index.html +++ b/0.5.10/concepts/versioning/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Resource versioning in Starwhale

  • Starwhale manages the history of all models, datasets, and runtimes. Every update to a specific resource appends a new version of the history.
  • Versions are identified by a version id, which is a random string generated automatically by Starwhale, and are ordered by their creation time.
  • Versions can have tags. Starwhale uses version tags to provide a human-friendly representation of versions. By default, Starwhale attaches a default tag to each version. The default tag is the letter "v", followed by a number. For each versioned resource, the first version tag is always tagged with "v0", the second version is tagged with "v1", and so on. And there is a special tag "latest" that always points to the last version. When a version is removed, its default tag will not be reused. For example, there is a model with tags "v0, v1, v2". When "v2" is removed, tags will be "v0, v1". And the following tag will be "v3" instead of "v2" again. You can attach your own tags to any version and remove them at any time.
  • Starwhale uses a linear history model. There is neither branch nor cycle in history.
  • History cannot be rolled back. When a version is to be reverted, Starwhale clones the version and appends it as a new version to the end of the history. Versions in history can be manually removed and recovered.
- - + + \ No newline at end of file diff --git a/0.5.10/dataset/index.html b/0.5.10/dataset/index.html index 38bf5a413..b6d38ee4b 100644 --- a/0.5.10/dataset/index.html +++ b/0.5.10/dataset/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

Starwhale Dataset User Guide

Design Overview

Starwhale Dataset Positioning

The Starwhale Dataset contains three core stages: data construction, data loading, and data visualization. It is a data management tool for the ML/DL field. Starwhale Dataset can directly use the environment built by Starwhale Runtime, and can be seamlessly integrated with Starwhale Model and Starwhale Evaluation. It is an important part of the Starwhale MLOps toolchain.

According to the classification of MLOps Roles in Machine Learning Operations (MLOps): Overview, Definition, and Architecture, the three stages of Starwhale Dataset target the following user groups:

  • Data construction: Data Engineer, Data Scientist
  • Data loading: Data Scientist, ML Developer
  • Data visualization: Data Engineer, Data Scientist, ML Developer

mlops-users

Core Functions

  • Efficient loading: The original dataset files are stored in external storage such as OSS or NAS, and are loaded on demand without having to save to disk.
  • Simple construction: Supports one-click dataset construction from Image/Video/Audio directories, json files and Huggingface datasets, and also supports writing Python code to build completely custom datasets.
  • Versioning: Can perform version tracking, data append and other operations, and avoid duplicate data storage through the internally abstracted ObjectStore.
  • Sharing: Implement bidirectional dataset sharing between Standalone instances and Cloud/Server instances through the swcli dataset copy command.
  • Visualization: The web interface of Cloud/Server instances can present multi-dimensional, multi-type data visualization of datasets.
  • Artifact storage: The Standalone instance can store locally built or distributed swds series files, while the Cloud/Server instance uses object storage to provide centralized swds artifact storage.
  • Seamless Starwhale integration: Starwhale Dataset can use the runtime environment built by Starwhale Runtime to build datasets. Starwhale Evaluation and Starwhale Model can directly specify the dataset through the --dataset parameter to complete automatic data loading, which facilitates inference, model evaluation and other environments.

Key Elements

  • swds virtual package file: swds is different from swmp and swrt. It is not a single packaged file, but a virtual concept that specifically refers to a directory that contains dataset-related files for a version of the Starwhale dataset, including _manifest.yaml, dataset.yaml, dataset build Python scripts, and data file links, etc. You can use the swcli dataset info command to view where the swds is located. swds is the abbreviation of Starwhale Dataset.

swds-tree.png

  • swcli dataset command line: A set of dataset-related commands, including construction, distribution and management functions. See CLI Reference for details.
  • dataset.yaml configuration file: Describes the dataset construction process. It can be completely omitted and specified through swcli dataset build parameters. dataset.yaml can be considered as a configuration file representation of the swcli dataset build command line parameters. swcli dataset build parameters take precedence over dataset.yaml.
  • Dataset Python SDK: Includes data construction, data loading, and several predefined data types. See Python SDK for details.
  • Python scripts for dataset construction: A series of scripts written using the Starwhale Python SDK to build datasets.

Best Practices

The construction of Starwhale Dataset is performed independently. If third-party libraries need to be introduced when writing construction scripts, using Starwhale Runtime can simplify Python dependency management and ensure reproducible dataset construction. The Starwhale platform will build in as many open-source datasets as possible so that users can copy them for immediate use.

Command Line Grouping

The Starwhale Dataset command line can be divided into the following stages from the perspective of usage phases:

  • Construction phase
    • swcli dataset build
  • Visualization phase
    • swcli dataset diff
    • swcli dataset head
  • Distribution phase
    • swcli dataset copy
  • Basic management
    • swcli dataset tag
    • swcli dataset info
    • swcli dataset history
    • swcli dataset list
    • swcli dataset summary
    • swcli dataset remove
    • swcli dataset recover

Starwhale Dataset Viewer

Currently, the Web UI in the Cloud/Server instance can visually display the dataset. Only the DataTypes from the Python SDK can be correctly interpreted by the frontend, with mappings as follows:

  • Image: Display thumbnails, enlarged images, MASK type images, support image/png, image/jpeg, image/webp, image/svg+xml, image/gif, image/apng, image/avif formats.
  • Audio: Displayed as an audio wave graph, playable, supports audio/mp3 and audio/wav formats.
  • Video: Displayed as a video, playable, supports video/mp4, video/avi and video/webm formats.
  • GrayscaleImage: Display grayscale images, support x/grayscale format.
  • Text: Display text, support text/plain format, set encoding format, default is utf-8.
  • Binary and Bytes: Not supported for display currently.
  • Link: The above multimedia types all support specifying links as storage paths.

Starwhale Dataset Data Format

The dataset consists of multiple rows, each row being a sample, each sample containing several features. The features have a dict-like structure with some simple restrictions [L]:

  • The dict keys must be str type.
  • The dict values must be Python basic types like int/float/bool/str/bytes/dict/list/tuple, or Starwhale built-in data types.
  • For the same key across different samples, the value types do not need to stay the same.
  • If the value is a list or tuple, the element data types must be consistent.
  • For dict values, the restrictions are the same as [L].

Example:

{
    "img": GrayscaleImage(
        link=Link(
            "123",
            offset=32,
            size=784,
            _swds_bin_offset=0,
            _swds_bin_size=8160,
        )
    ),
    "label": 0,
}
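
The restrictions above can also be checked programmatically. The following is a rough sketch based only on the rules listed here, not Starwhale's actual validation code; starwhale.BaseArtifact is the base class of the built-in data types mentioned in the next section.

from starwhale import BaseArtifact

BASIC_TYPES = (int, float, bool, str, bytes, dict, list, tuple)

def check_features(features: dict) -> None:
    for key, value in features.items():
        assert isinstance(key, str), "dict keys must be str"
        assert isinstance(value, BASIC_TYPES + (BaseArtifact,)), \
            "values must be Python basic types or Starwhale built-in data types"
        if isinstance(value, (list, tuple)) and value:
            first_type = type(value[0])
            assert all(type(item) is first_type for item in value), \
                "list/tuple elements must share one type"
        if isinstance(value, dict):
            check_features(value)  # nested dicts follow the same restrictions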

File Data Handling

Starwhale Dataset handles file type data in a special way. You can ignore this section if you don't care about Starwhale's implementation.

According to actual usage scenarios, Starwhale Dataset has two ways of handling file-type data, both based on the base class starwhale.BaseArtifact:

  • swds-bin: Starwhale merges the data into several large files in its own binary format (swds-bin), which can efficiently perform indexing, slicing and loading.
  • remote-link: If the user's original data is stored in external storage such as OSS or NAS, where a large amount of data is inconvenient to move or has already been encapsulated by an internal dataset implementation, you only need to use links in the data to establish indexes.

Both types of data can be included in the same Starwhale dataset simultaneously.
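
For illustration, here is a minimal sketch that builds one dataset mixing both forms. The file names and the Image/Link constructor arguments beyond those shown elsewhere in this document are assumptions.

from starwhale import dataset, Image, Link

ds = dataset("mixed-demo")
# swds-bin style: raw bytes are merged into Starwhale's own binary blob files
ds.append({"img": Image(open("0.png", "rb").read()), "label": 0})
# remote-link style: only an index to the external storage location is recorded
ds.append({"img": Link("s3://bucket/path/1.png"), "label": 1})
ds.commit()
ds.close()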

- - + + \ No newline at end of file diff --git a/0.5.10/dataset/yaml/index.html b/0.5.10/dataset/yaml/index.html index 042c7b72e..dd1fecaa9 100644 --- a/0.5.10/dataset/yaml/index.html +++ b/0.5.10/dataset/yaml/index.html @@ -10,13 +10,13 @@ - - + +
Version: 0.5.10

The dataset.yaml Specification

tip

dataset.yaml is optional for the swcli dataset build command.

Building Starwhale Dataset uses dataset.yaml. Omitting dataset.yaml allows describing related configurations in swcli dataset build command line parameters. dataset.yaml can be considered as a file-based representation of the build command line configuration.

YAML Field Descriptions

  • name (String, required): Name of the Starwhale Dataset.
  • handler (String, required): Importable address of a class that inherits starwhale.SWDSBinBuildExecutor, starwhale.UserRawBuildExecutor or starwhale.BuildExecutor, or a function that returns a Generator or iterable object. Format is {module path}:{class name|function name}.
  • desc (String, optional, default ""): Dataset description.
  • version (String, optional, default "1.0"): dataset.yaml format version; currently only "1.0" is supported.
  • attr (Dict, optional): Dataset build parameters.
  • attr.volume_size (Int or Str, optional, default 64MB): Size of each data file in the swds-bin dataset. Can be a number in bytes, or a number plus unit like 64M, 1GB etc.
  • attr.alignment_size (Int or Str, optional, default 128): Data alignment size of each data block in the swds-bin dataset. If set to 4k and a data block is 7.9K, 0.1K of padding will be added to make the block size a multiple of alignment_size, improving page size and read efficiency.

Examples

Simplest Example

name: helloworld
handler: dataset:ExampleProcessExecutor

The helloworld dataset uses the ExampleProcessExecutor class in dataset.py, located in the same directory as dataset.yaml, to build the data.

MNIST Dataset Build Example

name: mnist
handler: mnist.dataset:DatasetProcessExecutor
desc: MNIST data and label test dataset
attr:
  alignment_size: 128
  volume_size: 4M

Example with handler as a generator function

dataset.yaml contents:

name: helloworld
handler: dataset:iter_item

dataset.py contents:

def iter_item():
    for i in range(10):
        yield {"img": f"image-{i}".encode(), "label": i}
- - + + \ No newline at end of file diff --git a/0.5.10/evaluation/heterogeneous/node-able/index.html b/0.5.10/evaluation/heterogeneous/node-able/index.html index da8b09bb0..ad7e11798 100644 --- a/0.5.10/evaluation/heterogeneous/node-able/index.html +++ b/0.5.10/evaluation/heterogeneous/node-able/index.html @@ -10,8 +10,8 @@ - - + +
@@ -23,7 +23,7 @@ Refer to the link.

Take v0.13.0-rc.1 as an example:

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0-rc.1/nvidia-device-plugin.yml

Note: This operation will run the NVIDIA device plugin on all Kubernetes nodes. If it was configured before, it will be updated. Please evaluate the image version used carefully.

  • Confirm that the GPU can be discovered and used in the cluster. Refer to the command below and check that nvidia.com/gpu appears in the Capacity of the Jetson node; the GPU is then recognized normally by the Kubernetes cluster.

    # kubectl describe node orin | grep -A15 Capacity
    Capacity:
    cpu: 12
    ephemeral-storage: 59549612Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 31357608Ki
    nvidia.com/gpu: 1
    pods: 110
  • Build and Use Custom Images

    The l4t-jetpack image mentioned earlier covers general use. If we need a more streamlined image or one with more features, we can build it based on l4t-base. For relevant Dockerfiles, refer to the images Starwhale made for mnist.

    - - + + \ No newline at end of file diff --git a/0.5.10/evaluation/heterogeneous/virtual-node/index.html b/0.5.10/evaluation/heterogeneous/virtual-node/index.html index d72c74b36..7d34cf092 100644 --- a/0.5.10/evaluation/heterogeneous/virtual-node/index.html +++ b/0.5.10/evaluation/heterogeneous/virtual-node/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Virtual Kubelet as Kubernetes nodes

    Introduction

    Virtual Kubelet is an open source framework that can simulate a K8s node by mimicking the communication between kubelet and the K8s cluster.

    This solution is widely used by major cloud vendors for serverless container cluster solutions, such as Alibaba Cloud's ASK, Amazon's AWS Fargate, etc.

    Principles

    The virtual kubelet framework implements the related interfaces of kubelet for Node. With simple configuration, it can simulate a node.

    We only need to implement the PodLifecycleHandler interface to support:

    • Create, update, delete Pod
    • Get Pod status
    • Get Container logs

    Adding Devices to the Cluster

    If our device cannot serve as a K8s node due to resource constraints or other situations, we can manage these devices by using virtual kubelet to simulate a proxy node.

    The control flow between Starwhale Controller and the device is as follows:


    ┌──────────────────────┐ ┌────────────────┐ ┌─────────────────┐ ┌────────────┐
    │ Starwhale Controller ├─────►│ K8s API Server ├────►│ virtual kubelet ├────►│ Our device │
    └──────────────────────┘ └────────────────┘ └─────────────────┘ └────────────┘

    Virtual kubelet converts the Pod orchestration information sent by Starwhale Controller into control behaviors for the device, such as executing a command via ssh on the device, or sending a message via USB or serial port.

    Below is an example of using virtual kubelet to control an SSH-enabled device that has not joined the cluster:

    1. Prepare certificates
    • Create an OpenSSL config file csr.conf with the following content:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name

    [req_distinguished_name]

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names

    [alt_names]
    IP = 1.2.3.4
    • Generate the private key and certificate signing request:
    openssl genrsa -out vklet-key.pem 2048
    openssl req -new -key vklet-key.pem -out vklet.csr -subj '/CN=system:node:1.2.3.4;/C=US/O=system:nodes' -config ./csr.conf
    • Submit the certificate:
    cat vklet.csr| base64 | tr -d "\n" # output as content of spec.request in csr.yaml

    csr.yaml:

    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: vklet
    spec:
      request: ******************
      signerName: kubernetes.io/kube-apiserver-client
      expirationSeconds: 1086400
      usages:
      - client auth
    kubectl apply -f csr.yaml
    kubectl certificate approve vklet
    kubectl get csr vklet -o jsonpath='{.status.certificate}'| base64 -d > vklet-cert.pem

    Now we have vklet-cert.pem.

    • Compile virtual kubelet:
    git clone https://github.com/virtual-kubelet/virtual-kubelet
    cd virtual-kubelet && make build

    Create the node configuration file mock.json:

    {
      "virtual-kubelet": {
        "cpu": "100",
        "memory": "100Gi",
        "pods": "100"
      }
    }

    Start virtual kubelet:

    export APISERVER_CERT_LOCATION=/path/to/vklet-cert.pem
    export APISERVER_KEY_LOCATION=/path/to/vklet-key.pem
    export KUBECONFIG=/path/to/kubeconfig
    virtual-kubelet --provider mock --provider-config /path/to/mock.json

    Now we have simulated a node with 100 cores + 100GB memory using virtual kubelet.

    • Add a PodLifecycleHandler implementation that converts the important information in the Pod orchestration into SSH command execution, and collects logs for the Starwhale Controller.

    See ssh executor for a concrete implementation.

    - - + + \ No newline at end of file diff --git a/0.5.10/evaluation/index.html b/0.5.10/evaluation/index.html index d3a4079e5..7489e154a 100644 --- a/0.5.10/evaluation/index.html +++ b/0.5.10/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Starwhale Model Evaluation

    Design Overview

    Starwhale Evaluation Positioning

    The goal of Starwhale Evaluation is to provide end-to-end management for model evaluation, including creating Jobs, distributing Tasks, viewing model evaluation reports and basic management. Starwhale Evaluation is a specific application of Starwhale Model, Starwhale Dataset, and Starwhale Runtime in the model evaluation scenario. Starwhale Evaluation is part of the MLOps toolchain built by Starwhale. More applications like Starwhale Model Serving, Starwhale Training will be included in the future.

    Core Features

    • Visualization: Both swcli and the Web UI provide visualization of model evaluation results, supporting comparison of multiple results. Users can also customize logging of intermediate processes.

    • Multi-scenario Adaptation: Whether it's a notebook, desktop or distributed cluster environment, the same commands, Python scripts, artifacts and operations can be used for model evaluation. This satisfies different computational power and data volume requirements.

    • Seamless Starwhale Integration: Leverage Starwhale Runtime for the runtime environment, Starwhale Dataset as data input, and run models from Starwhale Model. Configuration is simple whether using swcli, Python SDK or Cloud/Server instance Web UI.

    Key Elements

    • swcli model run: Command line for bulk offline model evaluation.
    • swcli model serve: Command line for online model evaluation.

    Best Practices

    Command Line Grouping

    From the perspective of completing an end-to-end Starwhale Evaluation workflow, commands can be grouped as:

    • Preparation Stage
      • swcli dataset build or Starwhale Dataset Python SDK
      • swcli model build or Starwhale Model Python SDK
      • swcli runtime build
    • Evaluation Stage
      • swcli model run
      • swcli model serve
    • Results Stage
      • swcli job info
    • Basic Management
      • swcli job list
      • swcli job remove
      • swcli job recover

    Abstraction job-step-task

    • job: A model evaluation task is a job, which contains one or more steps.

    • step: A step corresponds to a stage in the evaluation process. With the default PipelineHandler, steps are predict and evaluate. For custom evaluation processes using @handler, @evaluation.predict, @evaluation.evaluate decorators, steps are the decorated functions. Steps can have dependencies, forming a DAG. A step contains one or more tasks. Tasks in the same step have the same logic but different inputs. A common approach is to split the dataset into multiple parts, with each part passed to a task. Tasks can run in parallel.

    • task: A task is the final running entity. In Cloud/Server instances, a task is a container in a Pod. In Standalone instances, a task is a Python Thread.

    The job-step-task abstraction is the basis for implementing distributed runs in Starwhale Evaluation.
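
    As a rough sketch of how such a DAG might be declared with the decorators mentioned above (the needs and replicas arguments reflect common usage and are assumptions here, not verified signatures):

    from starwhale import handler

    @handler(replicas=2)            # one step, split into two parallel tasks
    def predict_step():
        ...                         # run predictions on one part of the dataset

    @handler(needs=[predict_step])  # this step runs after predict_step completes
    def evaluate_step():
        ...                         # aggregate the predictions into metrics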

    - - + + \ No newline at end of file diff --git a/0.5.10/faq/index.html b/0.5.10/faq/index.html index 698c168ab..7b94dc2b6 100644 --- a/0.5.10/faq/index.html +++ b/0.5.10/faq/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.5.10/getting-started/cloud/index.html b/0.5.10/getting-started/cloud/index.html index 73828ea8f..9785dc2c1 100644 --- a/0.5.10/getting-started/cloud/index.html +++ b/0.5.10/getting-started/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Getting started with Starwhale Cloud

    Starwhale Cloud is hosted on Aliyun with the domain name https://cloud.starwhale.cn. In the future, we will launch the service on AWS with the domain name https://cloud.starwhale.ai. It's important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.

    You need to install the Starwhale Client (swcli) at first.

    Sign Up for Starwhale Cloud and create your first project

    You can either directly log in with your GitHub or Weixin account or sign up for an account. You will be asked for an account name if you log in with your GitHub or Weixin account.

    Then you can create a new project. In this tutorial, we will use the name demo for the project name.

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Login to the cloud instance

    swcli instance login --username <your account name> --password <your password> --alias swcloud https://cloud.starwhale.cn

    Copy the dataset, model, and runtime to the cloud instance

    swcli model copy mnist swcloud/project/<your account name>:demo
    swcli dataset copy mnist swcloud/project/<your account name>:demo
    swcli runtime copy pytorch swcloud/project/<your account name>:demo

    Run an evaluation with the web UI

    console-create-job.gif

    Congratulations! You have completed the Starwhale Cloud Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.5.10/getting-started/index.html b/0.5.10/getting-started/index.html index 0e6ea958a..c58030056 100644 --- a/0.5.10/getting-started/index.html +++ b/0.5.10/getting-started/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Getting started

    First, you need to install the Starwhale Client (swcli), which can be done by running the following command:

    python3 -m pip install starwhale

    For more information, see the swcli installation guide.

    Depending on your instance type, there are three getting-started guides available for you:

    • Getting started with Starwhale Standalone - This guide helps you run an MNIST evaluation on your desktop PC/laptop. It is the fastest and simplest way to get started with Starwhale.
    • Getting started with Starwhale Server - This guide helps you install Starwhale Server in your private data center and run an MNIST evaluation. At the end of the tutorial, you will have a Starwhale Server instance where you can run model evaluations on and manage your datasets and models.
    • Getting started with Starwhale Cloud - This guide helps you create an account on Starwhale Cloud and run an MNIST evaluation. It is the easiest way to experience all Starwhale features.
    - - + + \ No newline at end of file diff --git a/0.5.10/getting-started/runtime/index.html b/0.5.10/getting-started/runtime/index.html index 6059b777b..c83ccf5d2 100644 --- a/0.5.10/getting-started/runtime/index.html +++ b/0.5.10/getting-started/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Getting Started with Starwhale Runtime

    This article demonstrates how to build a Starwhale Runtime for the PyTorch environment and how to use it. This runtime can meet the dependency requirements of the six examples in Starwhale: mnist, speech commands, nmt, cifar10, ag_news, and PennFudan. Links to relevant code: example/runtime/pytorch.

    You can learn the following things from this tutorial:

    • How to build a Starwhale Runtime.
    • How to use a Starwhale Runtime in different scenarios.
    • How to release a Starwhale Runtime.

    Prerequisites

    Run the following command to clone the example code:

    git clone https://github.com/star-whale/starwhale.git
    cd starwhale/example/runtime/pytorch # for users in the mainland of China, use pytorch-cn-mirror instead.

    Build Starwhale Runtime

    ❯ swcli -vvv runtime build --yaml runtime.yaml

    Use Starwhale Runtime in the standalone instance

    Use Starwhale Runtime in the shell

    # Activate the runtime
    swcli runtime activate pytorch

    swcli runtime activate will download all python dependencies of the runtime, which may take a long time.

    All dependencies are ready in your python environment when the runtime is activated. It is similar to source venv/bin/activate of virtualenv or the conda activate command of conda. If you close the shell or switch to another shell, you need to reactivate the runtime.

    Use Starwhale Runtime in swcli

    # Use the runtime when building a Starwhale Model
    swcli model build . --runtime pytorch
    # Use the runtime when building a Starwhale Dataset
    swcli dataset build --yaml /path/to/dataset.yaml --runtime pytorch
    # Run a model evaluation with the runtime
    swcli model run --uri mnist/version/v0 --dataset mnist --runtime pytorch

    Copy Starwhale Runtime to another instance

    You can copy the runtime to a server/cloud instance, which can then be used in the server/cloud instance or downloaded by other users.

    # Copy the runtime to a server instance named 'pre-k8s'
    ❯ swcli runtime copy pytorch cloud://pre-k8s/project/starwhale
    - - + + \ No newline at end of file diff --git a/0.5.10/getting-started/server/index.html b/0.5.10/getting-started/server/index.html index 9a5b76282..9cd99f1d9 100644 --- a/0.5.10/getting-started/server/index.html +++ b/0.5.10/getting-started/server/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Getting started with Starwhale Server

    Install Starwhale Server

    To install Starwhale Server, see the installation guide.

    Create your first project

    Login to the server

    Open your browser and enter your server's URL in the address bar. Log in with your username (starwhale) and password (abcd1234).

    console-artifacts.gif

    Create a new project

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Copy the dataset, the model, and the runtime to the server

    swcli instance login --username <your username> --password <your password> --alias server <Your Server URL>

    swcli model copy mnist server/project/demo
    swcli dataset copy mnist server/project/demo
    swcli runtime copy pytorch server/project/demo

    Use the Web UI to run an evaluation

    Navigate to the "demo" project in your browser and create a new one.

    console-create-job.gif

    Congratulations! You have completed the Starwhale Server Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.5.10/getting-started/standalone/index.html b/0.5.10/getting-started/standalone/index.html index 1071f99da..64d1a5e12 100644 --- a/0.5.10/getting-started/standalone/index.html +++ b/0.5.10/getting-started/standalone/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Getting started with Starwhale Standalone

    When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.

    We also provide a Jupyter Notebook example; you can try it in Google Colab or in your local vscode/jupyterlab.

    Downloading Examples

    Download Starwhale examples by cloning the Starwhale project via:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/star-whale/starwhale.git --depth 1
    cd starwhale

    To save download time, we skip git-lfs files and other commit info. We will use the ML/DL "hello world" example, MNIST, to start your Starwhale journey. The following steps are all performed in the starwhale directory.

    Core Workflow

    Building a Pytorch Runtime

    Runtime example codes are in the example/runtime/pytorch directory.

    • Build the Starwhale runtime bundle:

      swcli runtime build --yaml example/runtime/pytorch/runtime.yaml
      tip

      When you build a runtime for the first time, creating an isolated python environment and downloading python dependencies will take a while. The command execution time depends on the network environment of the machine and the number of packages in runtime.yaml. Using an appropriate PyPI mirror and cache config in the ~/.pip/pip.conf file is a recommended practice.

      For users in the mainland of China, the following conf file is an option:

      [global]
      cache-dir = ~/.cache/pip
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple
      extra-index-url = https://mirrors.aliyun.com/pypi/simple/
    • Check your local Starwhale Runtime:

      swcli runtime list
      swcli runtime info pytorch

    Building a Model

    Model example codes are in the example/mnist directory.

    • Download the pre-trained model file:

      cd example/mnist
      make download-model
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-model
      cd -
    • Build a Starwhale model:

      swcli model build example/mnist --runtime pytorch
    • Check your local Starwhale models:

      swcli model list
      swcli model info mnist

    Building a Dataset

    Dataset example codes are in the example/mnist directory.

    • Download the MNIST raw data:

      cd example/mnist
      make download-data
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-data
      cd -
    • Build a Starwhale dataset:

      swcli dataset build --yaml example/mnist/dataset.yaml
    • Check your local Starwhale dataset:

      swcli dataset list
      swcli dataset info mnist
      swcli dataset head mnist

    Running an Evaluation Job

    • Create an evaluation job:

      swcli -vvv model run --uri mnist --dataset mnist --runtime pytorch
    • Check the evaluation result

      swcli job list
      swcli job info $(swcli job list | grep mnist | grep success | awk '{print $1}' | head -n 1)

    Congratulations! You have completed the Starwhale Standalone Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.5.10/index.html b/0.5.10/index.html index eaed27bb8..5f2176423 100644 --- a/0.5.10/index.html +++ b/0.5.10/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    What is Starwhale

    Overview

    Starwhale is an MLOps/LLMOps platform that makes your model creation, evaluation, and publication much easier. It aims to create a handy tool for data scientists and machine learning engineers.

    Starwhale helps you:

    • Keep track of your training/testing dataset history including data items and their labels, so that you can easily access them.
    • Manage your model packages that you can share across your team.
    • Run your models in different environments, either on an NVIDIA GPU server or on an embedded device like a Raspberry Pi.
    • Create an online service with an interactive Web UI for your models.

    Starwhale is designed to be an open platform. You can create your own plugins to meet your requirements.

    Deployment options

    Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).

    You can start using Starwhale with one of the following instance types:

    • Starwhale Standalone - Rather than a running service, Starwhale Standalone is actually a repository that resides in your local file system. It is created and managed by the Starwhale Client (swcli). You only need to install swcli to use it. Currently, each user on a single machine can have only ONE Starwhale Standalone instance. We recommend you use the Starwhale Standalone to build and test your datasets, runtime, and models before pushing them to Starwhale Server/Cloud instances.
    • Starwhale Server - Starwhale Server is a service deployed on your local server. Besides text-only results from the Starwhale Client (swcli), Starwhale Server provides Web UI for you to manage your datasets and models, evaluate your models in your local Kubernetes cluster, and review the evaluation results.
    • Starwhale Cloud - Starwhale Cloud is a managed service hosted on public clouds. By registering an account on https://cloud.starwhale.cn, you are ready to use Starwhale without needing to install, operate, and maintain your own instances. Starwhale Cloud also provides public resources for you to download, like datasets, runtimes, and models. Check the "starwhale/public" project on Starwhale Cloud for more details.

    When choosing which instance type to use, consider the following:

    • Starwhale Standalone: deployed on your laptop or any server in your data center; no maintenance required; command-line interface; not scalable.
    • Starwhale Server: deployed in your data center; maintained by yourself; Web UI and command line; scalable, depending on your Kubernetes cluster.
    • Starwhale Cloud: deployed on public cloud, like AWS or Aliyun; maintained by the Starwhale team; Web UI and command line; scalable, but currently limited by the free resources available on the cloud.
    - - + + \ No newline at end of file diff --git a/0.5.10/model/index.html b/0.5.10/model/index.html index 19667a561..2456420aa 100644 --- a/0.5.10/model/index.html +++ b/0.5.10/model/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Starwhale Model

    A Starwhale Model is a standard format for packaging machine learning models that can be used for various purposes, like model fine-tuning, model evaluation, and online serving. A Starwhale Model contains the model file, inference codes, configuration files, and any other files required to run the model.

    Create a Starwhale Model

    There are two ways to create a Starwhale Model: by swcli or by Python SDK.

    Create a Starwhale Model by swcli

    To create a Starwhale Model by swcli, you need to define a model.yaml, which describes some required information about the model package, and run the following command:

    swcli model build . --model-yaml /path/to/model.yaml

    For more information about the command and model.yaml, see the swcli reference. model.yaml is optional for model building.

    Create a Starwhale Model by Python SDK

    from starwhale import model, predict

    @predict
    def predict_img(data):
        ...

    model.build(name="mnist", modules=[predict_img])

    Model Management

    Model Management by swcli

    • swcli model list: List all Starwhale Models in a project
    • swcli model info: Show detailed information about a Starwhale Model
    • swcli model copy: Copy a Starwhale Model to another location
    • swcli model remove: Remove a Starwhale Model
    • swcli model recover: Recover a previously removed Starwhale Model

    Model Management by WebUI

    Model History

    Starwhale Models are versioned. The general rules about versions are described in Resource versioning in Starwhale.

    Model History Management by swcli

    • swcli model history: List all versions of a Starwhale Model
    • swcli model info: Show detailed information about a Starwhale Model version
    • swcli model diff: Compare two versions of a Starwhale Model
    • swcli model copy: Copy a Starwhale Model version to a new one
    • swcli model remove: Remove a Starwhale Model version
    • swcli model recover: Recover a previously removed Starwhale Model version

    Model Evaluation

    Model Evaluation by swcli

    • swcli model run: Create an evaluation with a Starwhale Model

    The Storage Format

    The Starwhale Model is a tarball file that contains the source directory.

    - - + + \ No newline at end of file diff --git a/0.5.10/model/yaml/index.html b/0.5.10/model/yaml/index.html index 0908758da..859b5e1ae 100644 --- a/0.5.10/model/yaml/index.html +++ b/0.5.10/model/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    The model.yaml Specification

    tip

    model.yaml is optional for swcli model build.

    When building a Starwhale Model using the swcli model build command, you can specify a yaml file that follows a specific format via the --model-yaml parameter to simplify specifying build parameters.

    Even without specifying the --model-yaml parameter, swcli model build will automatically look for a model.yaml file under the ${workdir} directory and extract parameters from it. Parameters specified on the swcli model build command line take precedence over equivalent configurations in model.yaml, so you can think of model.yaml as a file-based representation of the build command line.

    When building a Starwhale Model using the Python SDK, the model.yaml file does not take effect.

    YAML Field Descriptions

    • name (String, optional): Name of the Starwhale Model, equivalent to the --name parameter.
    • run.modules (List[String], required): Python modules searched during model build; multiple entry points for model execution can be specified. Format is a Python importable path. Equivalent to the --module parameter.
    • run.handler (String, optional): Deprecated alias of run.modules; can only specify one entry point.
    • version (String, optional, default "1.0"): model.yaml format version; currently only "1.0" is supported.
    • desc (String, optional): Model description, equivalent to the --desc parameter.

    Example


    name: helloworld

    run:
      modules:
        - src.evaluator

    desc: "example yaml"

    A Starwhale model named helloworld searches for functions decorated with @evaluation.predict, @evaluation.evaluate or @handler, or classes inheriting from PipelineHandler, in src/evaluator.py under ${WORKDIR} of the swcli model build command. These functions or classes will be added to the list of runnable entry points for the Starwhale model. When running the model via swcli model run or the Web UI, select the corresponding entry point (handler) to run.
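
    A minimal sketch of what src/evaluator.py could contain is shown below. The function bodies and the evaluate-side arguments are illustrative assumptions, not the exact SDK contract; see the Python SDK reference for the definitive signatures.

    # src/evaluator.py
    from starwhale import evaluation

    @evaluation.predict
    def predict_img(data):
        ...  # run the model on one sample's features and return the prediction

    @evaluation.evaluate(needs=[predict_img])
    def evaluate_results(predict_result_iter):
        ...  # aggregate per-sample predictions into summary metrics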

    model.yaml is optional; parameters defined in the yaml can also be specified via swcli command line parameters.


    swcli model build . --model-yaml model.yaml

    Is equivalent to:


    swcli model build . --name helloworld --module src.evaluator --desc "example yaml"

    - - + + \ No newline at end of file diff --git a/0.5.10/reference/sdk/dataset/index.html b/0.5.10/reference/sdk/dataset/index.html index 551cfde9c..fe3b5698d 100644 --- a/0.5.10/reference/sdk/dataset/index.html +++ b/0.5.10/reference/sdk/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.5.10

    Starwhale Dataset SDK

    dataset

    Get a starwhale.Dataset object by creating a new dataset or loading an existing one.

    @classmethod
    def dataset(
        cls,
        uri: t.Union[str, Resource],
        create: str = _DatasetCreateMode.auto,
        readonly: bool = False,
    ) -> Dataset:

    Parameters

    • uri: (str or Resource, required)
      • The dataset uri or Resource object.
    • create: (str, optional)
      • The mode of dataset creating. The options are auto, empty and forbid.
        • auto mode: If the dataset already exists, creation is ignored. If it does not exist, the dataset is created automatically.
        • empty mode: If the dataset already exists, an Exception is raised; If it does not exist, an empty dataset is created. This mode ensures the creation of a new, empty dataset.
        • forbid mode: If the dataset already exists, nothing is done. If it does not exist, an Exception is raised. This mode ensures the existence of the dataset.
      • The default is auto.
    • readonly: (bool, optional)
      • For an existing dataset, you can specify the readonly=True argument to ensure the dataset is in readonly mode.
      • Default is False.

    Examples

    from starwhale import dataset, Image

    # create a new dataset named mnist, and add a row into the dataset
    # dataset("mnist") is equal to dataset("mnist", create="auto")
    ds = dataset("mnist")
    ds.exists()  # returns False, the "mnist" dataset does not exist yet
    ds.append({"img": Image(), "label": 1})
    ds.commit()
    ds.close()

    # load a cloud instance dataset in readonly mode
    ds = dataset("cloud://remote-instance/project/starwhale/dataset/mnist", readonly=True)
    labels = [row.features.label for row in ds]
    ds.close()

    # load a read/write dataset with a specified version
    ds = dataset("mnist/version/mrrdczdbmzsw")
    ds[0].features.label = 1
    ds.commit()
    ds.close()

    # create an empty dataset
    ds = dataset("mnist-empty", create="empty")

    # ensure the dataset existence
    ds = dataset("mnist-existed", create="forbid")

    class starwhale.Dataset

    starwhale.Dataset implements the abstraction of a Starwhale dataset, and can operate on datasets in Standalone/Server/Cloud instances.

    from_huggingface

    from_huggingface is a classmethod that can convert a Huggingface dataset into a Starwhale dataset.

    def from_huggingface(
        cls,
        name: str,
        repo: str,
        subset: str | None = None,
        split: str | None = None,
        revision: str = "main",
        alignment_size: int | str = D_ALIGNMENT_SIZE,
        volume_size: int | str = D_FILE_VOLUME_SIZE,
        mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
        cache: bool = True,
        tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • dataset name.
    • repo: (str, required)
    • subset: (str, optional)
      • The subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
    • split: (str, optional)
      • The split name. If the split name is not specified, a dataset containing all splits will be built.
    • revision: (str, optional)
      • The huggingface datasets revision. The default value is main.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • cache: (bool, optional)
      • Whether to use huggingface dataset cache(download + local hf dataset).
      • The default value is True.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_huggingface("mnist", "mnist")
    print(myds[0])
    from starwhale import Dataset
    myds = Dataset.from_huggingface("mmlu", "cais/mmlu", subset="anatomy", split="auxiliary_train", revision="7456cfb")

    from_json

    from_json is a classmethod that can convert a json text into a Starwhale dataset.

    @classmethod
    def from_json(
    cls,
    name: str,
    json_text: str,
    field_selector: str = "",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • Dataset name.
    • json_text: (str, required)
      • A json string. The from_json function deserializes this string into Python objects to start building the Starwhale dataset.
    • field_selector: (str, optional)
      • The field from which you would like to extract dataset array items.
      • The default value is "", which indicates that the json object is an array containing all the items.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_json(
    name="translation",
    json_text='[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    print(myds[0].features.en)
    from starwhale import Dataset
    myds = Dataset.from_json(
    name="translation",
    json_text='{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}',
    field_selector="content.child_content"
    )
    print(myds[0].features["zh-cn"])

    from_folder

    from_folder is a classmethod that can read Image/Video/Audio data from a specified directory and automatically convert them into a Starwhale dataset. This function supports the following features:

    • It can recursively search the target directory and its subdirectories
    • Supports extracting three types of files:
      • image: Supports png/jpg/jpeg/webp/svg/apng image types. Image files will be converted to Starwhale.Image type.
      • video: Supports mp4/webm/avi video types. Video files will be converted to Starwhale.Video type.
      • audio: Supports mp3/wav audio types. Audio files will be converted to Starwhale.Audio type.
    • Each file corresponds to one record in the dataset, with the file stored in the file field.
    • If auto_label=True, the parent directory name will be used as the label for that record, stored in the label field. Files in the root directory will not be labeled.
    • If a txt file with the same name as an image/video/audio file exists, its content will be stored as the caption field in the dataset.
    • If metadata.csv or metadata.jsonl exists in the root directory, their content will be read automatically and associated with records by file path as meta information in the dataset.
      • metadata.csv and metadata.jsonl are mutually exclusive. An exception will be thrown if both exist.
      • Each record in metadata.csv and metadata.jsonl must contain a file_name field pointing to the file path.
      • metadata.csv and metadata.jsonl are optional for dataset building.
    @classmethod
    def from_folder(
    cls,
    folder: str | Path,
    kind: str | DatasetFolderSourceType,
    name: str | Resource = "",
    auto_label: bool = True,
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • folder: (str|Path, required)
      • The folder path from which you would like to create this dataset.
    • kind: (str|DatasetFolderSourceType, required)
      • The dataset source type you would like to use, the choices are: image, video and audio.
      • Files of the specified kind are searched recursively in the folder; other file types will be ignored.
    • name: (str|Resource, optional)
      • The dataset name you would like to use.
      • If not specified, the name is the folder name.
    • auto_label: (bool, optional)
      • Whether to auto label by the sub-folder name.
      • The default value is True.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    • Example for the normal function calling

      from starwhale import Dataset

      # create a my-image-dataset dataset from /path/to/image folder.
      ds = Dataset.from_folder(
      folder="/path/to/image",
      kind="image",
      name="my-image-dataset"
      )
    • Example for caption

      folder/dog/1.png
      folder/dog/1.txt

      1.txt content will be used as the caption of 1.png.

    • Example for metadata

      metadata.csv:

      file_name, caption
      1.png, dog
      2.png, cat

      metadata.jsonl:

      {"file_name": "1.png", "caption": "dog"}
      {"file_name": "2.png", "caption": "cat"}
    • Example for auto-labeling

      The following structure will create a dataset with 2 labels: "cat" and "dog", 4 images in total.

      folder/dog/1.png
      folder/cat/2.png
      folder/dog/3.png
      folder/cat/4.png

    __iter__

    __iter__ is a method that iterates over the dataset rows.

    from starwhale import dataset

    ds = dataset("mnist")

    for item in ds:
    print(item.index)
    print(item.features.label) # label and img are the features of mnist.
    print(item.features.img)

    batch_iter

    batch_iter is a method that iterates over the dataset rows in batches.

    def batch_iter(
    self, batch_size: int = 1, drop_not_full: bool = False
    ) -> t.Iterator[t.List[DataRow]]:

    Parameters

    • batch_size: (int, optional)
      • batch size. The default value is 1.
    • drop_not_full: (bool, optional)
      • Whether to discard the last batch when its size is smaller than batch_size.
      • The default value is False.

    Examples

    from starwhale import dataset

    ds = dataset("mnist")
    for batch_rows in ds.batch_iter(batch_size=2):
    assert len(batch_rows) == 2
    print(batch_rows[0].features)

    __getitem__

    __getitem__ is a method that allows retrieving certain rows of data from the dataset, with usage similar to Python dict and list types.

    from starwhale import dataset

    ds = dataset("mock-int-index")

    # if the index type is string
    ds["str_key"] # get the DataRow by the "str_key" string key
    ds["start":"end"] # get a slice of the dataset by the range ("start", "end")

    ds = dataset("mock-str-index")
    # if the index type is int
    ds[1] # get the DataRow by the 1 int key
    ds[1:10:2] # get a slice of the dataset by the range (1, 10), step is 2

    __setitem__

    __setitem__ is a method that allows updating rows of data in the dataset, with usage similar to Python dicts. __setitem__ supports multi-threaded parallel data insertion.

    def __setitem__(
    self, key: t.Union[str, int], value: t.Union[DataRow, t.Tuple, t.Dict]
    ) -> None:

    Parameters

    • key: (int|str, required)
      • key is the index for each row in the dataset. The type is int or str, but a dataset only accepts one type.
    • value: (DataRow|tuple|dict, required)
      • value is the features for each row in the dataset, using a Python dict is generally recommended.

    Examples

    • Normal insertion

    Insert two rows into the test dataset, with index test and test2 respectively:

    from starwhale import dataset

    with dataset("test") as ds:
    ds["test"] = {"txt": "abc", "int": 1}
    ds["test2"] = {"txt": "bcd", "int": 2}
    ds.commit()
    • Parallel insertion
    from starwhale import dataset, Binary
    from concurrent.futures import as_completed, ThreadPoolExecutor

    ds = dataset("test")

    def _do_append(_start: int) -> None:
    for i in range(_start, 100):
    ds.append((i, {"data": Binary(), "label": i}))

    pool = ThreadPoolExecutor(max_workers=10)
    tasks = [pool.submit(_do_append, i * 10) for i in range(0, 9)]
    # wait for all append tasks to finish before committing
    for task in as_completed(tasks): task.result()

    ds.commit()
    ds.close()

    __delitem__

    __delitem__ is a method to delete certain rows of data from the dataset.

    def __delitem__(self, key: _ItemType) -> None:
    from starwhale import dataset

    ds = dataset("existed-ds")
    del ds[6:9]
    del ds[0]
    ds.commit()
    ds.close()

    append

    append is a method to append data to a dataset, similar to the append method for Python lists.

    • When appending a features dict, each row is automatically indexed with an int starting from 0 and incrementing.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
      for i in range(0, 100):
      ds.append({"label": i, "image": Image(f"folder/{i}.png")})
      ds.commit()
    • When appending an (index, features) tuple, the index of each data row in the dataset will not be handled automatically.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
      for i in range(0, 100):
      ds.append((f"index-{i}", {"label": i, "image": Image(f"folder/{i}.png")}))

      ds.commit()

    extend

    extend is a method to bulk append data to a dataset, similar to the extend method for Python lists.

    from starwhale import dataset, Text

    ds = dataset("new-ds")
    ds.extend([
    (f"label-{i}", {"text": Text(), "label": i}) for i in range(0, 10)
    ])
    ds.commit()
    ds.close()

    commit

    commit is a method that flushes the current cached data to storage when called, and generates a dataset version. This version can then be used to load the corresponding dataset content afterwards.

    For a dataset, if some data is added without calling commit, but close is called or the process exits directly instead, the data will still be written to the dataset, just without generating a new version.

    @_check_readonly
    def commit(
    self,
    tags: t.Optional[t.List[str]] = None,
    message: str = "",
    force_add_tags: bool = False,
    ignore_add_tags_errors: bool = False,
    ) -> str:

    Parameters

    • tags: (list(str), optional)
      • tag as a list
    • message: (str, optional)
      • commit message. The default value is empty.
    • force_add_tags: (bool, optional)
      • For server/cloud instances, when adding tags to this version, if a tag has already been applied to other dataset versions, you can use the force_add_tags=True parameter to forcibly add the tag to this version, otherwise an exception will be thrown.
      • The default is False.
    • ignore_add_tags_errors: (bool, optional)
      • Ignore any exceptions thrown when adding tags.
      • The default is False.

    Examples

    from starwhale import dataset
    with dataset("mnist") as ds:
    ds.append({"label": 1})
    ds.commit(message="init commit")

    readonly

    readonly is a property attribute indicating if the dataset is read-only, it returns a bool value.

    from starwhale import dataset
    ds = dataset("mnist", readonly=True)
    assert ds.readonly

    loading_version

    loading_version is a property attribute, string type.

    • When loading an existing dataset, the loading_version is the related dataset version.
    • When creating a non-existed dataset, the loading_version is equal to the pending_commit_version.

    pending_commit_version

    pending_commit_version is a property attribute, string type. When you call the commit function, the pending_commit_version will be recorded in the Standalone, Server, or Cloud instance.

    committed_version

    committed_version is a property attribute, string type. After the commit function is called, the committed_version will come out, it is equal to the pending_commit_version. Accessing this attribute without calling commit first will raise an exception.
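
    The following sketch illustrates these version attributes with a hypothetical "version-demo" dataset:

    from starwhale import dataset

    ds = dataset("version-demo")
    ds.append({"label": 1})
    print(ds.pending_commit_version)  # the version that commit will create
    ds.commit()
    print(ds.committed_version)  # equal to the pending_commit_version after commit
    ds.close()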

    remove

    remove is a method equivalent to the swcli dataset remove command, it can delete a dataset.

    def remove(self, force: bool = False) -> None:

    recover

    recover is a method equivalent to the swcli dataset recover command, it can recover a soft-deleted dataset that has not been run garbage collection.

    def recover(self, force: bool = False) -> None:

    summary

    summary is a method equivalent to the swcli dataset summary command, it returns summary information of the dataset.

    def summary(self) -> t.Optional[DatasetSummary]:

    history

    history is a method equivalent to the swcli dataset history command, it returns the history records of the dataset.

    def history(self) -> t.List[t.Dict]:
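
    A combined usage sketch for these management methods, assuming an existing "mnist" dataset:

    from starwhale import dataset

    ds = dataset("mnist")
    print(ds.summary())  # summary information of the dataset
    print(ds.history())  # history records of the dataset versions
    ds.remove()  # soft-delete the dataset
    ds.recover()  # recover the soft-deleted dataset before garbage collection runs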

    flush

    flush is a method that flushes temporarily cached data from memory to persistent storage. The commit and close methods will automatically call flush.
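
    A minimal usage sketch:

    from starwhale import dataset

    ds = dataset("mnist")
    ds.append({"label": 1})
    ds.flush()  # persist the cached rows to storage without creating a new version
    ds.commit()
    ds.close()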

    close

    close is a method that closes opened connections related to the dataset. Dataset also implements contextmanager, so datasets can be automatically closed using with syntax without needing to explicitly call close.

    from starwhale import dataset

    ds = dataset("mnist")
    ds.close()

    with dataset("mnist") as ds:
    print(ds[0])

    head

    head is a method to show the first n rows of a dataset, equivalent to the swcli dataset head command.

    def head(self, n: int = 5, skip_fetch_data: bool = False) -> List[DataRow]:

    fetch_one

    fetch_one is a method to get the first record in a dataset, similar to head(n=1)[0].
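
    A short usage sketch for head and fetch_one, assuming an existing "mnist" dataset:

    from starwhale import dataset

    ds = dataset("mnist")
    rows = ds.head(n=2)  # the first two DataRow objects
    first = ds.fetch_one()  # equivalent to head(n=1)[0]
    print(first.features)
    ds.close()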

    list

    list is a class method to list Starwhale datasets under a project URI, equivalent to the swcli dataset list command.

    @classmethod
    def list(
    cls,
    project_uri: Union[str, Project] = "",
    fullname: bool = False,
    show_removed: bool = False,
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
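
    A hedged example of listing datasets under the currently selected project; the exact fields of the returned dicts depend on the instance:

    from starwhale import Dataset

    datasets, page_info = Dataset.list()
    for info in datasets:
        print(info)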

    copy

    copy is a method to copy a dataset to another instance, equivalent to the swcli dataset copy command.

    def copy(
    self,
    dest_uri: str,
    dest_local_project_uri: str = "",
    force: bool = False,
    mode: str = DatasetChangeMode.PATCH.value,
    ignore_tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • dest_uri: (str, required)
      • Dataset URI
    • dest_local_project_uri: (str, optional)
      • When copying a remote dataset to a local instance, this parameter can be set to specify the destination Project URI.
    • force: (bool, optional)
      • Whether to forcibly overwrite the dataset if there is already one with the same version on the target instance.
      • The default value is False.
      • When the tags are already used for another dataset version in the destination instance, you should use the force option or adjust the tags.
    • mode: (str, optional)
      • Dataset copy mode, default is 'patch'. Mode choices are: 'patch', 'overwrite'.
      • patch: Patch mode, only update the changed rows and columns for the remote dataset.
      • overwrite: Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • ignore_tags (List[str], optional)
      • Ignore tags when copying.
      • By default, the dataset is copied with all user custom tags.
      • latest and ^v\d+$ are system built-in tags; they are ignored automatically.

    Examples

    from starwhale import dataset
    ds = dataset("mnist")
    ds.copy("cloud://remote-instance/project/starwhale")

    to_pytorch

    to_pytorch is a method that can convert a Starwhale dataset to a Pytorch torch.utils.data.Dataset, which can then be passed to torch.utils.data.DataLoader for use.

    It should be noted that the to_pytorch function returns a Pytorch IterableDataset.

    def to_pytorch(
    self,
    transform: t.Optional[t.Callable] = None,
    drop_index: bool = True,
    skip_default_transform: bool = False,
    ) -> torch.utils.data.Dataset:

    Parameters

    • transform: (callable, optional)
      • A transform function for input data.
    • drop_index: (bool, optional)
      • Whether to drop the index column.
    • skip_default_transform: (bool, optional)
      • If transform is not set, by default the built-in Starwhale transform function will be used to transform the data. This can be disabled with the skip_default_transform parameter.

    Examples

    import torch.utils.data as tdata
    from starwhale import dataset

    ds = dataset("mnist")

    torch_ds = ds.to_pytorch()
    torch_loader = tdata.DataLoader(torch_ds, batch_size=2)
    import typing as t

    import torch
    import torch.utils.data as tdata
    from starwhale import dataset, Text

    with dataset("mnist") as ds:
    for i in range(0, 10):
    ds.append({"txt": Text(f"data-{i}"), "label": i})

    ds.commit()

    def _custom_transform(data: t.Any) -> t.Any:
    data = data.copy()
    txt = data["txt"].to_str()
    data["txt"] = f"custom-{txt}"
    return data

    torch_loader = tdata.DataLoader(
    dataset(ds.uri).to_pytorch(transform=_custom_transform), batch_size=1
    )
    item = next(iter(torch_loader))
    assert isinstance(item["label"], torch.Tensor)
    assert item["txt"][0] in ("custom-data-0", "custom-data-1")

    to_tensorflow

    to_tensorflow is a method that can convert a Starwhale dataset to a Tensorflow tensorflow.data.Dataset.

    def to_tensorflow(self, drop_index: bool = True) -> tensorflow.data.Dataset:

    Parameters

    • drop_index: (bool, optional)
      • Whether to drop the index column.

    Examples

    from starwhale import dataset
    import tensorflow as tf

    ds = dataset("mnist")
    tf_ds = ds.to_tensorflow(drop_index=True)
    assert isinstance(tf_ds, tf.data.Dataset)

    with_builder_blob_config

    with_builder_blob_config is a method to set blob-related attributes in a Starwhale dataset. It needs to be called before making data changes.

    def with_builder_blob_config(
    self,
    volume_size: int | str | None = D_FILE_VOLUME_SIZE,
    alignment_size: int | str | None = D_ALIGNMENT_SIZE,
    ) -> Dataset:

    Parameters

    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.

    Examples

    from starwhale import dataset, Binary

    ds = dataset("mnist").with_builder_blob_config(volume_size="32M", alignment_size=128)
    ds.append({"data": Binary(b"123")})
    ds.commit()
    ds.close()

    with_loader_config

    with_loader_config is a method to set parameters for the Starwhale dataset loader process.

    def with_loader_config(
    self,
    num_workers: t.Optional[int] = None,
    cache_size: t.Optional[int] = None,
    field_transformer: t.Optional[t.Dict] = None,
    ) -> Dataset:

    Parameters

    • num_workers: (int, optional)
      • The number of workers for loading the dataset.
      • The default value is 2.
    • cache_size: (int, optional)
      • The number of prefetched data rows.
      • The default value is 20.
    • field_transformer: (dict, optional)
      • A dict that maps feature names to new names when loading the dataset.

    Examples

    from starwhale import Dataset, dataset
    Dataset.from_json(
    "translation",
    '[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    myds = dataset("translation").with_loader_config(field_transformer={"en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["en"]
    from starwhale import Dataset, dataset
    Dataset.from_json(
    "translation2",
    '[{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}]'
    )
    myds = dataset("translation2").with_loader_config(field_transformer={"content.child_content[0].en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["content"]["child_content"][0]["en"]
    - - + + \ No newline at end of file diff --git a/0.5.10/reference/sdk/evaluation/index.html b/0.5.10/reference/sdk/evaluation/index.html index 1913e8c3a..e1f6ea83f 100644 --- a/0.5.10/reference/sdk/evaluation/index.html +++ b/0.5.10/reference/sdk/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.10

    Starwhale Model Evaluation SDK

    @evaluation.predict

    The @evaluation.predict decorator defines the inference process in the Starwhale Model Evaluation, similar to the map phase in MapReduce. It contains the following core features:

    • On the Server instance, request the resources needed to run.
    • Automatically read the local or remote datasets, and pass the data in the datasets one by one or in batches to the function decorated by evaluation.predict.
    • By the replicas setting, implement distributed dataset consumption to horizontally scale and shorten the time required for the model evaluation tasks.
    • Automatically store the return values of the function and the input features of the dataset into the results table, for display in the Web UI and further use in the evaluate phase.
    • The decorated function is called once for each single piece of data or each batch, to complete the inference process.

    Parameters

    • resources: (dict, optional)
      • Defines the resources required by each predict task when running on the Server instance, including mem, cpu, and nvidia.com/gpu.
      • mem: The unit is Bytes, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"mem": {"request": 100 * 1024, "limit": 200 * 1024}}.
        • If only a single number is set, the Python SDK will automatically set request and limit to the same value, e.g. resources={"mem": 100 * 1024} is equivalent to resources={"mem": {"request": 100 * 1024, "limit": 100 * 1024}}.
      • cpu: The unit is the number of CPU cores, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"cpu": {"request": 1, "limit": 2}}.
        • If only a single number is set, the SDK will automatically set request and limit to the same value, e.g. resources={"cpu": 1.5} is equivalent to resources={"cpu": {"request": 1.5, "limit": 1.5}}.
      • nvidia.com/gpu: The unit is the number of GPUs, int type is supported.
        • nvidia.com/gpu does not support setting request and limit, only a single number is supported.
      • Note: The resources parameter currently only takes effect on the Server instances. For the Cloud instances, the same can be achieved by selecting the corresponding resource pool when submitting the evaluation task. Standalone instances do not support this feature at all.
    • replicas: (int, optional)
      • The number of replicas to run predict.
      • predict defines a Step, in which there are multiple equivalent Tasks. Each Task runs on a Pod in Cloud/Server instances, and a Thread in Standalone instances.
      • When multiple replicas are specified, they are equivalent and will jointly consume the selected dataset to achieve distributed dataset consumption. It can be understood that a row in the dataset will only be read by one predict replica.
      • The default is 1.
    • batch_size: (int, optional)
      • Batch size for passing data from the dataset into the function.
      • The default is 1.
    • fail_on_error: (bool, optional)
      • Whether to interrupt the entire model evaluation when the decorated function throws an exception. If you expect some "exceptional" data to cause evaluation failures but don't want to interrupt the overall evaluation, you can set fail_on_error=False.
      • The default is True.
    • auto_log: (bool, optional)
      • Whether to automatically log the return values of the function and the input features of the dataset to the results table.
      • The default is True.
    • log_mode: (str, optional)
      • When auto_log=True, you can set log_mode to define logging the return values in plain or pickle format.
      • The default is pickle.
    • log_dataset_features: (List[str], optional)
      • When auto_log=True, you can selectively log certain features from the dataset via this parameter.
      • By default, all features will be logged.
    • needs: (List[Callable], optional)
      • Defines the prerequisites for this task to run; the needs syntax can be used to implement a DAG.
      • needs accepts functions decorated by @evaluation.predict, @evaluation.evaluate, and @handler.
      • The default is empty, i.e. does not depend on any other tasks.

    Input

    The decorated functions need to define some input parameters to accept dataset data and related information. The following parameter patterns are supported:

    • data:

      • data is a dict type that can read the features of the dataset.
      • When batch_size=1 or batch_size is not set, the label feature can be read through data['label'] or data.label.
      • When batch_size is set to > 1, data is a list.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data):
      print(data['label'])
      print(data.label)
    • data + external:

      • data is a dict type that can read the features of the dataset.
      • external is also a dict, including the index, index_with_dataset, dataset_info, context and dataset_uri keys. These attributes can be used for further fine-grained processing.
        • index: The index of the dataset row.
        • index_with_dataset: The index with the dataset info.
        • dataset_info: starwhale.core.dataset.tabular.TabularDatasetInfo Class.
        • context: starwhale.Context Class.
        • dataset_uri: starwhale.base.uri.resource.Resource Class.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, external):
      print(data['label'])
      print(data.label)
      print(external["context"])
      print(external["dataset_uri"])
    • data + **kw:

      • data is a dict type that can read the features of the dataset.
      • kw is a dict that contains external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, **kw):
      print(kw["external"]["context"])
      print(kw["external"]["dataset_uri"])
    • *args + **kwargs:

      • The first argument of args list is data.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args, **kw):
      print(args[0].label)
      print(args[0]["label"])
      print(kw["external"]["context"])
    • **kwargs:

      from starwhale import evaluation

      @evaluation.predict
      def predict(**kw):
      print(kw["data"].label)
      print(kw["data"]["label"])
      print(kw["external"]["context"])
    • *args:

      • *args does not contain external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args):
      print(args[0].label)
      print(args[0]["label"])

    Examples

    from starwhale import evaluation

    @evaluation.predict
    def predict_image(data):
    ...

    @evaluation.predict(
    dataset="mnist/version/latest",
    batch_size=32,
    replicas=4,
    needs=[predict_image],
    )
    def predict_batch_images(batch_data):
    ...

    @evaluation.predict(
    resources={"nvidia.com/gpu": 1,
    "cpu": {"request": 1, "limit": 2},
    "mem": 200 * 1024}, # 200MB
    log_mode="plain",
    )
    def predict_with_resources(data):
    ...

    @evaluation.predict(
    replicas=1,
    log_mode="plain",
    log_dataset_features=["txt", "img", "label"],
    )
    def predict_with_selected_features(data):
    ...

    @evaluation.evaluate

    @evaluation.evaluate is a decorator that defines the evaluation process in the Starwhale Model evaluation, similar to the reduce phase in MapReduce. It contains the following core features:

    • On the Server instance, request the resources needed to run.
    • Read the data recorded in the results table automatically during the predict phase, and pass it into the function as an iterator.
    • The evaluate phase will only run one replica, and cannot define the replicas parameter like the predict phase.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
      • In the common case, it will depend on a function decorated by @evaluation.predict.
    • use_predict_auto_log: (bool, optional)
      • Defaults to True, passes an iterator that can traverse the predict results to the function.

    Input

    • When use_predict_auto_log=True (default), pass an iterator that can traverse the predict results into the function.
      • The iterated object is a dictionary containing two keys: output and input.
        • output is the element returned by the predict stage function.
        • input is the features of the corresponding dataset during the inference process, which is a dictionary type.
    • When use_predict_auto_log=False, do not pass any parameters into the function.

    Examples

    from starwhale import evaluation

    @evaluation.evaluate(needs=[predict_image])
    def evaluate_results(predict_result_iter):
    ...

    @evaluation.evaluate(
    use_predict_auto_log=False,
    needs=[predict_image],
    )
    def evaluate_results():
    ...
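
    A hedged sketch of consuming the iterator when use_predict_auto_log=True (the default), reusing the predict_image handler from the predict examples above:

    from starwhale import evaluation

    @evaluation.evaluate(needs=[predict_image])
    def evaluate_results(predict_result_iter):
        for item in predict_result_iter:
            print(item["output"])  # the return value of the predict function
            print(item["input"]["label"])  # the features of the corresponding dataset row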

    evaluation.log

    evaluation.log is a function that logs certain evaluation metrics to specific tables, which can be viewed on the web page of the Server/Cloud instance.

    Parameters

    • category: (str, required)
      • The category of the logged record, which will be used as a suffix for the Starwhale Datastore table name.
      • Each category corresponds to a Starwhale Datastore table, with these tables isolated by evaluation task ID without affecting each other.
    • id: (str|int, required)
      • The ID of the logged record, unique within the table.
      • Only one type, either str or int, can be used as ID type in the same table.
    • metrics: (dict, required)
      • A dictionary recording metrics in key-value pairs.

    Examples

    from starwhale import evaluation

    evaluation.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation.log("ppl", "1", {"a": "test", "b": 1})

    evaluation.log_summary

    evaluation.log_summary is a function that logs certain metrics to the summary table. The evaluation page of a Server/Cloud instance displays data from the summary table.

    Each time it is called, Starwhale automatically updates the table using the unique ID of the current evaluation as the row ID. This function can be called multiple times during an evaluation to update different columns.

    Each project has one summary table, and all evaluation jobs under that project will log their summary information into this table.

    @classmethod
    def log_summary(cls, *args: t.Any, **kw: t.Any) -> None:

    Examples

    from starwhale import evaluation

    evaluation.log_summary(loss=0.99)
    evaluation.log_summary(loss=0.99, accuracy=0.99)
    evaluation.log_summary({"loss": 0.99, "accuracy": 0.99})

    evaluation.iter

    evaluation.iter is a function that returns an iterator for reading data iteratively from certain model evaluation tables.

    @classmethod
    def iter(cls, category: str) -> t.Iterator:

    Parameters

    • category: (str, required)
      • This parameter is consistent with the meaning of the category parameter in the evaluation.log function.

    Examples

    from starwhale import evaluation

    results = [data for data in evaluation.iter("label/0")]

    @handler

    @handler is a decorator that provides the following functionalities:

    • On a Server instance, it requests the required resources to run.
    • It can control the number of replicas.
    • Multiple handlers can form a DAG through dependency relationships to control the execution workflow.
    • It can expose ports externally to run like a web handler.

    @fine_tune, @evaluation.predict and @evaluation.evaluate can be considered applications of @handler in certain specific areas. @handler is the underlying implementation of these decorators and is more fundamental and flexible.

    @classmethod
    def handler(
    cls,
    resources: t.Optional[t.Dict[str, t.Any]] = None,
    replicas: int = 1,
    needs: t.Optional[t.List[t.Callable]] = None,
    name: str = "",
    expose: int = 0,
    require_dataset: bool = False,
    ) -> t.Callable:

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
    • replicas: (int, optional)
      • Consistent with the replicas parameter definition in @evaluation.predict.
    • name: (str, optional)
      • The name displayed for the handler.
      • If not specified, use the decorated function's name.
    • expose: (int, optional)
      • The port exposed externally. When running a web handler, the exposed port needs to be declared.
      • The default is 0, meaning no port is exposed.
      • Currently only one port can be exposed.
    • require_dataset: (bool, optional)
      • Defines whether this handler requires a dataset when running.
      • If require_dataset=True, the user is required to input a dataset when creating an evaluation task on the Server/Cloud instance web page. If require_dataset=False, the user does not need to specify a dataset on the web page.
      • The default is False.

    Examples

    from starwhale import handler
    import gradio

    @handler(resources={"cpu": 1, "nvidia.com/gpu": 1}, replicas=3)
    def my_handler():
    ...

    @handler(needs=[my_handler])
    def my_another_handler():
    ...

    @handler(expose=7860)
    def chatbot():
    with gradio.Blocks() as server:
    ...
    server.launch(server_name="0.0.0.0", server_port=7860)

    @fine_tune

    fine_tune is a decorator that defines the fine-tuning process for model training.

    Some restrictions and usage suggestions:

    • fine_tune has only one replica.
    • fine_tune requires dataset input.
    • Generally, the dataset is obtained through Context.get_runtime_context() at the start of fine_tune.
    • Generally, at the end of fine_tune, the fine-tuned Starwhale model package is generated through starwhale.model.build, which will be automatically copied to the corresponding evaluation project.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.

    Examples

    from starwhale import model as starwhale_model
    from starwhale import dataset, fine_tune, Context

    @fine_tune(resources={"nvidia.com/gpu": 1})
    def llama_fine_tuning():
    ctx = Context.get_runtime_context()

    if len(ctx.dataset_uris) == 2:
    # TODO: use more graceful way to get train and eval dataset
    train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
    eval_dataset = dataset(ctx.dataset_uris[1], readonly=True, create="forbid")
    elif len(ctx.dataset_uris) == 1:
    train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
    eval_dataset = None
    else:
    raise ValueError("Only support 1 or 2 datasets(train and eval dataset) for now")

    # user training code
    train_llama(
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    )

    model_name = get_model_name()
    starwhale_model.build(name=f"llama-{model_name}-qlora-ft")

    @multi_classification

    The @multi_classification decorator uses the sklearn lib to analyze results for multi-classification problems, outputting the confusion matrix, ROC, AUC etc., and writing them to related tables in the Starwhale Datastore.

    When using it, certain requirements are placed on the return value of the decorated function, which should be (label, result) or (label, result, probability_matrix).

    def multi_classification(
    confusion_matrix_normalize: str = "all",
    show_hamming_loss: bool = True,
    show_cohen_kappa_score: bool = True,
    show_roc_auc: bool = True,
    all_labels: t.Optional[t.List[t.Any]] = None,
    ) -> t.Any:

    Parameters

    • confusion_matrix_normalize: (str, optional)
      • Accepts one of three values:
        • true: rows
        • pred: columns
        • all: rows+columns
    • show_hamming_loss: (bool, optional)
      • Whether to calculate the Hamming loss.
      • The default is True.
    • show_cohen_kappa_score: (bool, optional)
      • Whether to calculate the Cohen kappa score.
      • The default is True.
    • show_roc_auc: (bool, optional)
      • Whether to calculate ROC/AUC. To calculate, the function needs to return a (label, result, probability_matrix) tuple, otherwise a (label, result) tuple is sufficient.
      • The default is True.
    • all_labels: (List, optional)
      • Defines all the labels.

    Examples


    import typing as t

    from starwhale import multi_classification

    @multi_classification(
    confusion_matrix_normalize="all",
    show_hamming_loss=True,
    show_cohen_kappa_score=True,
    show_roc_auc=True,
    all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
    label, result, probability_matrix = [], [], []
    return label, result, probability_matrix

    @multi_classification(
    confusion_matrix_normalize="all",
    show_hamming_loss=True,
    show_cohen_kappa_score=True,
    show_roc_auc=False,
    all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int]]:
    label, result = [], []
    return label, result

    PipelineHandler

    The PipelineHandler class provides a default model evaluation workflow definition that requires users to implement the predict and evaluate functions.

    The PipelineHandler is equivalent to using the @evaluation.predict and @evaluation.evaluate decorators together - the usage looks different but the underlying model evaluation process is the same.

    Note that PipelineHandler currently does not support defining resources parameters.

    Users need to implement the following functions:

    • predict: Defines the inference process, equivalent to a function decorated with @evaluation.predict.

    • evaluate: Defines the evaluation process, equivalent to a function decorated with @evaluation.evaluate.

    import typing as t
    from typing import Any, Iterator
    from abc import ABCMeta, abstractmethod

    class PipelineHandler(metaclass=ABCMeta):
    def __init__(
    self,
    predict_batch_size: int = 1,
    ignore_error: bool = False,
    predict_auto_log: bool = True,
    predict_log_mode: str = PredictLogMode.PICKLE.value,
    predict_log_dataset_features: t.Optional[t.List[str]] = None,
    **kwargs: t.Any,
    ) -> None:
    self.context = Context.get_runtime_context()
    ...

    def predict(self, data: Any, **kw: Any) -> Any:
    raise NotImplementedError

    def evaluate(self, ppl_result: Iterator) -> Any:
    raise NotImplementedError

    Parameters

    • predict_batch_size: (int, optional)
      • Equivalent to the batch_size parameter in @evaluation.predict.
      • Default is 1.
    • ignore_error: (bool, optional)
      • Equivalent to the fail_on_error parameter in @evaluation.predict.
      • Default is False.
    • predict_auto_log: (bool, optional)
      • Equivalent to the auto_log parameter in @evaluation.predict.
      • Default is True.
    • predict_log_mode: (str, optional)
      • Equivalent to the log_mode parameter in @evaluation.predict.
      • Default is pickle.
    • predict_log_dataset_features: (List[str], optional)
      • Equivalent to the log_dataset_features parameter in @evaluation.predict.
      • Default is None, which records all features.

    Examples

    import typing as t

    import torch
    from starwhale import Image, PipelineHandler

    class Example(PipelineHandler):
    def __init__(self) -> None:
    super().__init__()
    self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    self.model = self._load_model(self.device)

    def predict(self, data: t.Dict):
    data_tensor = self._pre(data.img)
    output = self.model(data_tensor)
    return self._post(output)

    def evaluate(self, ppl_result):
    result, label, pr = [], [], []
    for _data in ppl_result:
    label.append(_data["input"]["label"])
    result.extend(_data["output"][0])
    pr.extend(_data["output"][1])
    return label, result, pr

    def _pre(self, input: Image) -> torch.Tensor:
    ...

    def _post(self, input):
    ...

    def _load_model(self, device):
    ...

    Context

    The context information passed during model evaluation, including Project, Task ID, etc. The Context content is automatically injected and can be used in the following ways:

    • Inherit the PipelineHandler class and use the self.context object.
    • Get it through Context.get_runtime_context().

    Note that Context can only be used during model evaluation, otherwise the program will throw an exception.

    Currently Context can get the following values:

    • project: str
      • Project name.
    • version: str
      • Unique ID of model evaluation.
    • step: str
      • Step name.
    • total: int
      • Total number of Tasks under the Step.
    • index: int
      • Task index number, starting from 0.
    • dataset_uris: List[str]
      • List of Starwhale dataset URIs.

    Examples


    import typing as t

    from starwhale import Context, PipelineHandler

    def func():
    ctx = Context.get_runtime_context()
    print(ctx.project)
    print(ctx.version)
    print(ctx.step)
    ...

    class Example(PipelineHandler):

    def predict(self, data: t.Dict):
    print(self.context.project)
    print(self.context.version)
    print(self.context.step)

    @starwhale.api.service.api

    @starwhale.api.service.api is a decorator that provides a simple, Gradio-based Web Handler input definition. When a Web Service is launched with the swcli model serve command, it accepts external requests and returns inference results to the user, enabling online evaluation.

    Examples

    import typing as t

    import gradio

    from starwhale import Image
    from starwhale.api.service import api

    def predict_image(img):
    ...

    @api(gradio.File(), gradio.Label())
    def predict_view(file: t.Any) -> t.Any:
    with open(file.name, "rb") as f:
    data = Image(f.read(), shape=(28, 28, 1))
    _, prob = predict_image({"img": data})
    return {i: p for i, p in enumerate(prob)}

    starwhale.api.service.Service

    If you want to customize the web service implementation, you can subclass Service and override the serve method.

    class CustomService(Service):
    def serve(self, addr: str, port: int, handler_list: t.List[str] = None) -> None:
    ...

    svc = CustomService()

    @svc.api(...)
    def handler(data):
    ...

    Notes:

    • Handlers added with PipelineHandler.add_api, the api decorator, or Service.api can work together.
    • If you use a custom Service, you need to instantiate the custom Service class in the model code.

    Custom Request and Response

    Request and Response are handler preprocessing and postprocessing classes for receiving user requests and returning results. They can be simply understood as pre and post logic for the handler.

    Starwhale provides built-in Request implementations for Dataset types and Json Response. Users can also customize the logic as follows:

    import typing as t

    from starwhale.api.service import (
    Request,
    Service,
    Response,
    )

    class CustomInput(Request):
    def load(self, req: t.Any) -> t.Any:
    return req

    class CustomOutput(Response):
    def __init__(self, prefix: str) -> None:
    self.prefix = prefix

    def dump(self, req: str) -> bytes:
    return f"{self.prefix} {req}".encode("utf-8")

    svc = Service()

    @svc.api(request=CustomInput(), response=CustomOutput("hello"))
    def foo(data: t.Any) -> t.Any:
    ...
    - - + + \ No newline at end of file diff --git a/0.5.10/reference/sdk/model/index.html b/0.5.10/reference/sdk/model/index.html index a25f73176..8f4cd1865 100644 --- a/0.5.10/reference/sdk/model/index.html +++ b/0.5.10/reference/sdk/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.10

    Starwhale Model SDK

    model.build

    model.build is a function that can build the Starwhale model, equivalent to the swcli model build command.

    def build(
    modules: t.Optional[t.List[t.Any]] = None,
    workdir: t.Optional[_path_T] = None,
    name: t.Optional[str] = None,
    project_uri: str = "",
    desc: str = "",
    remote_project_uri: t.Optional[str] = None,
    add_all: bool = False,
    tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • modules: (List[str|object], optional)
      • The search modules support objects (function, class or module) or strings (example: "to.path.module", "to.path.module:object").
      • If the argument is not specified, the search modules are the imported modules.
    • name: (str, optional)
      • Starwhale Model name.
      • The default is the current work dir (cwd) name.
    • workdir: (str, Pathlib.Path, optional)
      • The path of the rootdir. The default workdir is the current working dir.
      • All files in the workdir will be packaged. If you want to ignore some files, you can add .swignore file in the workdir.
    • project_uri: (str, optional)
      • The project uri of the Starwhale Model.
      • If the argument is not specified, the project_uri is the config value of swcli project select command.
    • desc: (str, optional)
      • The description of the Starwhale Model.
    • remote_project_uri: (str, optional)
      • Project URI of another instance. After the Starwhale model is built, it will be automatically copied to the remote instance.
    • add_all: (bool, optional)
      • Add all files in the working directory to the model package (excludes python cache files and virtual environment files when disabled). The .swignore file still takes effect.
      • The default value is False.
    • tags: (List[str], optional)
      • The tags for the model version.
      • latest and ^v\d+$ tags are reserved tags.

    Examples

    from starwhale import model

    # class search handlers
    from .user.code.evaluator import ExamplePipelineHandler
    model.build([ExamplePipelineHandler])

    # function search handlers
    from .user.code.evaluator import predict_image
    model.build([predict_image])

    # module handlers, @handler decorates function in this module
    from .user.code import evaluator
    model.build([evaluator])

    # str search handlers
    model.build(["user.code.evaluator:ExamplePipelineHandler"])
    model.build(["user.code1", "user.code2"])

    # no search handlers, use imported modules
    model.build()

    # add user custom tags
    model.build(tags=["t1", "t2"])
    - - + + \ No newline at end of file diff --git a/0.5.10/reference/sdk/other/index.html b/0.5.10/reference/sdk/other/index.html index e1ab4ed95..c1d454df0 100644 --- a/0.5.10/reference/sdk/other/index.html +++ b/0.5.10/reference/sdk/other/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.10

    Other SDK

    __version__

    Version of Starwhale Python SDK and swcli, string constant.

    >>> from starwhale import __version__
    >>> print(__version__)
    0.5.7

    init_logger

    Initialize the Starwhale logger and traceback depth. The default verbose value is 0.

    • 0: show only errors, traceback only shows 1 frame.
    • 1: show errors + warnings, traceback shows 5 frames.
    • 2: show errors + warnings + info, traceback shows 10 frames.
    • 3: show errors + warnings + info + debug, traceback shows 100 frames.
    • >=4: show errors + warnings + info + debug + trace, traceback shows 1000 frames.
    def init_logger(verbose: int = 0) -> None:
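
    A minimal usage sketch:

    from starwhale import init_logger

    # show errors + warnings + info + debug, traceback shows 100 frames
    init_logger(3)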

    login

    Log in to a Server/Cloud instance. It is equivalent to running the swcli instance login command. Logging in to a Standalone instance is meaningless.

    def login(
    instance: str,
    alias: str = "",
    username: str = "",
    password: str = "",
    token: str = "",
    ) -> None:

    Parameters

    • instance: (str, required)
      • The http url of the server/cloud instance.
    • alias: (str, optional)
      • An alias for the instance to simplify the instance part of the Starwhale URI.
      • If not specified, the hostname part of the instance http url will be used.
    • username: (str, optional)
    • password: (str, optional)
    • token: (str, optional)
      • You can only choose one of username + password or token to log in to the instance.

    Examples

    from starwhale import login

    # login to Starwhale Cloud instance by token
    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")

    # login to Starwhale Server instance by username and password
    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")

    logout

    Log out of a Server/Cloud instance. It is equivalent to running the swcli instance logout command. Logging out of a Standalone instance is meaningless.

    def logout(instance: str) -> None:

    Examples

    from starwhale import login, logout

    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")
    # logout by the alias
    logout("cloud-cn")

    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")
    # logout by the instance http url
    logout("http://controller.starwhale.svc")
    - - + + \ No newline at end of file diff --git a/0.5.10/reference/sdk/overview/index.html b/0.5.10/reference/sdk/overview/index.html index 3beed9ea4..3946e0ae5 100644 --- a/0.5.10/reference/sdk/overview/index.html +++ b/0.5.10/reference/sdk/overview/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.10

    Python SDK Overview

    Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.

    Classes

    • PipelineHandler: Provides default model evaluation process definition, requires implementation of predict and evaluate methods.
    • Context: Passes context information during model evaluation, including Project, Task ID etc.
    • class Dataset: Starwhale Dataset class.
    • class starwhale.api.service.Service: The base class of online evaluation.

    Functions

    • @multi_classification: Decorator for multi-class problems to simplify evaluate result calculation and storage for better evaluation presentation.
    • @handler: Decorator to define a running entity with resource attributes (mem/cpu/gpu). You can control replica count. Handlers can form DAGs through dependencies to control execution flow.
    • @evaluation.predict: Decorator to define inference process in model evaluation, similar to map phase in MapReduce.
    • @evaluation.evaluate: Decorator to define evaluation process in model evaluation, similar to reduce phase in MapReduce.
    • evaluation.log: Log evaluation metrics to the specific tables.
    • evaluation.log_summary: Log certain metrics to the summary table.
    • evaluation.iter: Iterate and read data from certain model evaluation tables.
    • model.build: Build Starwhale model.
    • @fine_tune: Decorator to define model fine-tuning process.
    • init_logger: Set log level, implement 5-level logging.
    • dataset: Get starwhale.Dataset object, by creating new datasets or loading existing datasets.
    • @starwhale.api.service.api: Decorator to provide a simple Web Handler input definition based on Gradio.
    • login: Log in to the server/cloud instance.
    • logout: Log out of the server/cloud instance.

    Data Types

    • COCOObjectAnnotation: Provides COCO format definitions.
    • BoundingBox: Bounding box type, currently in LTWH format - left_x, top_y, width and height.
    • ClassLabel: Describes the number and types of labels.
    • Image: Image type.
    • GrayscaleImage: Grayscale image type, e.g. MNIST digit images, a special case of Image type.
    • Audio: Audio type.
    • Video: Video type.
    • Text: Text type, default utf-8 encoding, for storing large texts.
    • Binary: Binary type, stored in bytes, for storing large binary content.
    • Line: Line type.
    • Point: Point type.
    • Polygon: Polygon type.
    • Link: Link type, for creating remote-link data.
    • S3LinkAuth: When data is stored in S3-based object storage, this type describes auth and key info.
    • MIMEType: Describes multimedia types supported by Starwhale, used in mime_type attribute of Image, Video etc for better Dataset Viewer.
    • LinkType: Describes remote link types supported by Starwhale, currently LocalFS and S3.

    Other

    • __version__: Version of Starwhale Python SDK and swcli, string constant.

    Further reading

    - - + + \ No newline at end of file diff --git a/0.5.10/reference/sdk/type/index.html b/0.5.10/reference/sdk/type/index.html index 83a5dc015..522f27c29 100644 --- a/0.5.10/reference/sdk/type/index.html +++ b/0.5.10/reference/sdk/type/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.10

    Starwhale Data Types

    COCOObjectAnnotation

    It provides definitions following the COCO format.

    COCOObjectAnnotation(
    id: int,
    image_id: int,
    category_id: int,
    segmentation: Union[t.List, t.Dict],
    area: Union[float, int],
    bbox: Union[BoundingBox, t.List[float]],
    iscrowd: int,
    )
    Parameters

    • id: Object id, usually a globally incrementing id
    • image_id: Image id, usually id of the image
    • category_id: Category id, usually id of the class in object detection
    • segmentation: Object contour representation, Polygon (polygon vertices) or RLE format
    • area: Object area
    • bbox: Represents the bounding box, can be BoundingBox type or list of floats
    • iscrowd: 0 indicates a single object, 1 indicates two unseparated objects

    Examples

    def _make_coco_annotations(
    self, mask_fpath: Path, image_id: int
    ) -> t.List[COCOObjectAnnotation]:
    mask_img = PILImage.open(str(mask_fpath))

    mask = np.array(mask_img)
    object_ids = np.unique(mask)[1:]
    binary_mask = mask == object_ids[:, None, None]
    # TODO: tune permute without pytorch
    binary_mask_tensor = torch.as_tensor(binary_mask, dtype=torch.uint8)
    binary_mask_tensor = (
    binary_mask_tensor.permute(0, 2, 1).contiguous().permute(0, 2, 1)
    )

    coco_annotations = []
    for i in range(0, len(object_ids)):
    _pos = np.where(binary_mask[i])
    _xmin, _ymin = float(np.min(_pos[1])), float(np.min(_pos[0]))
    _xmax, _ymax = float(np.max(_pos[1])), float(np.max(_pos[0]))
    _bbox = BoundingBox(
    x=_xmin, y=_ymin, width=_xmax - _xmin, height=_ymax - _ymin
    )

    rle: t.Dict = coco_mask.encode(binary_mask_tensor[i].numpy()) # type: ignore
    rle["counts"] = rle["counts"].decode("utf-8")

    coco_annotations.append(
    COCOObjectAnnotation(
    id=self.object_id,
    image_id=image_id,
    category_id=1, # PennFudan Dataset only has one class-PASPersonStanding
    segmentation=rle,
    area=_bbox.width * _bbox.height,
    bbox=_bbox,
    iscrowd=0, # suppose all instances are not crowd
    )
    )
    self.object_id += 1

    return coco_annotations

    GrayscaleImage

    GrayscaleImage provides a grayscale image type. It is a special case of the Image type, for example the digit images in MNIST.

    GrayscaleImage(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
    Parameters

    • fp: Image path, IO object, or file content bytes
    • display_name: Display name shown in Dataset Viewer
    • shape: Image width and height, default channel is 1
    • as_mask: Whether used as a mask image
    • mask_uri: URI of the original image for the mask

    Examples

for i in range(0, min(data_number, label_number)):
    _data = data_file.read(image_size)
    _label = struct.unpack(">B", label_file.read(1))[0]
    yield GrayscaleImage(
        _data,
        display_name=f"{i}",
        shape=(height, width, 1),
    ), {"label": _label}

    GrayscaleImage Functions

GrayscaleImage.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    GrayscaleImage.carry_raw_data

    carry_raw_data() -> GrayscaleImage

    GrayscaleImage.astype

    astype() -> Dict[str, t.Any]

    BoundingBox

    BoundingBox provides a bounding box type, currently in LTWH format:

    • left_x: x-coordinate of left edge
    • top_y: y-coordinate of top edge
    • width: width of bounding box
    • height: height of bounding box

    So it represents the bounding box using the coordinates of its left, top, width and height. This is a common format for specifying bounding boxes in computer vision tasks.

BoundingBox(
    x: float,
    y: float,
    width: float,
    height: float
)

Parameter | Description
x | x-coordinate of the left edge (left_x)
y | y-coordinate of the top edge (top_y)
width | Width of the bounding box
height | Height of the bounding box
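
A minimal usage sketch (the coordinate values below are illustrative):

from starwhale import BoundingBox

# a 100x50 box whose top-left corner is at (10.0, 20.0)
bbox = BoundingBox(x=10.0, y=20.0, width=100.0, height=50.0)
area = bbox.width * bbox.height  # 5000.0, the same expression used in the COCO example above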

    ClassLabel

    Describe labels.

ClassLabel(
    names: List[Union[int, float, str]]
)
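
A minimal sketch (the label names below are illustrative):

from starwhale import ClassLabel

labels = ClassLabel(names=["cat", "dog", "bird"])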

    Image

    Image Type.

Image(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    mime_type: Optional[MIMEType] = None,
    as_mask: bool = False,
    mask_uri: str = "",
)

Parameter | Description
fp | Image path, IO object, or file content bytes
display_name | Display name shown in Dataset Viewer
shape | Image width, height and channels
mime_type | MIMEType supported types
as_mask | Whether the image is used as a mask image
mask_uri | URI of the original image for the mask

    The main difference from GrayscaleImage is that Image supports multi-channel RGB images by specifying shape as (W, H, C).

    Examples

import io
import typing as t
import pickle
from pathlib import Path
from PIL import Image as PILImage
from starwhale import Image, MIMEType

def _iter_item(paths: t.List[Path]) -> t.Generator[t.Tuple[t.Any, t.Dict], None, None]:
    for path in paths:
        with path.open("rb") as f:
            content = pickle.load(f, encoding="bytes")
            for data, label, filename in zip(
                content[b"data"], content[b"labels"], content[b"filenames"]
            ):
                annotations = {
                    "label": label,
                    "label_display_name": dataset_meta["label_names"][label],
                }

                image_array = data.reshape(3, 32, 32).transpose(1, 2, 0)
                image_bytes = io.BytesIO()
                PILImage.fromarray(image_array).save(image_bytes, format="PNG")

                yield Image(
                    fp=image_bytes.getvalue(),
                    display_name=filename.decode(),
                    shape=image_array.shape,
                    mime_type=MIMEType.PNG,
                ), annotations

    Image Functions

Image.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Image.carry_raw_data

carry_raw_data() -> Image

    Image.astype

    astype() -> Dict[str, t.Any]

    Video

    Video type.

Video(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
)

Parameter | Description
fp | Video path, IO object, or file content bytes
display_name | Display name shown in Dataset Viewer
mime_type | MIMEType supported types

    Examples

import typing as t
from pathlib import Path

from starwhale import Video, MIMEType

root_dir = Path(__file__).parent.parent
dataset_dir = root_dir / "data" / "UCF-101"
test_ds_path = [root_dir / "data" / "test_list.txt"]

def iter_ucf_item() -> t.Generator:
    for path in test_ds_path:
        with path.open() as f:
            for line in f.readlines():
                _, label, video_sub_path = line.split()

                data_path = dataset_dir / video_sub_path
                data = Video(
                    data_path,
                    display_name=video_sub_path,
                    shape=(1,),
                    mime_type=MIMEType.WEBM,
                )

                yield f"{label}_{video_sub_path}", {
                    "video": data,
                    "label": label,
                }

    Audio

    Audio type.

Audio(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
)

Parameter | Description
fp | Audio path, IO object, or file content bytes
display_name | Display name shown in Dataset Viewer
mime_type | MIMEType supported types

    Examples

import typing as t
from starwhale import Audio, MIMEType

def iter_item() -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
    for path in validation_ds_paths:
        with path.open() as f:
            for item in f.readlines():
                item = item.strip()
                if not item:
                    continue

                data_path = dataset_dir / item
                data = Audio(
                    data_path, display_name=item, shape=(1,), mime_type=MIMEType.WAV
                )

                speaker_id, utterance_num = data_path.stem.split("_nohash_")
                annotations = {
                    "label": data_path.parent.name,
                    "speaker_id": speaker_id,
                    "utterance_num": int(utterance_num),
                }
                yield data, annotations

    Audio Functions

Audio.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Audio.carry_raw_data

    carry_raw_data() -> Audio

    Audio.astype

    astype() -> Dict[str, t.Any]

    Text

    Text type, the default encode type is utf-8.

Text(
    content: str,
    encoding: str = "utf-8",
)

Parameter | Description
content | The text content
encoding | Encoding format of the text

    Examples

import typing as t
from pathlib import Path
from starwhale import Text

def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
    root_dir = Path(__file__).parent.parent / "data"

    with (root_dir / "fra-test.txt").open("r") as f:
        for line in f.readlines():
            line = line.strip()
            if not line or line.startswith("CC-BY"):
                continue

            _data, _label, *_ = line.split("\t")
            data = Text(_data, encoding="utf-8")
            annotations = {"label": _label}
            yield data, annotations

    Text Functions

Text.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Text.carry_raw_data

    carry_raw_data() -> Text

    Text.astype

    astype() -> Dict[str, t.Any]

    Text.to_str

    to_str() -> str

    Binary

    Binary provides a binary data type, stored as bytes.

Binary(
    fp: _TArtifactFP = "",
    mime_type: MIMEType = MIMEType.UNDEFINED,
)

Parameter | Description
fp | Path, IO object, or file content bytes
mime_type | MIMEType supported types
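
A minimal sketch, assuming raw bytes read from a local file (the path is illustrative):

from starwhale import Binary, MIMEType

with open("data/sample.bin", "rb") as f:
    data = Binary(f.read(), mime_type=MIMEType.UNDEFINED)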

    Binary Functions

Binary.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Binary.carry_raw_data

    carry_raw_data() -> Binary

    Binary.astype

    astype() -> Dict[str, t.Any]

Link

Link provides a link type to create remote-link datasets in Starwhale.

Link(
    uri: str,
    auth: Optional[LinkAuth] = DefaultS3LinkAuth,
    offset: int = 0,
    size: int = -1,
    data_type: Optional[BaseArtifact] = None,
)

Parameter | Description
uri | URI of the original data, currently supports localFS and S3 protocols
auth | Link auth information
offset | Data offset relative to the file pointed to by uri
size | Data size
data_type | Actual data type pointed to by the link, currently supports Binary, Image, Text, Audio and Video

    Link.astype

    astype() -> Dict[str, t.Any]

    S3LinkAuth

    S3LinkAuth provides authentication and key information when data is stored on S3 protocol based object storage.

S3LinkAuth(
    name: str = "",
    access_key: str = "",
    secret: str = "",
    endpoint: str = "",
    region: str = "local",
)

Parameter | Description
name | Name of the auth
access_key | Access key for the S3 connection
secret | Secret for the S3 connection
endpoint | Endpoint URL for the S3 connection
region | S3 region where the bucket is located, default is local

    Examples

import struct
import typing as t
from pathlib import Path

from starwhale import (
    Link,
    S3LinkAuth,
    GrayscaleImage,
    UserRawBuildExecutor,
)

class LinkRawDatasetProcessExecutor(UserRawBuildExecutor):
    _auth = S3LinkAuth(name="mnist", access_key="minioadmin", secret="minioadmin")
    _endpoint = "10.131.0.1:9000"
    _bucket = "users"

    def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        root_dir = Path(__file__).parent.parent / "data"

        with (root_dir / "t10k-labels-idx1-ubyte").open("rb") as label_file:
            _, label_number = struct.unpack(">II", label_file.read(8))

            offset = 16
            image_size = 28 * 28

            uri = f"s3://{self._endpoint}/{self._bucket}/dataset/mnist/t10k-images-idx3-ubyte"
            for i in range(label_number):
                _data = Link(
                    f"{uri}",
                    self._auth,
                    offset=offset,
                    size=image_size,
                    data_type=GrayscaleImage(display_name=f"{i}", shape=(28, 28, 1)),
                )
                _label = struct.unpack(">B", label_file.read(1))[0]
                yield _data, {"label": _label}
                offset += image_size

    MIMEType

    MIMEType describes the multimedia types supported by Starwhale, implemented using Python Enum. It is used in the mime_type attribute of Image, Video etc to enable better Dataset Viewer support.

class MIMEType(Enum):
    PNG = "image/png"
    JPEG = "image/jpeg"
    WEBP = "image/webp"
    SVG = "image/svg+xml"
    GIF = "image/gif"
    APNG = "image/apng"
    AVIF = "image/avif"
    PPM = "image/x-portable-pixmap"
    MP4 = "video/mp4"
    AVI = "video/avi"
    WEBM = "video/webm"
    WAV = "audio/wav"
    MP3 = "audio/mp3"
    PLAIN = "text/plain"
    CSV = "text/csv"
    HTML = "text/html"
    GRAYSCALE = "x/grayscale"
    UNDEFINED = "x/undefined"

    LinkType

    LinkType describes the remote link types supported by Starwhale, also implemented using Python Enum. Currently supports LocalFS and S3 types.

class LinkType(Enum):
    LocalFS = "local_fs"
    S3 = "s3"
    UNDEFINED = "undefined"

    Line

from starwhale import dataset, Point, Line

with dataset("collections") as ds:
    line_points = [
        Point(x=0.0, y=1.0),
        Point(x=0.0, y=100.0),
    ]
    ds.append({"line": line_points})
    ds.commit()

    Point

from starwhale import dataset, Point

with dataset("collections") as ds:
    ds.append(Point(x=0.0, y=100.0))
    ds.commit()

    Polygon

from starwhale import dataset, Point, Polygon

with dataset("collections") as ds:
    polygon_points = [
        Point(x=0.0, y=1.0),
        Point(x=0.0, y=100.0),
        Point(x=2.0, y=1.0),
        Point(x=2.0, y=100.0),
    ]
    ds.append({"polygon": polygon_points})
    ds.commit()

    swcli dataset

    Overview

    swcli [GLOBAL OPTIONS] dataset [OPTIONS] <SUBCOMMAND> [ARGS]...

    The dataset command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • head
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • summary
    • tag

    swcli dataset build

    swcli [GLOBAL OPTIONS] dataset build [OPTIONS]

Build a Starwhale Dataset. This command only supports building datasets on a Standalone instance.

    Options

• Data source options:

Option | Required | Type | Defaults | Description
-if or --image or --image-folder | N | String | | Build dataset from an image folder; the folder should contain the image files.
-af or --audio or --audio-folder | N | String | | Build dataset from an audio folder; the folder should contain the audio files.
-vf or --video or --video-folder | N | String | | Build dataset from a video folder; the folder should contain the video files.
-h or --handler or --python-handler | N | String | | Build dataset from a python executor handler; the handler format is [module path]:[class or func name].
-f or --yaml or --dataset-yaml | N | | dataset.yaml in cwd | Build dataset from a dataset.yaml file. Default uses the dataset.yaml in the work directory (cwd).
-jf or --json-file | N | String | | Build dataset from a json file; the option value is a json file path or an http download url. The json content structure should be a list[dict] or tuple[dict].
-hf or --huggingface | N | String | | Build dataset from a huggingface dataset; the option value is a huggingface repo name.

Data source options are mutually exclusive; only one is accepted. If none is set, the swcli dataset build command uses dataset.yaml mode and builds the dataset from the dataset.yaml in the cwd.

    • Other options:
Option | Required | Scope | Type | Defaults | Description
-pt or --patch | one of --patch and --overwrite | Global | Boolean | True | Patch mode, only update the changed rows and columns for the existing dataset.
-ow or --overwrite | one of --patch and --overwrite | Global | Boolean | False | Overwrite mode, update records and delete extraneous rows from the existing dataset.
-n or --name | N | Global | String | | Dataset name
-p or --project | N | Global | String | Default project | Project URI, the default is the currently selected project. The dataset will be stored in the specified project.
-d or --desc | N | Global | String | | Dataset description
-as or --alignment-size | N | Global | String | 128B | swds-bin format dataset: alignment size
-vs or --volume-size | N | Global | String | 64MB | swds-bin format dataset: volume size
-r or --runtime | N | Global | String | | Runtime URI
-w or --workdir | N | Python Handler Mode | String | cwd | Work dir to search handlers.
--auto-label/--no-auto-label | N | Image/Video/Audio Folder Mode | Boolean | True | Whether to auto label by the sub-folder name.
--field-selector | N | JSON File Mode | String | | The field from which to extract dataset array items. The field is split by the dot(.) symbol.
--subset | N | Huggingface Mode | String | | Huggingface dataset subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
--split | N | Huggingface Mode | String | | Huggingface dataset split name. If the split name is not specified, all splits of the dataset will be built.
--revision | N | Huggingface Mode | String | main | Version of the dataset script to load. Defaults to 'main'. The option value accepts a tag name, branch name, or commit hash.
--cache/--no-cache | N | Huggingface Mode | Boolean | True | Whether to use the huggingface dataset cache (download + local hf dataset).
-t or --tag | N | Global | String | | Dataset tags, the option can be used multiple times.

    Examples for dataset building

    #- from dataset.yaml
    swcli dataset build # build dataset from dataset.yaml in the current work directory(pwd)
swcli dataset build --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, all the involved files are resolved relative to the dataset.yaml file.
swcli dataset build --overwrite --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, and overwrite the existing dataset.
    swcli dataset build --tag tag1 --tag tag2

    #- from handler
    swcli dataset build --handler mnist.dataset:iter_mnist_item # build dataset from mnist.dataset:iter_mnist_item handler, the workdir is the current work directory(pwd).
    # build dataset from mnist.dataset:LinkRawDatasetProcessExecutor handler, the workdir is example/mnist
    swcli dataset build --handler mnist.dataset:LinkRawDatasetProcessExecutor --workdir example/mnist

    #- from image folder
    swcli dataset build --image-folder /path/to/image/folder # build dataset from /path/to/image/folder, search all image type files.

    #- from audio folder
    swcli dataset build --audio-folder /path/to/audio/folder # build dataset from /path/to/audio/folder, search all audio type files.

    #- from video folder
    swcli dataset build --video-folder /path/to/video/folder # build dataset from /path/to/video/folder, search all video type files.

    #- from json file
    swcli dataset build --json-file /path/to/example.json
    swcli dataset build --json-file http://example.com/example.json
    swcli dataset build --json-file /path/to/example.json --field-selector a.b.c # extract the json_content["a"]["b"]["c"] field from the json file.
    swcli dataset build --name qald9 --json-file https://raw.githubusercontent.com/ag-sc/QALD/master/9/data/qald-9-test-multilingual.json --field-selector questions

    #- from huggingface dataset
    swcli dataset build --huggingface mnist
    swcli dataset build -hf mnist --no-cache
    swcli dataset build -hf cais/mmlu --subset anatomy --split auxiliary_train --revision 7456cfb

    swcli dataset copy

    swcli [GLOBAL OPTIONS] dataset copy [OPTIONS] <SRC> <DEST>

    dataset copy copies from SRC to DEST.

    SRC and DEST are both dataset URIs.

When copying a Starwhale Dataset, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update the tags to this version.
-p or --patch | one of --patch and --overwrite | Boolean | True | Patch mode, only update the changed rows and columns for the remote dataset.
-o or --overwrite | one of --patch and --overwrite | Boolean | False | Overwrite mode, update records and delete extraneous rows from the remote dataset.
-i or --ignore-tag | N | String | | Ignore tags to copy. The option can be used multiple times.

    Examples for dataset copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a new dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp --patch cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with a dataset name 'mnist-local'
    swcli dataset cp --overwrite cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with a new dataset name 'mnist-cloud'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli dataset cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp local/project/myproject/dataset/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli dataset cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1 --force

    swcli dataset diff

    swcli [GLOBAL OPTIONS] dataset diff [OPTIONS] <DATASET VERSION> <DATASET VERSION>

    dataset diff compares the difference between two versions of the same dataset.

    DATASET VERSION is a dataset URI.

Option | Required | Type | Defaults | Description
--show-details | N | Boolean | False | If true, outputs the detail information.
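
A hedged example (the dataset name and version ids below are illustrative):

#- compare two versions of the mnist dataset
swcli dataset diff mnist/version/v1 mnist/version/v2 --show-details
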
swcli dataset head

swcli [GLOBAL OPTIONS] dataset head [OPTIONS] <DATASET VERSION>

Print the first n rows of the dataset. DATASET VERSION is a dataset URI.

Option | Required | Type | Defaults | Description
-n or --rows | N | Int | 5 | Print the first NUM rows of the dataset.
-srd or --show-raw-data | N | Boolean | False | Fetch raw data content from the object store.
-st or --show-types | N | Boolean | False | Show data types.

    Examples for dataset head

    #- print the first 5 rows of the mnist dataset
    swcli dataset head -n 5 mnist

    #- print the first 10 rows of the mnist(v0 version) dataset and show raw data
    swcli dataset head -n 10 mnist/v0 --show-raw-data

    #- print the data types of the mnist dataset
    swcli dataset head mnist --show-types

    #- print the remote cloud dataset's first 5 rows
    swcli dataset head cloud://cloud-cn/project/test/dataset/mnist -n 5

    #- print the first 5 rows in the json format
    swcli -o json dataset head -n 5 mnist

    swcli dataset history

    swcli [GLOBAL OPTIONS] dataset history [OPTIONS] <DATASET>

    dataset history outputs all history versions of the specified Starwhale Dataset.

    DATASET is a dataset URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
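
A hedged example (the dataset name below is illustrative):

#- show all versions of the mnist dataset with full version names
swcli dataset history mnist --fullname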

    swcli dataset info

    swcli [GLOBAL OPTIONS] dataset info [OPTIONS] <DATASET>

    dataset info outputs detailed information about the specified Starwhale Dataset version.

    DATASET is a dataset URI.
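
A hedged example (the dataset URI below is illustrative):

#- show detailed info of the latest mnist dataset version
swcli dataset info mnist/version/latest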

    swcli dataset list

    swcli [GLOBAL OPTIONS] dataset list [OPTIONS]

    dataset list shows all Starwhale Datasets.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed or -sr | N | Boolean | False | If true, include datasets that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Datasets that match specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of datasets | --filter name=mnist
owner | Key-Value | The dataset owner name | --filter owner=starwhale
latest | Flag | If specified, it shows only the latest version. | --filter latest
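
A hedged example combining filters (the dataset name below is illustrative):

#- list only the latest versions of datasets whose names start with "mnist"
swcli dataset list --filter name=mnist --filter latest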

    swcli dataset recover

    swcli [GLOBAL OPTIONS] dataset recover [OPTIONS] <DATASET>

    dataset recover recovers previously removed Starwhale Datasets or versions.

    DATASET is a dataset URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Datasets or versions can not be recovered, nor can those removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Dataset or version with the same name or version id.

    swcli dataset remove

    swcli [GLOBAL OPTIONS] dataset remove [OPTIONS] <DATASET>

    dataset remove removes the specified Starwhale Dataset or version.

    DATASET is a dataset URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Datasets or versions can be recovered by swcli dataset recover before garbage collection. Use the --force option to persistently remove a Starwhale Dataset or version.

    Removed Starwhale Datasets or versions can be listed by swcli dataset list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Dataset or version. It can not be recovered.
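
A hedged example of the remove/recover round trip (the dataset URI below is illustrative):

#- remove the v0 version of the mnist dataset, then recover it before garbage collection
swcli dataset remove mnist/version/v0
swcli dataset recover mnist/version/v0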

    swcli dataset summary

swcli [GLOBAL OPTIONS] dataset summary <DATASET>

    Show dataset summary. DATASET is a dataset URI.

    swcli dataset tag

    swcli [GLOBAL OPTIONS] dataset tag [OPTIONS] <DATASET> [TAGS]...

dataset tag attaches a tag to a specified Starwhale Dataset version. The tag command can also list and remove tags. A tag can be used in a dataset URI in place of the version id.

    DATASET is a dataset URI.

    Each dataset version can have any number of tags, but duplicated tag names are not allowed in the same dataset.

    dataset tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is raised if the tag is already used by another dataset version. In this case, you can force the update with the --force-add option.

    Examples for dataset tag

    #- list tags of the mnist dataset
    swcli dataset tag mnist

    #- add tags for the mnist dataset
    swcli dataset tag mnist -t t1 -t t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist/version/latest -t t1 --force-add
    swcli dataset tag mnist -t t1 --quiet

    #- remove tags for the mnist dataset
    swcli dataset tag mnist -r -t t1 -t t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist --remove -t t1

    Overview

    Usage

    swcli [OPTIONS] <COMMAND> [ARGS]...
    note

    sw and starwhale are aliases for swcli.

    Global Options

Option | Description
--version | Show the Starwhale Client version.
-v or --verbose | Show verbose logs. The -v flag can be repeated; more -v flags produce more verbose output.
--help | Show the help message.
    caution

    Global options must be put immediately after swcli, and before any command.
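
For example (a sketch of the placement rule; the subcommand is arbitrary):

swcli -vvv dataset list   # correct: the global -v option comes right after swcli
# putting -vvv after the subcommand does not work, because global options must precede the command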

    Commands


    swcli instance

    Overview

    swcli [GLOBAL OPTIONS] instance [OPTIONS] <SUBCOMMAND> [ARGS]

    The instance command includes the following subcommands:

    • info
    • list (ls)
    • login
    • logout
    • use (select)

    swcli instance info

    swcli [GLOBAL OPTIONS] instance info [OPTIONS] <INSTANCE>

    instance info outputs detailed information about the specified Starwhale Instance.

    INSTANCE is an instance URI.

    swcli instance list

    swcli [GLOBAL OPTIONS] instance list [OPTIONS]

    instance list shows all Starwhale Instances.

    swcli instance login

    swcli [GLOBAL OPTIONS] instance login [OPTIONS] <INSTANCE>

    instance login connects to a Server/Cloud instance and makes the specified instance default.

    INSTANCE is an instance URI.

Option | Required | Type | Defaults | Description
--username | N | String | | The login username.
--password | N | String | | The login password.
--token | N | String | | The login token.
--alias | Y | String | | The alias of the instance. You can use it anywhere that requires an instance URI.

    --username and --password can not be used together with --token.
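
A hedged example (the server address, credentials and aliases below are placeholders):

swcli instance login --username <your username> --password <your password> --alias server http://<server address>
swcli instance login --token <your token> --alias cloud-cn https://cloud.starwhale.cn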

    swcli instance logout

    swcli [GLOBAL OPTIONS] instance logout [INSTANCE]

    instance logout disconnects from the Server/Cloud instance, and clears information stored in the local storage.

INSTANCE is an instance URI. If it is omitted, the default instance is used instead.

    swcli instance use

    swcli [GLOBAL OPTIONS] instance use <INSTANCE>

instance use makes the specified instance the default.

    INSTANCE is an instance URI.


    swcli job

    Overview

    swcli [GLOBAL OPTIONS] job [OPTIONS] <SUBCOMMAND> [ARGS]...

    The job command includes the following subcommands:

    • cancel
    • info
    • list(ls)
    • pause
    • recover
    • remove(rm)
    • resume

    swcli job cancel

    swcli [GLOBAL OPTIONS] job cancel [OPTIONS] <JOB>

    job cancel stops the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, kill the Starwhale Job by force.

    swcli job info

    swcli [GLOBAL OPTIONS] job info [OPTIONS] <JOB>

    job info outputs detailed information about the specified Starwhale Job.

    JOB is a job URI.

Option | Required | Type | Defaults | Description
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli job list

    swcli [GLOBAL OPTIONS] job list [OPTIONS]

    job list shows all Starwhale Jobs.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--show-removed or -sr | N | Boolean | False | If true, include jobs that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli job pause

    swcli [GLOBAL OPTIONS] job pause [OPTIONS] <JOB>

    job pause pauses the specified job. Paused jobs can be resumed by job resume. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

From Starwhale's perspective, pause is almost the same as cancel, except that the job reuses the old job id when resumed. It is the job developer's responsibility to save all data periodically and load it when the job is resumed. The job id is usually used as a key of the checkpoint.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, kill the Starwhale Job by force.

    swcli job resume

    swcli [GLOBAL OPTIONS] job resume [OPTIONS] <JOB>

    job resume resumes the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.
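
A hedged sketch of the pause/resume flow (the project name and job id below are illustrative, not real output):

swcli job pause local/project/self/job/1
swcli job resume local/project/self/job/1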


    swcli model

    Overview

    swcli [GLOBAL OPTIONS] model [OPTIONS] <SUBCOMMAND> [ARGS]...

    The model command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • run
    • serve
    • tag

    swcli model build

    swcli [GLOBAL OPTIONS] model build [OPTIONS] <WORKDIR>

    model build will put the whole WORKDIR into the model, except files that match patterns defined in .swignore.

    model build will import modules specified by --module to generate the required configurations to run the model. If your module depends on third-party libraries, we strongly recommend you use the --runtime option; otherwise, you need to ensure that the python environment used by swcli has these libraries installed.

Option | Required | Type | Defaults | Description
--project or -p | N | String | the default project | The project URI.
--model-yaml or -f | N | String | ${workdir}/model.yaml | Model yaml path, default uses the ${workdir}/model.yaml file. model.yaml is optional for model build.
--module or -m | N | String | | Python modules to be imported during the build process. Starwhale will export model handlers from these modules to the model package. This option can be set multiple times.
--runtime or -r | N | String | | The URI of the Starwhale Runtime to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
--name or -n | N | String | | Model package name.
--desc or -d | N | String | | Model package description.
--package-runtime/--no-package-runtime | N | Boolean | True | When using the --runtime parameter, the corresponding Starwhale runtime becomes the built-in runtime of the Starwhale model by default. This can be disabled with the --no-package-runtime parameter.
--add-all | N | Boolean | False | Add all files in the working directory to the model package (python cache files and virtual environment files are excluded when disabled). The .swignore file still takes effect.
-t or --tag | N | String | | Model tags (global scope); the option can be used multiple times.

    Examples for model build

    # build by the model.yaml in current directory and model package will package all the files from the current directory.
    swcli model build .
    # search model run decorators from mnist.evaluate, mnist.train and mnist.predict modules, then package all the files from the current directory to model package.
    swcli model build . --module mnist.evaluate --module mnist.train --module mnist.predict
    # build model package in the Starwhale Runtime environment.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1
    # forbid to package Starwhale Runtime into the model.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1 --no-package-runtime
    # build model package with tags.
    swcli model build . --tag tag1 --tag tag2

    swcli model copy

    swcli [GLOBAL OPTIONS] model copy [OPTIONS] <SRC> <DEST>

    model copy copies from SRC to DEST for Starwhale Model sharing.

    SRC and DEST are both model URIs.

When copying a Starwhale Model, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update the tags to this version.
-i or --ignore-tag | N | String | | Ignore tags to copy. The option can be used multiple times.

    Examples for model copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a new model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with a new model name 'mnist-cloud'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli model cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp local/project/myproject/model/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli model cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli model diff

    swcli [GLOBAL OPTIONS] model diff [OPTIONS] <MODEL VERSION> <MODEL VERSION>

    model diff compares the difference between two versions of the same model.

    MODEL VERSION is a model URI.

Option | Required | Type | Defaults | Description
--show-details | N | Boolean | False | If true, outputs the detail information.
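
A hedged example (the model name and version ids below are illustrative):

#- compare two versions of the mnist model
swcli model diff mnist/version/v0 mnist/version/v1 --show-details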

    swcli model extract

    swcli [GLOBAL OPTIONS] model extract [OPTIONS] <MODEL> <TARGET_DIR>

    The model extract command can extract a Starwhale model to a specified directory for further customization.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If this option is used, it will forcibly overwrite existing extracted model files in the target directory.

    Examples for model extract

    #- extract mnist model package to current directory
    swcli model extract mnist/version/xxxx .

    #- extract mnist model package to current directory and force to overwrite the files
    swcli model extract mnist/version/xxxx . -f

    swcli model history

    swcli [GLOBAL OPTIONS] model history [OPTIONS] <MODEL>

    model history outputs all history versions of the specified Starwhale Model.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli model info

    swcli [GLOBAL OPTIONS] model info [OPTIONS] <MODEL>

    model info outputs detailed information about the specified Starwhale Model version.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--output-filter or -of | N | Choice of [basic/model_yaml/manifest/files/handlers/all] | basic | Filter the output content. Only the standalone instance supports this option.

    Examples for model info

    swcli model info mnist # show basic info from the latest version of model
    swcli model info mnist/version/v0 # show basic info from the v0 version of model
    swcli model info mnist/version/latest --output-filter=all # show all info
    swcli model info mnist -of basic # show basic info
    swcli model info mnist -of model_yaml # show model.yaml
    swcli model info mnist -of handlers # show model runnable handlers info
    swcli model info mnist -of files # show model package files tree
    swcli -o json model info mnist -of all # show all info in json format

    swcli model list

    swcli [GLOBAL OPTIONS] model list [OPTIONS]

    model list shows all Starwhale Models.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed | N | Boolean | False | If true, include packages that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Models that match specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of models | --filter name=mnist
owner | Key-Value | The model owner name | --filter owner=starwhale
latest | Flag | If specified, it shows only the latest version. | --filter latest

    swcli model recover

    swcli [GLOBAL OPTIONS] model recover [OPTIONS] <MODEL>

    model recover recovers previously removed Starwhale Models or versions.

    MODEL is a model URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Models or versions can not be recovered, nor can those removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Model or version with the same name or version id.

    swcli model remove

    swcli [GLOBAL OPTIONS] model remove [OPTIONS] <MODEL>

    model remove removes the specified Starwhale Model or version.

    MODEL is a model URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Models or versions can be recovered by swcli model recover before garbage collection. Use the --force option to persistently remove a Starwhale Model or version.

    Removed Starwhale Models or versions can be listed by swcli model list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Model or version. It can not be recovered.

    swcli model run

    swcli [GLOBAL OPTIONS] model run [OPTIONS]

model run executes a model handler. It supports two modes: model URI mode and local development mode. Model URI mode needs a pre-built Starwhale Model package; local development mode only needs the model source directory.

Option | Required | Type | Defaults | Description
--workdir or -w | N | String | | For local development mode, the path of the model source dir.
--uri or -u | N | String | | For model URI mode, the string of the model URI.
--handler or -h | N | String | | Runnable handler index or name, default is None, which uses the first handler.
--module or -m | N | String | | The name of the Python module to import. This parameter can be set multiple times.
--runtime or -r | N | String | | The Starwhale Runtime URI to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
--model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to the model.yaml. model.yaml is optional for model run.
--run-project or -p | N | String | Default project | Project URI, indicates the model run results will be stored in the corresponding project.
--dataset or -d | N | String | | Dataset URI, the Starwhale dataset required for the model run. This parameter can be set multiple times.
--in-container | N | Boolean | False | Use a docker container to run the model. This option is only available for standalone instances. For server and cloud instances, a docker image is always used. If the runtime is a docker image, this option is always implied.
--forbid-snapshot or -fs | N | Boolean | False | In model URI mode, each model run uses a new snapshot directory. Setting this parameter will directly use the model's workdir as the run directory. In local development mode, this parameter does not take effect; each run uses the --workdir specified directory.

    Examples for model run

    # --> run by model uri
    # run the first handler from model uri
    swcli model run -u mnist/version/latest
    # run index id(1) handler from model uri
    swcli model run --uri mnist/version/latest --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from model uri
    swcli model run --uri mnist/version/latest --handler mnist.evaluator:MNISTInference.cmp

    # --> run by the working directory, which does not build model package yet. Make local debug happy.
    # run the first handler from the working directory, use the model.yaml in the working directory
    swcli model run -w .
    # run index id(1) handler from the working directory, search mnist.evaluator module and model.yaml handlers(if existed) to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from the working directory, search mnist.evaluator module to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler mnist.evaluator:MNISTInference.cmp

    swcli model serve


    swcli [GLOBAL OPTIONS] model serve [OPTIONS]

    The model serve command can run the model as a web server, and provide a simple web interaction interface.

Option | Required | Type | Defaults | Description
--workdir or -w | N | String | | In local dev mode, specify the directory of the model code.
--uri or -u | N | String | | In model URI mode, specify the model URI.
--runtime or -r | N | String | | The URI of the Starwhale runtime to use when running this command. If specified, the command will run in the isolated Python environment defined in the Starwhale runtime. Otherwise it will run directly in the current Python environment of swcli.
--model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to the model.yaml. model.yaml is optional for model serve.
--module or -m | N | String | | Name of the Python module to import. This parameter can be set multiple times.
--host | N | String | 127.0.0.1 | The address for the service to listen on.
--port | N | Integer | 8080 | The port for the service to listen on.

    Examples for model serve

    swcli model serve -u mnist
    swcli model serve --uri mnist/version/latest --runtime pytorch/version/latest

    swcli model serve --workdir . --runtime pytorch/version/v0
    swcli model serve --workdir . --runtime pytorch/version/v1 --host 0.0.0.0 --port 8080
    swcli model serve --workdir . --runtime pytorch --module mnist.evaluator

    swcli model tag

    swcli [GLOBAL OPTIONS] model tag [OPTIONS] <MODEL> [TAGS]...

model tag attaches a tag to a specified Starwhale Model version. The tag command can also list and remove tags. A tag can be used in a model URI in place of the version id.

    MODEL is a model URI.

    Each model version can have any number of tags, but duplicated tag names are not allowed in the same model.

    model tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is raised if the tag is already used by another model version. In this case, you can force the update with the --force-add option.

    Examples for model tag

    #- list tags of the mnist model
    swcli model tag mnist

    #- add tags for the mnist model
    swcli model tag mnist -t t1 -t t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist/version/latest -t t1 --force-add
    swcli model tag mnist -t t1 --quiet

    #- remove tags for the mnist model
    swcli model tag mnist -r -t t1 -t t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist --remove -t t1

    swcli project

    Overview

    swcli [GLOBAL OPTIONS] project [OPTIONS] <SUBCOMMAND> [ARGS]...

    The project command includes the following subcommands:

    • create(add, new)
    • info
    • list(ls)
    • recover
• remove(rm)
    • use(select)

    swcli project create

    swcli [GLOBAL OPTIONS] project create <PROJECT>

    project create creates a new project.

    PROJECT is a project URI.

    swcli project info

    swcli [GLOBAL OPTIONS] project info [OPTIONS] <PROJECT>

    project info outputs detailed information about the specified Starwhale Project.

    PROJECT is a project URI.

    swcli project list

    swcli [GLOBAL OPTIONS] project list [OPTIONS]

    project list shows all Starwhale Projects.

Option | Required | Type | Defaults | Description
--instance | N | String | | The URI of the instance to list. If this option is omitted, use the default instance.
--show-removed | N | Boolean | False | If true, include projects that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli project recover

    swcli [GLOBAL OPTIONS] project recover [OPTIONS] <PROJECT>

    project recover recovers previously removed Starwhale Projects.

    PROJECT is a project URI.

Garbage-collected Starwhale Projects can not be recovered, nor can those removed with the --force option.

    swcli project remove

    swcli [GLOBAL OPTIONS] project remove [OPTIONS] <PROJECT>

    project remove removes the specified Starwhale Project.

    PROJECT is a project URI.

    Removed Starwhale Projects can be recovered by swcli project recover before garbage collection. Use the --force option to persistently remove a Starwhale Project.

    Removed Starwhale Project can be listed by swcli project list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Project. It can not be recovered.

    swcli project use

    swcli [GLOBAL OPTIONS] project use <PROJECT>

project use makes the specified project the default. You must log in first to use a project on a Server/Cloud instance.
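
A hedged example (the project name below is illustrative):

swcli project create myproject
swcli project use myproject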


    swcli runtime

    Overview

    swcli [GLOBAL OPTIONS] runtime [OPTIONS] <SUBCOMMAND> [ARGS]...

    The runtime command includes the following subcommands:

    • activate(actv)
    • build
    • copy(cp)
    • dockerize
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • tag

    swcli runtime activate

    swcli [GLOBAL OPTIONS] runtime activate [OPTIONS] <RUNTIME>

Like source venv/bin/activate or conda activate xxx, runtime activate sets up a new python environment according to the settings of the specified runtime. When the current shell is closed or switched to another one, you need to reactivate the runtime. RUNTIME is a Runtime URI.

If you want to quit the activated runtime environment, please run deactivate in the venv environment or conda deactivate in the conda environment.

When activating the environment for the first time, the runtime activate command will build an isolated Python environment and download the relevant Python packages according to the definition of the Starwhale runtime. This process may take a long time.
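
A hedged example (the runtime URI below is illustrative):

swcli runtime activate pytorch/version/latest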

    swcli runtime build

    swcli [GLOBAL OPTIONS] runtime build [OPTIONS]

The runtime build command can build a shareable and reproducible runtime environment suitable for ML/DL from various environments or from a runtime.yaml file.

    Parameters

    • Parameters related to runtime building methods:
Option | Required | Type | Defaults | Description
-c or --conda | N | String | | Find the corresponding conda environment by conda env name, export Python dependencies to generate the Starwhale runtime.
-cp or --conda-prefix | N | String | | Find the corresponding conda environment by conda env prefix path, export Python dependencies to generate the Starwhale runtime.
-v or --venv | N | String | | Find the corresponding venv environment by venv directory address, export Python dependencies to generate the Starwhale runtime.
-s or --shell | N | String | | Export Python dependencies according to the current shell environment to generate the Starwhale runtime.
-y or --yaml | N | | runtime.yaml in cwd directory | Build the Starwhale runtime according to the user-defined runtime.yaml.
-d or --docker | N | String | | Use the docker image as the Starwhale runtime.

The parameters for runtime building methods are mutually exclusive; only one method can be specified. If none is specified, the --yaml method is used to read runtime.yaml in the cwd directory and build the Starwhale runtime.

    • Other parameters:
Option | Required | Scope | Type | Defaults | Description
--project or -p | N | Global | String | Default project | Project URI
-del or --disable-env-lock | N | runtime.yaml mode | Boolean | False | Whether to install dependencies in runtime.yaml and lock the version information of related dependencies. The dependencies will be locked by default.
-nc or --no-cache | N | runtime.yaml mode | Boolean | False | Whether to delete the isolated environment and install related dependencies from scratch. By default dependencies will be installed in the existing isolated environment.
--cuda | N | conda/venv/shell mode | Choice[11.3/11.4/11.5/11.6/11.7/] | | CUDA version, CUDA will not be used by default.
--cudnn | N | conda/venv/shell mode | Choice[8/] | | cuDNN version, cuDNN will not be used by default.
--arch | N | conda/venv/shell mode | Choice[amd64/arm64/noarch] | noarch | Architecture
-epo or --emit-pip-options | N | Global | Boolean | False | Whether to export ~/.pip/pip.conf, exported by default.
-ecc or --emit-condarc | N | Global | Boolean | False | Whether to export ~/.condarc, exported by default.
-t or --tag | N | Global | String | | Runtime tags, the option can be used multiple times.

    Examples for Starwhale Runtime building

    #- from runtime.yaml:
    swcli runtime build # use the current directory as the workdir and use the default runtime.yaml file
    swcli runtime build -y example/pytorch/runtime.yaml # use example/pytorch/runtime.yaml as the runtime.yaml file
    swcli runtime build --yaml runtime.yaml # use runtime.yaml at the current directory as the runtime.yaml file
    swcli runtime build --tag tag1 --tag tag2

    #- from conda name:
    swcli runtime build -c pytorch # lock pytorch conda environment and use `pytorch` as the runtime name
    swcli runtime build --conda pytorch --name pytorch-runtime # use `pytorch-runtime` as the runtime name
    swcli runtime build --conda pytorch --cuda 11.4 # specify the cuda version
    swcli runtime build --conda pytorch --arch noarch # specify the system architecture

    #- from conda prefix path:
    swcli runtime build --conda-prefix /home/starwhale/anaconda3/envs/pytorch # get conda prefix path by `conda info --envs` command

    #- from venv prefix path:
    swcli runtime build -v /home/starwhale/.virtualenvs/pytorch
    swcli runtime build --venv /home/starwhale/.local/share/virtualenvs/pytorch --arch amd64

    #- from docker image:
    swcli runtime build --docker pytorch/pytorch:1.9.0-cuda11.1-cudnn8-runtime # use the docker image as the runtime directly

    #- from shell:
    swcli runtime build -s --cuda 11.4 --cudnn 8 # specify the cuda and cudnn version
    swcli runtime build --shell --name pytorch-runtime # lock the current shell environment and use `pytorch-runtime` as the runtime name

    swcli runtime copy

    swcli [GLOBAL OPTIONS] runtime copy [OPTIONS] <SRC> <DEST>

    runtime copy copies from SRC to DEST. SRC and DEST are both Runtime URIs.

When copying a Starwhale Runtime, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are built-in Starwhale system tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update the tags to this version.
-i or --ignore-tag | N | String | | Ignore tags to copy. The option can be used multiple times.

    Examples for Starwhale Runtime copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a new runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with a new runtime name 'mnist-cloud'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli runtime cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp local/project/myproject/runtime/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli runtime cp pytorch cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli runtime dockerize

    swcli [GLOBAL OPTIONS] runtime dockerize [OPTIONS] <RUNTIME>

    runtime dockerize generates a docker image based on the specified runtime. Starwhale uses docker buildx to create the image. Docker 19.03 or later is required to run this command.

    RUNTIME is a Runtime URI.

    Option | Required | Type | Defaults | Description
    --tag or -t | N | String | | The tag of the docker image. This option can be repeated multiple times.
    --push | N | Boolean | False | If true, push the image to the docker registry.
    --platform | N | String | amd64 | The target platform, can be either amd64 or arm64. This option can be repeated multiple times to create a multi-platform image.
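
    A usage sketch based on the options above (the runtime name and image tags are placeholders):

    swcli runtime dockerize pytorch/version/latest --tag mycorp/pytorch-runtime:v1 # build a docker image for the runtime
    swcli runtime dockerize pytorch --tag mycorp/pytorch-runtime:v1 --platform amd64 --platform arm64 --push # build a multi-platform image and push it to the registry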


    swcli runtime extract

    swcli [GLOBAL OPTIONS] runtime extract [OPTIONS] <RUNTIME>

    Starwhale runtimes are distributed as compressed packages. The runtime extract command can be used to extract the runtime package for further customization and modification.

    Option | Required | Type | Default | Description
    --force or -f | N | Boolean | False | Whether to delete and re-extract if there is already an extracted Starwhale runtime in the target directory.
    --target-dir | N | String | | Custom extraction directory. If not specified, it will be extracted to the default Starwhale runtime workdir. The command log will show the directory location.
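
    Typical invocations (the runtime name and target directory are placeholders):

    swcli runtime extract pytorch/version/latest # extract to the default runtime workdir
    swcli runtime extract pytorch --target-dir /tmp/pytorch-runtime --force # re-extract into a custom directory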

    swcli runtime history

    swcli [GLOBAL OPTIONS] runtime history [OPTIONS] <RUNTIME>

    runtime history outputs all history versions of the specified Starwhale Runtime.

    RUNTIME is a Runtime URI.

    Option | Required | Type | Defaults | Description
    --fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
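
    A couple of typical invocations based on the option above (the runtime name is a placeholder):

    swcli runtime history pytorch # show all history versions of the pytorch runtime
    swcli runtime history pytorch --fullname # show the full version names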

    swcli runtime info

    swcli [GLOBAL OPTIONS] runtime info [OPTIONS] <RUNTIME>

    runtime info outputs detailed information about a specified Starwhale Runtime version.

    RUNTIME is a Runtime URI.

    Option | Required | Type | Defaults | Description
    --output-filter or -of | N | Choice of [basic/runtime_yaml/manifest/lock/all] | basic | Filter the output content. Only standalone instance supports this option.

    Examples for Starwhale Runtime info

    swcli runtime info pytorch # show basic info from the latest version of runtime
    swcli runtime info pytorch/version/v0 # show basic info
    swcli runtime info pytorch/version/v0 --output-filter basic # show basic info
    swcli runtime info pytorch/version/v1 -of runtime_yaml # show runtime.yaml content
    swcli runtime info pytorch/version/v1 -of lock # show auto lock file content
    swcli runtime info pytorch/version/v1 -of manifest # show _manifest.yaml content
    swcli runtime info pytorch/version/v1 -of all # show all info of the runtime

    swcli runtime list

    swcli [GLOBAL OPTIONS] runtime list [OPTIONS]

    runtime list shows all Starwhale Runtimes.

    Option | Required | Type | Defaults | Description
    --project | N | String | | The URI of the project to list. Use the default project if not specified.
    --fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
    --show-removed or -sr | N | Boolean | False | If true, include runtimes that are removed but not garbage collected.
    --page | N | Integer | 1 | The starting page number. Server and cloud instances only.
    --size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
    --filter or -fl | N | String | | Show only Starwhale Runtimes that match specified filters. This option can be used multiple times in one command.

    Filter | Type | Description | Example
    name | Key-Value | The name prefix of runtimes | --filter name=pytorch
    owner | Key-Value | The runtime owner name | --filter owner=starwhale
    latest | Flag | If specified, it shows only the latest version. | --filter latest
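
    A few typical invocations based on the options and filters above:

    swcli runtime list # list runtimes in the default project
    swcli runtime list --project self --show-removed # include removed but not garbage collected runtimes
    swcli runtime list --filter name=pytorch --filter latest # only the latest versions whose name starts with pytorch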

    swcli runtime recover

    swcli [GLOBAL OPTIONS] runtime recover [OPTIONS] <RUNTIME>

    runtime recover can recover previously removed Starwhale Runtimes or versions.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Runtimes or versions cannot be recovered, nor can those removed with the --force option.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, overwrite the Starwhale Runtime or version with the same name or version id.
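
    Typical invocations (the runtime name and version id are placeholders; recover needs the complete version id):

    swcli runtime recover pytorch # recover all removed versions of the pytorch runtime
    swcli runtime recover pytorch/version/ge3tkylgha2tenrtmftdgyjzni3dayq --force # recover one version, overwriting an existing one with the same version id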

    swcli runtime remove

    swcli [GLOBAL OPTIONS] runtime remove [OPTIONS] <RUNTIME>

    runtime remove removes the specified Starwhale Runtime or version.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Runtimes or versions can be recovered by swcli runtime recover before garbage collection. Use the --force option to permanently remove a Starwhale Runtime or version.

    Removed Starwhale Runtimes or versions can be listed by swcli runtime list --show-removed.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, persistently delete the Starwhale Runtime or version. It can not be recovered.
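
    Typical invocations (the runtime name is a placeholder):

    swcli runtime remove pytorch/version/latest # remove one version; it can be recovered before garbage collection
    swcli runtime remove pytorch --force # permanently remove all versions; they can not be recovered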

    swcli runtime tag

    swcli [GLOBAL OPTIONS] runtime tag [OPTIONS] <RUNTIME> [TAGS]...

    runtime tag attaches a tag to a specified Starwhale Runtime version. The tag command also supports listing and removing tags. A tag can be used in a runtime URI instead of the version id.

    RUNTIME is a Runtime URI.

    Each runtime version can have any number of tags, but duplicated tag names are not allowed in the same runtime.

    runtime tag only works for the Standalone Instance.

    Option | Required | Type | Defaults | Description
    --remove or -r | N | Boolean | False | Remove the tag if true.
    --quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
    --force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is reported if the tag is already used by another runtime version. In this case, you can force the update with the --force-add parameter.

    Examples for runtime tag

    #- list tags of the pytorch runtime
    swcli runtime tag pytorch

    #- add tags for the pytorch runtime
    swcli runtime tag mnist -t t1 -t t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch/version/latest -t t1 --force-add
    swcli runtime tag mnist -t t1 --quiet

    #- remove tags for the pytorch runtime
    swcli runtime tag mnist -r -t t1 -t t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch --remove -t t1
    Version: 0.5.10

    Utility Commands

    swcli gc

    swcli [GLOBAL OPTIONS] gc [OPTIONS]

    gc clears removed projects, models, datasets, and runtimes according to the internal garbage collection policy.

    Option | Required | Type | Defaults | Description
    --dry-run | N | Boolean | False | If true, outputs objects to be removed instead of clearing them.
    --yes | N | Boolean | False | Bypass confirmation prompts.
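
    Typical invocations based on the options above:

    swcli gc --dry-run # only print the objects that would be removed
    swcli gc --yes # run garbage collection without confirmation prompts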

    swcli check

    swcli [GLOBAL OPTIONS] check

    Check if the external dependencies of the swcli command meet the requirements. Currently mainly checks Docker and Conda.

    swcli completion install

    swcli [GLOBAL OPTIONS] completion install <SHELL_NAME>

    Install autocompletion for swcli commands. Currently supports bash, zsh and fish. If SHELL_NAME is not specified, it will try to automatically detect the current shell type.
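
    For example:

    swcli completion install # detect the current shell automatically
    swcli completion install zsh # install completion for zsh explicitly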

    swcli config edit

    swcli [GLOBAL OPTIONS] config edit

    Edit the Starwhale configuration file at ~/.config/starwhale/config.yaml.

    swcli ui

    swcli [GLOBAL OPTIONS] ui <INSTANCE>

    Open the web page for the corresponding instance.
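
    For example (swcloud is an alias assumed to be configured during swcli instance login):

    swcli ui https://cloud.starwhale.cn # open the web console by instance URI
    swcli ui swcloud # open the web console by instance alias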

    Version: 0.5.10

    Starwhale Runtime

    Overview

    Starwhale Runtime aims to provide a reproducible and sharable running environment for python programs. You can easily share your working environment with your teammates or outsiders, and vice versa. Furthermore, you can run your programs on Starwhale Server or Starwhale Cloud without bothering with the dependencies.

    Starwhale works well with virtualenv, conda, and docker. If you are using one of them, it is straightforward to create a Starwhale Runtime based on your current environment.

    Multiple Starwhale Runtimes on your local machine can be switched freely with one command, so you can work on different projects without messing up the environment (see the sketch below). A Starwhale Runtime consists of two parts: the base image and the dependencies.
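
    For example, a minimal sketch of switching environments on the Standalone Instance (the runtime name is a placeholder, and the swcli runtime activate command is assumed here):

    swcli runtime activate pytorch/version/latest # restore and activate the runtime in the current shell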

    The base image

    The base is a docker image with Python, CUDA, and cuDNN installed. Starwhale provides various base images for you to choose from; see the following list:

    • Computer system architecture:
      • X86 (amd64)
      • Arm (aarch64)
    • Operating system:
      • Ubuntu 20.04 LTS (ubuntu:20.04)
    • Python:
      • 3.7
      • 3.8
      • 3.9
      • 3.10
      • 3.11
    • CUDA:
      • CUDA 11.3 + cuDNN 8.4
      • CUDA 11.4 + cuDNN 8.4
      • CUDA 11.5 + cuDNN 8.4
      • CUDA 11.6 + cuDNN 8.4
      • CUDA 11.7

    runtime.yaml

    runtime.yaml is the core configuration file of Starwhale Runtime.

    # The name of Starwhale Runtime
    name: demo
    # The mode of Starwhale Runtime: venv or conda. Default is venv.
    mode: venv
    configs:
      # If you do not use conda, ignore this field.
      conda:
        condarc: # custom condarc config file
          channels:
            - defaults
          show_channel_urls: true
          default_channels:
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
          custom_channels:
            conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            nvidia: https://mirrors.aliyun.com/anaconda/cloud
          ssl_verify: false
          default_threads: 10
      pip:
        # pip config set global.index-url
        index_url: https://example.org/
        # pip config set global.extra-index-url
        extra_index_url: https://another.net/
        # pip config set install.trusted-host
        trusted_host:
          - example.org
          - another.net
    environment:
      # Now it must be ubuntu:20.04
      os: ubuntu:20.04
      # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
      cuda: 11.4
      # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
      python: 3.8
      # Define your base image
      docker:
        image: mycustom.com/docker/image:tag
    dependencies:
      # If this item is present, conda env create -f conda.yml will be executed
      - conda.yaml
      # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
      - requirements.txt
      # Packages to be installed with conda. venv mode will ignore the conda field.
      - conda:
          - numpy
          - requests
      # Packages to be installed with pip. The format is the same as requirements.txt
      - pip:
          - pillow
          - numpy
          - deepspeed==0.9.0
          - safetensors==0.3.0
          - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
          - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
          - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
      # Additional wheels packages to be installed when restoring the runtime
      - wheels:
          - dummy-0.0.0-py3-none-any.whl
      # Additional files to be included in the runtime
      - files:
          - dest: bin/prepare.sh
            name: prepare
            src: scripts/prepare.sh
      # Run some custom commands
      - commands:
          - apt-get install -y libgl1
          - touch /tmp/runtime-command-run.flag
    Version: 0.5.10

    The runtime.yaml Specification

    runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. runtime.yaml is required for the yaml mode of the swcli runtime build command.

    Examples

    The simplest example

    dependencies:
      - pip:
          - numpy
    name: simple-test

    Define a Starwhale Runtime that uses venv as the Python virtual environment for package isolation, and installs the numpy dependency.

    The llama2 example

    name: llama2
    mode: venv
    environment:
      arch: noarch
      os: ubuntu:20.04
      cuda: 11.7
      python: "3.10"
    dependencies:
      - pip:
          - torch
          - fairscale
          - fire
          - sentencepiece
          - gradio >= 3.37.0
          # external starwhale dependencies
          - starwhale[serve] >= 0.5.5

    The full definition example

    # [required] The name of Starwhale Runtime
    name: demo
    # [optional] The mode of Starwhale Runtime: venv or conda. Default is venv.
    mode: venv
    # [optional] The configurations of pip and conda.
    configs:
      # If you do not use conda, ignore this field.
      conda:
        condarc: # custom condarc config file
          channels:
            - defaults
          show_channel_urls: true
          default_channels:
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
          custom_channels:
            conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            nvidia: https://mirrors.aliyun.com/anaconda/cloud
          ssl_verify: false
          default_threads: 10
      pip:
        # pip config set global.index-url
        index_url: https://example.org/
        # pip config set global.extra-index-url
        extra_index_url: https://another.net/
        # pip config set install.trusted-host
        trusted_host:
          - example.org
          - another.net
    # [optional] The definition of the environment.
    environment:
      # Now it must be ubuntu:20.04
      os: ubuntu:20.04
      # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
      cuda: 11.4
      # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
      python: 3.8
      # Define your custom base image
      docker:
        image: mycustom.com/docker/image:tag
    # [required] The dependencies of the Starwhale Runtime.
    dependencies:
      # If this item is present, conda env create -f conda.yml will be executed
      - conda.yaml
      # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
      - requirements.txt
      # Packages to be installed with conda. venv mode will ignore the conda field.
      - conda:
          - numpy
          - requests
      # Packages to be installed with pip. The format is the same as requirements.txt
      - pip:
          - pillow
          - numpy
          - deepspeed==0.9.0
          - safetensors==0.3.0
          - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
          - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
          - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
      # Additional wheels packages to be installed when restoring the runtime
      - wheels:
          - dummy-0.0.0-py3-none-any.whl
      # Additional files to be included in the runtime
      - files:
          - dest: bin/prepare.sh
            name: prepare
            src: scripts/prepare.sh
      # Run some custom commands
      - commands:
          - apt-get install -y libgl1
          - touch /tmp/runtime-command-run.flag
    Version: 0.5.10

    Controller Admin Settings

    Superuser Password Reset

    In case you forget the superuser's password, you can use the SQL below to reset the password to abcd1234:

    update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1
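
    For example, you could run the statement with the mysql client (the host, user, and database names below are placeholders for your metadata database):

    mysql -h <mysql-host> -P 3306 -u<user> -p <database> \
      -e "update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1;"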

    After that, you can log in to the console and change the password to whatever you want.

    System Settings

    You can customize the system to make it easier to use by leveraging the System Settings. Here is an example:

    dockerSetting:
      registryForPull: "docker-registry.starwhale.cn/star-whale"
      registryForPush: ""
      userName: ""
      password: ""
      insecure: true
    pypiSetting:
      indexUrl: ""
      extraIndexUrl: ""
      trustedHost: ""
      retries: 10
      timeout: 90
    imageBuild:
      resourcePool: ""
      image: ""
      clientVersion: ""
      pythonVersion: ""
    datasetBuild:
      resourcePool: ""
      image: ""
      clientVersion: ""
      pythonVersion: ""
    resourcePoolSetting:
      - name: "default"
        nodeSelector: null
        resources:
          - name: "cpu"
            max: null
            min: null
            defaults: 5.0
          - name: "memory"
            max: null
            min: null
            defaults: 3145728.0
          - name: "nvidia.com/gpu"
            max: null
            min: null
            defaults: null
        tolerations: null
        metadata: null
        isPrivate: null
        visibleUserIds: null
    storageSetting:
      - type: "minio"
        tokens:
          bucket: "users"
          ak: "starwhale"
          sk: "starwhale"
          endpoint: "http://10.131.0.1:9000"
          region: "local"
          hugeFileThreshold: "10485760"
          hugeFilePartSize: "5242880"
      - type: "s3"
        tokens:
          bucket: "users"
          ak: "starwhale"
          sk: "starwhale"
          endpoint: "http://10.131.0.1:9000"
          region: "local"
          hugeFileThreshold: "10485760"
          hugeFilePartSize: "5242880"

    Image Registry

    Tasks dispatched by the server are based on docker images. Pulling these images could be slow if your internet connection is poor. Starwhale Server supports custom image registries, including dockerSetting.registryForPull and dockerSetting.registryForPush.

    Resource Pool

    The resourcePoolSetting allows you to manage your cluster in groups. It is currently implemented with the Kubernetes nodeSelector: you can label machines in your Kubernetes cluster and group them into a resourcePool in Starwhale.
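
    For example, a hedged sketch (node names, the label key/value, and the pool name are arbitrary): label the nodes in Kubernetes, then declare a matching pool in resourcePoolSetting.

    kubectl label node node1 node2 starwhale-pool=gpu

    resourcePoolSetting:
      - name: "gpu"
        nodeSelector:
          starwhale-pool: "gpu"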

    Remote Storage

    The storageSetting allows you to manage the storages the server could access.

    storageSetting:
      - type: s3
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://s3.region.amazonaws.com # optional
            region: region of the service # required when endpoint is empty
            hugeFileThreshold: 10485760 # files bigger than 10MB will use multipart upload
            hugeFilePartSize: 5242880 # part size (bytes) for multipart upload
      - type: minio
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://10.131.0.1:9000 # required
            region: local # optional
            hugeFileThreshold: 10485760 # files bigger than 10MB will use multipart upload
            hugeFilePartSize: 5242880 # part size (bytes) for multipart upload
      - type: aliyun
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://10.131.0.2:9000 # required
            region: local # optional
            hugeFileThreshold: 10485760 # files bigger than 10MB will use multipart upload
            hugeFilePartSize: 5242880 # part size (bytes) for multipart upload

    Every storageSetting item has a corresponding implementation of the StorageAccessService interface. Starwhale has four built-in implementations:

    • StorageAccessServiceAliyun matches type in (aliyun,oss)
    • StorageAccessServiceMinio matches type in (minio)
    • StorageAccessServiceS3 matches type in (s3)
    • StorageAccessServiceFile matches type in (fs, file)

    Each of the implementations has different requirements for tokens: endpoint is required when type is aliyun or minio; region is required when type is s3 and endpoint is empty; the fs/file type requires the tokens rootDir and serviceProvider. Please refer to the code for more details.
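
    A hedged sketch of an fs/file entry (the rootDir path and the serviceProvider value are placeholders; check the StorageAccessServiceFile code for the exact token values):

    storageSetting:
      - type: fs
        tokens:
          - rootDir: /mnt/starwhale-files # placeholder local path on the server
            serviceProvider: <provider-name> # placeholder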

    Version: 0.5.10

    Install Starwhale Server with Docker

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage system to save datasets, models, and other artifacts.

    Please make sure pods on the Kubernetes cluster can access the port exposed by the Starwhale Server installation.

    Prepare an env file for Docker

    Starwhale Server can be configured by environment variables.

    An env file template for Docker is here. You may create your own env file by modifying the template.

    Prepare a kubeconfig file

    The kubeconfig file is used for accessing the Kubernetes cluster. For more information about kubeconfig files, see the Official Kubernetes Documentation.

    If you have a local kubectl command-line tool installed, you can run kubectl config view to see your current configuration.

    Run the Docker image

    docker run -it -d --name starwhale-server -p 8082:8082 \
    --restart unless-stopped \
    --mount type=bind,source=<path to your kubeconfig file>,destination=/root/.kube/config,readonly \
    --env-file <path to your env file> \
    ghcr.io/star-whale/server:0.5.6

    For users in the mainland of China, use docker image: docker-registry.starwhale.cn/star-whale/server.

    Version: 0.5.10

    Install Starwhale Server with Helm

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage system to save datasets, models, and other artifacts.
    • Helm 3.2.0+.

    The Starwhale Helm Charts include MySQL and MinIO as dependencies. If you do not have your own MySQL instance or any S3-compatible object storage available, you can install them with the Helm Charts. Please check Installation Options to learn how to install Starwhale Server with MySQL and MinIO.

    Create a service account on Kubernetes for Starwhale Server

    If Kubernetes RBAC is enabled (in Kubernetes 1.6+, RBAC is enabled by default), Starwhale Server cannot work properly unless it is started by a service account with at least the following permissions:

    Resource | API Group | Get | List | Watch | Create | Delete
    jobs | batch | Y | Y | Y | Y | Y
    pods | core | Y | Y | Y | |
    nodes | core | Y | Y | Y | |
    events | "" | Y | | | |

    Example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: starwhale-role
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "batch"
        resources:
          - jobs
        verbs:
          - create
          - get
          - list
          - watch
          - delete
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - watch
          - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: starwhale-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: starwhale-role
    subjects:
      - kind: ServiceAccount
        name: starwhale

    Downloading Starwhale Helm Charts

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update

    Installing Starwhale Server

    helm install starwhale-server starwhale/starwhale-server -n starwhale --create-namespace

    If you have a local kubectl command-line tool installed, you can run kubectl get pods -n starwhale to check if all pods are running.

    Updating Starwhale Server

    helm repo update
    helm upgrade starwhale-server starwhale/starwhale-server

    Uninstalling Starwhale Server

    helm delete starwhale-server
    Version: 0.5.10

    Install Starwhale Server with Minikube

    Prerequisites

    Starting Minikube

    minikube start --addons ingress --kubernetes-version=1.25.3

    For users in the mainland of China, please add the --image-mirror-country=cn parameter. If there is no kubectl binary on your machine, you may use minikube kubectl or the kubectl="minikube kubectl --" alias.

    Installing Starwhale Server

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update
    helm pull starwhale/starwhale --untar --untardir ./charts

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.global.yaml

    For users in the mainland of China, use values.minikube.cn.yaml:

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.cn.yaml

    After the installation is successful, the following prompt message appears:

        Release "starwhale" has been upgraded. Happy Helming!
    NAME: starwhale
    LAST DEPLOYED: Tue Feb 14 16:25:03 2023
    NAMESPACE: starwhale
    STATUS: deployed
    REVISION: 14
    NOTES:
    ******************************************
    Chart Name: starwhale
    Chart Version: 0.5.6
    App Version: latest
    Starwhale Image:
    - server: ghcr.io/star-whale/server:latest

    ******************************************
    Controller:
    - visit: http://controller.starwhale.svc
    Minio:
    - web visit: http://minio.starwhale.svc
    - admin visit: http://minio-admin.starwhale.svc
    MySQL:
    - port-forward:
    - run: kubectl port-forward --namespace starwhale svc/mysql 3306:3306
    - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale
    Please run the following command for the domains searching:
    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc " | sudo tee -a /etc/hosts
    ******************************************
    Login Info:
    - starwhale: u:starwhale, p:abcd1234
    - minio admin: u:minioadmin, p:minioadmin

    *_* Enjoy to use Starwhale Platform. *_*

    Checking Starwhale Server status

    Keep checking the minikube service status until all deployments are running (it takes about 3~5 minutes):

    kubectl get deployments -n starwhale
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    controller   1/1     1            1           5m
    minio        1/1     1            1           5m
    mysql        1/1     1            1           5m

    Visiting for local

    Make the Starwhale controller accessible locally with the following command:

    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc  minio-admin.starwhale.svc " | sudo tee -a /etc/hosts

    Then you can visit http://controller.starwhale.svc in your local web browser.

    Visiting for others

    • Step 1: in the Starwhale Server machine

      For temporary use, run the socat command:

      # install socat at first, ref: https://howtoinstall.co/en/socat
      sudo socat TCP4-LISTEN:80,fork,reuseaddr,bind=0.0.0.0 TCP4:`minikube ip`:80

      When you kill the socat process, the shared access will be blocked. iptables may be a better choice for long-term use.

    • Step 2: in the other machines

      # for macOS or Linux environment, run the command in the shell.
      echo "${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc" | sudo tee -a /etc/hosts

      # for Windows environment, run the command in the PowerShell with administrator permission.
      Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "`n${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc"
    Version: 0.5.10

    Starwhale Server Environment Example

    ################################################################################
    # *** Required ***
    # The external Starwhale server URL. For example: https://cloud.starwhale.ai
    SW_INSTANCE_URI=

    # The listening port of Starwhale Server
    SW_CONTROLLER_PORT=8082

    # The maximum upload file size. This setting affects datasets and models uploading when copied from outside.
    SW_UPLOAD_MAX_FILE_SIZE=20480MB
    ################################################################################
    # The base URL of the Python Package Index to use when creating a runtime environment.
    SW_PYPI_INDEX_URL=http://10.131.0.1/repository/pypi-hosted/simple/

    # Extra URLs of package indexes to use in addition to the base url.
    SW_PYPI_EXTRA_INDEX_URL=

    # Space separated hostnames. When any host specified in the base URL or extra URLs does not have a valid SSL
    # certification, use this option to trust it anyway.
    SW_PYPI_TRUSTED_HOST=
    ################################################################################
    # The JWT token expiration time. When the token expires, the server will request the user to login again.
    SW_JWT_TOKEN_EXPIRE_MINUTES=43200

    # *** Required ***
    # The JWT secret key. All strings are valid, but we strongly recommend you to use a random string with at least 16 characters.
    SW_JWT_SECRET=
    ################################################################################
    # The Kubernetes namespace to use when running a task
    SW_K8S_NAME_SPACE=default

    # The path on the Kubernetes host node's filesystem to cache Python packages. Use the setting only if you have
    # the permission to use host node's filesystem. The runtime environment setup process may be accelerated when the host
    # path cache is used. Leave it blank if you do not want to use it.
    SW_K8S_HOST_PATH_FOR_CACHE=

    ###############################################################################
    # *** Required ***
    # The object storage system type. Valid values are:
    # s3: [AWS S3](https://aws.amazon.com/s3) or other s3-compatible object storage systems
    # aliyun: [Aliyun OSS](https://www.alibabacloud.com/product/object-storage-service)
    # minio: [MinIO](https://min.io)
    # file: Local filesystem
    SW_STORAGE_TYPE=

    # The path prefix for all data saved on the storage system.
    SW_STORAGE_PREFIX=
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is file.

    # The root directory to save data.
    # This setting is only used when SW_STORAGE_TYPE is file.
    SW_STORAGE_FS_ROOT_DIR=/usr/local/starwhale
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is not file.

    # *** Required ***
    # The name of the bucket to save data.
    SW_STORAGE_BUCKET=

    # *** Required ***
    # The endpoint URL of the object storage service.
    # This setting is only used when SW_STORAGE_TYPE is s3 or aliyun.
    SW_STORAGE_ENDPOINT=

    # *** Required ***
    # The access key used to access the object storage system.
    SW_STORAGE_ACCESSKEY=

    # *** Required ***
    # The secret access key used to access the object storage system.
    SW_STORAGE_SECRETKEY=

    # *** Optional ***
    # The region of the object storage system.
    SW_STORAGE_REGION=

    # Starwhale Server will use multipart upload when uploading a large file. This setting specifies the part size.
    SW_STORAGE_PART_SIZE=5MB
    ################################################################################
    # MySQL settings

    # *** Required ***
    # The hostname/IP of the MySQL server.
    SW_METADATA_STORAGE_IP=

    # The port of the MySQL server.
    SW_METADATA_STORAGE_PORT=3306

    # *** Required ***
    # The database used by Starwhale Server
    SW_METADATA_STORAGE_DB=starwhale

    # *** Required ***
    # The username of the MySQL server.
    SW_METADATA_STORAGE_USER=

    # *** Required ***
    # The password of the MySQL server.
    SW_METADATA_STORAGE_PASSWORD=
    ################################################################################
    Version: 0.5.10

    Project Management

    Project type

    There are two types of projects:

    • Public: Visible to anyone. Everyone on the internet can find and see public projects.

    • Private: Visible to users specified in the project member settings. Private projects can only be seen by project owners and project members. The project owner can manage access in the project setting of Manage Member.

    Create a project

    1 Sign in to Starwhale, click Create Project.

    creat

    2 Type a name for the project.

    image

    tip

    Avoid duplicate project names. For more information, see Names in Starwhale

    3 Select project visibility to decide who can find and see the project.

    image

    4 Type a description. It is optional.

    image

    5 To finish, click Submit.

    image

    Edit a project

    The name, privacy and description of a project can be edited.

    tip

    Users with the project owner or maintainer role can edit a project. For more information, see Roles and permissions

    Edit name

    • If you are on the project list page:

      1 Hover your mouse over the project you want to edit, then click the Edit button.

      image

      2 Enter a new name for the project.

      image

      tip

      Avoid duplicate project names. For more information, see Names in Starwhale

      3 Click Submit to save changes.

      image

      4 If you're editing multiple projects, repeat steps 1 through 3.

    • If you are on a specific project:

      1 Select Overview on the left navigation, and click Edit.

      image

      2 Enter a new name for the project.

      image

      tip

      Avoid duplicate project names. For more information, see Names in Starwhale

      3 Click Submit to save changes.

      image

    Edit privacy

    • If you are on the project list page:

      1 Hover your mouse over the project you want to edit, then click the Edit button.

      image

      2 Select Public or Private as needed. For more information, see Project types.

      image

      3 Click Submit to save changes.

      image

    • If you are on a specific project

      1 Select Overview on the left navigation, and click Edit.

      image

      2 Select Public or Private as needed. For more information, see Project types.

      image

      3 Click Submit to save changes.

      image

    Edit description

    • If you are on the project list page:

      1 Hover your mouse over the project you want to edit, then click the Edit button.

      image

      2 Enter a description for the project.

      image

      3 Click Submit to save changes.

      image

    • If you are on a specific project

      1 Select Overview on the left navigation, and click Edit.

      image

      2 Enter a description for the project.

      image

      3 Click Submit to save changes.

      image

    Delete a project

    1 Hover your mouse over the project you want to delete, then click the Delete button.

    image

    2 If you are sure to delete, type the exact name of the project and then click Confirm to delete the project.

    image

    Important: When you delete a project, all the models, datasets, evaluations, and runtimes belonging to the project will also be deleted and cannot be restored. Be careful with this action.

    Manage project member

    Only users with the admin role can assign people to the project. The user who creates the project gets the project owner role by default.

    Add a member to the project

    1 On the project list page or overview tab, click the Manage Member button, then Add Member.

    image

    image

    2 Type the username you want to add to the project, then click a name in the list of matches.

    image

    3 Select a project role for the member from the drop-down menu. For more information, see Roles and permissions

    image

    4 To finish, click Submit.

    image

    Remove a member

    1 On the project list page or project overview tab, click the Manage Member button.

    image

    2 Find the username you want to remove in the search box, click Remove, then Yes.

    image

    Version: 0.5.10

    Configuration

    Standalone Instance is installed on the user's laptop or development server, providing isolation at the level of Linux/macOS users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.

    The ~/.config/starwhale/config.yaml file has permissions set to 0o600 to ensure security, as it contains sensitive information such as encryption keys. Users are advised not to change the file permissions. You can customize your swcli configuration with swcli config edit:

    swcli config edit

    config.yaml example

    The typical config.yaml file is as follows:

    • The default instance is local.
    • cloud-cn/cloud-k8s/pre-k8s are the server/cloud instances, local is the standalone instance.
    • The local storage root directory for the Standalone Instance is set to /home/liutianwei/.starwhale.
    current_instance: local
    instances:
      cloud-cn:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-28 18:41:05 CST
        uri: https://cloud.starwhale.cn
        user_name: starwhale
        user_role: normal
      cloud-k8s:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-19 16:10:01 CST
        uri: http://cloud.pre.intra.starwhale.ai
        user_name: starwhale
        user_role: normal
      local:
        current_project: self
        type: standalone
        updated_at: 2022-06-09 16:14:02 CST
        uri: local
        user_name: liutianwei
      pre-k8s:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-19 18:06:50 CST
        uri: http://console.pre.intra.starwhale.ai
        user_name: starwhale
        user_role: normal
    link_auths:
      - ak: starwhale
        bucket: users
        connect_timeout: 10.0
        endpoint: http://10.131.0.1:9000
        read_timeout: 100.0
        sk: starwhale
        type: s3
    storage:
      root: /home/liutianwei/.starwhale
    version: '2.0'

    config.yaml definition

    Parameter | Description | Type | Default Value | Required
    current_instance | The name of the default instance to use. It is usually set using the swcli instance select command. | String | self | Yes
    instances | Managed instances, including Standalone, Server and Cloud Instances. There must be at least one Standalone Instance named "local" and one or more Server/Cloud Instances. You can log in to a new instance with swcli instance login and log out from an instance with swcli instance logout. | Dict | Standalone Instance named "local" | Yes
    instances.{instance-alias-name}.sw_token | Login token for Server/Cloud Instances. It is only effective for Server/Cloud Instances. Subsequent swcli operations on Server/Cloud Instances will use this token. Note that tokens have an expiration time, typically set to one month, which can be configured within the Server/Cloud Instance. | String | | Cloud - Yes, Standalone - No
    instances.{instance-alias-name}.type | Type of the instance, currently can only be "cloud" or "standalone". | Choice[string] | | Yes
    instances.{instance-alias-name}.uri | For Server/Cloud Instances, the URI is an http/https address. For Standalone Instances, the URI is set to "local". | String | | Yes
    instances.{instance-alias-name}.user_name | User's name. | String | | Yes
    instances.{instance-alias-name}.current_project | Default Project under the current instance. It will be used to fill the "project" field in the URI representation by default. You can set it using the swcli project select command. | String | | Yes
    instances.{instance-alias-name}.user_role | User's role. | String | normal | Yes
    instances.{instance-alias-name}.updated_at | The last updated time for this instance configuration. | Time format string | | Yes
    storage | Settings related to local storage. | Dict | | Yes
    storage.root | The root directory for Standalone Instance's local storage. Typically, if there is insufficient space in the home directory and you manually move data files to another location, you can modify this field. | String | ~/.starwhale | Yes
    version | The version of config.yaml, currently only supports 2.0. | String | 2.0 | Yes
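
    For example, current_instance and current_project are normally set with the commands mentioned in the table rather than by editing the file directly:

    swcli instance select local
    swcli project select self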

    You can use starwhale.Link to refer to your assets. The URI in the Link can be whatever you need (only s3-like and http are implemented), such as s3://10.131.0.1:9000/users/path. However, Links may need to be authenticated; you can configure the auth info in link_auths.

    link_auths:
      - type: s3
        ak: starwhale
        bucket: users
        region: local
        connect_timeout: 10.0
        endpoint: http://10.131.0.1:9000
        read_timeout: 100.0
        sk: starwhale

    Items in link_auths will match the URIs in Links automatically. An s3-typed link_auth matches Links by bucket and endpoint.
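
    For example, a minimal Python SDK sketch (the object path is a placeholder) creating a Link that would be matched by the s3 link_auth above:

    from starwhale import Link

    img = Link("s3://10.131.0.1:9000/users/path/to/img.png")  # resolved with the ak/sk configured in link_auths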

    Version: 0.5.10

    Starwhale Client (swcli) User Guide

    The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure Python3 (requires Python 3.7 ~ 3.11) so that it can be easily installed with the pip command. Currently, swcli only supports Linux and macOS; Windows support is coming soon.

    Version: 0.5.10

    Installation Guide

    We can use swcli to complete all tasks for Starwhale Instances. swcli is written in pure Python3 and can be installed easily with the pip command. Here are some installation tips that can help you get a cleaner, conflict-free swcli Python environment.

    Installing Advice

    DO NOT install Starwhale in your system's global Python environment. It may cause Python dependency conflicts.

    Prerequisites

    • Python 3.7 ~ 3.11
    • Linux or macOS
    • Conda (optional)

    In the Ubuntu system, you can run the following commands:

    sudo apt-get install python3 python3-venv python3-pip

    # If you want to install multiple Python versions
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.7 python3.8 python3.9 python3-pip python3-venv python3.8-venv python3.7-venv python3.9-venv

    swcli works on macOS. If you run into issues with the default system Python3 on macOS, try installing Python3 through Homebrew:

    brew install python3

    Install swcli

    Install with venv

    python3 -m venv ~/.cache/venv/starwhale
    source ~/.cache/venv/starwhale/bin/activate
    python3 -m pip install starwhale

    swcli --version

    sudo rm -rf /usr/local/bin/swcli
    sudo ln -s `which swcli` /usr/local/bin/

    Install with conda

    conda create --name starwhale --yes  python=3.9
    conda activate starwhale
    python3 -m pip install starwhale

    swcli --version

    sudo rm -rf /usr/local/bin/swcli
    sudo ln -s `which swcli` /usr/local/bin/

    👏 Now, you can use swcli in the global environment.

    Install for the special scenarios

    # for Audio processing
    python -m pip install starwhale[audio]

    # for Image processing
    python -m pip install starwhale[pillow]

    # for swcli model server command
    python -m pip install starwhale[server]

    # for built-in online serving
    python -m pip install starwhale[online-serve]

    # install all dependencies
    python -m pip install starwhale[all]

    Update swcli

    #for venv
    python3 -m pip install --upgrade starwhale

    #for conda
    conda run -n starwhale python3 -m pip install --upgrade starwhale

    Uninstall swcli

    python3 -m pip uninstall starwhale

    rm -rf ~/.config/starwhale
    rm -rf ~/.starwhale
    Version: 0.5.10

    About the .swignore file

    The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore file is mainly used in the Starwhale Model building process. By default, the swcli model build command or the starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.

    PATTERN FORMAT

    • Each line in a swignore file specifies a pattern, which matches files and directories.
    • A blank line matches no files, so it can serve as a separator for readability.
    • An asterisk * matches anything except a slash.
    • A line starting with # serves as a comment.
    • Wildcard expressions are supported, for example: *.jpg, *.png.

    Automatically ignored files or dirs

    If you want to include the automatically ignored files or dirs, you can add the --add-all option to the swcli model build command (see the example after the list below).

    • __pycache__/
    • *.py[cod]
    • *$py.class
    • venv installation dir
    • conda installation dir
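
    For example, to include the automatically ignored files in the model package (assuming the current directory is the model workdir):

    swcli model build . --add-all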

    Example

    Here is the .swignore file used in the MNIST example:

    venv/*
    .git/*
    .history*
    .vscode/*
    .venv/*
    data/*
    .idea/*
    *.py[cod]
    Version: 0.5.10

    Starwhale Resources URI

    tip

    Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.

    concepts-org.jpg

    Instance URI

    Instance URI can be either:

    • local: standalone instance.
    • [http(s)://]<hostname or ip>[:<port>]: cloud instance with HTTP address.
    • [cloud://]<cloud alias>: cloud or server instance with an alias name, which can be configured in the instance login phase.
    caution

    "local" is different from "localhost". The former means the local standalone instance without a controller, while the latter implies a controller listening at the default port 8082 on the localhost.

    Example:

    # log in Starwhale Cloud; the alias is swcloud
    swcli instance login --username <your account name> --password <your password> https://cloud.starwhale.ai --alias swcloud

    # copy a model from the local instance to the cloud instance
    swcli model copy mnist/version/latest swcloud/project/<your account name>:demo

    # copy a runtime to a Starwhale Server instance: http://localhost:8081
    swcli runtime copy pytorch/version/v1 http://localhost:8081/project/<your account name>:demo

    Project URI

    Project URI is in the format [<Instance URI>/project/]<project name>. If the instance URI is not specified, use the current instance instead.

    Example:

    swcli project select self   # select the self project in the current instance
    swcli project info local/project/self # inspect self project info in the local instance

    Model/Dataset/Runtime URI

    • Model URI: [<Project URI>/model/]<model name>[/version/<version id|tag>].
    • Dataset URI: [<Project URI>/dataset/]<dataset name>[/version/<version id|tag>].
    • Runtime URI: [<Project URI>/runtime/]<runtime name>[/version/<version id|tag>].
    tip
    • swcli supports human-friendly short version id. You can type the first few characters of the version id, provided it is at least four characters long and unambiguous. However, the recover command must use the complete version id.
    • If the project URI is not specified, the default project will be used.
    • You can always use the version tag instead of the version id.

    Example:

    swcli model info mnist/version/hbtdenjxgm4ggnrtmftdgyjzm43tioi  # inspect model info, model name: mnist, version:hbtdenjxgm4ggnrtmftdgyjzm43tioi
    swcli model remove mnist/version/hbtdenj # short version
    swcli model info mnist # inspect mnist model info
    swcli model run mnist --runtime pytorch-mnist --dataset mnist # use the default latest tag

    Job URI

    • format: [<Project URI>/job/]<job id>.
    • If the project URI is not specified, the default project will be used.

    Example:

    swcli job info mezdayjzge3w   # Inspect mezdayjzge3w version in default instance and default project
    swcli job info local/project/self/job/mezday # Inspect the local instance, self project, with short job id:mezday

    The default instance

    When the instance part of a project URI is omitted, the default instance is used instead. The default instance is the one selected by the swcli instance login or swcli instance use command.

    The default project

    When the project parts of Model/Dataset/Runtime/Evaluation URIs are omitted, the default project is used instead. The default project is the one selected by the swcli project use command.
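
    For example, switching the defaults with the commands mentioned above:

    swcli instance use local # switch the default instance to the local standalone instance
    swcli project use self # switch the default project to self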

    Version: 0.5.12

    Starwhale Cloud User Guide

Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access URL is https://cloud.starwhale.cn.

    Version: 0.5.12

    Contribute to Starwhale

    Getting Involved/Contributing

    We welcome and encourage all contributions to Starwhale, including and not limited to:

• Describe the problems encountered during use.
• Submit feature requests.
• Discuss in Slack and GitHub Issues.
• Code review.
• Improve docs, tutorials and examples.
• Fix bugs.
• Add test cases.
• Improve code readability and code comments.
• Develop new features.
• Write enhancement proposals.

    You can get involved, get updates and contact Starwhale developers in the following ways:

    Starwhale Resources

    Code Structure

• client: swcli and the Python SDK, written in pure Python3, which includes all Standalone Instance features.
  • api: Python SDK.
  • cli: Command Line Interface entrypoint.
  • base: Python base abstractions.
  • core: Starwhale core concepts, including Dataset, Model, Runtime, Project, Job, Evaluation, etc.
  • utils: Python utilities lib.
• console: frontend with React + TypeScript.
• server: Starwhale Controller written in Java, which includes all Starwhale Cloud Instance backend APIs.
• docker: Helm charts and Dockerfiles.
• docs: The official Starwhale documentation.
• example: Example code.
• scripts: Bash and Python scripts for E2E testing, software releases, etc.

    Fork and clone the repository

    You will need to fork the code of Starwhale repository and clone it to your local machine.

• Fork the Starwhale repository: Fork the Starwhale GitHub repo. For more usage details, please refer to: Fork a repo

• Install Git-LFS: Git-LFS

       git lfs install
    • Clone code to local machine

      git clone https://github.com/${your username}/starwhale.git

    Development environment for Standalone Instance

Standalone Instance is written in Python3. When you want to modify swcli and the SDK, you need to set up the development environment.

    Standalone development environment prerequisites

    • OS: Linux or macOS
    • Python: 3.7~3.11
    • Docker: >=19.03(optional)
• Python isolated env tools: venv, virtualenv or conda, etc.

    Building from source code

Based on the previous step, the code has been cloned into the local starwhale directory; enter the client subdirectory:

    cd starwhale/client

    Create an isolated python environment with conda:

    conda create -n starwhale-dev python=3.8 -y
    conda activate starwhale-dev

    Install client package and python dependencies into the starwhale-dev environment:

    make install-sw
    make install-dev-req

    Validate with the swcli --version command. In the development environment, the version is 0.0.0.dev0:

    ❯ swcli --version
    swcli, version 0.0.0.dev0

❯ which swcli
/home/username/anaconda3/envs/starwhale-dev/bin/swcli

    Modifying the code

When you modify the code, you do not need to reinstall the python package (i.e., run the make install-sw command) again. The .editorconfig file is recognized by most IDEs and code editors, which helps maintain consistent coding styles across developers.

    Lint and Test

Run unit tests, E2E tests, mypy lint, flake8 lint and isort checks in the starwhale directory.

    make client-all-check

    Development environment for Cloud Instance

    Cloud Instance is written in Java(backend) and React+TypeScript(frontend).

    Development environment for Console

    Development environment for Server

    • Language: Java
    • Build tool: Maven
• Development framework: Spring Boot + MyBatis
• Unit test framework: JUnit 5
  • Mockito used for mocking
  • Hamcrest used for assertion
  • Testcontainers used for providing lightweight, throwaway instances of common databases and Selenium web browsers that can run in a Docker container.
• Code style check tool: maven-checkstyle-plugin

    Server development environment prerequisites

    • OS: Linux, macOS or Windows
    • Docker: >=19.03
    • JDK: >=11
    • Maven: >=3.8.1
• MySQL: >=8.0.29
• Minio
• Kubernetes cluster/Minikube (If you don't have a k8s cluster, you can use Minikube as an alternative for development and debugging)

    Modify the code and add unit tests

    Now you can enter the corresponding module to modify and adjust the code on the server side. The main business code directory is src/main/java, and the unit test directory is src/test/java.

    Execute code check and run unit tests

    cd starwhale/server
    mvn clean test

    Deploy the server at local machine

    • Dependent services that need to be deployed

• Minikube (Optional. Minikube can be used when there is no k8s cluster; see the installation doc: Minikube)

        minikube start
        minikube addons enable ingress
        minikube addons enable ingress-dns
      • Mysql

        docker run --name sw-mysql -d \
        -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=starwhale \
        -e MYSQL_USER=starwhale \
        -e MYSQL_PASSWORD=starwhale \
        -e MYSQL_DATABASE=starwhale \
        mysql:latest
      • Minio

        docker run --name minio -d \
        -p 9000:9000 --publish 9001:9001 \
        -e MINIO_DEFAULT_BUCKETS='starwhale' \
        -e MINIO_ROOT_USER="minioadmin" \
        -e MINIO_ROOT_PASSWORD="minioadmin" \
        bitnami/minio:latest
    • Package server program

If you need to deploy the front-end at the same time as the server, you can run the front-end build first and then execute mvn clean package; the compiled front-end files will be packaged automatically.

Use the following command to package the program:

cd starwhale/server
mvn clean package
    • Specify the environment required for server startup

      # Minio env
export SW_STORAGE_ENDPOINT=http://${Minio IP,default is:127.0.0.1}:9000
      export SW_STORAGE_BUCKET=${Minio bucket,default is:starwhale}
      export SW_STORAGE_ACCESSKEY=${Minio accessKey,default is:starwhale}
      export SW_STORAGE_SECRETKEY=${Minio secretKey,default is:starwhale}
      export SW_STORAGE_REGION=${Minio region,default is:local}
      # kubernetes env
      export KUBECONFIG=${the '.kube' file path}\.kube\config

      export SW_INSTANCE_URI=http://${Server IP}:8082
      export SW_METADATA_STORAGE_IP=${Mysql IP,default: 127.0.0.1}
      export SW_METADATA_STORAGE_PORT=${Mysql port,default: 3306}
      export SW_METADATA_STORAGE_DB=${Mysql dbname,default: starwhale}
      export SW_METADATA_STORAGE_USER=${Mysql user,default: starwhale}
      export SW_METADATA_STORAGE_PASSWORD=${user password,default: starwhale}
    • Deploy server service

      You can use the IDE or the command to deploy.

      java -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar
    • Debug

There are two ways to debug the modified function:

• Use swagger-ui for interface debugging; visit /swagger-ui/index.html to find the corresponding API
      • Debug the corresponding function directly in the ui (provided that the front-end code has been built in advance according to the instructions when packaging)
    Version: 0.5.12

    Names in Starwhale

Names refer to project names, model names, dataset names, runtime names, and tag names.

    Names Limitation

    • Names are case-insensitive.
    • A name MUST only consist of letters A-Z a-z, digits 0-9, the hyphen character -, the dot character ., and the underscore character _.
    • A name should always start with a letter or the _ character.
• The maximum length of a name is 80 characters.
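
A minimal Python sketch of these rules (a hypothetical helper for illustration, not part of the Starwhale SDK):

import re

# hypothetical validator implementing the limitations listed above
NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]*$")

def is_valid_name(name: str) -> bool:
    return len(name) <= 80 and bool(NAME_RE.match(name))

assert is_valid_name("mnist_demo-1.0")
assert not is_valid_name("1mnist")  # must start with a letter or "_"
# names are case-insensitive, so uniqueness checks should compare lowercased names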

    Names uniqueness requirement

    • The resource name should be a unique string within its owner. For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project.
    • The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.
    • Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.
    • Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".
    • Garbage-collected resources' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".
    Version: 0.5.12

    Project in Starwhale

    "Project" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.

Starwhale Server/Cloud projects are grouped by accounts. Starwhale Standalone does not have accounts, so you will not see any account name prefix in Starwhale Standalone projects. Starwhale Server/Cloud projects can be either "public" or "private". A public project means all users on the same instance are assigned a "guest" role to the project by default. For more information about roles, see Roles and permissions in Starwhale.

    A self project is created automatically and configured as the default project in Starwhale Standalone.

    Version: 0.5.12

    Roles and permissions in Starwhale

Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions; Starwhale Standalone does not. The Administrator role is automatically created and assigned to the user "admin". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.

    Projects have three roles:

    • Admin - Project administrators can read and write project data and assign project roles to users.
    • Maintainer - Project maintainers can read and write project data.
    • Guest - Project guests can only read project data.
Action | Admin | Maintainer | Guest
Manage project members | Yes | |
Edit project | Yes | Yes |
View project | Yes | Yes | Yes
Create evaluations | Yes | Yes |
Remove evaluations | Yes | Yes |
View evaluations | Yes | Yes | Yes
Create datasets | Yes | Yes |
Update datasets | Yes | Yes |
Remove datasets | Yes | Yes |
View datasets | Yes | Yes | Yes
Create models | Yes | Yes |
Update models | Yes | Yes |
Remove models | Yes | Yes |
View models | Yes | Yes | Yes
Create runtimes | Yes | Yes |
Update runtimes | Yes | Yes |
Remove runtimes | Yes | Yes |
View runtimes | Yes | Yes | Yes

    The user who creates a project becomes the first project administrator. They can assign roles to other users later.

    Version: 0.5.12

    Resource versioning in Starwhale

    • Starwhale manages the history of all models, datasets, and runtimes. Every update to a specific resource appends a new version of the history.
    • Versions are identified by a version id which is a random string generated automatically by Starwhale and are ordered by their creation time.
    • Versions can have tags. Starwhale uses version tags to provide a human-friendly representation of versions. By default, Starwhale attaches a default tag to each version. The default tag is the letter "v", followed by a number. For each versioned resource, the first version tag is always tagged with "v0", the second version is tagged with "v1", and so on. And there is a special tag "latest" that always points to the last version. When a version is removed, its default tag will not be reused. For example, there is a model with tags "v0, v1, v2". When "v2" is removed, tags will be "v0, v1". And the following tag will be "v3" instead of "v2" again. You can attach your own tags to any version and remove them at any time.
    • Starwhale uses a linear history model. There is neither branch nor cycle in history.
• History cannot be rolled back. When a version is to be reverted, Starwhale clones the version and appends it as a new version at the end of the history. Versions in history can be manually removed and recovered.
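
A toy Python sketch of the default-tag behavior described above (illustrative only, not Starwhale's actual implementation):

# linear history with auto-assigned, non-reusable default tags and a moving "latest"
class History:
    def __init__(self):
        self.tags = {}      # tag -> version id
        self._counter = 0

    def append(self, version_id):
        tag = f"v{self._counter}"
        self._counter += 1  # the counter never goes backwards, even after removals
        self.tags[tag] = version_id
        self.tags["latest"] = version_id
        return tag

    def remove(self, tag):
        self.tags.pop(tag, None)

h = History()
h.append("aaa"); h.append("bbb"); h.append("ccc")  # tagged v0, v1, v2
h.remove("v2")
h.append("ddd")                                    # tagged v3, never v2 again
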
    Version: 0.5.12

    Starwhale Dataset User Guide

    Design Overview

    Starwhale Dataset Positioning

    The Starwhale Dataset contains three core stages: data construction, data loading, and data visualization. It is a data management tool for the ML/DL field. Starwhale Dataset can directly use the environment built by Starwhale Runtime, and can be seamlessly integrated with Starwhale Model and Starwhale Evaluation. It is an important part of the Starwhale MLOps toolchain.

    According to the classification of MLOps Roles in Machine Learning Operations (MLOps): Overview, Definition, and Architecture, the three stages of Starwhale Dataset target the following user groups:

    • Data construction: Data Engineer, Data Scientist
    • Data loading: Data Scientist, ML Developer
    • Data visualization: Data Engineer, Data Scientist, ML Developer

    mlops-users

    Core Functions

    • Efficient loading: The original dataset files are stored in external storage such as OSS or NAS, and are loaded on demand without having to save to disk.
    • Simple construction: Supports one-click dataset construction from Image/Video/Audio directories, json files and Huggingface datasets, and also supports writing Python code to build completely custom datasets.
    • Versioning: Can perform version tracking, data append and other operations, and avoid duplicate data storage through the internally abstracted ObjectStore.
    • Sharing: Implement bidirectional dataset sharing between Standalone instances and Cloud/Server instances through the swcli dataset copy command.
    • Visualization: The web interface of Cloud/Server instances can present multi-dimensional, multi-type data visualization of datasets.
    • Artifact storage: The Standalone instance can store locally built or distributed swds series files, while the Cloud/Server instance uses object storage to provide centralized swds artifact storage.
• Seamless Starwhale integration: Starwhale Dataset can use the runtime environment built by Starwhale Runtime to build datasets. Starwhale Evaluation and Starwhale Model can directly specify the dataset through the --dataset parameter to complete automatic data loading, which facilitates inference, model evaluation and other scenarios.

    Key Elements

    • swds virtual package file: swds is different from swmp and swrt. It is not a single packaged file, but a virtual concept that specifically refers to a directory that contains dataset-related files for a version of the Starwhale dataset, including _manifest.yaml, dataset.yaml, dataset build Python scripts, and data file links, etc. You can use the swcli dataset info command to view where the swds is located. swds is the abbreviation of Starwhale Dataset.

    swds-tree.png

    • swcli dataset command line: A set of dataset-related commands, including construction, distribution and management functions. See CLI Reference for details.
    • dataset.yaml configuration file: Describes the dataset construction process. It can be completely omitted and specified through swcli dataset build parameters. dataset.yaml can be considered as a configuration file representation of the swcli dataset build command line parameters. swcli dataset build parameters take precedence over dataset.yaml.
    • Dataset Python SDK: Includes data construction, data loading, and several predefined data types. See Python SDK for details.
    • Python scripts for dataset construction: A series of scripts written using the Starwhale Python SDK to build datasets.

    Best Practices

    The construction of Starwhale Dataset is performed independently. If third-party libraries need to be introduced when writing construction scripts, using Starwhale Runtime can simplify Python dependency management and ensure reproducible dataset construction. The Starwhale platform will build in as many open source datasets as possible for users to copy datasets for immediate use.

    Command Line Grouping

    The Starwhale Dataset command line can be divided into the following stages from the perspective of usage phases:

    • Construction phase
      • swcli dataset build
    • Visualization phase
      • swcli dataset diff
      • swcli dataset head
    • Distribution phase
      • swcli dataset copy
    • Basic management
      • swcli dataset tag
      • swcli dataset info
      • swcli dataset history
      • swcli dataset list
      • swcli dataset summary
      • swcli dataset remove
      • swcli dataset recover

    Starwhale Dataset Viewer

Currently, the Web UI in the Cloud/Server instance can visually display the dataset. Only DataTypes from the Starwhale Python SDK can be correctly interpreted by the frontend, with mappings as follows:

    • Image: Display thumbnails, enlarged images, MASK type images, support image/png, image/jpeg, image/webp, image/svg+xml, image/gif, image/apng, image/avif formats.
    • Audio: Displayed as an audio wave graph, playable, supports audio/mp3 and audio/wav formats.
    • Video: Displayed as a video, playable, supports video/mp4, video/avi and video/webm formats.
    • GrayscaleImage: Display grayscale images, support x/grayscale format.
    • Text: Display text, support text/plain format, set encoding format, default is utf-8.
    • Binary and Bytes: Not supported for display currently.
    • Link: The above multimedia types all support specifying links as storage paths.

    Starwhale Dataset Data Format

    The dataset consists of multiple rows, each row being a sample, each sample containing several features. The features have a dict-like structure with some simple restrictions [L]:

    • The dict keys must be str type.
    • The dict values must be Python basic types like int/float/bool/str/bytes/dict/list/tuple, or Starwhale built-in data types.
    • For the same key across different samples, the value types do not need to stay the same.
    • If the value is a list or tuple, the element data types must be consistent.
    • For dict values, the restrictions are the same as [L].

    Example:

{
    "img": GrayscaleImage(
        link=Link(
            "123",
            offset=32,
            size=784,
            _swds_bin_offset=0,
            _swds_bin_size=8160,
        )
    ),
    "label": 0,
}

    File Data Handling

    Starwhale Dataset handles file type data in a special way. You can ignore this section if you don't care about Starwhale's implementation.

Depending on the actual usage scenario, Starwhale Dataset has two ways of handling file-type data, both based on the base class starwhale.BaseArtifact:

    • swds-bin: Starwhale merges the data into several large files in its own binary format (swds-bin), which can efficiently perform indexing, slicing and loading.
    • remote-link: If the user's original data is stored in some external storage such as OSS or NAS, with a lot of original data that is inconvenient to move or has already been encapsulated by some internal dataset implementation, then you only need to use links in the data to establish indexes.

Both types of data can be included in the same Starwhale dataset simultaneously.
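
A minimal Python SDK sketch of mixing both kinds in one dataset (the local path and bucket below are hypothetical, and Image is assumed to accept either a local file or a Link, as in the GrayscaleImage example above):

from starwhale import dataset, Image, Link

ds = dataset("demo-mixed")
# swds-bin style: raw file content is packed into Starwhale's own swds-bin blobs
ds.append({"img": Image("./data/0.png"), "label": 0})
# remote-link style: only an index to the external object is recorded
ds.append({"img": Image(link=Link("s3://my-bucket/imgs/1.png")), "label": 1})
ds.commit()
ds.close()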

    Version: 0.5.12

    The dataset.yaml Specification

    tip

    dataset.yaml is optional for the swcli dataset build command.

    Building Starwhale Dataset uses dataset.yaml. Omitting dataset.yaml allows describing related configurations in swcli dataset build command line parameters. dataset.yaml can be considered as a file-based representation of the build command line configuration.

    YAML Field Descriptions

Field | Description | Required | Type | Default
name | Name of the Starwhale Dataset | Yes | String |
handler | Importable address of a class that inherits starwhale.SWDSBinBuildExecutor, starwhale.UserRawBuildExecutor or starwhale.BuildExecutor, or a function that returns a Generator or iterable object. Format is {module path}:{class name|function name} | Yes | String |
desc | Dataset description | No | String | ""
version | dataset.yaml format version, currently only "1.0" is supported | No | String | 1.0
attr | Dataset build parameters | No | Dict |
attr.volume_size | Size of each data file in the swds-bin dataset. Can be a number in bytes, or a number plus unit like 64M, 1GB etc. | No | Int or Str | 64MB
attr.alignment_size | Data alignment size of each data block in the swds-bin dataset. If set to 4k, and a data block is 7.9K, 0.1K padding will be added to make the block size a multiple of alignment_size, improving page size and read efficiency. | No | Integer or String | 128
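
The padding rule for attr.alignment_size can be sketched with a few lines of Python (the 4k and 7.9K figures are the ones used in the description above):

alignment_size = 4 * 1024                  # attr.alignment_size: 4k
block_size = int(7.9 * 1024)               # a 7.9K data block
padding = (-block_size) % alignment_size   # roughly 0.1K of padding
padded_size = block_size + padding         # rounded up to a multiple of alignment_size (8192 bytes)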

    Examples

    Simplest Example

    name: helloworld
    handler: dataset:ExampleProcessExecutor

The helloworld dataset uses the ExampleProcessExecutor class in dataset.py, located in the same directory as dataset.yaml, to build the data.

    MNIST Dataset Build Example

    name: mnist
    handler: mnist.dataset:DatasetProcessExecutor
    desc: MNIST data and label test dataset
attr:
  alignment_size: 128
  volume_size: 4M

    Example with handler as a generator function

    dataset.yaml contents:

    name: helloworld
    handler: dataset:iter_item

    dataset.py contents:

def iter_item():
    for i in range(10):
        yield {"img": f"image-{i}".encode(), "label": i}
Refer to the link.

    Take v0.13.0-rc.1 as an example:

    kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0-rc.1/nvidia-device-plugin.yml

Note: This operation will run the NVIDIA device plugin on all Kubernetes nodes. If configured before, it will be updated. Please evaluate the image version used carefully.

  • Confirm GPU can be discovered and used in the cluster. Refer to the command below. Check that nvidia.com/gpu is in the Capacity of the Jetson node. The GPU is then recognized normally by the Kubernetes cluster.

    # kubectl describe node orin | grep -A15 Capacity
    Capacity:
    cpu: 12
    ephemeral-storage: 59549612Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 31357608Ki
    nvidia.com/gpu: 1
    pods: 110
  • Build and Use Custom Images

The l4t-jetpack image mentioned earlier can meet most general needs. If we need a more streamlined image or one with more features, we can build it based on l4t-base. For relevant Dockerfiles, refer to the image Starwhale made for mnist.

    Version: 0.5.12

    Virtual Kubelet as Kubernetes nodes

    Introduction

    Virtual Kubelet is an open source framework that can simulate a K8s node by mimicking the communication between kubelet and the K8s cluster.

    This solution is widely used by major cloud vendors for serverless container cluster solutions, such as Alibaba Cloud's ASK, Amazon's AWS Fargate, etc.

    Principles

    The virtual kubelet framework implements the related interfaces of kubelet for Node. With simple configuration, it can simulate a node.

    We only need to implement the PodLifecycleHandler interface to support:

    • Create, update, delete Pod
    • Get Pod status
    • Get Container logs

    Adding Devices to the Cluster

    If our device cannot serve as a K8s node due to resource constraints or other situations, we can manage these devices by using virtual kubelet to simulate a proxy node.

    The control flow between Starwhale Controller and the device is as follows:


    ┌──────────────────────┐ ┌────────────────┐ ┌─────────────────┐ ┌────────────┐
    │ Starwhale Controller ├─────►│ K8s API Server ├────►│ virtual kubelet ├────►│ Our device │
    └──────────────────────┘ └────────────────┘ └─────────────────┘ └────────────┘

    Virtual kubelet converts the Pod orchestration information sent by Starwhale Controller into control behaviors for the device, such as executing a command via ssh on the device, or sending a message via USB or serial port.
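
Purely as an illustration of that idea (this is not the virtual-kubelet Go interface; the host and Pod fields below are hypothetical), the translation from a Pod spec to an ssh invocation could look like:

import subprocess

def run_pod_over_ssh(pod: dict, host: str = "device@1.2.3.4") -> int:
    # take the first container's command/args from the Pod spec and run them on the device
    container = pod["spec"]["containers"][0]
    remote_cmd = " ".join(container.get("command", []) + container.get("args", []))
    return subprocess.call(["ssh", host, remote_cmd])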

Below is an example of using virtual kubelet to control an SSH-enabled device that has not joined the cluster:

    1. Prepare certificates
• Create the OpenSSL config file csr.conf with the following content:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name

    [req_distinguished_name]

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names

    [alt_names]
    IP = 1.2.3.4
• Generate the private key and certificate signing request:
    openssl genrsa -out vklet-key.pem 2048
    openssl req -new -key vklet-key.pem -out vklet.csr -subj '/CN=system:node:1.2.3.4;/C=US/O=system:nodes' -config ./csr.conf
    • Submit the certificate:
cat vklet.csr | base64 | tr -d "\n" # use the output as the content of spec.request in csr.yaml

    csr.yaml:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: vklet
spec:
  request: ******************
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 1086400
  usages:
    - client auth

kubectl apply -f csr.yaml
kubectl certificate approve vklet
kubectl get csr vklet -o jsonpath='{.status.certificate}' | base64 -d > vklet-cert.pem

    Now we have vklet-cert.pem.

    • Compile virtual kubelet:
    git clone https://github.com/virtual-kubelet/virtual-kubelet
    cd virtual-kubelet && make build

    Create the node configuration file mock.json:

{
  "virtual-kubelet": {
    "cpu": "100",
    "memory": "100Gi",
    "pods": "100"
  }
}

    Start virtual kubelet:

    export APISERVER_CERT_LOCATION=/path/to/vklet-cert.pem
    export APISERVER_KEY_LOCATION=/path/to/vklet-key.pem
    export KUBECONFIG=/path/to/kubeconfig
    virtual-kubelet --provider mock --provider-config /path/to/mock.json

    Now we have simulated a node with 100 cores + 100GB memory using virtual kubelet.

• Add a PodLifecycleHandler implementation that converts the key information in the Pod orchestration into ssh command execution on the device, and collects logs for the Starwhale Controller.

    See ssh executor for a concrete implementation.

    Version: 0.5.12

    Starwhale Model Evaluation

    Design Overview

    Starwhale Evaluation Positioning

    The goal of Starwhale Evaluation is to provide end-to-end management for model evaluation, including creating Jobs, distributing Tasks, viewing model evaluation reports and basic management. Starwhale Evaluation is a specific application of Starwhale Model, Starwhale Dataset, and Starwhale Runtime in the model evaluation scenario. Starwhale Evaluation is part of the MLOps toolchain built by Starwhale. More applications like Starwhale Model Serving, Starwhale Training will be included in the future.

    Core Features

    • Visualization: Both swcli and the Web UI provide visualization of model evaluation results, supporting comparison of multiple results. Users can also customize logging of intermediate processes.

    • Multi-scenario Adaptation: Whether it's a notebook, desktop or distributed cluster environment, the same commands, Python scripts, artifacts and operations can be used for model evaluation. This satisfies different computational power and data volume requirements.

    • Seamless Starwhale Integration: Leverage Starwhale Runtime for the runtime environment, Starwhale Dataset as data input, and run models from Starwhale Model. Configuration is simple whether using swcli, Python SDK or Cloud/Server instance Web UI.

    Key Elements

    • swcli model run: Command line for bulk offline model evaluation.
    • swcli model serve: Command line for online model evaluation.

    Best Practices

    Command Line Grouping

    From the perspective of completing an end-to-end Starwhale Evaluation workflow, commands can be grouped as:

    • Preparation Stage
      • swcli dataset build or Starwhale Dataset Python SDK
      • swcli model build or Starwhale Model Python SDK
      • swcli runtime build
    • Evaluation Stage
      • swcli model run
      • swcli model serve
    • Results Stage
      • swcli job info
    • Basic Management
      • swcli job list
      • swcli job remove
      • swcli job recover

    Abstraction job-step-task

    • job: A model evaluation task is a job, which contains one or more steps.

    • step: A step corresponds to a stage in the evaluation process. With the default PipelineHandler, steps are predict and evaluate. For custom evaluation processes using @handler, @evaluation.predict, @evaluation.evaluate decorators, steps are the decorated functions. Steps can have dependencies, forming a DAG. A step contains one or more tasks. Tasks in the same step have the same logic but different inputs. A common approach is to split the dataset into multiple parts, with each part passed to a task. Tasks can run in parallel.

    • task: A task is the final running entity. In Cloud/Server instances, a task is a container in a Pod. In Standalone instances, a task is a Python Thread.

    The job-step-task abstraction is the basis for implementing distributed runs in Starwhale Evaluation.
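
A minimal Python sketch of the two default steps wired up with the decorators mentioned above (the needs wiring follows the decorator usage described in this section; function bodies are placeholders):

from starwhale import evaluation

@evaluation.predict
def predict(data):
    ...  # run inference on one sample; tasks in this step each process one slice of the dataset

@evaluation.evaluate(needs=[predict])
def evaluate(predict_result_iter):
    ...  # runs after all predict tasks finish; aggregate results and log metrics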

    Version: 0.5.12

    Getting started with Starwhale Cloud

Starwhale Cloud is hosted on Aliyun with the domain name https://cloud.starwhale.cn. In the future, we will launch the service on AWS with the domain name https://cloud.starwhale.ai. It's important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.

    You need to install the Starwhale Client (swcli) at first.

    Sign Up for Starwhale Cloud and create your first project

    You can either directly log in with your GitHub or Weixin account or sign up for an account. You will be asked for an account name if you log in with your GitHub or Weixin account.

    Then you can create a new project. In this tutorial, we will use the name demo for the project name.

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Login to the cloud instance

    swcli instance login --username <your account name> --password <your password> --alias swcloud https://cloud.starwhale.cn

    Copy the dataset, model, and runtime to the cloud instance

    swcli model copy mnist swcloud/project/<your account name>:demo
    swcli dataset copy mnist swcloud/project/<your account name>:demo
    swcli runtime copy pytorch swcloud/project/<your account name>:demo

    Run an evaluation with the web UI

    console-create-job.gif

    Congratulations! You have completed the Starwhale Cloud Getting Started Guide.

    Version: 0.5.12

    Getting started

    First, you need to install the Starwhale Client (swcli), which can be done by running the following command:

    python3 -m pip install starwhale

    For more information, see the swcli installation guide.

    Depending on your instance type, there are three getting-started guides available for you:

    • Getting started with Starwhale Standalone - This guide helps you run an MNIST evaluation on your desktop PC/laptop. It is the fastest and simplest way to get started with Starwhale.
    • Getting started with Starwhale Server - This guide helps you install Starwhale Server in your private data center and run an MNIST evaluation. At the end of the tutorial, you will have a Starwhale Server instance where you can run model evaluations on and manage your datasets and models.
    • Getting started with Starwhale Cloud - This guide helps you create an account on Starwhale Cloud and run an MNIST evaluation. It is the easiest way to experience all Starwhale features.
    Version: 0.5.12

    Getting Started with Starwhale Runtime

    This article demonstrates how to build a Starwhale Runtime of the Pytorch environment and how to use it. This runtime can meet the dependency requirements of the six examples in Starwhale: mnist, speech commands, nmt, cifar10, ag_news, and PennFudan. Links to relevant code: example/runtime/pytorch.

    You can learn the following things from this tutorial:

    • How to build a Starwhale Runtime.
    • How to use a Starwhale Runtime in different scenarios.
    • How to release a Starwhale Runtime.

    Prerequisites

    Run the following command to clone the example code:

    git clone https://github.com/star-whale/starwhale.git
    cd starwhale/example/runtime/pytorch # for users in the mainland of China, use pytorch-cn-mirror instead.

    Build Starwhale Runtime

    ❯ swcli -vvv runtime build --yaml runtime.yaml

    Use Starwhale Runtime in the standalone instance

    Use Starwhale Runtime in the shell

    # Activate the runtime
    swcli runtime activate pytorch

    swcli runtime activate will download all python dependencies of the runtime, which may take a long time.

    All dependencies are ready in your python environment when the runtime is activated. It is similar to source venv/bin/activate of virtualenv or the conda activate command of conda. If you close the shell or switch to another shell, you need to reactivate the runtime.

    Use Starwhale Runtime in swcli

    # Use the runtime when building a Starwhale Model
    swcli model build . --runtime pytorch
    # Use the runtime when building a Starwhale Dataset
    swcli dataset build --yaml /path/to/dataset.yaml --runtime pytorch
    # Run a model evaluation with the runtime
    swcli model run --uri mnist/version/v0 --dataset mnist --runtime pytorch

    Copy Starwhale Runtime to another instance

    You can copy the runtime to a server/cloud instance, which can then be used in the server/cloud instance or downloaded by other users.

    # Copy the runtime to a server instance named 'pre-k8s'
    ❯ swcli runtime copy pytorch cloud://pre-k8s/project/starwhale
    Version: 0.5.12

    Getting started with Starwhale Server

    Install Starwhale Server

    To install Starwhale Server, see the installation guide.

    Create your first project

    Login to the server

Open your browser and enter your server's URL in the address bar. Log in with your username (starwhale) and password (abcd1234).

    console-artifacts.gif

    Create a new project

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Copy the dataset, the model, and the runtime to the server

    swcli instance login --username <your username> --password <your password> --alias server <Your Server URL>

    swcli model copy mnist server/project/demo
    swcli dataset copy mnist server/project/demo
    swcli runtime copy pytorch server/project/demo

    Use the Web UI to run an evaluation

Navigate to the "demo" project in your browser and create a new evaluation job.

    console-create-job.gif

    Congratulations! You have completed the Starwhale Server Getting Started Guide.

    Version: 0.5.12

    Getting started with Starwhale Standalone

    When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.

We also provide a Jupyter Notebook example; you can try it in Google Colab or in your local vscode/jupyterlab.

    Downloading Examples

    Download Starwhale examples by cloning the Starwhale project via:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/star-whale/starwhale.git --depth 1
    cd starwhale

To save time on the example download, we skip git-lfs and other commit info. We will use the ML/DL HelloWorld code MNIST to start your Starwhale journey. The following steps are all performed in the starwhale directory.

    Core Workflow

    Building a Pytorch Runtime

    Runtime example codes are in the example/runtime/pytorch directory.

    • Build the Starwhale runtime bundle:

      swcli runtime build --yaml example/runtime/pytorch/runtime.yaml
      tip

When you first build the runtime, creating an isolated python environment and downloading python dependencies will take a lot of time. The command execution time is related to the network environment of the machine and the number of packages in the runtime.yaml. Using an appropriate pypi mirror and cache config in the ~/.pip/pip.conf file is a recommended practice.

      For users in the mainland of China, the following conf file is an option:

      [global]
      cache-dir = ~/.cache/pip
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple
      extra-index-url = https://mirrors.aliyun.com/pypi/simple/
    • Check your local Starwhale Runtime:

      swcli runtime list
      swcli runtime info pytorch

    Building a Model

    Model example codes are in the example/mnist directory.

    • Download the pre-trained model file:

      cd example/mnist
      make download-model
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-model
      cd -
    • Build a Starwhale model:

      swcli model build example/mnist --runtime pytorch
    • Check your local Starwhale models:

      swcli model list
      swcli model info mnist

    Building a Dataset

    Dataset example codes are in the example/mnist directory.

    • Download the MNIST raw data:

      cd example/mnist
      make download-data
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-data
      cd -
    • Build a Starwhale dataset:

      swcli dataset build --yaml example/mnist/dataset.yaml
    • Check your local Starwhale dataset:

      swcli dataset list
      swcli dataset info mnist
      swcli dataset head mnist

    Running an Evaluation Job

    • Create an evaluation job:

      swcli -vvv model run --uri mnist --dataset mnist --runtime pytorch
    • Check the evaluation result

      swcli job list
      swcli job info $(swcli job list | grep mnist | grep success | awk '{print $1}' | head -n 1)

    Congratulations! You have completed the Starwhale Standalone Getting Started Guide.

    Version: 0.5.12

    What is Starwhale

    Overview

Starwhale is an MLOps/LLMOps platform that makes your model creation, evaluation, and publication much easier. It aims to create a handy tool for data scientists and machine learning engineers.

    Starwhale helps you:

    • Keep track of your training/testing dataset history including data items and their labels, so that you can easily access them.
    • Manage your model packages that you can share across your team.
• Run your models in different environments, either on an Nvidia GPU server or on an embedded device like Cherry Pi.
• Create an online service with an interactive Web UI for your models.

    Starwhale is designed to be an open platform. You can create your own plugins to meet your requirements.

    Deployment options

    Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).

    You can start using Starwhale with one of the following instance types:

    • Starwhale Standalone - Rather than a running service, Starwhale Standalone is actually a repository that resides in your local file system. It is created and managed by the Starwhale Client (swcli). You only need to install swcli to use it. Currently, each user on a single machine can have only ONE Starwhale Standalone instance. We recommend you use the Starwhale Standalone to build and test your datasets, runtime, and models before pushing them to Starwhale Server/Cloud instances.
    • Starwhale Server - Starwhale Server is a service deployed on your local server. Besides text-only results from the Starwhale Client (swcli), Starwhale Server provides Web UI for you to manage your datasets and models, evaluate your models in your local Kubernetes cluster, and review the evaluation results.
    • Starwhale Cloud - Starwhale Cloud is a managed service hosted on public clouds. By registering an account on https://cloud.starwhale.cn, you are ready to use Starwhale without needing to install, operate, and maintain your own instances. Starwhale Cloud also provides public resources for you to download, like datasets, runtimes, and models. Check the "starwhale/public" project on Starwhale Cloud for more details.

    When choosing which instance type to use, consider the following:

Instance Type | Deployment location | Maintained by | User Interface | Scalability
Starwhale Standalone | Your laptop or any server in your data center | Not required | Command line | Not scalable
Starwhale Server | Your data center | Yourself | Web UI and command line | Scalable, depends on your Kubernetes cluster
Starwhale Cloud | Public cloud, like AWS or Aliyun | the Starwhale Team | Web UI and command line | Scalable, but currently limited by the freely available resources on the cloud
    Version: 0.5.12

    Starwhale Model

    A Starwhale Model is a standard format for packaging machine learning models that can be used for various purposes, like model fine-tuning, model evaluation, and online serving. A Starwhale Model contains the model file, inference codes, configuration files, and any other files required to run the model.

    Create a Starwhale Model

    There are two ways to create a Starwhale Model: by swcli or by Python SDK.

    Create a Starwhale Model by swcli

    To create a Starwhale Model by swcli, you need to define a model.yaml, which describes some required information about the model package, and run the following command:

    swcli model build . --model-yaml /path/to/model.yaml

    For more information about the command and model.yaml, see the swcli reference. model.yaml is optional for model building.

    Create a Starwhale Model by Python SDK

    from starwhale import model, predict

    @predict
def predict_img(data):
    ...

    model.build(name="mnist", modules=[predict_img])

    Model Management

    Model Management by swcli

Command | Description
swcli model list | List all Starwhale Models in a project
swcli model info | Show detail information about a Starwhale Model
swcli model copy | Copy a Starwhale Model to another location
swcli model remove | Remove a Starwhale Model
swcli model recover | Recover a previously removed Starwhale Model

    Model Management by WebUI

    Model History

    Starwhale Models are versioned. The general rules about versions are described in Resource versioning in Starwhale.

    Model History Management by swcli

Command | Description
swcli model history | List all versions of a Starwhale Model
swcli model info | Show detail information about a Starwhale Model version
swcli model diff | Compare two versions of a Starwhale model
swcli model copy | Copy a Starwhale Model version to a new one
swcli model remove | Remove a Starwhale Model version
swcli model recover | Recover a previously removed Starwhale Model version

    Model Evaluation

    Model Evaluation by swcli

Command | Description
swcli model run | Create an evaluation with a Starwhale Model

    The Storage Format

    The Starwhale Model is a tarball file that contains the source directory.

    Version: 0.5.12

    The model.yaml Specification

    tip

    model.yaml is optional for swcli model build.

    When building a Starwhale Model using the swcli model build command, you can specify a yaml file that follows a specific format via the --model-yaml parameter to simplify specifying build parameters.

    Even without specifying the --model-yaml parameter, swcli model build will automatically look for a model.yaml file under the ${workdir} directory and extract parameters from it. Parameters specified on the swcli model build command line take precedence over equivalent configurations in model.yaml, so you can think of model.yaml as a file-based representation of the build command line.

    When building a Starwhale Model using the Python SDK, the model.yaml file does not take effect.
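
In that case the equivalent information is passed to the SDK call directly; a rough sketch, mirroring the helloworld example later on this page and assuming src.evaluator is importable from the working directory:

from starwhale import model
import src.evaluator

model.build(name="helloworld", modules=[src.evaluator])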

    YAML Field Descriptions

Field | Description | Required | Type | Default
name | Name of the Starwhale Model, equivalent to the --name parameter. | No | String |
run.modules | Python Modules searched during model build; can specify multiple entry points for model execution. The format is a Python importable path. Equivalent to the --module parameter. | Yes | List[String] |
run.handler | Deprecated alias of run.modules; can only specify one entry point. | No | String |
version | model.yaml format version, currently only supports "1.0" | No | String | 1.0
desc | Model description, equivalent to the --desc parameter. | No | String |

    Example


    name: helloworld

run:
  modules:
    - src.evaluator

    desc: "example yaml"

A Starwhale model named helloworld searches for functions decorated with @evaluation.predict, @evaluation.evaluate or @handler, or classes inheriting from PipelineHandler, in src/evaluator.py under ${WORKDIR} of the swcli model build command. These functions or classes will be added to the list of runnable entry points for the Starwhale model. When running the model via swcli model run or the Web UI, select the corresponding entry point (handler) to run.
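
For instance, src/evaluator.py could expose an entry point like the following sketch (the function body is a placeholder):

# src/evaluator.py
from starwhale import evaluation

@evaluation.predict
def predict_img(data):
    ...  # hypothetical inference logic; becomes a selectable handler for swcli model run or the Web UI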

model.yaml is optional; parameters defined in the yaml can also be specified via swcli command line parameters.


    swcli model build . --model-yaml model.yaml

    Is equivalent to:


    swcli model build . --name helloworld --module src.evaluator --desc "example yaml"

    Version: 0.5.12

    Starwhale Dataset SDK

    dataset

Get a starwhale.Dataset object by creating a new dataset or loading an existing one.

    @classmethod
    def dataset(
    cls,
    uri: t.Union[str, Resource],
    create: str = _DatasetCreateMode.auto,
    readonly: bool = False,
    ) -> Dataset:

    Parameters

    • uri: (str or Resource, required)
      • The dataset uri or Resource object.
    • create: (str, optional)
      • The mode of dataset creating. The options are auto, empty and forbid.
        • auto mode: If the dataset already exists, creation is ignored. If it does not exist, the dataset is created automatically.
        • empty mode: If the dataset already exists, an Exception is raised; If it does not exist, an empty dataset is created. This mode ensures the creation of a new, empty dataset.
    • forbid mode: If the dataset already exists, nothing is done. If it does not exist, an Exception is raised. This mode ensures the existence of the dataset.
      • The default is auto.
    • readonly: (bool, optional)
      • For an existing dataset, you can specify the readonly=True argument to ensure the dataset is in readonly mode.
      • Default is False.

    Examples

    from starwhale import dataset, Image

    # create a new dataset named mnist, and add a row into the dataset
    # dataset("mnist") is equal to dataset("mnist", create="auto")
    ds = dataset("mnist")
ds.exists() # returns False, the "mnist" dataset does not exist yet.
    ds.append({"img": Image(), "label": 1})
    ds.commit()
    ds.close()

    # load a cloud instance dataset in readonly mode
    ds = dataset("cloud://remote-instance/project/starwhale/dataset/mnist", readonly=True)
labels = [row.features.label for row in ds]
    ds.close()

    # load a read/write dataset with a specified version
    ds = dataset("mnist/version/mrrdczdbmzsw")
    ds[0].features.label = 1
    ds.commit()
    ds.close()

    # create an empty dataset
    ds = dataset("mnist-empty", create="empty")

    # ensure the dataset existence
    ds = dataset("mnist-existed", create="forbid")

    class starwhale.Dataset

    starwhale.Dataset implements the abstraction of a Starwhale dataset, and can operate on datasets in Standalone/Server/Cloud instances.

    from_huggingface

    from_huggingface is a classmethod that can convert a Huggingface dataset into a Starwhale dataset.

    def from_huggingface(
    cls,
    name: str,
    repo: str,
    subset: str | None = None,
    split: str | None = None,
    revision: str = "main",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    cache: bool = True,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • dataset name.
• repo: (str, required)
  • The huggingface repo name.
    • subset: (str, optional)
      • The subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
    • split: (str, optional)
  • The split name. If the split name is not specified, a dataset containing all splits will be built.
    • revision: (str, optional)
  • The huggingface datasets revision. The default value is main.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • cache: (bool, optional)
  • Whether to use the huggingface dataset cache (download + local hf dataset).
      • The default value is True.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

from starwhale import Dataset
myds = Dataset.from_huggingface("mnist", "mnist")
print(myds[0])

from starwhale import Dataset
myds = Dataset.from_huggingface("mmlu", "cais/mmlu", subset="anatomy", split="auxiliary_train", revision="7456cfb")

    from_json

    from_json is a classmethod that can convert a json text into a Starwhale dataset.

    @classmethod
    def from_json(
    cls,
    name: str,
    json_text: str,
    field_selector: str = "",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • Dataset name.
    • json_text: (str, required)
      • A json string. The from_json function deserializes this string into Python objects to start building the Starwhale dataset.
    • field_selector: (str, optional)
      • The field from which you would like to extract dataset array items.
      • The default value is "", which indicates that the json object is an array containing all the items.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_json(
    name="translation",
    json_text='[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    print(myds[0].features.en)
    from starwhale import Dataset
    myds = Dataset.from_json(
    name="translation",
    json_text='{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}',
    field_selector="content.child_content"
    )
    print(myds[0].features["zh-cn"])

    from_folder

    from_folder is a classmethod that can read Image/Video/Audio data from a specified directory and automatically convert them into a Starwhale dataset. This function supports the following features:

    • It can recursively search the target directory and its subdirectories
    • Supports extracting three types of files:
      • image: Supports png/jpg/jpeg/webp/svg/apng image types. Image files will be converted to Starwhale.Image type.
      • video: Supports mp4/webm/avi video types. Video files will be converted to Starwhale.Video type.
      • audio: Supports mp3/wav audio types. Audio files will be converted to Starwhale.Audio type.
    • Each file corresponds to one record in the dataset, with the file stored in the file field.
    • If auto_label=True, the parent directory name will be used as the label for that record, stored in the label field. Files in the root directory will not be labeled.
    • If a txt file with the same name as an image/video/audio file exists, its content will be stored as the caption field in the dataset.
    • If metadata.csv or metadata.jsonl exists in the root directory, their content will be read automatically and associated to records by file path as meta information in the dataset.
      • metadata.csv and metadata.jsonl are mutually exclusive. An exception will be thrown if both exist.
      • Each record in metadata.csv and metadata.jsonl must contain a file_name field pointing to the file path.
      • metadata.csv and metadata.jsonl are optional for dataset building.
    @classmethod
    def from_folder(
    cls,
    folder: str | Path,
    kind: str | DatasetFolderSourceType,
    name: str | Resource = "",
    auto_label: bool = True,
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • folder: (str|Path, required)
      • The folder path from which you would like to create this dataset.
    • kind: (str|DatasetFolderSourceType, required)
      • The dataset source type you would like to use, the choices are: image, video and audio.
      • Files of the specified kind are searched for recursively in the folder. Other file types will be ignored.
    • name: (str|Resource, optional)
      • The dataset name you would like to use.
      • If not specified, the name is the folder name.
    • auto_label: (bool, optional)
      • Whether to auto label by the sub-folder name.
      • The default value is True.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    • Example for the normal function calling

      from starwhale import Dataset

      # create a my-image-dataset dataset from /path/to/image folder.
      ds = Dataset.from_folder(
      folder="/path/to/image",
      kind="image",
      name="my-image-dataset"
      )
    • Example for caption

      folder/dog/1.png
      folder/dog/1.txt

      1.txt content will be used as the caption of 1.png.

    • Example for metadata

      metadata.csv:

      file_name, caption
      1.png, dog
      2.png, cat

      metadata.jsonl:

      {"file_name": "1.png", "caption": "dog"}
      {"file_name": "2.png", "caption": "cat"}
    • Example for auto-labeling

      The following structure will create a dataset with 2 labels: "cat" and "dog", 4 images in total.

      folder/dog/1.png
      folder/cat/2.png
      folder/dog/3.png
      folder/cat/4.png

    __iter__

    __iter__ is a method that iterates over the dataset rows.

    from starwhale import dataset

    ds = dataset("mnist")

    for item in ds:
        print(item.index)
        print(item.features.label)  # label and img are the features of mnist.
        print(item.features.img)

    batch_iter

    batch_iter is a method that iterates over the dataset rows in batches.

    def batch_iter(
    self, batch_size: int = 1, drop_not_full: bool = False
    ) -> t.Iterator[t.List[DataRow]]:

    Parameters

    • batch_size: (int, optional)
      • batch size. The default value is 1.
    • drop_not_full: (bool, optional)
      • Whether to discard the last batch when its size is smaller than batch_size.
      • The default value is False.

    Examples

    from starwhale import dataset

    ds = dataset("mnist")
    for batch_rows in ds.batch_iter(batch_size=2):
        assert len(batch_rows) == 2
        print(batch_rows[0].features)

    __getitem__

    __getitem__ is a method that allows retrieving certain rows of data from the dataset, with usage similar to Python dict and list types.

    from starwhale import dataset

    ds = dataset("mock-int-index")

    # if the index type is string
    ds["str_key"] # get the DataRow by the "str_key" string key
    ds["start":"end"] # get a slice of the dataset by the range ("start", "end")

    ds = dataset("mock-str-index")
    # if the index type is int
    ds[1] # get the DataRow by the 1 int key
    ds[1:10:2] # get a slice of the dataset by the range (1, 10), step is 2

    __setitem__

    __setitem__ is a method that allows updating rows of data in the dataset, with usage similar to Python dicts. __setitem__ supports multi-threaded parallel data insertion.

    def __setitem__(
    self, key: t.Union[str, int], value: t.Union[DataRow, t.Tuple, t.Dict]
    ) -> None:

    Parameters

    • key: (int|str, required)
      • key is the index for each row in the dataset. The type is int or str, but a dataset only accepts one type.
    • value: (DataRow|tuple|dict, required)
      • value is the features for each row in the dataset, using a Python dict is generally recommended.

    Examples

    • Normal insertion

    Insert two rows into the test dataset, with index test and test2 respectively:

    from starwhale import dataset

    with dataset("test") as ds:
    ds["test"] = {"txt": "abc", "int": 1}
    ds["test2"] = {"txt": "bcd", "int": 2}
    ds.commit()
    • Parallel insertion
    from starwhale import dataset, Binary
    from concurrent.futures import as_completed, ThreadPoolExecutor

    ds = dataset("test")

    def _do_append(_start: int) -> None:
        for i in range(_start, 100):
            ds.append((i, {"data": Binary(), "label": i}))

    pool = ThreadPoolExecutor(max_workers=10)
    tasks = [pool.submit(_do_append, i * 10) for i in range(0, 9)]
    # wait for all insertion tasks to finish before committing
    for task in as_completed(tasks):
        task.result()

    ds.commit()
    ds.close()

    __delitem__

    __delitem__ is a method to delete certain rows of data from the dataset.

    def __delitem__(self, key: _ItemType) -> None:
    from starwhale import dataset

    ds = dataset("existed-ds")
    del ds[6:9]
    del ds[0]
    ds.commit()
    ds.close()

    append

    append is a method to append data to a dataset, similar to the append method for Python lists.

    • When appending a features dict, each row is automatically indexed with an int starting from 0 and incrementing.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
      for i in range(0, 100):
      ds.append({"label": i, "image": Image(f"folder/{i}.png")})
      ds.commit()
    • When appending an (index, features) tuple, the index of each data row in the dataset will not be handled automatically.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
          for i in range(0, 100):
              ds.append((f"index-{i}", {"label": i, "image": Image(f"folder/{i}.png")}))

          ds.commit()

    extend

    extend is a method to bulk append data to a dataset, similar to the extend method for Python lists.

    from starwhale import dataset, Text

    ds = dataset("new-ds")
    ds.extend([
    (f"label-{i}", {"text": Text(), "label": i}) for i in range(0, 10)
    ])
    ds.commit()
    ds.close()

    commit

    commit is a method that flushes the current cached data to storage when called, and generates a dataset version. This version can then be used to load the corresponding dataset content afterwards.

    For a dataset, if some data is added without calling commit, but close is called or the process exits directly instead, the data will still be written to the dataset, just without generating a new version.

    @_check_readonly
    def commit(
    self,
    tags: t.Optional[t.List[str]] = None,
    message: str = "",
    force_add_tags: bool = False,
    ignore_add_tags_errors: bool = False,
    ) -> str:

    Parameters

    • tags: (list(str), optional)
      • Dataset tags as a list.
    • message: (str, optional)
      • commit message. The default value is empty.
    • force_add_tags: (bool, optional)
      • For server/cloud instances, when adding tags to this version, if a tag has already been applied to another dataset version, you can use the force_add_tags=True parameter to forcibly add the tag to this version; otherwise an exception will be thrown.
      • The default is False.
    • ignore_add_tags_errors: (bool, optional)
      • Ignore any exceptions thrown when adding tags.
      • The default is False.

    Examples

    from starwhale import dataset
    with dataset("mnist") as ds:
    ds.append({"label": 1})
    ds.commit(message="init commit")

    readonly

    readonly is a property attribute indicating if the dataset is read-only, it returns a bool value.

    from starwhale import dataset
    ds = dataset("mnist", readonly=True)
    assert ds.readonly

    loading_version

    loading_version is a property attribute, string type.

    • When loading an existing dataset, the loading_version is the related dataset version.
    • When creating a non-existed dataset, the loading_version is equal to the pending_commit_version.

    pending_commit_version

    pending_commit_version is a property attribute, string type. When you call the commit function, the pending_commit_version will be recorded in the Standalone, Server, or Cloud instance.

    committed_version

    committed_version is a property attribute, string type. After the commit function is called, the committed_version becomes available and is equal to the pending_commit_version. Accessing this attribute before calling commit will raise an exception.
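
    As an illustrative sketch, the three version attributes relate to each other around a commit as follows (the dataset name is only an example):

    from starwhale import dataset

    ds = dataset("mnist")
    ds.append({"label": 1})
    print(ds.loading_version)          # version being loaded or created
    print(ds.pending_commit_version)   # version that the next commit will produce
    ds.commit()
    # after commit, committed_version equals pending_commit_version
    assert ds.committed_version == ds.pending_commit_version
    ds.close()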

    remove

    remove is a method equivalent to the swcli dataset remove command, it can delete a dataset.

    def remove(self, force: bool = False) -> None:

    recover

    recover is a method equivalent to the swcli dataset recover command, it can recover a soft-deleted dataset that has not yet been garbage collected.

    def recover(self, force: bool = False) -> None:

    summary

    summary is a method equivalent to the swcli dataset summary command, it returns summary information of the dataset.

    def summary(self) -> t.Optional[DatasetSummary]:

    history

    history is a method equivalent to the swcli dataset history command, it returns the history records of the dataset.

    def history(self) -> t.List[t.Dict]:
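
    A minimal sketch that exercises these maintenance methods on an existing local dataset (the dataset name is only an example):

    from starwhale import dataset

    ds = dataset("mnist", create="forbid")
    print(ds.summary())  # summary information of the dataset
    print(ds.history())  # history records of the dataset versions

    # soft-delete the dataset, then restore it before garbage collection runs
    ds.remove()
    ds.recover()
    ds.close()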

    flush

    flush is a method that flushes temporarily cached data from memory to persistent storage. The commit and close methods will automatically call flush.

    close

    close is a method that closes opened connections related to the dataset. Dataset also implements contextmanager, so datasets can be automatically closed using with syntax without needing to explicitly call close.

    from starwhale import dataset

    ds = dataset("mnist")
    ds.close()

    with dataset("mnist") as ds:
    print(ds[0])

    head

    head is a method to show the first n rows of a dataset, equivalent to the swcli dataset head command.

    def head(self, n: int = 5, skip_fetch_data: bool = False) -> List[DataRow]:
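
    A minimal sketch (assuming an existing local "mnist" dataset):

    from starwhale import dataset

    ds = dataset("mnist")
    for row in ds.head(n=3):
        print(row.index, row.features)
    ds.close()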

    fetch_one

    fetch_one is a method to get the first record in a dataset, similar to head(n=1)[0].
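
    A minimal sketch:

    from starwhale import dataset

    ds = dataset("mnist")
    row = ds.fetch_one()  # equivalent to ds.head(n=1)[0]
    print(row.index, row.features)
    ds.close()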

    list

    list is a class method to list Starwhale datasets under a project URI, equivalent to the swcli dataset list command.

    @classmethod
    def list(
    cls,
    project_uri: Union[str, Project] = "",
    fullname: bool = False,
    show_removed: bool = False,
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Dict[str, Any]], Dict[str, Any]]:
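
    A minimal sketch; the remote project URI below is only illustrative, and an empty project URI is assumed to fall back to the currently selected project:

    from starwhale import Dataset

    # list datasets in the currently selected project
    datasets, pagination = Dataset.list()
    for d in datasets:
        print(d)

    # list datasets of a remote project, second page, 10 items per page
    datasets, pagination = Dataset.list("https://server/project/1", page_index=2, page_size=10)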

    copy

    copy is a method to copy a dataset to another instance, equivalent to the swcli dataset copy command.

    def copy(
    self,
    dest_uri: str,
    dest_local_project_uri: str = "",
    force: bool = False,
    mode: str = DatasetChangeMode.PATCH.value,
    ignore_tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • dest_uri: (str, required)
      • Dataset URI
    • dest_local_project_uri: (str, optional)
      • When copying a remote dataset to the local instance, this parameter sets the destination Project URI.
    • force: (bool, optional)
      • Whether to forcibly overwrite the dataset if there is already one with the same version on the target instance.
      • The default value is False.
      • If the tags are already used by another dataset version on the destination instance, you should use the force option or adjust the tags.
    • mode: (str, optional)
      • Dataset copy mode, default is 'patch'. Mode choices are: 'patch', 'overwrite'.
      • patch: Patch mode, only update the changed rows and columns for the remote dataset.
      • overwrite: Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • ignore_tags (List[str], optional)
      • Ignore tags when copying.
      • By default, the dataset is copied with all user custom tags.
      • latest and ^v\d+$ are system built-in tags; they are ignored automatically.

    Examples

    from starwhale import dataset
    ds = dataset("mnist")
    ds.copy("cloud://remote-instance/project/starwhale")

    to_pytorch

    to_pytorch is a method that can convert a Starwhale dataset to a Pytorch torch.utils.data.Dataset, which can then be passed to torch.utils.data.DataLoader for use.

    It should be noted that the to_pytorch function returns a Pytorch IterableDataset.

    def to_pytorch(
    self,
    transform: t.Optional[t.Callable] = None,
    drop_index: bool = True,
    skip_default_transform: bool = False,
    ) -> torch.utils.data.Dataset:

    Parameters

    • transform: (callable, optional)
      • A transform function for input data.
    • drop_index: (bool, optional)
      • Whether to drop the index column.
    • skip_default_transform: (bool, optional)
      • If transform is not set, by default the built-in Starwhale transform function will be used to transform the data. This can be disabled with the skip_default_transform parameter.

    Examples

    import torch.utils.data as tdata
    from starwhale import dataset

    ds = dataset("mnist")

    torch_ds = ds.to_pytorch()
    torch_loader = tdata.DataLoader(torch_ds, batch_size=2)

    import typing as t

    import torch
    import torch.utils.data as tdata
    from starwhale import dataset, Text

    with dataset("mnist") as ds:
        for i in range(0, 10):
            ds.append({"txt": Text(f"data-{i}"), "label": i})

        ds.commit()

    def _custom_transform(data: t.Any) -> t.Any:
        data = data.copy()
        txt = data["txt"].to_str()
        data["txt"] = f"custom-{txt}"
        return data

    torch_loader = tdata.DataLoader(
        dataset(ds.uri).to_pytorch(transform=_custom_transform), batch_size=1
    )
    item = next(iter(torch_loader))
    assert isinstance(item["label"], torch.Tensor)
    assert item["txt"][0] in ("custom-data-0", "custom-data-1")

    to_tensorflow

    to_tensorflow is a method that can convert a Starwhale dataset to a Tensorflow tensorflow.data.Dataset.

    def to_tensorflow(self, drop_index: bool = True) -> tensorflow.data.Dataset:

    Parameters

    • drop_index: (bool, optional)
      • Whether to drop the index column.

    Examples

    from starwhale import dataset
    import tensorflow as tf

    ds = dataset("mnist")
    tf_ds = ds.to_tensorflow(drop_index=True)
    assert isinstance(tf_ds, tf.data.Dataset)

    with_builder_blob_config

    with_builder_blob_config is a method to set blob-related attributes in a Starwhale dataset. It needs to be called before making data changes.

    def with_builder_blob_config(
    self,
    volume_size: int | str | None = D_FILE_VOLUME_SIZE,
    alignment_size: int | str | None = D_ALIGNMENT_SIZE,
    ) -> Dataset:

    Parameters

    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.

    Examples

    from starwhale import dataset, Binary

    ds = dataset("mnist").with_builder_blob_config(volume_size="32M", alignment_size=128)
    ds.append({"data": Binary(b"123")})
    ds.commit()
    ds.close()

    with_loader_config

    with_loader_config is a method to set parameters for the Starwhale dataset loader process.

    def with_loader_config(
    self,
    num_workers: t.Optional[int] = None,
    cache_size: t.Optional[int] = None,
    field_transformer: t.Optional[t.Dict] = None,
    ) -> Dataset:

    Parameters

    • num_workers: (int, optional)
      • The number of workers for loading the dataset.
      • The default value is 2.
    • cache_size: (int, optional)
      • The number of prefetched data rows.
      • The default value is 20.
    • field_transformer: (dict, optional)
      • A dict for renaming the feature fields.

    Examples

    from starwhale import Dataset, dataset
    Dataset.from_json(
    "translation",
    '[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    myds = dataset("translation").with_loader_config(field_transformer={"en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["en"]
    from starwhale import Dataset, dataset
    Dataset.from_json(
    "translation2",
    '[{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}]'
    )
    myds = dataset("translation2").with_loader_config(field_transformer={"content.child_content[0].en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["content"]["child_content"][0]["en"]

    Starwhale Model Evaluation SDK

    @evaluation.predict

    The @evaluation.predict decorator defines the inference process in the Starwhale Model Evaluation, similar to the map phase in MapReduce. It contains the following core features:

    • On the Server instance, request the resources needed to run.
    • Automatically read the local or remote datasets, and pass the data in the datasets one by one or in batches to the function decorated by evaluation.predict.
    • By the replicas setting, implement distributed dataset consumption to horizontally scale and shorten the time required for the model evaluation tasks.
    • Automatically store the return values of the function and the input features of the dataset into the results table, for display in the Web UI and further use in the evaluate phase.
    • The decorated function is called once for each single piece of data or each batch, to complete the inference process.

    Parameters

    • resources: (dict, optional)
      • Defines the resources required by each predict task when running on the Server instance, including mem, cpu, and nvidia.com/gpu.
      • mem: The unit is Bytes, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"mem": {"request": 100 * 1024, "limit": 200 * 1024}}.
        • If only a single number is set, the Python SDK will automatically set request and limit to the same value, e.g. resources={"mem": 100 * 1024} is equivalent to resources={"mem": {"request": 100 * 1024, "limit": 100 * 1024}}.
      • cpu: The unit is the number of CPU cores, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"cpu": {"request": 1, "limit": 2}}.
        • If only a single number is set, the SDK will automatically set request and limit to the same value, e.g. resources={"cpu": 1.5} is equivalent to resources={"cpu": {"request": 1.5, "limit": 1.5}}.
      • nvidia.com/gpu: The unit is the number of GPUs, int type is supported.
        • nvidia.com/gpu does not support setting request and limit, only a single number is supported.
      • Note: The resources parameter currently only takes effect on the Server instances. For the Cloud instances, the same can be achieved by selecting the corresponding resource pool when submitting the evaluation task. Standalone instances do not support this feature at all.
    • replicas: (int, optional)
      • The number of replicas to run predict.
      • predict defines a Step, in which there are multiple equivalent Tasks. Each Task runs on a Pod in Cloud/Server instances, and a Thread in Standalone instances.
      • When multiple replicas are specified, they are equivalent and will jointly consume the selected dataset to achieve distributed dataset consumption. It can be understood that a row in the dataset will only be read by one predict replica.
      • The default is 1.
    • batch_size: (int, optional)
      • Batch size for passing data from the dataset into the function.
      • The default is 1.
    • fail_on_error: (bool, optional)
      • Whether to interrupt the entire model evaluation when the decorated function throws an exception. If you expect some "exceptional" data to cause evaluation failures but don't want to interrupt the overall evaluation, you can set fail_on_error=False.
      • The default is True.
    • auto_log: (bool, optional)
      • Whether to automatically log the return values of the function and the input features of the dataset to the results table.
      • The default is True.
    • log_mode: (str, optional)
      • When auto_log=True, you can set log_mode to define logging the return values in plain or pickle format.
      • The default is pickle.
    • log_dataset_features: (List[str], optional)
      • When auto_log=True, you can selectively log certain features from the dataset via this parameter.
      • By default, all features will be logged.
    • needs: (List[Callable], optional)
      • Defines the prerequisites for this task to run, can use the needs syntax to implement DAG.
      • needs accepts functions decorated by @evaluation.predict, @evaluation.evaluate, and @handler.
      • The default is empty, i.e. does not depend on any other tasks.

    Input

    The decorated function needs to define input parameters to accept the dataset data. The following patterns are supported:

    • data:

      • data is a dict type that can read the features of the dataset.
      • When batch_size=1 or batch_size is not set, the label feature can be read through data['label'] or data.label.
      • When batch_size is set to > 1, data is a list.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data):
          print(data['label'])
          print(data.label)
    • data + external:

      • data is a dict type that can read the features of the dataset.
      • external is also a dict, including: index, index_with_dataset, dataset_info, context and dataset_uri keys. The attributes can be used for the further fine-grained processing.
        • index: The index of the dataset row.
        • index_with_dataset: The index with the dataset info.
        • dataset_info: starwhale.core.dataset.tabular.TabularDatasetInfo Class.
        • context: starwhale.Context Class.
        • dataset_uri: starwhale.base.uri.resource.Resource Class.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, external):
          print(data['label'])
          print(data.label)
          print(external["context"])
          print(external["dataset_uri"])
    • data + **kw:

      • data is a dict type that can read the features of the dataset.
      • kw is a dict that contains external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, **kw):
      print(kw["external"]["context"])
      print(kw["external"]["dataset_uri"])
    • *args + **kwargs:

      • The first argument of args list is data.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args, **kw):
          print(args[0].label)
          print(args[0]["label"])
          print(kw["external"]["context"])
    • **kwargs:

      from starwhale import evaluation

      @evaluation.predict
      def predict(**kw):
      print(kw["data"].label)
      print(kw["data"]["label"])
      print(kw["external"]["context"])
    • *args:

      • *args does not contain external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args):
          print(args[0].label)
          print(args[0]["label"])

    Examples

    from starwhale import evaluation

    @evaluation.predict
    def predict_image(data):
        ...

    @evaluation.predict(
        dataset="mnist/version/latest",
        batch_size=32,
        replicas=4,
        needs=[predict_image],
    )
    def predict_batch_images(batch_data):
        ...

    @evaluation.predict(
        resources={
            "nvidia.com/gpu": 1,
            "cpu": {"request": 1, "limit": 2},
            "mem": 200 * 1024 * 1024,  # 200MB
        },
        log_mode="plain",
    )
    def predict_with_resources(data):
        ...

    @evaluation.predict(
        replicas=1,
        log_mode="plain",
        log_dataset_features=["txt", "img", "label"],
    )
    def predict_with_selected_features(data):
        ...

    @evaluation.evaluate

    @evaluation.evaluate is a decorator that defines the evaluation process in the Starwhale Model evaluation, similar to the reduce phase in MapReduce. It contains the following core features:

    • On the Server instance, request the required resources to run.
    • Read the data recorded in the results table automatically during the predict phase, and pass it into the function as an iterator.
    • The evaluate phase will only run one replica, and cannot define the replicas parameter like the predict phase.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
      • In the common case, it will depend on a function decorated by @evaluation.predict.
    • use_predict_auto_log: (bool, optional)
      • Defaults to True, passes an iterator that can traverse the predict results to the function.

    Input

    • When use_predict_auto_log=True (default), pass an iterator that can traverse the predict results into the function.
      • The iterated object is a dictionary containing two keys: output and input.
        • output is the element returned by the predict stage function.
        • input is the features of the corresponding dataset during the inference process, which is a dictionary type.
    • When use_predict_auto_log=False, do not pass any parameters into the function.

    Examples

    from starwhale import evaluation

    @evaluation.evaluate(needs=[predict_image])
    def evaluate_results(predict_result_iter):
        ...

    @evaluation.evaluate(
        use_predict_auto_log=False,
        needs=[predict_image],
    )
    def evaluate_results():
        ...

    evaluation.log

    evaluation.log is a function that logs certain evaluation metrics to specific tables, which can be viewed in the web UI of the Server/Cloud instance.

    Parameters

    • category: (str, required)
      • The category of the logged record, which will be used as a suffix for the Starwhale Datastore table name.
      • Each category corresponds to a Starwhale Datastore table, with these tables isolated by evaluation task ID without affecting each other.
    • id: (str|int, required)
      • The ID of the logged record, unique within the table.
      • Only one type, either str or int, can be used as ID type in the same table.
    • metrics: (dict, required)
      • A dictionary recording metrics in key-value pairs.

    Examples

    from starwhale import evaluation

    evaluation.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation.log("ppl", "1", {"a": "test", "b": 1})

    evaluation.log_summary

    evaluation.log_summary is a function that logs certain metrics to the summary table. The evaluation page of a Server/Cloud instance displays data from the summary table.

    Each time it is called, Starwhale automatically updates the table using the unique ID of the current evaluation as the row ID. This function can be called multiple times during an evaluation to update different columns.

    Each project has one summary table, and all evaluation jobs under that project will log their summary information into this table.

    @classmethod
    def log_summary(cls, *args: t.Any, **kw: t.Any) -> None:

    Examples

    from starwhale import evaluation

    evaluation.log_summary(loss=0.99)
    evaluation.log_summary(loss=0.99, accuracy=0.99)
    evaluation.log_summary({"loss": 0.99, "accuracy": 0.99})

    evaluation.iter

    evaluation.iter is a function that returns an iterator for reading data iteratively from certain model evaluation tables.

    @classmethod
    def iter(cls, category: str) -> t.Iterator:

    Parameters

    • category: (str, required)
      • This parameter is consistent with the meaning of the category parameter in the evaluation.log function.

    Examples

    from starwhale import evaluation

    results = [data for data in evaluation.iter("label/0")]

    @handler

    @handler is a decorator that provides the following functionalities:

    • On a Server instance, it requests the required resources to run.
    • It can control the number of replicas.
    • Multiple handlers can form a DAG through dependency relationships to control the execution workflow.
    • It can expose ports externally to run like a web handler.

    @fine_tune, @evaluation.predict and @evaluation.evaluate can be considered applications of @handler in certain specific areas. @handler is the underlying implementation of these decorators and is more fundamental and flexible.

    @classmethod
    def handler(
    cls,
    resources: t.Optional[t.Dict[str, t.Any]] = None,
    replicas: int = 1,
    needs: t.Optional[t.List[t.Callable]] = None,
    name: str = "",
    expose: int = 0,
    require_dataset: bool = False,
    ) -> t.Callable:

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
    • replicas: (int, optional)
      • Consistent with the replicas parameter definition in @evaluation.predict.
    • name: (str, optional)
      • The name displayed for the handler.
      • If not specified, use the decorated function's name.
    • expose: (int, optional)
      • The port exposed externally. When running a web handler, the exposed port needs to be declared.
      • The default is 0, meaning no port is exposed.
      • Currently only one port can be exposed.
    • require_dataset: (bool, optional)
      • Defines whether this handler requires a dataset when running.
      • If require_dataset=True, the user is required to input a dataset when creating an evaluation task on the Server/Cloud instance web page. If require_dataset=False, the user does not need to specify a dataset on the web page.
      • The default is False.

    Examples

    from starwhale import handler
    import gradio

    @handler(resources={"cpu": 1, "nvidia.com/gpu": 1}, replicas=3)
    def my_handler():
        ...

    @handler(needs=[my_handler])
    def my_another_handler():
        ...

    @handler(expose=7860)
    def chatbot():
        with gradio.Blocks() as server:
            ...
        server.launch(server_name="0.0.0.0", server_port=7860)

    @fine_tune

    fine_tune is a decorator that defines the fine-tuning process for model training.

    Some restrictions and usage suggestions:

    • fine_tune has only one replica.
    • fine_tune requires dataset input.
    • Generally, the dataset is obtained through Context.get_runtime_context() at the start of fine_tune.
    • Generally, at the end of fine_tune, the fine-tuned Starwhale model package is generated through starwhale.model.build, which will be automatically copied to the corresponding evaluation project.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.

    Examples

    from starwhale import model as starwhale_model
    from starwhale import dataset, fine_tune, Context

    @fine_tune(resources={"nvidia.com/gpu": 1})
    def llama_fine_tuning():
        ctx = Context.get_runtime_context()

        if len(ctx.dataset_uris) == 2:
            # TODO: use more graceful way to get train and eval dataset
            train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
            eval_dataset = dataset(ctx.dataset_uris[1], readonly=True, create="forbid")
        elif len(ctx.dataset_uris) == 1:
            train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
            eval_dataset = None
        else:
            raise ValueError("Only support 1 or 2 datasets(train and eval dataset) for now")

        # user training code: train_llama and get_model_name are user-defined helpers
        train_llama(
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
        )

        model_name = get_model_name()
        starwhale_model.build(name=f"llama-{model_name}-qlora-ft")

    @multi_classification

    The @multi_classification decorator uses the sklearn lib to analyze results for multi-classification problems, outputting the confusion matrix, ROC, AUC etc., and writing them to related tables in the Starwhale Datastore.

    When using it, certain requirements are placed on the return value of the decorated function, which should be (label, result) or (label, result, probability_matrix).

    def multi_classification(
    confusion_matrix_normalize: str = "all",
    show_hamming_loss: bool = True,
    show_cohen_kappa_score: bool = True,
    show_roc_auc: bool = True,
    all_labels: t.Optional[t.List[t.Any]] = None,
    ) -> t.Any:

    Parameters

    • confusion_matrix_normalize: (str, optional)
      • Accepts three parameters:
        • true: rows
        • pred: columns
        • all: rows+columns
    • show_hamming_loss: (bool, optional)
      • Whether to calculate the Hamming loss.
      • The default is True.
    • show_cohen_kappa_score: (bool, optional)
      • Whether to calculate the Cohen kappa score.
      • The default is True.
    • show_roc_auc: (bool, optional)
      • Whether to calculate ROC/AUC. To calculate, the function needs to return a (label, result, probability_matrix) tuple, otherwise a (label, result) tuple is sufficient.
      • The default is True.
    • all_labels: (List, optional)
      • Defines all the labels.

    Examples


    @multi_classification(
        confusion_matrix_normalize="all",
        show_hamming_loss=True,
        show_cohen_kappa_score=True,
        show_roc_auc=True,
        all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
        label, result, probability_matrix = [], [], []
        return label, result, probability_matrix

    @multi_classification(
        confusion_matrix_normalize="all",
        show_hamming_loss=True,
        show_cohen_kappa_score=True,
        show_roc_auc=False,
        all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int]]:
        label, result = [], []
        return label, result

    PipelineHandler

    The PipelineHandler class provides a default model evaluation workflow definition that requires users to implement the predict and evaluate functions.

    The PipelineHandler is equivalent to using the @evaluation.predict and @evaluation.evaluate decorators together - the usage looks different but the underlying model evaluation process is the same.

    Note that PipelineHandler currently does not support defining resources parameters.

    Users need to implement the following functions:

    • predict: Defines the inference process, equivalent to a function decorated with @evaluation.predict.

    • evaluate: Defines the evaluation process, equivalent to a function decorated with @evaluation.evaluate.

    import typing as t
    from typing import Any, Iterator
    from abc import ABCMeta, abstractmethod

    class PipelineHandler(metaclass=ABCMeta):
        def __init__(
            self,
            predict_batch_size: int = 1,
            ignore_error: bool = False,
            predict_auto_log: bool = True,
            predict_log_mode: str = PredictLogMode.PICKLE.value,
            predict_log_dataset_features: t.Optional[t.List[str]] = None,
            **kwargs: t.Any,
        ) -> None:
            self.context = Context.get_runtime_context()
            ...

        def predict(self, data: Any, **kw: Any) -> Any:
            raise NotImplementedError

        def evaluate(self, ppl_result: Iterator) -> Any:
            raise NotImplementedError

    Parameters

    • predict_batch_size: (int, optional)
      • Equivalent to the batch_size parameter in @evaluation.predict.
      • Default is 1.
    • ignore_error: (bool, optional)
      • Equivalent to the fail_on_error parameter in @evaluation.predict.
      • Default is False.
    • predict_auto_log: (bool, optional)
      • Equivalent to the auto_log parameter in @evaluation.predict.
      • Default is True.
    • predict_log_mode: (str, optional)
      • Equivalent to the log_mode parameter in @evaluation.predict.
      • Default is pickle.
    • predict_log_dataset_features: (List[str], optional)
      • Equivalent to the log_dataset_features parameter in @evaluation.predict.
      • Default is None, which records all features.
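
    A minimal sketch of passing these constructor parameters from a subclass; the class name, type hints and feature name are assumptions for illustration:

    import typing as t

    from starwhale import PipelineHandler

    class MyHandler(PipelineHandler):
        def __init__(self) -> None:
            # batch 4 rows per predict call, tolerate per-row errors, log results in plain mode
            super().__init__(
                predict_batch_size=4,
                ignore_error=True,
                predict_log_mode="plain",
                predict_log_dataset_features=["label"],
            )

        def predict(self, data: t.Any) -> t.Any:
            ...

        def evaluate(self, ppl_result: t.Iterator) -> t.Any:
            ...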

    PipelineHandler.run Decorator

    The PipelineHandler.run decorator can be used to describe resources for the predict and evaluate methods, supporting definitions of replicas and resources:

    • The PipelineHandler.run decorator can only decorate predict and evaluate methods in subclasses inheriting from PipelineHandler.
    • The predict method can set the replicas parameter. The replicas value for the evaluate method is always 1.
    • The resources parameter is defined and used in the same way as the resources parameter in @evaluation.predict or @evaluation.evaluate.
    • The PipelineHandler.run decorator is optional.
    • The PipelineHandler.run decorator only takes effect on Server and Cloud instances; it has no effect on Standalone instances, which do not support resource definitions.
    @classmethod
    def run(
    cls, resources: t.Optional[t.Dict[str, t.Any]] = None, replicas: int = 1
    ) -> t.Callable:

    Examples

    import typing as t

    import torch
    from starwhale import Image, PipelineHandler

    class Example(PipelineHandler):
        def __init__(self) -> None:
            super().__init__()
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = self._load_model(self.device)

        @PipelineHandler.run(replicas=4, resources={"memory": 1 * 1024 * 1024 * 1024, "nvidia.com/gpu": 1})  # 1G Memory, 1 GPU
        def predict(self, data: t.Dict):
            data_tensor = self._pre(data.img)
            output = self.model(data_tensor)
            return self._post(output)

        @PipelineHandler.run(resources={"memory": 1 * 1024 * 1024 * 1024})  # 1G Memory
        def evaluate(self, ppl_result):
            result, label, pr = [], [], []
            for _data in ppl_result:
                label.append(_data["input"]["label"])
                result.extend(_data["output"][0])
                pr.extend(_data["output"][1])
            return label, result, pr

        def _pre(self, input: Image) -> torch.Tensor:
            ...

        def _post(self, input):
            ...

        def _load_model(self, device):
            ...

    Context

    The context information passed during model evaluation, including Project, Task ID, etc. The Context content is automatically injected and can be used in the following ways:

    • Inherit the PipelineHandler class and use the self.context object.
    • Get it through Context.get_runtime_context().

    Note that Context can only be used during model evaluation, otherwise the program will throw an exception.

    Currently Context can get the following values:

    • project: str
      • Project name.
    • version: str
      • Unique ID of model evaluation.
    • step: str
      • Step name.
    • total: int
      • Total number of Tasks under the Step.
    • index: int
      • Task index number, starting from 0.
    • dataset_uris: List[str]
      • List of Starwhale dataset URIs.

    Examples


    import typing as t

    from starwhale import Context, PipelineHandler

    def func():
        ctx = Context.get_runtime_context()
        print(ctx.project)
        print(ctx.version)
        print(ctx.step)
        ...

    class Example(PipelineHandler):
        def predict(self, data: t.Dict):
            print(self.context.project)
            print(self.context.version)
            print(self.context.step)

    @starwhale.api.service.api

    @starwhale.api.service.api is a decorator that provides a simple, Gradio-based Web Handler input definition. When a Web Service is launched with the swcli model serve command, the decorated function accepts external requests and returns inference results to the user, enabling online evaluation.

    Examples

    import typing as t

    import gradio
    from starwhale import Image
    from starwhale.api.service import api

    def predict_image(img):
        ...

    @api(gradio.File(), gradio.Label())
    def predict_view(file: t.Any) -> t.Any:
        with open(file.name, "rb") as f:
            data = Image(f.read(), shape=(28, 28, 1))
        _, prob = predict_image({"img": data})
        return {i: p for i, p in enumerate(prob)}

    starwhale.api.service.Service

    If you want to customize the web service implementation, you can subclass Service and override the serve method.

    import typing as t

    from starwhale.api.service import Service

    class CustomService(Service):
        def serve(self, addr: str, port: int, handler_list: t.Optional[t.List[str]] = None) -> None:
            ...

    svc = CustomService()

    @svc.api(...)
    def handler(data):
        ...

    Notes:

    • Handlers added with PipelineHandler.add_api and the api decorator or Service.api can work together
    • If using a custom Service, you need to instantiate the custom Service class in the model

    Custom Request and Response

    Request and Response are handler preprocessing and postprocessing classes for receiving user requests and returning results. They can be simply understood as pre and post logic for the handler.

    Starwhale provides built-in Request implementations for Dataset types and Json Response. Users can also customize the logic as follows:

    import typing as t

    from starwhale.api.service import (
        Request,
        Service,
        Response,
    )

    class CustomInput(Request):
        def load(self, req: t.Any) -> t.Any:
            return req

    class CustomOutput(Response):
        def __init__(self, prefix: str) -> None:
            self.prefix = prefix

        def dump(self, req: str) -> bytes:
            return f"{self.prefix} {req}".encode("utf-8")

    svc = Service()

    @svc.api(request=CustomInput(), response=CustomOutput("hello"))
    def foo(data: t.Any) -> t.Any:
        ...

    Starwhale Task SDK

    job

    Get a starwhale.Job object through the Job URI parameter, which represents a Job on Standalone/Server/Cloud instances.

    @classmethod
    def job(
    cls,
    uri: str,
    ) -> Job:

    Parameters

    • uri: (str, required)
      • Job URI format.

    Usage Example

    from starwhale import job

    # get job object of uri=https://server/job/1
    j1 = job("https://server/job/1")

    # get job from standalone instance
    j2 = job("local/project/self/job/xm5wnup")
    j3 = job("xm5wnup")

    class starwhale.Job

    starwhale.Job abstracts Starwhale Job and enables some information retrieval operations on the job.

    list

    list is a classmethod that can list the jobs under a project.

    @classmethod
    def list(
    cls,
    project: str = "",
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Job], Dict]:

    Parameters

    • project: (str, optional)
      • Project URI, can be projects on Standalone/Server/Cloud instances.
      • If project is not specified, the project selected by swcli project select will be used.
    • page_index: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the page number.
        • Default is 1.
        • Page numbers start from 1.
      • Standalone instances do not support paging. This parameter has no effect.
    • page_size: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the number of jobs returned per page.
        • Default is DEFAULT_PAGE_SIZE.
      • Standalone instances do not support paging. This parameter has no effect.

    Usage Example

    from starwhale import Job

    # list jobs of current selected project
    jobs, pagination_info = Job.list()

    # list jobs of starwhale/public project in the cloud.starwhale.cn instance
    jobs, pagination_info = Job.list("https://cloud.starwhale.cn/project/starwhale:public")

    # list jobs of id=1 project in the server instance, page index is 2, page size is 10
    jobs, pagination_info = Job.list("https://server/project/1", page_index=2, page_size=10)

    get

    get is a classmethod that gets information about a specific job and returns a Starwhale.Job object. It has the same functionality and parameter definitions as the starwhale.job function.

    Usage Example

    from starwhale import Job

    # get job object of uri=https://server/job/1
    j1 = Job.get("https://server/job/1")

    # get job from standalone instance
    j2 = Job.get("local/project/self/job/xm5wnup")
    j3 = Job.get("xm5wnup")

    summary

    summary is a property that returns the data written to the summary table during the job execution, in dict type.

    @property
    def summary(self) -> Dict[str, Any]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.summary)

    tables

    tables is a property that returns the names of tables created during the job execution (not including the summary table, which is created automatically at the project level), in list type.

    @property
    def tables(self) -> List[str]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.tables)

    get_table_rows

    get_table_rows is a method that returns records from a data table according to the table name and other parameters, in iterator type.

    def get_table_rows(
    self,
    name: str,
    start: Any = None,
    end: Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
    ) -> Iterator[Dict[str, Any]]:

    Parameters

    • name: (str, required)
      • Datastore table name. Any of the table names obtained through the tables property can be used.
    • start: (Any, optional)
      • The starting ID value of the returned records.
      • Default is None, meaning start from the beginning of the table.
    • end: (Any, optional)
      • The ending ID value of the returned records.
      • Default is None, meaning until the end of the table.
      • If both start and end are None, all records in the table will be returned as an iterator.
    • keep_none: (bool, optional)
      • Whether to return records with None values.
      • Default is False.
    • end_inclusive: (bool, optional)
      • When end is set, whether the iteration includes the end record.
      • Default is False.

    Usage Example

    from starwhale import job

    j = job("local/project/self/job/xm5wnup")

    table_name = j.tables[0]

    for row in j.get_table_rows(table_name):
        print(row)

    rows = list(j.get_table_rows(table_name, start=0, end=100))

    # return the first record from the results table
    result = list(j.get_table_rows('results', start=0, end=1))[0]

    Starwhale Model SDK

    model.build

    model.build is a function that can build the Starwhale model, equivalent to the swcli model build command.

    def build(
    modules: t.Optional[t.List[t.Any]] = None,
    workdir: t.Optional[_path_T] = None,
    name: t.Optional[str] = None,
    project_uri: str = "",
    desc: str = "",
    remote_project_uri: t.Optional[str] = None,
    add_all: bool = False,
    tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • modules: (List[str|object], optional)
      • The search modules support objects (function, class, or module) or strings (for example: "to.path.module", "to.path.module:object").
      • If the argument is not specified, the search modules are the imported modules.
    • name: (str, optional)
      • Starwhale Model name.
      • The default is the current work dir (cwd) name.
    • workdir: (str, Pathlib.Path, optional)
      • The path of the rootdir. The default workdir is the current working dir.
      • All files in the workdir will be packaged. If you want to ignore some files, you can add .swignore file in the workdir.
    • project_uri: (str, optional)
      • The project uri of the Starwhale Model.
      • If the argument is not specified, the project_uri is the config value of swcli project select command.
    • desc: (str, optional)
      • The description of the Starwhale Model.
    • remote_project_uri: (str, optional)
      • Project URI of another instance. After the Starwhale model is built, it will be automatically copied to the remote instance.
    • add_all: (bool, optional)
      • Add all files in the working directory to the model package. When disabled, Python cache files and virtual environment files are excluded. The .swignore file still takes effect in either case.
      • The default value is False.
    • tags: (List[str], optional)
      • The tags for the model version.
      • latest and ^v\d+$ tags are reserved tags.

    Examples

    from starwhale import model

    # class search handlers
    from .user.code.evaluator import ExamplePipelineHandler
    model.build([ExamplePipelineHandler])

    # function search handlers
    from .user.code.evaluator import predict_image
    model.build([predict_image])

    # module handlers, @handler decorates function in this module
    from .user.code import evaluator
    model.build([evaluator])

    # str search handlers
    model.build(["user.code.evaluator:ExamplePipelineHandler"])
    model.build(["user.code1", "user.code2"])

    # no search handlers, use imported modules
    model.build()

    # add user custom tags
    model.build(tags=["t1", "t2"])

    Other SDK

    __version__

    Version of Starwhale Python SDK and swcli, string constant.

    >>> from starwhale import __version__
    >>> print(__version__)
    0.5.7

    init_logger

    Initialize the Starwhale logger and traceback depth. The default verbose value is 0.

    • 0: show only errors, traceback only shows 1 frame.
    • 1: show errors + warnings, traceback shows 5 frames.
    • 2: show errors + warnings + info, traceback shows 10 frames.
    • 3: show errors + warnings + info + debug, traceback shows 100 frames.
    • >=4: show errors + warnings + info + debug + trace, traceback shows 1000 frames.
    def init_logger(verbose: int = 0) -> None:
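
    Example:

    from starwhale import init_logger

    # show errors, warnings, info and debug messages; traceback shows 100 frames
    init_logger(3)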

    login

    Log in to a server/cloud instance. It is equivalent to running the swcli instance login command. Logging in to a Standalone instance is meaningless.

    def login(
    instance: str,
    alias: str = "",
    username: str = "",
    password: str = "",
    token: str = "",
    ) -> None:

    Parameters

    • instance: (str, required)
      • The http url of the server/cloud instance.
    • alias: (str, optional)
      • An alias for the instance to simplify the instance part of the Starwhale URI.
      • If not specified, the hostname part of the instance http url will be used.
    • username: (str, optional)
    • password: (str, optional)
    • token: (str, optional)
      • You can only choose one of username + password or token to login to the instance.

    Examples

    from starwhale import login

    # login to Starwhale Cloud instance by token
    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")

    # login to Starwhale Server instance by username and password
    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")

    logout

    Log out of a server/cloud instance. It is equivalent to running the swcli instance logout command. Logging out of a Standalone instance is meaningless.

    def logout(instance: str) -> None:

    Examples

    from starwhale import login, logout

    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")
    # logout by the alias
    logout("cloud-cn")

    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")
    # logout by the instance http url
    logout("http://controller.starwhale.svc")

    Python SDK Overview

    Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.

    Classes

    • PipelineHandler: Provides default model evaluation process definition, requires implementation of predict and evaluate methods.
    • Context: Passes context information during model evaluation, including Project, Task ID etc.
    • class Dataset: Starwhale Dataset class.
    • class starwhale.api.service.Service: The base class of online evaluation.
    • class Job: Provides operations for Job.

    Functions

    • @multi_classification: Decorator for multi-class problems to simplify evaluate result calculation and storage for better evaluation presentation.
    • @handler: Decorator to define a running entity with resource attributes (mem/cpu/gpu). You can control replica count. Handlers can form DAGs through dependencies to control execution flow.
    • @evaluation.predict: Decorator to define inference process in model evaluation, similar to map phase in MapReduce.
    • @evaluation.evaluate: Decorator to define evaluation process in model evaluation, similar to reduce phase in MapReduce.
• evaluation.log: Log evaluation metrics to the specified tables.
• evaluation.log_summary: Log certain metrics to the summary table.
• evaluation.iter: Iterate and read data from the specified tables.
• model.build: Build a Starwhale model.
• @fine_tune: Decorator to define the model fine-tuning process.
• init_logger: Set the log level; implements 5 levels of logging.
• dataset: Get a starwhale.Dataset object by creating a new dataset or loading an existing one.
    • @starwhale.api.service.api: Decorator to provide a simple Web Handler input definition based on Gradio.
    • login: Log in to the server/cloud instance.
    • logout: Log out of the server/cloud instance.
    • job: Get starwhale.Job object by the Job URI.
    • @PipelineHandler.run: Decorator to define the resources for the predict and evaluate methods in PipelineHandler subclasses.

    Data Types

    • COCOObjectAnnotation: Provides COCO format definitions.
    • BoundingBox: Bounding box type, currently in LTWH format - left_x, top_y, width and height.
    • ClassLabel: Describes the number and types of labels.
    • Image: Image type.
    • GrayscaleImage: Grayscale image type, e.g. MNIST digit images, a special case of Image type.
    • Audio: Audio type.
    • Video: Video type.
    • Text: Text type, default utf-8 encoding, for storing large texts.
    • Binary: Binary type, stored in bytes, for storing large binary content.
    • Line: Line type.
    • Point: Point type.
    • Polygon: Polygon type.
    • Link: Link type, for creating remote-link data.
    • S3LinkAuth: When data is stored in S3-based object storage, this type describes auth and key info.
    • MIMEType: Describes multimedia types supported by Starwhale, used in mime_type attribute of Image, Video etc for better Dataset Viewer.
    • LinkType: Describes remote link types supported by Starwhale, currently LocalFS and S3.

    Other

    • __version__: Version of Starwhale Python SDK and swcli, string constant.

    Further reading


    Starwhale Data Types

    COCOObjectAnnotation

    It provides definitions following the COCO format.

    COCOObjectAnnotation(
    id: int,
    image_id: int,
    category_id: int,
    segmentation: Union[t.List, t.Dict],
    area: Union[float, int],
    bbox: Union[BoundingBox, t.List[float]],
    iscrowd: int,
    )
Parameters

• id: Object id, usually a globally incrementing id.
• image_id: Image id, usually the id of the image.
• category_id: Category id, usually the id of the class in object detection.
• segmentation: Object contour representation, either Polygon (polygon vertices) or RLE format.
• area: Object area.
• bbox: The bounding box, either a BoundingBox object or a list of floats.
• iscrowd: 0 indicates a single object, 1 indicates two unseparated objects.

    Examples

def _make_coco_annotations(
    self, mask_fpath: Path, image_id: int
) -> t.List[COCOObjectAnnotation]:
    mask_img = PILImage.open(str(mask_fpath))

    mask = np.array(mask_img)
    object_ids = np.unique(mask)[1:]
    binary_mask = mask == object_ids[:, None, None]
    # TODO: tune permute without pytorch
    binary_mask_tensor = torch.as_tensor(binary_mask, dtype=torch.uint8)
    binary_mask_tensor = (
        binary_mask_tensor.permute(0, 2, 1).contiguous().permute(0, 2, 1)
    )

    coco_annotations = []
    for i in range(0, len(object_ids)):
        _pos = np.where(binary_mask[i])
        _xmin, _ymin = float(np.min(_pos[1])), float(np.min(_pos[0]))
        _xmax, _ymax = float(np.max(_pos[1])), float(np.max(_pos[0]))
        _bbox = BoundingBox(
            x=_xmin, y=_ymin, width=_xmax - _xmin, height=_ymax - _ymin
        )

        rle: t.Dict = coco_mask.encode(binary_mask_tensor[i].numpy())  # type: ignore
        rle["counts"] = rle["counts"].decode("utf-8")

        coco_annotations.append(
            COCOObjectAnnotation(
                id=self.object_id,
                image_id=image_id,
                category_id=1,  # PennFudan Dataset only has one class: PASPersonStanding
                segmentation=rle,
                area=_bbox.width * _bbox.height,
                bbox=_bbox,
                iscrowd=0,  # suppose all instances are not crowd
            )
        )
        self.object_id += 1

    return coco_annotations

    GrayscaleImage

    GrayscaleImage provides a grayscale image type. It is a special case of the Image type, for example the digit images in MNIST.

    GrayscaleImage(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
Parameters

• fp: Image path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• shape: Image width and height; the default channel is 1.
• as_mask: Whether the image is used as a mask image.
• mask_uri: URI of the original image for the mask.

    Examples

for i in range(0, min(data_number, label_number)):
    _data = data_file.read(image_size)
    _label = struct.unpack(">B", label_file.read(1))[0]
    yield GrayscaleImage(
        _data,
        display_name=f"{i}",
        shape=(height, width, 1),
    ), {"label": _label}

    GrayscaleImage Functions

GrayscaleImage.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    GrayscaleImage.carry_raw_data

    carry_raw_data() -> GrayscaleImage

    GrayscaleImage.astype

    astype() -> Dict[str, t.Any]

    BoundingBox

    BoundingBox provides a bounding box type, currently in LTWH format:

    • left_x: x-coordinate of left edge
    • top_y: y-coordinate of top edge
    • width: width of bounding box
    • height: height of bounding box

    So it represents the bounding box using the coordinates of its left, top, width and height. This is a common format for specifying bounding boxes in computer vision tasks.

    BoundingBox(
    x: float,
    y: float,
    width: float,
    height: float
    )
Parameters

• x: x-coordinate of the left edge (left_x).
• y: y-coordinate of the top edge (top_y).
• width: Width of the bounding box.
• height: Height of the bounding box.
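
A minimal sketch of attaching a BoundingBox annotation to a dataset row; the dataset name, image file and coordinates are placeholders:

from starwhale import dataset, Image, BoundingBox

with dataset("detection-demo") as ds:
    ds.append({
        "img": Image("a.png"),
        "bbox": BoundingBox(x=10.0, y=20.0, width=100.0, height=50.0),
    })
    ds.commit()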

    ClassLabel

Describes the number and types of labels.

    ClassLabel(
    names: List[Union[int, float, str]]
    )
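
A minimal sketch; the label names are placeholders:

from starwhale import ClassLabel

labels = ClassLabel(names=["cat", "dog", "horse"])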

    Image

    Image Type.

    Image(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    mime_type: Optional[MIMEType] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
Parameters

• fp: Image path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• shape: Image width, height and channels.
• mime_type: MIMEType supported types.
• as_mask: Whether the image is used as a mask image.
• mask_uri: URI of the original image for the mask.

    The main difference from GrayscaleImage is that Image supports multi-channel RGB images by specifying shape as (W, H, C).

    Examples

import io
import typing as t
import pickle
from pathlib import Path
from PIL import Image as PILImage
from starwhale import Image, MIMEType

def _iter_item(paths: t.List[Path]) -> t.Generator[t.Tuple[t.Any, t.Dict], None, None]:
    for path in paths:
        with path.open("rb") as f:
            content = pickle.load(f, encoding="bytes")
            for data, label, filename in zip(
                content[b"data"], content[b"labels"], content[b"filenames"]
            ):
                annotations = {
                    "label": label,
                    "label_display_name": dataset_meta["label_names"][label],
                }

                image_array = data.reshape(3, 32, 32).transpose(1, 2, 0)
                image_bytes = io.BytesIO()
                PILImage.fromarray(image_array).save(image_bytes, format="PNG")

                yield Image(
                    fp=image_bytes.getvalue(),
                    display_name=filename.decode(),
                    shape=image_array.shape,
                    mime_type=MIMEType.PNG,
                ), annotations

    Image Functions

Image.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Image.carry_raw_data

carry_raw_data() -> Image

    Image.astype

    astype() -> Dict[str, t.Any]

    Video

    Video type.

    Video(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
    )
Parameters

• fp: Video path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• mime_type: MIMEType supported types.

    Examples

    import typing as t
    from pathlib import Path

    from starwhale import Video, MIMEType

    root_dir = Path(__file__).parent.parent
    dataset_dir = root_dir / "data" / "UCF-101"
    test_ds_path = [root_dir / "data" / "test_list.txt"]

def iter_ucf_item() -> t.Generator:
    for path in test_ds_path:
        with path.open() as f:
            for line in f.readlines():
                _, label, video_sub_path = line.split()

                data_path = dataset_dir / video_sub_path
                data = Video(
                    data_path,
                    display_name=video_sub_path,
                    shape=(1,),
                    mime_type=MIMEType.WEBM,
                )

                yield f"{label}_{video_sub_path}", {
                    "video": data,
                    "label": label,
                }

    Audio

    Audio type.

    Audio(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
    )
Parameters

• fp: Audio path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• mime_type: MIMEType supported types.

    Examples

import typing as t
from starwhale import Audio, MIMEType

def iter_item() -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
    for path in validation_ds_paths:
        with path.open() as f:
            for item in f.readlines():
                item = item.strip()
                if not item:
                    continue

                data_path = dataset_dir / item
                data = Audio(
                    data_path, display_name=item, shape=(1,), mime_type=MIMEType.WAV
                )

                speaker_id, utterance_num = data_path.stem.split("_nohash_")
                annotations = {
                    "label": data_path.parent.name,
                    "speaker_id": speaker_id,
                    "utterance_num": int(utterance_num),
                }
                yield data, annotations

    Audio Functions

Audio.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Audio.carry_raw_data

    carry_raw_data() -> Audio

    Audio.astype

    astype() -> Dict[str, t.Any]

    Text

Text type; the default encoding is utf-8.

    Text(
    content: str,
    encoding: str = "utf-8",
    )
Parameters

• content: The text content.
• encoding: Encoding format of the text.

    Examples

    import typing as t
    from pathlib import Path
    from starwhale import Text

def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
    root_dir = Path(__file__).parent.parent / "data"

    with (root_dir / "fra-test.txt").open("r") as f:
        for line in f.readlines():
            line = line.strip()
            if not line or line.startswith("CC-BY"):
                continue

            _data, _label, *_ = line.split("\t")
            data = Text(_data, encoding="utf-8")
            annotations = {"label": _label}
            yield data, annotations

    Text Functions

Text.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Text.carry_raw_data

    carry_raw_data() -> Text

    Text.astype

    astype() -> Dict[str, t.Any]

    Text.to_str

    to_str() -> str

    Binary

    Binary provides a binary data type, stored as bytes.

    Binary(
    fp: _TArtifactFP = "",
    mime_type: MIMEType = MIMEType.UNDEFINED,
    )
Parameters

• fp: Path, IO object, or file content bytes.
• mime_type: MIMEType supported types.

    Binary Functions

Binary.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Binary.carry_raw_data

    carry_raw_data() -> Binary

    Binary.astype

    astype() -> Dict[str, t.Any]
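
A minimal sketch of storing raw bytes in a dataset row; the dataset name and payload are placeholders:

from starwhale import dataset, Binary, MIMEType

with dataset("binary-demo") as ds:
    ds.append({"payload": Binary(b"raw-bytes", mime_type=MIMEType.UNDEFINED)})
    ds.commit()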

Link

Link provides a link type to create remote-link datasets in Starwhale.

    Link(
    uri: str,
    auth: Optional[LinkAuth] = DefaultS3LinkAuth,
    offset: int = 0,
    size: int = -1,
    data_type: Optional[BaseArtifact] = None,
    )
Parameters

• uri: URI of the original data; currently supports localFS and S3 protocols.
• auth: Link auth information.
• offset: Data offset relative to the file pointed to by uri.
• size: Data size.
• data_type: Actual data type pointed to by the link; currently supports Binary, Image, Text, Audio and Video.

    Link.astype

    astype() -> Dict[str, t.Any]

    S3LinkAuth

    S3LinkAuth provides authentication and key information when data is stored on S3 protocol based object storage.

    S3LinkAuth(
    name: str = "",
    access_key: str = "",
    secret: str = "",
    endpoint: str = "",
    region: str = "local",
    )
Parameters

• name: Name of the auth.
• access_key: Access key for the S3 connection.
• secret: Secret for the S3 connection.
• endpoint: Endpoint URL for the S3 connection.
• region: S3 region where the bucket is located; the default is local.

    Examples

    import struct
    import typing as t
    from pathlib import Path

from starwhale import (
    Link,
    S3LinkAuth,
    GrayscaleImage,
    UserRawBuildExecutor,
)

class LinkRawDatasetProcessExecutor(UserRawBuildExecutor):
    _auth = S3LinkAuth(name="mnist", access_key="minioadmin", secret="minioadmin")
    _endpoint = "10.131.0.1:9000"
    _bucket = "users"

    def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        root_dir = Path(__file__).parent.parent / "data"

        with (root_dir / "t10k-labels-idx1-ubyte").open("rb") as label_file:
            _, label_number = struct.unpack(">II", label_file.read(8))

            offset = 16
            image_size = 28 * 28

            uri = f"s3://{self._endpoint}/{self._bucket}/dataset/mnist/t10k-images-idx3-ubyte"
            for i in range(label_number):
                _data = Link(
                    f"{uri}",
                    self._auth,
                    offset=offset,
                    size=image_size,
                    data_type=GrayscaleImage(display_name=f"{i}", shape=(28, 28, 1)),
                )
                _label = struct.unpack(">B", label_file.read(1))[0]
                yield _data, {"label": _label}
                offset += image_size

    MIMEType

    MIMEType describes the multimedia types supported by Starwhale, implemented using Python Enum. It is used in the mime_type attribute of Image, Video etc to enable better Dataset Viewer support.

class MIMEType(Enum):
    PNG = "image/png"
    JPEG = "image/jpeg"
    WEBP = "image/webp"
    SVG = "image/svg+xml"
    GIF = "image/gif"
    APNG = "image/apng"
    AVIF = "image/avif"
    PPM = "image/x-portable-pixmap"
    MP4 = "video/mp4"
    AVI = "video/avi"
    WEBM = "video/webm"
    WAV = "audio/wav"
    MP3 = "audio/mp3"
    PLAIN = "text/plain"
    CSV = "text/csv"
    HTML = "text/html"
    GRAYSCALE = "x/grayscale"
    UNDEFINED = "x/undefined"

    LinkType

    LinkType describes the remote link types supported by Starwhale, also implemented using Python Enum. Currently supports LocalFS and S3 types.

class LinkType(Enum):
    LocalFS = "local_fs"
    S3 = "s3"
    UNDEFINED = "undefined"

    Line

from starwhale import dataset, Point, Line

with dataset("collections") as ds:
    line_points = [
        Point(x=0.0, y=1.0),
        Point(x=0.0, y=100.0),
    ]
    ds.append({"line": line_points})
    ds.commit()

    Point

from starwhale import dataset, Point

with dataset("collections") as ds:
    ds.append(Point(x=0.0, y=100.0))
    ds.commit()

    Polygon

from starwhale import dataset, Point, Polygon

with dataset("collections") as ds:
    polygon_points = [
        Point(x=0.0, y=1.0),
        Point(x=0.0, y=100.0),
        Point(x=2.0, y=1.0),
        Point(x=2.0, y=100.0),
    ]
    ds.append({"polygon": polygon_points})
    ds.commit()

    swcli dataset

    Overview

    swcli [GLOBAL OPTIONS] dataset [OPTIONS] <SUBCOMMAND> [ARGS]...

    The dataset command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • head
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • summary
    • tag

    swcli dataset build

    swcli [GLOBAL OPTIONS] dataset build [OPTIONS]

Build a Starwhale Dataset. This command only supports building datasets on the Standalone instance.

    Options

• Data source options (all optional, String type):
  • -if or --image or --image-folder: Build the dataset from an image folder; the folder should contain the image files.
  • -af or --audio or --audio-folder: Build the dataset from an audio folder; the folder should contain the audio files.
  • -vf or --video or --video-folder: Build the dataset from a video folder; the folder should contain the video files.
  • -h or --handler or --python-handler: Build the dataset from a Python executor handler; the handler format is [module path]:[class or func name].
  • -f or --yaml or --dataset-yaml: Build the dataset from a dataset.yaml file. Defaults to the dataset.yaml in the work directory (cwd).
  • -jf or --json: Build the dataset from a json or jsonl file; the option value is a json file path or an http download url. The json content structure should be a list[dict] or tuple[dict].
  • -hf or --huggingface: Build the dataset from a huggingface dataset; the option value is a huggingface repo name.
  • -c or --csv: Build the dataset from csv files. The option value is a csv file path, a dir path or an http download url. The option can be used multiple times.

Data source options are mutually exclusive; only one of them is accepted. If none is set, the swcli dataset build command uses the dataset yaml mode and builds the dataset from the dataset.yaml in the cwd.

    • Other options:
  • -pt or --patch: (Boolean, default: True; mutually exclusive with --overwrite) Patch mode, only update the changed rows and columns of the existing dataset.
  • -ow or --overwrite: (Boolean, default: False; mutually exclusive with --patch) Overwrite mode, update records and delete extraneous rows from the existing dataset.
  • -n or --name: (String) Dataset name.
  • -p or --project: (String, default: the currently selected project) Project URI; the dataset will be stored in the specified project.
  • -d or --desc: (String) Dataset description.
  • -as or --alignment-size: (String, default: 128B) swds-bin format dataset: alignment size.
  • -vs or --volume-size: (String, default: 64MB) swds-bin format dataset: volume size.
  • -r or --runtime: (String) Runtime URI.
  • -w or --workdir: (String, Python Handler Mode, default: cwd) Work dir to search handlers.
  • --auto-label/--no-auto-label: (Boolean, Image/Video/Audio Folder Mode, default: True) Whether to auto-label by the sub-folder name.
  • --field-selector: (String, JSON File Mode) The field from which to extract dataset array items. The field path is split by the dot (.) symbol.
  • --subset: (String, Huggingface Mode) Huggingface dataset subset name. If not specified, all subsets will be built.
  • --split: (String, Huggingface Mode) Huggingface dataset split name. If not specified, all splits will be built.
  • --revision: (String, Huggingface Mode, default: main) Version of the dataset script to load. The option value accepts a tag name, branch name, or commit hash.
  • --add-hf-info/--no-add-hf-info: (Boolean, Huggingface Mode, default: True) Whether to add huggingface dataset info to the dataset rows; currently the subset and split are added, using the _hf_subset and _hf_split field names.
  • --cache/--no-cache: (Boolean, Huggingface Mode, default: True) Whether to use the huggingface dataset cache (download + local hf dataset).
  • -t or --tag: (String) Dataset tags; the option can be used multiple times.
  • --encoding: (String, CSV/JSON/JSONL Mode) File encoding.
  • --dialect: (String, CSV Mode, default: excel) The csv file dialect. Currently supports excel, excel-tab and unix formats.
  • --delimiter: (String, CSV Mode, default: ,) A one-character string used to separate fields in the csv file.
  • --quotechar: (String, CSV Mode, default: ") A one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or fields which contain new-line characters.
  • --skipinitialspace/--no-skipinitialspace: (Boolean, CSV Mode, default: False) Whether to skip spaces after the delimiter in the csv file.
  • --strict/--no-strict: (Boolean, CSV Mode, default: False) When True, raise an exception if the csv is not well formed.

    Examples for dataset building

    #- from dataset.yaml
    swcli dataset build # build dataset from dataset.yaml in the current work directory(pwd)
swcli dataset build --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml; all the involved files are resolved relative to the dataset.yaml file.
swcli dataset build --overwrite --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, and overwrite the existing dataset.
    swcli dataset build --tag tag1 --tag tag2

    #- from handler
    swcli dataset build --handler mnist.dataset:iter_mnist_item # build dataset from mnist.dataset:iter_mnist_item handler, the workdir is the current work directory(pwd).
    # build dataset from mnist.dataset:LinkRawDatasetProcessExecutor handler, the workdir is example/mnist
    swcli dataset build --handler mnist.dataset:LinkRawDatasetProcessExecutor --workdir example/mnist

    #- from image folder
    swcli dataset build --image-folder /path/to/image/folder # build dataset from /path/to/image/folder, search all image type files.

    #- from audio folder
    swcli dataset build --audio-folder /path/to/audio/folder # build dataset from /path/to/audio/folder, search all audio type files.

    #- from video folder
    swcli dataset build --video-folder /path/to/video/folder # build dataset from /path/to/video/folder, search all video type files.

    #- from json/jsonl file
    swcli dataset build --json /path/to/example.json
    swcli dataset build --json http://example.com/example.json
    swcli dataset build --json /path/to/example.json --field-selector a.b.c # extract the json_content["a"]["b"]["c"] field from the json file.
    swcli dataset build --name qald9 --json https://raw.githubusercontent.com/ag-sc/QALD/master/9/data/qald-9-test-multilingual.json --field-selector questions
    swcli dataset build --json /path/to/test01.jsonl --json /path/to/test02.jsonl
    swcli dataset build --json https://modelscope.cn/api/v1/datasets/damo/100PoisonMpts/repo\?Revision\=master\&FilePath\=train.jsonl

    #- from huggingface dataset
    swcli dataset build --huggingface mnist
    swcli dataset build -hf mnist --no-cache
    swcli dataset build -hf cais/mmlu --subset anatomy --split auxiliary_train --revision 7456cfb

    #- from csv files
    swcli dataset build --csv /path/to/example.csv
    swcli dataset build --csv /path/to/example.csv --csv-file /path/to/example2.csv
    swcli dataset build --csv /path/to/csv-dir
    swcli dataset build --csv http://example.com/example.csv
    swcli dataset build --name product-desc-modelscope --csv https://modelscope.cn/api/v1/datasets/lcl193798/product_description_generation/repo\?Revision\=master\&FilePath\=test.csv --encoding=utf-8-sig

    swcli dataset copy

    swcli [GLOBAL OPTIONS] dataset copy [OPTIONS] <SRC> <DEST>

    dataset copy copies from SRC to DEST.

    SRC and DEST are both dataset URIs.

When copying a Starwhale Dataset, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, latest and ^v\d+$ are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

• --force or -f: (Boolean, default: False) If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update the tags to this version.
• -p or --patch: (Boolean, default: True; mutually exclusive with --overwrite) Patch mode, only update the changed rows and columns of the remote dataset.
• -o or --overwrite: (Boolean, default: False; mutually exclusive with --patch) Overwrite mode, update records and delete extraneous rows from the remote dataset.
• -i or --ignore-tag: (String) Tags to ignore when copying. The option can be used multiple times.

    Examples for dataset copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a new dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp --patch cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with a dataset name 'mnist-local'
    swcli dataset cp --overwrite cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with a new dataset name 'mnist-cloud'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli dataset cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp local/project/myproject/dataset/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli dataset cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1 --force

    swcli dataset diff

    swcli [GLOBAL OPTIONS] dataset diff [OPTIONS] <DATASET VERSION> <DATASET VERSION>

    dataset diff compares the difference between two versions of the same dataset.

    DATASET VERSION is a dataset URI.

• --show-details: (Boolean, default: False) If true, outputs detailed information.
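
An illustrative invocation (the dataset name and version names are placeholders):

#- compare two versions of the mnist dataset and show the details
swcli dataset diff mnist/version/v0 mnist/version/v1 --show-details
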
swcli dataset head

swcli [GLOBAL OPTIONS] dataset head [OPTIONS] <DATASET VERSION>

    Print the first n rows of the dataset. DATASET VERSION is a dataset URI.

• -n or --rows: (Int, default: 5) Print the first NUM rows of the dataset.
• -srd or --show-raw-data: (Boolean, default: False) Fetch raw data content from the object store.
• -st or --show-types: (Boolean, default: False) Show data types.

    Examples for dataset head

    #- print the first 5 rows of the mnist dataset
    swcli dataset head -n 5 mnist

    #- print the first 10 rows of the mnist(v0 version) dataset and show raw data
    swcli dataset head -n 10 mnist/v0 --show-raw-data

    #- print the data types of the mnist dataset
    swcli dataset head mnist --show-types

    #- print the remote cloud dataset's first 5 rows
    swcli dataset head cloud://cloud-cn/project/test/dataset/mnist -n 5

    #- print the first 5 rows in the json format
    swcli -o json dataset head -n 5 mnist

    swcli dataset history

    swcli [GLOBAL OPTIONS] dataset history [OPTIONS] <DATASET>

    dataset history outputs all history versions of the specified Starwhale Dataset.

    DATASET is a dataset URI.

• --fullname: (Boolean, default: False) Show the full version name. Only the first 12 characters are shown if this option is false.
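
An illustrative invocation (the dataset name is a placeholder):

#- show all history versions of the mnist dataset with full version names
swcli dataset history mnist --fullname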

    swcli dataset info

    swcli [GLOBAL OPTIONS] dataset info [OPTIONS] <DATASET>

    dataset info outputs detailed information about the specified Starwhale Dataset version.

    DATASET is a dataset URI.
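
An illustrative invocation (the dataset name is a placeholder):

#- show detailed info of the latest version of the mnist dataset
swcli dataset info mnist/version/latest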

    swcli dataset list

    swcli [GLOBAL OPTIONS] dataset list [OPTIONS]

    dataset list shows all Starwhale Datasets.

• --project: (String) The URI of the project to list. Use the default project if not specified.
• --fullname: (Boolean, default: False) Show the full version name. Only the first 12 characters are shown if this option is false.
• --show-removed or -sr: (Boolean, default: False) If true, include datasets that are removed but not garbage collected.
• --page: (Integer, default: 1) The starting page number. Server and cloud instances only.
• --size: (Integer, default: 20) The number of items in one page. Server and cloud instances only.
• --filter or -fl: (String) Show only Starwhale Datasets that match the specified filters. This option can be used multiple times in one command.
  • name (Key-Value): The name prefix of datasets, e.g. --filter name=mnist
  • owner (Key-Value): The dataset owner name, e.g. --filter owner=starwhale
  • latest (Flag): If specified, only the latest version is shown, e.g. --filter latest
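
Illustrative invocations (the project and dataset names are placeholders):

#- list the latest versions of datasets whose names start with mnist
swcli dataset list --filter name=mnist --filter latest

#- list datasets of the self project, including removed ones
swcli dataset list --project self --show-removed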

    swcli dataset recover

    swcli [GLOBAL OPTIONS] dataset recover [OPTIONS] <DATASET>

    dataset recover recovers previously removed Starwhale Datasets or versions.

    DATASET is a dataset URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Datasets or versions can not be recovered, nor can those removed with the --force option.

• --force or -f: (Boolean, default: False) If true, overwrite the Starwhale Dataset or version with the same name or version id.

    swcli dataset remove

    swcli [GLOBAL OPTIONS] dataset remove [OPTIONS] <DATASET>

    dataset remove removes the specified Starwhale Dataset or version.

    DATASET is a dataset URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Datasets or versions can be recovered by swcli dataset recover before garbage collection. Use the --force option to persistently remove a Starwhale Dataset or version.

    Removed Starwhale Datasets or versions can be listed by swcli dataset list --show-removed.

• --force or -f: (Boolean, default: False) If true, persistently delete the Starwhale Dataset or version. It can not be recovered.
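
Illustrative invocations (the dataset name is a placeholder):

#- remove the latest version of the mnist dataset, then recover it
swcli dataset remove mnist/version/latest
swcli dataset recover mnist/version/latest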

    swcli dataset summary

    swcli [GLOBAL OPTIONS]  dataset summary <DATASET>

    Show dataset summary. DATASET is a dataset URI.

    swcli dataset tag

    swcli [GLOBAL OPTIONS] dataset tag [OPTIONS] <DATASET> [TAGS]...

dataset tag attaches a tag to a specified Starwhale Dataset version. The tag command also supports listing and removing tags. A tag can be used in a dataset URI instead of the version id.

    DATASET is a dataset URI.

    Each dataset version can have any number of tags, but duplicated tag names are not allowed in the same dataset.

    dataset tag only works for the Standalone Instance.

• --remove or -r: (Boolean, default: False) Remove the tag if true.
• --quiet or -q: (Boolean, default: False) Ignore errors, for example, removing tags that do not exist.
• --force-add or -f: (Boolean, default: False) When adding tags on server/cloud instances, an error is raised if the tag is already used by another dataset version. In this case, you can force the update with the --force-add option.

    Examples for dataset tag

    #- list tags of the mnist dataset
    swcli dataset tag mnist

    #- add tags for the mnist dataset
    swcli dataset tag mnist t1 t2
swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist/version/latest t1 --force-add
    swcli dataset tag mnist t1 --quiet

    #- remove tags for the mnist dataset
    swcli dataset tag mnist -r t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist --remove t1

    Overview

    Usage

    swcli [OPTIONS] <COMMAND> [ARGS]...
    note

    sw and starwhale are aliases for swcli.

    Global Options

• --version: Show the Starwhale Client version.
• -v or --verbose: Show verbose logs; the flag can be repeated (-v, -vv, ...), and more -v flags produce more logs.
• --help: Show the help message.
    caution

    Global options must be put immediately after swcli, and before any command.
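
For example (illustrative), the verbose flag goes right after swcli and before the subcommand:

swcli -vvv dataset list
swcli --version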

    Commands


    swcli instance

    Overview

    swcli [GLOBAL OPTIONS] instance [OPTIONS] <SUBCOMMAND> [ARGS]

    The instance command includes the following subcommands:

    • info
    • list (ls)
    • login
    • logout
    • use (select)

    swcli instance info

    swcli [GLOBAL OPTIONS] instance info [OPTIONS] <INSTANCE>

    instance info outputs detailed information about the specified Starwhale Instance.

    INSTANCE is an instance URI.

    swcli instance list

    swcli [GLOBAL OPTIONS] instance list [OPTIONS]

    instance list shows all Starwhale Instances.

    swcli instance login

    swcli [GLOBAL OPTIONS] instance login [OPTIONS] <INSTANCE>

    instance login connects to a Server/Cloud instance and makes the specified instance default.

    INSTANCE is an instance URI.

• --username: (String, optional) The login username.
• --password: (String, optional) The login password.
• --token: (String, optional) The login token.
• --alias: (String, required) The alias of the instance. You can use it anywhere that requires an instance URI.

    --username and --password can not be used together with --token.
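
Illustrative invocations; the urls, credentials and aliases are placeholders:

#- login by username and password
swcli instance login --username starwhale --password abcd1234 --alias dev http://controller.starwhale.svc

#- login by token
swcli instance login --token xxx --alias cloud-cn https://cloud.starwhale.cn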

    swcli instance logout

    swcli [GLOBAL OPTIONS] instance logout [INSTANCE]

    instance logout disconnects from the Server/Cloud instance, and clears information stored in the local storage.

INSTANCE is an instance URI. If it is omitted, the default instance is used instead.

    swcli instance use

    swcli [GLOBAL OPTIONS] instance use <INSTANCE>

instance use makes the specified instance the default.

    INSTANCE is an instance URI.
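
An illustrative invocation (the alias is a placeholder):

swcli instance use dev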


    swcli job

    Overview

    swcli [GLOBAL OPTIONS] job [OPTIONS] <SUBCOMMAND> [ARGS]...

    The job command includes the following subcommands:

    • cancel
    • info
    • list(ls)
    • pause
    • recover
    • remove(rm)
    • resume

    swcli job cancel

    swcli [GLOBAL OPTIONS] job cancel [OPTIONS] <JOB>

    job cancel stops the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

• --force or -f: (Boolean, default: False) If true, kill the Starwhale Job by force.
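
An illustrative invocation; the job id below is a placeholder, and a full job URI also works:

swcli job cancel mezdayjzge3t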

    swcli job info

    swcli [GLOBAL OPTIONS] job info [OPTIONS] <JOB>

    job info outputs detailed information about the specified Starwhale Job.

    JOB is a job URI.

    swcli job list

    swcli [GLOBAL OPTIONS] job list [OPTIONS]

    job list shows all Starwhale Jobs.

• --project: (String) The URI of the project to list. Use the default project if not specified.
• --show-removed or -sr: (Boolean, default: False) If true, include jobs that are removed but not garbage collected.
• --page: (Integer, default: 1) The starting page number. Server and cloud instances only.
• --size: (Integer, default: 20) The number of items in one page. Server and cloud instances only.

    swcli job pause

    swcli [GLOBAL OPTIONS] job pause [OPTIONS] <JOB>

    job pause pauses the specified job. Paused jobs can be resumed by job resume. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

From Starwhale's perspective, pause is almost the same as cancel, except that the job reuses the old job id when resumed. It is the job developer's responsibility to save all data periodically and load it when resumed. The job id is usually used as the key of the checkpoint.

• --force or -f: (Boolean, default: False) If true, kill the Starwhale Job by force.

    swcli job resume

    swcli [GLOBAL OPTIONS] job resume [OPTIONS] <JOB>

    job resume resumes the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.


    swcli model

    Overview

    swcli [GLOBAL OPTIONS] model [OPTIONS] <SUBCOMMAND> [ARGS]...

    The model command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • run
    • serve
    • tag

    swcli model build

    swcli [GLOBAL OPTIONS] model build [OPTIONS] <WORKDIR>

    model build will put the whole WORKDIR into the model, except files that match patterns defined in .swignore.

    model build will import modules specified by --module to generate the required configurations to run the model. If your module depends on third-party libraries, we strongly recommend you use the --runtime option; otherwise, you need to ensure that the python environment used by swcli has these libraries installed.

• --project or -p: (String, default: the default project) The project URI.
• --model-yaml or -f: (String, default: ${workdir}/model.yaml) The model yaml path. model.yaml is optional for model build.
• --module or -m: (String) Python modules to be imported during the build process. Starwhale will export model handlers from these modules to the model package. This option can be set multiple times.
• --runtime or -r: (String) The URI of the Starwhale Runtime to use when running this command. If this option is used, the command runs in an independent Python environment specified by the Starwhale Runtime; otherwise, it runs directly in swcli's current Python environment.
• --name or -n: (String) Model package name.
• --desc or -d: (String) Model package description.
• --package-runtime/--no-package-runtime: (Boolean, default: True) When using the --runtime parameter, the corresponding Starwhale Runtime becomes the built-in runtime of the Starwhale Model by default. This feature can be disabled with --no-package-runtime.
• --add-all: (Boolean, default: False) Add all files in the working directory to the model package (Python cache files and virtual environment files are excluded when this option is disabled). The .swignore file still takes effect.
• -t or --tag: (String) Model tags; the option can be used multiple times.

    Examples for model build

    # build by the model.yaml in current directory and model package will package all the files from the current directory.
    swcli model build .
    # search model run decorators from mnist.evaluate, mnist.train and mnist.predict modules, then package all the files from the current directory to model package.
    swcli model build . --module mnist.evaluate --module mnist.train --module mnist.predict
    # build model package in the Starwhale Runtime environment.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1
    # forbid to package Starwhale Runtime into the model.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1 --no-package-runtime
    # build model package with tags.
    swcli model build . --tag tag1 --tag tag2

    swcli model copy

    swcli [GLOBAL OPTIONS] model copy [OPTIONS] <SRC> <DEST>

    model copy copies from SRC to DEST for Starwhale Model sharing.

    SRC and DEST are both model URIs.

When copying a Starwhale Model, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, latest and ^v\d+$ are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

• --force or -f: (Boolean, default: False) If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update the tags to this version.
• -i or --ignore-tag: (String) Tags to ignore when copying. The option can be used multiple times.

    Examples for model copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a new model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with a new model name 'mnist-cloud'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli model cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp local/project/myproject/model/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli model cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli model diff

    swcli [GLOBAL OPTIONS] model diff [OPTIONS] <MODEL VERSION> <MODEL VERSION>

    model diff compares the difference between two versions of the same model.

    MODEL VERSION is a model URI.

• --show-details: (Boolean, default: False) If true, outputs detailed information.
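
An illustrative invocation (the model name and versions are placeholders):

swcli model diff mnist/version/v0 mnist/version/v1 --show-details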

    swcli model extract

    swcli [GLOBAL OPTIONS] model extract [OPTIONS] <MODEL> <TARGET_DIR>

    The model extract command can extract a Starwhale model to a specified directory for further customization.

    MODEL is a model URI.

• --force or -f: (Boolean, default: False) If used, forcibly overwrite existing extracted model files in the target directory.

    Examples for model extract

    #- extract mnist model package to current directory
    swcli model extract mnist/version/xxxx .

    #- extract mnist model package to current directory and force to overwrite the files
    swcli model extract mnist/version/xxxx . -f

    swcli model history

    swcli [GLOBAL OPTIONS] model history [OPTIONS] <MODEL>

    model history outputs all history versions of the specified Starwhale Model.

    MODEL is a model URI.

• --fullname: (Boolean, default: False) Show the full version name. Only the first 12 characters are shown if this option is false.
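
An illustrative invocation (the model name is a placeholder):

swcli model history mnist --fullname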

    swcli model info

    swcli [GLOBAL OPTIONS] model info [OPTIONS] <MODEL>

    model info outputs detailed information about the specified Starwhale Model version.

    MODEL is a model URI.

• --output-filter or -of: (Choice of [basic/model_yaml/manifest/files/handlers/all], default: basic) Filter the output content. Only the Standalone instance supports this option.

    Examples for model info

    swcli model info mnist # show basic info from the latest version of model
    swcli model info mnist/version/v0 # show basic info from the v0 version of model
    swcli model info mnist/version/latest --output-filter=all # show all info
    swcli model info mnist -of basic # show basic info
    swcli model info mnist -of model_yaml # show model.yaml
    swcli model info mnist -of handlers # show model runnable handlers info
    swcli model info mnist -of files # show model package files tree
    swcli -o json model info mnist -of all # show all info in json format

    swcli model list

    swcli [GLOBAL OPTIONS] model list [OPTIONS]

    model list shows all Starwhale Models.

• --project: (String) The URI of the project to list. Use the default project if not specified.
• --fullname: (Boolean, default: False) Show the full version name. Only the first 12 characters are shown if this option is false.
• --show-removed: (Boolean, default: False) If true, include packages that are removed but not garbage collected.
• --page: (Integer, default: 1) The starting page number. Server and cloud instances only.
• --size: (Integer, default: 20) The number of items in one page. Server and cloud instances only.
• --filter or -fl: (String) Show only Starwhale Models that match the specified filters. This option can be used multiple times in one command.
  • name (Key-Value): The name prefix of models, e.g. --filter name=mnist
  • owner (Key-Value): The model owner name, e.g. --filter owner=starwhale
  • latest (Flag): If specified, only the latest version is shown, e.g. --filter latest

    swcli model recover

    swcli [GLOBAL OPTIONS] model recover [OPTIONS] <MODEL>

    model recover recovers previously removed Starwhale Models or versions.

    MODEL is a model URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Models or versions can not be recovered, nor can those removed with the --force option.

• --force or -f: (Boolean, default: False) If true, overwrite the Starwhale Model or version with the same name or version id.

    swcli model remove

    swcli [GLOBAL OPTIONS] model remove [OPTIONS] <MODEL>

    model remove removes the specified Starwhale Model or version.

    MODEL is a model URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Models or versions can be recovered by swcli model recover before garbage collection. Use the --force option to persistently remove a Starwhale Model or version.

    Removed Starwhale Models or versions can be listed by swcli model list --show-removed.

• --force or -f: (Boolean, default: False) If true, persistently delete the Starwhale Model or version. It can not be recovered.

    swcli model run

    swcli [GLOBAL OPTIONS] model run [OPTIONS]

model run executes a model handler. model run supports two modes: model URI and local development. Model URI mode needs a pre-built Starwhale Model Package. Local development mode only needs the model source directory.

• --workdir or -w: (String) For local development mode, the path of the model source directory.
• --uri or -u: (String) For model URI mode, the model URI string.
• --handler or -h: (String) Runnable handler index or name; the default is None, which uses the first handler.
• --module or -m: (String) The name of the Python module to import. This option can be set multiple times.
• --runtime or -r: (String) The Starwhale Runtime URI to use when running this command. If this option is used, the command runs in an independent Python environment specified by the Starwhale Runtime; otherwise, it runs directly in swcli's current Python environment.
• --model-yaml or -f: (String, default: ${MODEL_DIR}/model.yaml) The path to model.yaml. model.yaml is optional for model run.
• --run-project or -p: (String, default: the default project) Project URI; the model run results are stored in the corresponding project.
• --dataset or -d: (String) Dataset URI, the Starwhale dataset required for the model run. This option can be set multiple times.
• --in-container: (Boolean, default: False) Use a docker container to run the model. This option is only available for Standalone instances. For Server and Cloud instances, a docker image is always used. If the runtime is a docker image, this option is always implied.
• --forbid-snapshot or -fs: (Boolean, default: False) In model URI mode, each model run uses a new snapshot directory. Setting this option will use the model's workdir directly as the run directory. In local dev mode, this option does not take effect; each run uses the --workdir specified directory.
• -- --user-arbitrary-args: (String) Specify the args you defined in your handlers.

    Examples for model run

    # --> run by model uri
    # run the first handler from model uri
    swcli model run -u mnist/version/latest
    # run index id(1) handler from model uri
    swcli model run --uri mnist/version/latest --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from model uri
    swcli model run --uri mnist/version/latest --handler mnist.evaluator:MNISTInference.cmp

    # --> run by the working directory, which does not build model package yet. Make local debug happy.
    # run the first handler from the working directory, use the model.yaml in the working directory
    swcli model run -w .
    # run index id(1) handler from the working directory, search mnist.evaluator module and model.yaml handlers(if existed) to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from the working directory, search mnist.evaluator module to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler mnist.evaluator:MNISTInference.cmp
    # run the f handler in th.py from the working directory with the args defined in th:f
    # @handler()
    # def f(
    # x=ListInput(IntInput()),
    # y=2,
    # mi=MyInput(),
    # ds=DatasetInput(required=True),
    # ctx=ContextInput(),
    # )
    swcli model run -w . -m th --handler th:f -- -x 2 -x=1 --mi=blab-la --ds mnist

    swcli model serve


    swcli [GLOBAL OPTIONS] model serve [OPTIONS]

The model serve command can run the model as a web server and provides a simple web interaction interface.

• --workdir or -w: (String) In local dev mode, specify the directory of the model code.
• --uri or -u: (String) In model URI mode, specify the model URI.
• --runtime or -r: (String) The URI of the Starwhale Runtime to use when running this command. If specified, the command runs in the isolated Python environment defined by the Starwhale Runtime; otherwise it runs directly in the current Python environment of swcli.
• --model-yaml or -f: (String, default: ${MODEL_DIR}/model.yaml) The path to model.yaml. model.yaml is optional for model serve.
• --module or -m: (String) Name of the Python module to import. This option can be set multiple times.
• --host: (String, default: 127.0.0.1) The address for the service to listen on.
• --port: (Integer, default: 8080) The port for the service to listen on.

    Examples for model serve

    swcli model serve -u mnist
    swcli model serve --uri mnist/version/latest --runtime pytorch/version/latest

    swcli model serve --workdir . --runtime pytorch/version/v0
    swcli model serve --workdir . --runtime pytorch/version/v1 --host 0.0.0.0 --port 8080
    swcli model serve --workdir . --runtime pytorch --module mnist.evaluator

    swcli model tag

    swcli [GLOBAL OPTIONS] model tag [OPTIONS] <MODEL> [TAGS]...

model tag attaches a tag to a specified Starwhale Model version. The tag command also supports listing and removing tags. A tag can be used in a model URI instead of the version id.

    MODEL is a model URI.

    Each model version can have any number of tags, but duplicated tag names are not allowed in the same model.

    model tag only works for the Standalone Instance.

• --remove or -r: (Boolean, default: False) Remove the tag if true.
• --quiet or -q: (Boolean, default: False) Ignore errors, for example, removing tags that do not exist.
• --force-add or -f: (Boolean, default: False) When adding tags on server/cloud instances, an error is raised if the tag is already used by another model version. In this case, you can force the update with the --force-add option.

    Examples for model tag

    #- list tags of the mnist model
    swcli model tag mnist

    #- add tags for the mnist model
    swcli model tag mnist t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist/version/latest t1 --force-add
    swcli model tag mnist t1 --quiet

    #- remove tags for the mnist model
    swcli model tag mnist -r t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist --remove t1

    swcli project

    Overview

    swcli [GLOBAL OPTIONS] project [OPTIONS] <SUBCOMMAND> [ARGS]...

    The project command includes the following subcommands:

    • create(add, new)
    • info
    • list(ls)
    • recover
    • remove(rm)
    • use(select)

    swcli project create

    swcli [GLOBAL OPTIONS] project create <PROJECT>

    project create creates a new project.

    PROJECT is a project URI.
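A minimal usage sketch; the project names and the pre-k8s instance alias are only illustrations:

    #- create a project in the default instance
    swcli project create myproject
    #- create a project on a server/cloud instance
    swcli project create cloud://pre-k8s/project/myproject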

    swcli project info

    swcli [GLOBAL OPTIONS] project info [OPTIONS] <PROJECT>

    project info outputs detailed information about the specified Starwhale Project.

    PROJECT is a project URI.
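A minimal usage sketch; the project names are only illustrations:

    swcli project info myproject
    swcli project info cloud://pre-k8s/project/mnist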

    swcli project list

    swcli [GLOBAL OPTIONS] project list [OPTIONS]

    project list shows all Starwhale Projects.

Option | Required | Type | Defaults | Description
--instance | N | String | | The URI of the instance to list. If this option is omitted, use the default instance.
--show-removed | N | Boolean | False | If true, include projects that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
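A minimal usage sketch; the pre-k8s instance alias is only an illustration:

    swcli project list
    swcli project list --show-removed
    swcli project list --instance pre-k8s --page 1 --size 10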

    swcli project recover

    swcli [GLOBAL OPTIONS] project recover [OPTIONS] <PROJECT>

    project recover recovers previously removed Starwhale Projects.

    PROJECT is a project URI.

Garbage-collected Starwhale Projects can not be recovered, nor can projects removed with the --force option.
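A minimal usage sketch; the project name is only an illustration:

    swcli project recover myproject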

    swcli project remove

    swcli [GLOBAL OPTIONS] project remove [OPTIONS] <PROJECT>

    project remove removes the specified Starwhale Project.

    PROJECT is a project URI.

Removed Starwhale Projects can be recovered by swcli project recover before garbage collection. Use the --force option to permanently remove a Starwhale Project.

Removed Starwhale Projects can be listed by swcli project list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, permanently delete the Starwhale Project. It can not be recovered.
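A minimal usage sketch; the project name is only an illustration:

    swcli project remove myproject
    swcli project remove myproject --force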

    swcli project use

    swcli [GLOBAL OPTIONS] project use <PROJECT>

project use makes the specified project the default project. You must log in first to use a project on a Server/Cloud instance.
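A minimal usage sketch; self is the default project of the standalone instance, and the cloud project URI is only an illustration:

    swcli project use self
    swcli project use cloud://pre-k8s/project/mnist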

    - - + + \ No newline at end of file diff --git a/0.5.12/reference/swcli/runtime/index.html b/0.5.12/reference/swcli/runtime/index.html index c096696c7..7d81c6e2d 100644 --- a/0.5.12/reference/swcli/runtime/index.html +++ b/0.5.12/reference/swcli/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    swcli runtime

    Overview

    swcli [GLOBAL OPTIONS] runtime [OPTIONS] <SUBCOMMAND> [ARGS]...

    The runtime command includes the following subcommands:

    • activate(actv)
    • build
    • copy(cp)
    • dockerize
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • tag

    swcli runtime activate

    swcli [GLOBAL OPTIONS] runtime activate [OPTIONS] <RUNTIME>

Like source venv/bin/activate or conda activate xxx, runtime activate sets up a new Python environment according to the settings of the specified runtime. When the current shell is closed or switched to another one, you need to reactivate the runtime. RUNTIME is a Runtime URI.

If you want to quit the activated runtime environment, run deactivate in the venv environment or conda deactivate in the conda environment.

When activating the environment for the first time, the runtime activate command builds an isolated Python environment and downloads the relevant Python packages according to the Starwhale runtime definition. This process may take a long time.
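A minimal usage sketch; the pytorch runtime name is only an illustration:

    swcli runtime activate pytorch
    swcli runtime activate pytorch/version/latest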

    swcli runtime build

    swcli [GLOBAL OPTIONS] runtime build [OPTIONS]

The runtime build command can build a shareable and reproducible runtime environment suitable for ML/DL from various environments or from a runtime.yaml file.

    Parameters

    • Parameters related to runtime building methods:
Option | Required | Type | Defaults | Description
-c or --conda | N | String | | Find the corresponding conda environment by conda env name, export Python dependencies to generate Starwhale runtime.
-cp or --conda-prefix | N | String | | Find the corresponding conda environment by conda env prefix path, export Python dependencies to generate Starwhale runtime.
-v or --venv | N | String | | Find the corresponding venv environment by venv directory address, export Python dependencies to generate Starwhale runtime.
-s or --shell | N | String | | Export Python dependencies according to the current shell environment to generate Starwhale runtime.
-y or --yaml | N | | runtime.yaml in cwd directory | Build Starwhale runtime according to user-defined runtime.yaml.
-d or --docker | N | String | | Use the docker image as Starwhale runtime.

    The parameters for runtime building methods are mutually exclusive, only one method can be specified. If not specified, it will use --yaml method to read runtime.yaml in cwd directory to build Starwhale runtime.

    • Other parameters:
Option | Required | Scope | Type | Defaults | Description
--project or -p | N | Global | String | Default project | Project URI
-del or --disable-env-lock | N | runtime.yaml mode | Boolean | False | If set, do not install the dependencies in runtime.yaml or lock the version information of the related dependencies. Dependencies are locked by default.
-nc or --no-cache | N | runtime.yaml mode | Boolean | False | If set, delete the isolated environment and install the related dependencies from scratch. By default, dependencies are installed in the existing isolated environment.
--cuda | N | conda/venv/shell mode | Choice[11.3/11.4/11.5/11.6/11.7/] | | CUDA version. CUDA will not be used by default.
--cudnn | N | conda/venv/shell mode | Choice[8/] | | cuDNN version. cuDNN will not be used by default.
--arch | N | conda/venv/shell mode | Choice[amd64/arm64/noarch] | noarch | Architecture
-dpo or --dump-pip-options | N | Global | Boolean | False | Dump pip config options from the ~/.pip/pip.conf file.
-dcc or --dump-condarc | N | Global | Boolean | False | Dump conda config from the ~/.condarc file.
-t or --tag | N | Global | String | | Runtime tags; the option can be used multiple times.

    Examples for Starwhale Runtime building

    #- from runtime.yaml:
    swcli runtime build # use the current directory as the workdir and use the default runtime.yaml file
    swcli runtime build -y example/pytorch/runtime.yaml # use example/pytorch/runtime.yaml as the runtime.yaml file
    swcli runtime build --yaml runtime.yaml # use runtime.yaml at the current directory as the runtime.yaml file
    swcli runtime build --tag tag1 --tag tag2

    #- from conda name:
    swcli runtime build -c pytorch # lock pytorch conda environment and use `pytorch` as the runtime name
    swcli runtime build --conda pytorch --name pytorch-runtime # use `pytorch-runtime` as the runtime name
    swcli runtime build --conda pytorch --cuda 11.4 # specify the cuda version
    swcli runtime build --conda pytorch --arch noarch # specify the system architecture

    #- from conda prefix path:
    swcli runtime build --conda-prefix /home/starwhale/anaconda3/envs/pytorch # get conda prefix path by `conda info --envs` command

    #- from venv prefix path:
    swcli runtime build -v /home/starwhale/.virtualenvs/pytorch
    swcli runtime build --venv /home/starwhale/.local/share/virtualenvs/pytorch --arch amd64

    #- from docker image:
    swcli runtime build --docker pytorch/pytorch:1.9.0-cuda11.1-cudnn8-runtime # use the docker image as the runtime directly

    #- from shell:
    swcli runtime build -s --cuda 11.4 --cudnn 8 # specify the cuda and cudnn version
    swcli runtime build --shell --name pytorch-runtime # lock the current shell environment and use `pytorch-runtime` as the runtime name

    swcli runtime copy

    swcli [GLOBAL OPTIONS] runtime copy [OPTIONS] <SRC> <DEST>

    runtime copy copies from SRC to DEST. SRC and DEST are both Runtime URIs.

When copying a Starwhale Runtime, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are built-in Starwhale system tags that are only used within the instance itself and are not copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if a copied tag is already used by another version, this option can be used to forcibly update the tag to this version.
-i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for Starwhale Runtime copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a new runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with a new runtime name 'mnist-cloud'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli runtime cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp local/project/myproject/runtime/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli runtime cp pytorch cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli runtime dockerize

    swcli [GLOBAL OPTIONS] runtime dockerize [OPTIONS] <RUNTIME>

    runtime dockerize generates a docker image based on the specified runtime. Starwhale uses docker buildx to create the image. Docker 19.03 or later is required to run this command.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--tag or -t | N | String | | The tag of the docker image. This option can be repeated multiple times.
--push | N | Boolean | False | If true, push the image to the docker registry.
--platform | N | String | amd64 | The target platform, either amd64 or arm64. This option can be repeated multiple times to create a multi-platform image.

    swcli runtime extract

swcli [GLOBAL OPTIONS] runtime extract [OPTIONS] <RUNTIME>

Starwhale runtimes are distributed as compressed packages. The runtime extract command extracts the runtime package for further customization and modification.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | Whether to delete and re-extract if there is already an extracted Starwhale runtime in the target directory.
--target-dir | N | String | | Custom extraction directory. If not specified, the runtime is extracted to the default Starwhale runtime workdir. The command log shows the directory location.
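A minimal usage sketch; the runtime name and target directory are only illustrations:

    swcli runtime extract pytorch
    swcli runtime extract pytorch/version/latest --target-dir ./pytorch-runtime --force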

    swcli runtime history

    swcli [GLOBAL OPTIONS] runtime history [OPTIONS] <RUNTIME>

    runtime history outputs all history versions of the specified Starwhale Runtime.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
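A minimal usage sketch; the runtime name is only an illustration:

    swcli runtime history pytorch
    swcli runtime history pytorch --fullname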

    swcli runtime info

    swcli [GLOBAL OPTIONS] runtime info [OPTIONS] <RUNTIME>

    runtime info outputs detailed information about a specified Starwhale Runtime version.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--output-filter or -of | N | Choice of [basic/runtime_yaml/manifest/lock/all] | basic | Filter the output content. Only standalone instances support this option.

    Examples for Starwhale Runtime info

    swcli runtime info pytorch # show basic info from the latest version of runtime
    swcli runtime info pytorch/version/v0 # show basic info
    swcli runtime info pytorch/version/v0 --output-filter basic # show basic info
    swcli runtime info pytorch/version/v1 -of runtime_yaml # show runtime.yaml content
    swcli runtime info pytorch/version/v1 -of lock # show auto lock file content
    swcli runtime info pytorch/version/v1 -of manifest # show _manifest.yaml content
    swcli runtime info pytorch/version/v1 -of all # show all info of the runtime

    swcli runtime list

    swcli [GLOBAL OPTIONS] runtime list [OPTIONS]

    runtime list shows all Starwhale Runtimes.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed or -sr | N | Boolean | False | If true, include runtimes that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Runtimes that match the specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of runtimes | --filter name=pytorch
owner | Key-Value | The runtime owner name | --filter owner=starwhale
latest | Flag | If specified, show only the latest version. | --filter latest
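A minimal usage sketch; the filter values and project name are only illustrations:

    swcli runtime list
    swcli runtime list --filter name=pytorch --filter latest
    swcli runtime list --project self --show-removed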

    swcli runtime recover

    swcli [GLOBAL OPTIONS] runtime recover [OPTIONS] <RUNTIME>

    runtime recover can recover previously removed Starwhale Runtimes or versions.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Runtimes or versions can not be recovered, nor can those removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Runtime or version with the same name or version id.

    swcli runtime remove

    swcli [GLOBAL OPTIONS] runtime remove [OPTIONS] <RUNTIME>

    runtime remove removes the specified Starwhale Runtime or version.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all versions are removed.

Removed Starwhale Runtimes or versions can be recovered by swcli runtime recover before garbage collection. Use the --force option to permanently remove a Starwhale Runtime or version.

    Removed Starwhale Runtimes or versions can be listed by swcli runtime list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, permanently delete the Starwhale Runtime or version. It can not be recovered.
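A minimal usage sketch; the runtime name and version are only illustrations:

    #- remove a single version
    swcli runtime remove pytorch/version/v1
    #- permanently remove the runtime and all its versions
    swcli runtime remove pytorch --force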

    swcli runtime tag

    swcli [GLOBAL OPTIONS] runtime tag [OPTIONS] <RUNTIME> [TAGS]...

runtime tag attaches a tag to a specified Starwhale Runtime version. The tag command can also list and remove tags. A tag can be used in a runtime URI in place of the version id.

    RUNTIME is a Runtime URI.

    Each runtime version can have any number of tags, but duplicated tag names are not allowed in the same runtime.

    runtime tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on Server/Cloud instances, an error is reported if the tag is already used by another runtime version. In this case, you can force the update with the --force-add option.

    Examples for runtime tag

    #- list tags of the pytorch runtime
    swcli runtime tag pytorch

    #- add tags for the pytorch runtime
swcli runtime tag pytorch t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch/version/latest t1 --force-add
swcli runtime tag pytorch t1 --quiet

    #- remove tags for the pytorch runtime
swcli runtime tag pytorch -r t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch --remove t1
    - - + + \ No newline at end of file diff --git a/0.5.12/reference/swcli/utilities/index.html b/0.5.12/reference/swcli/utilities/index.html index 2fdff560f..ff563b55a 100644 --- a/0.5.12/reference/swcli/utilities/index.html +++ b/0.5.12/reference/swcli/utilities/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Utility Commands

    swcli gc

    swcli [GLOBAL OPTIONS] gc [OPTIONS]

    gc clears removed projects, models, datasets, and runtimes according to the internal garbage collection policy.

Option | Required | Type | Defaults | Description
--dry-run | N | Boolean | False | If true, outputs objects to be removed instead of clearing them.
--yes | N | Boolean | False | Bypass confirmation prompts.
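A minimal usage sketch:

    #- preview the objects that would be removed
    swcli gc --dry-run
    #- run garbage collection without confirmation prompts
    swcli gc --yes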

    swcli check

    swcli [GLOBAL OPTIONS] check

    Check if the external dependencies of the swcli command meet the requirements. Currently mainly checks Docker and Conda.

    swcli completion install

    swcli [GLOBAL OPTIONS] completion install <SHELL_NAME>

    Install autocompletion for swcli commands. Currently supports bash, zsh and fish. If SHELL_NAME is not specified, it will try to automatically detect the current shell type.
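A minimal usage sketch:

    swcli completion install zsh
    swcli completion install    # auto-detect the current shell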

    swcli config edit

    swcli [GLOBAL OPTIONS] config edit

    Edit the Starwhale configuration file at ~/.config/starwhale/config.yaml.

    swcli ui

    swcli [GLOBAL OPTIONS] ui <INSTANCE>

    Open the web page for the corresponding instance.
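A minimal usage sketch; the instance alias cloud-cn is borrowed from the configuration example later in this document and is only an illustration:

    swcli ui cloud-cn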

    - - + + \ No newline at end of file diff --git a/0.5.12/runtime/index.html b/0.5.12/runtime/index.html index 7cb2a8cae..2ce8f0f92 100644 --- a/0.5.12/runtime/index.html +++ b/0.5.12/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Starwhale Runtime

    Overview

    Starwhale Runtime aims to provide a reproducible and sharable running environment for python programs. You can easily share your working environment with your teammates or outsiders, and vice versa. Furthermore, you can run your programs on Starwhale Server or Starwhale Cloud without bothering with the dependencies.

    Starwhale works well with virtualenv, conda, and docker. If you are using one of them, it is straightforward to create a Starwhale Runtime based on your current environment.

Multiple Starwhale Runtimes on your local machine can be switched freely with one command, so you can work on different projects without messing up the environment. A Starwhale Runtime consists of two parts: the base image and the dependencies.

    The base image

The base image is a docker image with Python, CUDA, and cuDNN installed. Starwhale provides various base images for you to choose from; see the following list:

    • Computer system architecture:
      • X86 (amd64)
      • Arm (aarch64)
    • Operating system:
      • Ubuntu 20.04 LTS (ubuntu:20.04)
    • Python:
      • 3.7
      • 3.8
      • 3.9
      • 3.10
      • 3.11
    • CUDA:
      • CUDA 11.3 + cuDNN 8.4
      • CUDA 11.4 + cuDNN 8.4
      • CUDA 11.5 + cuDNN 8.4
      • CUDA 11.6 + cuDNN 8.4
      • CUDA 11.7

    runtime.yaml

    runtime.yaml is the core configuration file of Starwhale Runtime.

# The name of Starwhale Runtime
name: demo
# The mode of Starwhale Runtime: venv or conda. Default is venv.
mode: venv
configs:
  # If you do not use conda, ignore this field.
  conda:
    condarc: # custom condarc config file
      channels:
        - defaults
      show_channel_urls: true
      default_channels:
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
      custom_channels:
        conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        nvidia: https://mirrors.aliyun.com/anaconda/cloud
      ssl_verify: false
      default_threads: 10
  pip:
    # pip config set global.index-url
    index_url: https://example.org/
    # pip config set global.extra-index-url
    extra_index_url: https://another.net/
    # pip config set install.trusted-host
    trusted_host:
      - example.org
      - another.net
environment:
  # Now it must be ubuntu:20.04
  os: ubuntu:20.04
  # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
  cuda: 11.4
  # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
  python: 3.8
  # Define your base image
  docker:
    image: mycustom.com/docker/image:tag
dependencies:
  # If this item is present, conda env create -f conda.yml will be executed
  - conda.yaml
  # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
  - requirements.txt
  # Packages to be installed with conda. venv mode will ignore the conda field.
  - conda:
      - numpy
      - requests
  # Packages to be installed with pip. The format is the same as requirements.txt
  - pip:
      - pillow
      - numpy
      - deepspeed==0.9.0
      - safetensors==0.3.0
      - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
      - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
      - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
  # Additional wheels packages to be installed when restoring the runtime
  - wheels:
      - dummy-0.0.0-py3-none-any.whl
  # Additional files to be included in the runtime
  - files:
      - dest: bin/prepare.sh
        name: prepare
        src: scripts/prepare.sh
  # Run some custom commands
  - commands:
      - apt-get install -y libgl1
      - touch /tmp/runtime-command-run.flag
    - - + + \ No newline at end of file diff --git a/0.5.12/runtime/yaml/index.html b/0.5.12/runtime/yaml/index.html index 57aa506a2..df8055d75 100644 --- a/0.5.12/runtime/yaml/index.html +++ b/0.5.12/runtime/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    The runtime.yaml Specification

    runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. runtime.yaml is required for the yaml mode of the swcli runtime build command.

    Examples

    The simplest example

name: simple-test
dependencies:
  - pip:
      - numpy

    Define a Starwhale Runtime that uses venv as the Python virtual environment for package isolation, and installs the numpy dependency.

    The llama2 example

name: llama2
mode: venv
environment:
  arch: noarch
  os: ubuntu:20.04
  cuda: 11.7
  python: "3.10"
dependencies:
  - pip:
      - torch
      - fairscale
      - fire
      - sentencepiece
      - gradio >= 3.37.0
      # external starwhale dependencies
      - starwhale[serve] >= 0.5.5

    The full definition example

# [required] The name of Starwhale Runtime
name: demo
# [optional] The mode of Starwhale Runtime: venv or conda. Default is venv.
mode: venv
# [optional] The configurations of pip and conda.
configs:
  # If you do not use conda, ignore this field.
  conda:
    condarc: # custom condarc config file
      channels:
        - defaults
      show_channel_urls: true
      default_channels:
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
      custom_channels:
        conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        nvidia: https://mirrors.aliyun.com/anaconda/cloud
      ssl_verify: false
      default_threads: 10
  pip:
    # pip config set global.index-url
    index_url: https://example.org/
    # pip config set global.extra-index-url
    extra_index_url: https://another.net/
    # pip config set install.trusted-host
    trusted_host:
      - example.org
      - another.net
# [optional] The definition of the environment.
environment:
  # Now it must be ubuntu:20.04
  os: ubuntu:20.04
  # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
  cuda: 11.4
  # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
  python: 3.8
  # Define your custom base image
  docker:
    image: mycustom.com/docker/image:tag
# [required] The dependencies of the Starwhale Runtime.
dependencies:
  # If this item is present, conda env create -f conda.yml will be executed
  - conda.yaml
  # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
  - requirements.txt
  # Packages to be installed with conda. venv mode will ignore the conda field.
  - conda:
      - numpy
      - requests
  # Packages to be installed with pip. The format is the same as requirements.txt
  - pip:
      - pillow
      - numpy
      - deepspeed==0.9.0
      - safetensors==0.3.0
      - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
      - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
      - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
  # Additional wheels packages to be installed when restoring the runtime
  - wheels:
      - dummy-0.0.0-py3-none-any.whl
  # Additional files to be included in the runtime
  - files:
      - dest: bin/prepare.sh
        name: prepare
        src: scripts/prepare.sh
  # Run some custom commands
  - commands:
      - apt-get install -y libgl1
      - touch /tmp/runtime-command-run.flag
    - - + + \ No newline at end of file diff --git a/0.5.12/server/guides/server_admin/index.html b/0.5.12/server/guides/server_admin/index.html index 41ff11d95..b85a49002 100644 --- a/0.5.12/server/guides/server_admin/index.html +++ b/0.5.12/server/guides/server_admin/index.html @@ -10,14 +10,14 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Controller Admin Settings

    Superuser Password Reset

In case you forget the superuser's password, you can use the SQL below to reset the password to abcd1234:

    update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1
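A hedged sketch of applying the statement with the mysql command line client; the host, port, user, password, and database name are assumptions and must match your own deployment:

    mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale starwhale \
      -e "update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1"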

After that, you can log in to the console and change the password to whatever you want.

    System Settings

You can customize the system to make it easier to use by leveraging System Settings. Here is an example:

dockerSetting:
  registryForPull: "docker-registry.starwhale.cn/star-whale"
  registryForPush: ""
  userName: ""
  password: ""
  insecure: true
pypiSetting:
  indexUrl: ""
  extraIndexUrl: ""
  trustedHost: ""
  retries: 10
  timeout: 90
imageBuild:
  resourcePool: ""
  image: ""
  clientVersion: ""
  pythonVersion: ""
datasetBuild:
  resourcePool: ""
  image: ""
  clientVersion: ""
  pythonVersion: ""
resourcePoolSetting:
  - name: "default"
    nodeSelector: null
    resources:
      - name: "cpu"
        max: null
        min: null
        defaults: 5.0
      - name: "memory"
        max: null
        min: null
        defaults: 3145728.0
      - name: "nvidia.com/gpu"
        max: null
        min: null
        defaults: null
    tolerations: null
    metadata: null
    isPrivate: null
    visibleUserIds: null
storageSetting:
  - type: "minio"
    tokens:
      bucket: "users"
      ak: "starwhale"
      sk: "starwhale"
      endpoint: "http://10.131.0.1:9000"
      region: "local"
      hugeFileThreshold: "10485760"
      hugeFilePartSize: "5242880"
  - type: "s3"
    tokens:
      bucket: "users"
      ak: "starwhale"
      sk: "starwhale"
      endpoint: "http://10.131.0.1:9000"
      region: "local"
      hugeFileThreshold: "10485760"
      hugeFilePartSize: "5242880"

    Image Registry

Tasks dispatched by the server are based on docker images. Pulling these images can be slow if your internet connection is poor. Starwhale Server supports custom image registries, including dockerSetting.registryForPull and dockerSetting.registryForPush.

    Resource Pool

The resourcePoolSetting allows you to manage your cluster in groups. It is currently implemented with the K8S nodeSelector: you can label machines in your K8S cluster and make them a resourcePool in Starwhale.
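A hedged sketch of labeling nodes for a pool; the node names and the label key/value are assumptions and must match the nodeSelector you configure in resourcePoolSetting:

    kubectl label node worker-01 worker-02 starwhale-pool=gpu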

    Remote Storage

The storageSetting allows you to manage the storage systems the server can access.

storageSetting:
  - type: s3
    tokens:
      - bucket: starwhale                       # required
        ak: access_key                          # required
        sk: secret_key                          # required
        endpoint: http://s3.region.amazonaws.com  # optional
        region: region of the service           # required when endpoint is empty
        hugeFileThreshold: 10485760             # files bigger than 10MB will use multipart upload
        hugeFilePartSize: 5242880               # 5MB part size for multipart upload
  - type: minio
    tokens:
      - bucket: starwhale                       # required
        ak: access_key                          # required
        sk: secret_key                          # required
        endpoint: http://10.131.0.1:9000        # required
        region: local                           # optional
        hugeFileThreshold: 10485760             # files bigger than 10MB will use multipart upload
        hugeFilePartSize: 5242880               # 5MB part size for multipart upload
  - type: aliyun
    tokens:
      - bucket: starwhale                       # required
        ak: access_key                          # required
        sk: secret_key                          # required
        endpoint: http://10.131.0.2:9000        # required
        region: local                           # optional
        hugeFileThreshold: 10485760             # files bigger than 10MB will use multipart upload
        hugeFilePartSize: 5242880               # 5MB part size for multipart upload

Every storageSetting item has a corresponding implementation of the StorageAccessService interface. Starwhale has four built-in implementations:

    • StorageAccessServiceAliyun matches type in (aliyun,oss)
    • StorageAccessServiceMinio matches type in (minio)
    • StorageAccessServiceS3 matches type in (s3)
    • StorageAccessServiceFile matches type in (fs, file)

Each implementation has different requirements for tokens. endpoint is required when type is aliyun or minio; region is required when type is s3 and endpoint is empty. The fs/file type requires tokens to contain rootDir and serviceProvider. Please refer to the code for more details.

    - - + + \ No newline at end of file diff --git a/0.5.12/server/index.html b/0.5.12/server/index.html index 9e8fc2403..a6ba8656a 100644 --- a/0.5.12/server/index.html +++ b/0.5.12/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.5.12/server/installation/docker-compose/index.html b/0.5.12/server/installation/docker-compose/index.html index 2974d4a40..d6f4c0caa 100644 --- a/0.5.12/server/installation/docker-compose/index.html +++ b/0.5.12/server/installation/docker-compose/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Install Starwhale Server with Docker Compose

    Prerequisites

    Usage

    Start up the server

    wget https://raw.githubusercontent.com/star-whale/starwhale/main/docker/compose/compose.yaml
    GLOBAL_IP=${your_accessible_ip_for_server} ; docker compose up

GLOBAL_IP is the IP address of the Controller, which must be accessible to all swcli clients, both inside the docker containers and on other user machines.

compose.yaml contains the Starwhale Controller/MySQL/MinIO services. You can create a compose.override.yaml which, as its name implies, contains configuration overrides for compose.yaml. The available configurations are specified here.
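A hedged sketch of starting the services with an override file; the IP address is an example value, and compose.override.yaml is only needed if you want to override parts of compose.yaml:

    touch compose.override.yaml   # add your overrides for compose.yaml here
    GLOBAL_IP=192.168.1.100 docker compose up -d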

    - - + + \ No newline at end of file diff --git a/0.5.12/server/installation/docker/index.html b/0.5.12/server/installation/docker/index.html index 805bd241c..2fc42a049 100644 --- a/0.5.12/server/installation/docker/index.html +++ b/0.5.12/server/installation/docker/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Install Starwhale Server with Docker

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage to save datasets, models, and others.

    Please make sure pods on the Kubernetes cluster can access the port exposed by the Starwhale Server installation.

    Prepare an env file for Docker

    Starwhale Server can be configured by environment variables.

    An env file template for Docker is here. You may create your own env file by modifying the template.

    Prepare a kubeconfig file [Optional][SW_SCHEDULER=k8s]

    The kubeconfig file is used for accessing the Kubernetes cluster. For more information about kubeconfig files, see the Official Kubernetes Documentation.

    If you have a local kubectl command-line tool installed, you can run kubectl config view to see your current configuration.

    Run the Docker image

    docker run -it -d --name starwhale-server -p 8082:8082 \
    --restart unless-stopped \
    --mount type=bind,source=<path to your kubeconfig file>,destination=/root/.kube/config,readonly \
    --env-file <path to your env file> \
    ghcr.io/star-whale/server:0.5.6

    For users in the mainland of China, use docker image: docker-registry.starwhale.cn/star-whale/server.

    - - + + \ No newline at end of file diff --git a/0.5.12/server/installation/helm-charts/index.html b/0.5.12/server/installation/helm-charts/index.html index 903d1270a..29fb2c05a 100644 --- a/0.5.12/server/installation/helm-charts/index.html +++ b/0.5.12/server/installation/helm-charts/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Install Starwhale Server with Helm

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage system to save datasets, models, and others.
    • Helm 3.2.0+.

The Starwhale Helm Charts include MySQL and MinIO as dependencies. If you do not have your own MySQL instance or any S3-compatible object storage available, you can install them with the Helm Charts. Please check Installation Options to learn how to install Starwhale Server with MySQL and MinIO.

    Create a service account on Kubernetes for Starwhale Server

If Kubernetes RBAC is enabled (in Kubernetes 1.6+, RBAC is enabled by default), Starwhale Server can not work properly unless it is started by a service account with at least the following permissions:

Resource | API Group | Get | List | Watch | Create | Delete
jobs | batch | Y | Y | Y | Y | Y
pods | core | Y | Y | Y | |
nodes | core | Y | Y | Y | |
events | "" | Y | | | |

    Example:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: starwhale-role
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "batch"
    resources:
      - jobs
    verbs:
      - create
      - get
      - list
      - watch
      - delete
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: starwhale-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: starwhale-role
subjects:
  - kind: ServiceAccount
    name: starwhale

    Downloading Starwhale Helm Charts

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update

    Installing Starwhale Server

    helm install starwhale-server starwhale/starwhale-server -n starwhale --create-namespace

    If you have a local kubectl command-line tool installed, you can run kubectl get pods -n starwhale to check if all pods are running.

    Updating Starwhale Server

    helm repo update
    helm upgrade starwhale-server starwhale/starwhale-server

    Uninstalling Starwhale Server

    helm delete starwhale-server
    - - + + \ No newline at end of file diff --git a/0.5.12/server/installation/index.html b/0.5.12/server/installation/index.html index 8a2bf4f49..c8787cc71 100644 --- a/0.5.12/server/installation/index.html +++ b/0.5.12/server/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.5.12/server/installation/minikube/index.html b/0.5.12/server/installation/minikube/index.html index 28b8412a7..af4e83349 100644 --- a/0.5.12/server/installation/minikube/index.html +++ b/0.5.12/server/installation/minikube/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Install Starwhale Server with Minikube

    Prerequisites

    Starting Minikube

    minikube start --addons ingress --kubernetes-version=1.25.3

For users in the mainland of China, please add the --image-mirror-country=cn parameter. If there is no kubectl binary on your machine, you may use minikube kubectl or set the alias kubectl="minikube kubectl --".

    Installing Starwhale Server

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update
    helm pull starwhale/starwhale --untar --untardir ./charts

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.global.yaml

For users in the mainland of China, use values.minikube.cn.yaml:

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.cn.yaml

    After the installation is successful, the following prompt message appears:

        Release "starwhale" has been upgraded. Happy Helming!
    NAME: starwhale
    LAST DEPLOYED: Tue Feb 14 16:25:03 2023
    NAMESPACE: starwhale
    STATUS: deployed
    REVISION: 14
    NOTES:
    ******************************************
    Chart Name: starwhale
    Chart Version: 0.5.6
    App Version: latest
    Starwhale Image:
    - server: ghcr.io/star-whale/server:latest

    ******************************************
    Controller:
    - visit: http://controller.starwhale.svc
    Minio:
    - web visit: http://minio.starwhale.svc
    - admin visit: http://minio-admin.starwhale.svc
    MySQL:
    - port-forward:
    - run: kubectl port-forward --namespace starwhale svc/mysql 3306:3306
    - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale
    Please run the following command for the domains searching:
    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc " | sudo tee -a /etc/hosts
    ******************************************
    Login Info:
    - starwhale: u:starwhale, p:abcd1234
    - minio admin: u:minioadmin, p:minioadmin

    *_* Enjoy to use Starwhale Platform. *_*

    Checking Starwhale Server status

Keep checking the minikube service status until all deployments are running (it takes about 3~5 minutes):

    kubectl get deployments -n starwhale
    NAMEREADYUP-TO-DATEAVAILABLEAGE
    controller1/1115m
    minio1/1115m
    mysql1/1115m

    Visiting for local

    Make the Starwhale controller accessible locally with the following command:

    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc  minio-admin.starwhale.svc " | sudo tee -a /etc/hosts

    Then you can visit http://controller.starwhale.svc in your local web browser.

    Visiting for others

    • Step 1: in the Starwhale Server machine

      for temporary use with socat command:

      # install socat at first, ref: https://howtoinstall.co/en/socat
      sudo socat TCP4-LISTEN:80,fork,reuseaddr,bind=0.0.0.0 TCP4:`minikube ip`:80

      When you kill the socat process, the share access will be blocked. iptables maybe a better choice for long-term use.

    • Step 2: in the other machines

      # for macOSX or Linux environment, run the command in the shell.
  echo "${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc" | sudo tee -a /etc/hosts

      # for Windows environment, run the command in the PowerShell with administrator permission.
      Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "`n${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc"
    - - + + \ No newline at end of file diff --git a/0.5.12/server/installation/starwhale_env/index.html b/0.5.12/server/installation/starwhale_env/index.html index 1eeca4879..80e355ac4 100644 --- a/0.5.12/server/installation/starwhale_env/index.html +++ b/0.5.12/server/installation/starwhale_env/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Starwhale Server Environment Example

    ################################################################################
    # *** Required ***
    # The external Starwhale server URL. For example: https://cloud.starwhale.ai
    SW_INSTANCE_URI=

    # The listening port of Starwhale Server
    SW_CONTROLLER_PORT=8082

    # The maximum upload file size. This setting affects datasets and models uploading when copied from outside.
    SW_UPLOAD_MAX_FILE_SIZE=20480MB
    ################################################################################
    # The base URL of the Python Package Index to use when creating a runtime environment.
    SW_PYPI_INDEX_URL=http://10.131.0.1/repository/pypi-hosted/simple/

    # Extra URLs of package indexes to use in addition to the base url.
    SW_PYPI_EXTRA_INDEX_URL=

    # Space separated hostnames. When any host specified in the base URL or extra URLs does not have a valid SSL
    # certification, use this option to trust it anyway.
    SW_PYPI_TRUSTED_HOST=
    ################################################################################
    # The JWT token expiration time. When the token expires, the server will request the user to login again.
    SW_JWT_TOKEN_EXPIRE_MINUTES=43200

    # *** Required ***
    # The JWT secret key. All strings are valid, but we strongly recommend you to use a random string with at least 16 characters.
    SW_JWT_SECRET=
    ################################################################################
    # The scheduler controller to use. Valid values are:
    # docker: Controller schedule jobs by leveraging docker
    # k8s: Controller schedule jobs by leveraging Kubernetes
    SW_SCHEDULER=k8s

    # The Kubernetes namespace to use when running a task when SW_SCHEDULER is k8s
    SW_K8S_NAME_SPACE=default

    # The path on the Kubernetes host node's filesystem to cache Python packages. Use the setting only if you have
    # the permission to use host node's filesystem. The runtime environment setup process may be accelerated when the host
    # path cache is used. Leave it blank if you do not want to use it.
    SW_K8S_HOST_PATH_FOR_CACHE=

    # The ip for the containers created by Controller when SW_SCHEDULER is docker
    SW_DOCKER_CONTAINER_NODE_IP=127.0.0.1
    ###############################################################################
    # *** Required ***
    # The object storage system type. Valid values are:
    # s3: [AWS S3](https://aws.amazon.com/s3) or other s3-compatible object storage systems
    # aliyun: [Aliyun OSS](https://www.alibabacloud.com/product/object-storage-service)
    # minio: [MinIO](https://min.io)
    # file: Local filesystem
    SW_STORAGE_TYPE=

    # The path prefix for all data saved on the storage system.
    SW_STORAGE_PREFIX=
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is file.

    # The root directory to save data.
    # This setting is only used when SW_STORAGE_TYPE is file.
    SW_STORAGE_FS_ROOT_DIR=/usr/local/starwhale
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is not file.

    # *** Required ***
    # The name of the bucket to save data.
    SW_STORAGE_BUCKET=

    # *** Required ***
    # The endpoint URL of the object storage service.
    # This setting is only used when SW_STORAGE_TYPE is s3 or aliyun.
    SW_STORAGE_ENDPOINT=

    # *** Required ***
    # The access key used to access the object storage system.
    SW_STORAGE_ACCESSKEY=

    # *** Required ***
    # The secret access key used to access the object storage system.
    SW_STORAGE_SECRETKEY=

    # *** Optional ***
    # The region of the object storage system.
    SW_STORAGE_REGION=

    # Starwhale Server will use multipart upload when uploading a large file. This setting specifies the part size.
    SW_STORAGE_PART_SIZE=5MB
    ################################################################################
    # MySQL settings

    # *** Required ***
    # The hostname/IP of the MySQL server.
    SW_METADATA_STORAGE_IP=

    # The port of the MySQL server.
    SW_METADATA_STORAGE_PORT=3306

    # *** Required ***
    # The database used by Starwhale Server
    SW_METADATA_STORAGE_DB=starwhale

    # *** Required ***
    # The username of the MySQL server.
    SW_METADATA_STORAGE_USER=

    # *** Required ***
    # The password of the MySQL server.
    SW_METADATA_STORAGE_PASSWORD=
    ################################################################################
    - - + + \ No newline at end of file diff --git a/0.5.12/server/project/index.html b/0.5.12/server/project/index.html index 056ed8bff..243baae9e 100644 --- a/0.5.12/server/project/index.html +++ b/0.5.12/server/project/index.html @@ -10,15 +10,15 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Project Management

    Project type

    There are two types of projects:

    • Public: Visible to anyone. Everyone on the internet can find and see public projects.

    • Private: Visible to users specified in the project member settings. Private projects can only be seen by project owners and project members. The project owner can manage access in the project setting of Manage Member.

    Create a project

    1 Sign in to Starwhale, click Create Project.

    creat

    2 Type a name for the project.

    image

    tip

Avoid duplicate project names. For more information, see Names in Starwhale

    3 Select project visibility to decide who can find and see the project.

    image

    4 Type a description. It is optional.

    image

    5 To finish, click Submit.

    image

    Edit a project

    The name, privacy and description of a project can be edited.

    tip

    Users with the project owner or maintainer role can edit a project. For more information, see Roles and permissions

    Edit name

    • If you are on the project list page:

      1 Hover your mouse over the project you want to edit, then click the Edit button.

      image

      2 Enter a new name for the project.

      image

      tip

      Avoid duplicate project names. For more information, see Names in Starwhale

      3 Click Submit to save changes.

      image

      4 If you're editing multiple projects, repeat steps 1 through 3.

    • If you are on a specific project:

      1 Select Overview on the left navigation, and click Edit.

      image

      2 Enter a new name for the project.

      image

      tip

      Avoid duplicate project names. For more information, see Names in Starwhale

      3 Click Submit to save changes.

      image

    Edit privacy

    • If you are on the project list page:

      1 Hover your mouse over the project you want to edit, then click the Edit button.

      image

      2 Click the Public or Private by your command. For more information, see Project types.

      image

      3 Click Submit to save changes.

      image

    • If you are on a specific project

      1 Select Overview on the left navigation, and click Edit.

      image

      2 Click the Public or Private by your command. For more information, see Project types.

      image

      3 Click Submit to save changes.

      image

    Edit description

    • If you are on the project list page:

      1 Hover your mouse over the project you want to edit, then click the Edit button.

      image

      2 Enter any description you want to describe the project.

      image

      3 Click Submit to save changes.

      image

    • If you are on a specific project

      1 Select Overview on the left navigation, and click Edit.

      image

      2 Enter any description you want to describe the project.

      image

      3 Click Submit to save changes.

      image

    Delete a project

    1 Hover your mouse over the project you want to delete, then click the Delete button.

    image

    2 If you are sure to delete, type the exact name of the project and then click Confirm to delete the project.

    image

Important: When you delete a project, all the models, datasets, evaluations, and runtimes belonging to the project will also be deleted and can not be restored. Be careful about this action.

    Manage project member

Only users with the admin role can assign people to the project. By default, the user who created the project has the project owner role.

    Add a member to the project

    1 On the project list page or overview tab, click the Manage Member button, then Add Member.

    image

    image

    2 Type the username you want to add to the project, then click a name in the list of matches.

    image

3 Select a project role for the member from the drop-down menu. For more information, see Roles and permissions

    image

    4 To finish, click Submit.

    image

    Remove a member

    1 On the project list page or project overview tab, click the Manage Member button.

    image

    2 Find the username you want to remove in the search box, click Remove, then Yes.

    image

    - - + + \ No newline at end of file diff --git a/0.5.12/swcli/config/index.html b/0.5.12/swcli/config/index.html index 81626bc0d..f900c3cde 100644 --- a/0.5.12/swcli/config/index.html +++ b/0.5.12/swcli/config/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Configuration

Standalone Instance is installed on the user's laptop or development server, providing isolation at the level of Linux/macOS users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.

The ~/.config/starwhale/config.yaml file has permissions set to 0o600 to ensure security, as it contains sensitive information such as encryption keys. Users are advised not to change the file permissions. You can customize your swcli with swcli config edit:

    swcli config edit

    config.yaml example

    The typical config.yaml file is as follows:

    • The default instance is local.
    • cloud-cn/cloud-k8s/pre-k8s are the server/cloud instances, local is the standalone instance.
    • The local storage root directory for the Standalone Instance is set to /home/liutianwei/.starwhale.
current_instance: local
instances:
  cloud-cn:
    sw_token: ${TOKEN}
    type: cloud
    updated_at: 2022-09-28 18:41:05 CST
    uri: https://cloud.starwhale.cn
    user_name: starwhale
    user_role: normal
  cloud-k8s:
    sw_token: ${TOKEN}
    type: cloud
    updated_at: 2022-09-19 16:10:01 CST
    uri: http://cloud.pre.intra.starwhale.ai
    user_name: starwhale
    user_role: normal
  local:
    current_project: self
    type: standalone
    updated_at: 2022-06-09 16:14:02 CST
    uri: local
    user_name: liutianwei
  pre-k8s:
    sw_token: ${TOKEN}
    type: cloud
    updated_at: 2022-09-19 18:06:50 CST
    uri: http://console.pre.intra.starwhale.ai
    user_name: starwhale
    user_role: normal
link_auths:
  - ak: starwhale
    bucket: users
    connect_timeout: 10.0
    endpoint: http://10.131.0.1:9000
    read_timeout: 100.0
    sk: starwhale
    type: s3
storage:
  root: /home/liutianwei/.starwhale
version: '2.0'

    config.yaml definition

Parameter | Description | Type | Default Value | Required
current_instance | The name of the default instance to use. It is usually set using the swcli instance select command. | String | self | Yes
instances | Managed instances, including Standalone, Server and Cloud Instances. There must be at least one Standalone Instance named "local" and one or more Server/Cloud Instances. You can log in to a new instance with swcli instance login and log out from an instance with swcli instance logout. | Dict | Standalone Instance named "local" | Yes
instances.{instance-alias-name}.sw_token | Login token for Server/Cloud Instances. It is only effective for Server/Cloud Instances. Subsequent swcli operations on Server/Cloud Instances will use this token. Note that tokens have an expiration time, typically set to one month, which can be configured within the Server/Cloud Instance. | String | | Cloud - Yes, Standalone - No
instances.{instance-alias-name}.type | Type of the instance, currently can only be "cloud" or "standalone". | Choice[string] | | Yes
instances.{instance-alias-name}.uri | For Server/Cloud Instances, the URI is an http/https address. For Standalone Instances, the URI is set to "local". | String | | Yes
instances.{instance-alias-name}.user_name | User's name | String | | Yes
instances.{instance-alias-name}.current_project | Default Project under the current instance. It will be used to fill the "project" field in the URI representation by default. You can set it using the swcli project select command. | String | | Yes
instances.{instance-alias-name}.user_role | User's role. | String | normal | Yes
instances.{instance-alias-name}.updated_at | The last updated time for this instance configuration. | Time format string | | Yes
storage | Settings related to local storage. | Dict | | Yes
storage.root | The root directory for Standalone Instance's local storage. Typically, if there is insufficient space in the home directory and you manually move data files to another location, you can modify this field. | String | ~/.starwhale | Yes
version | The version of config.yaml, currently only supports 2.0. | String | 2.0 | Yes

You can put starwhale.Link objects into your assets. The URI in the Link can be whatever you need (only s3-like and http are currently implemented), such as s3://10.131.0.1:9000/users/path. However, Links may need authentication; you can configure the auth info in link_auths.

link_auths:
  - type: s3
    ak: starwhale
    bucket: users
    region: local
    connect_timeout: 10.0
    endpoint: http://10.131.0.1:9000
    read_timeout: 100.0
    sk: starwhale

Items in link_auths are matched against the URIs in Links automatically. An s3-typed link_auth matches Links by looking up the bucket and endpoint.

    - - + + \ No newline at end of file diff --git a/0.5.12/swcli/index.html b/0.5.12/swcli/index.html index b131dc69a..b494ae7ac 100644 --- a/0.5.12/swcli/index.html +++ b/0.5.12/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Starwhale Client (swcli) User Guide

The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure python3 (requires Python 3.7 ~ 3.11) so that it can be easily installed with the pip command. Currently, swcli only supports Linux and macOS; Windows support is coming soon.

    - - + + \ No newline at end of file diff --git a/0.5.12/swcli/installation/index.html b/0.5.12/swcli/installation/index.html index b5f1bdf26..f4790a3de 100644 --- a/0.5.12/swcli/installation/index.html +++ b/0.5.12/swcli/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Installation Guide

    We can use swcli to complete all tasks for Starwhale Instances. swcli is written in pure Python3 and can be installed easily with the pip command. Here are some installation tips that can help you get a clean, unambiguous swcli Python environment without dependency conflicts.

    Installing Advice

    DO NOT install Starwhale in your system's global Python environment. It will cause a python dependency conflict problem.

    Prerequisites

    • Python 3.7 ~ 3.11
    • Linux or macOS
    • Conda (optional)

    In the Ubuntu system, you can run the following commands:

    sudo apt-get install python3 python3-venv python3-pip

    #If you want to install multi python versions
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.7 python3.8 python3.9 python3-pip python3-venv python3.8-venv python3.7-venv python3.9-venv

    swcli works on macOS. If you run into issues with the default system Python3 on macOS, try installing Python3 through the homebrew:

    brew install python3

    Install swcli

    Install with venv

    python3 -m venv ~/.cache/venv/starwhale
    source ~/.cache/venv/starwhale/bin/activate
    python3 -m pip install starwhale

    swcli --version

    sudo rm -rf /usr/local/bin/swcli
    sudo ln -s `which swcli` /usr/local/bin/

    Install with conda

    conda create --name starwhale --yes  python=3.9
    conda activate starwhale
    python3 -m pip install starwhale

    swcli --version

    sudo rm -rf /usr/local/bin/swcli
    sudo ln -s `which swcli` /usr/local/bin/

    👏 Now, you can use swcli in the global environment.

    Install for the special scenarios

    # for Audio processing
    python -m pip install starwhale[audio]

    # for Image processing
    python -m pip install starwhale[pillow]

    # for swcli model server command
    python -m pip install starwhale[server]

    # for built-in online serving
    python -m pip install starwhale[online-serve]

    # install all dependencies
    python -m pip install starwhale[all]

    Update swcli

    #for venv
    python3 -m pip install --upgrade starwhale

    #for conda
    conda run -n starwhale python3 -m pip install --upgrade starwhale

    Uninstall swcli

    python3 -m pip uninstall starwhale

    rm -rf ~/.config/starwhale
    rm -rf ~/.starwhale
    - - + + \ No newline at end of file diff --git a/0.5.12/swcli/swignore/index.html b/0.5.12/swcli/swignore/index.html index d42177934..8b7ca8ce5 100644 --- a/0.5.12/swcli/swignore/index.html +++ b/0.5.12/swcli/swignore/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    About the .swignore file

    The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore file is mainly used in the Starwhale Model building process. By default, the swcli model build command or the starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.

    PATTERN FORMAT

    • Each line in a swignore file specifies a pattern, which matches files and directories.
    • A blank line matches no files, so it can serve as a separator for readability.
    • An asterisk * matches anything except a slash.
    • A line starting with # serves as a comment.
    • Wildcard expressions are supported, for example: *.jpg, *.png.

    Automatically ignored files or dirs

    If you want to include the automatically ignored files or dirs, you can add the --add-all option to the swcli model build command.

    • __pycache__/
    • *.py[cod]
    • *$py.class
    • venv installation dir
    • conda installation dir

    Example

    Here is the .swignore file used in the MNIST example:

    venv/*
    .git/*
    .history*
    .vscode/*
    .venv/*
    data/*
    .idea/*
    *.py[cod]
    - - + + \ No newline at end of file diff --git a/0.5.12/swcli/uri/index.html b/0.5.12/swcli/uri/index.html index eca0764c9..7f44ba669 100644 --- a/0.5.12/swcli/uri/index.html +++ b/0.5.12/swcli/uri/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.5.12

    Starwhale Resources URI

    tip

    Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.

    concepts-org.jpg

    Instance URI

    Instance URI can be either:

    • local: standalone instance.
    • [http(s)://]<hostname or ip>[:<port>]: cloud instance with HTTP address.
    • [cloud://]<cloud alias>: cloud or server instance with an alias name, which can be configured in the instance login phase.
    caution

    "local" is different from "localhost". The former means the local standalone instance without a controller, while the latter implies a controller listening at the default port 8082 on the localhost.

    Example:

    # log in Starwhale Cloud; the alias is swcloud
    swcli instance login --username <your account name> --password <your password> https://cloud.starwhale.ai --alias swcloud

    # copy a model from the local instance to the cloud instance
    swcli model copy mnist/version/latest swcloud/project/<your account name>:demo

    # copy a runtime to a Starwhale Server instance: http://localhost:8081
    swcli runtime copy pytorch/version/v1 http://localhost:8081/project/<your account name>:demo

    Project URI

    Project URI is in the format [<Instance URI>/project/]<project name>. If the instance URI is not specified, use the current instance instead.

    Example:

    swcli project select self   # select the self project in the current instance
    swcli project info local/project/self # inspect self project info in the local instance

    Model/Dataset/Runtime URI

    • Model URI: [<Project URI>/model/]<model name>[/version/<version id|tag>].
    • Dataset URI: [<Project URI>/dataset/]<dataset name>[/version/<version id|tag>].
    • Runtime URI: [<Project URI>/runtime/]<runtime name>[/version/<version id|tag>].
    tip
    • swcli supports human-friendly short version id. You can type the first few characters of the version id, provided it is at least four characters long and unambiguous. However, the recover command must use the complete version id.
    • If the project URI is not specified, the default project will be used.
    • You can always use the version tag instead of the version id.

    Example:

    swcli model info mnist/version/hbtdenjxgm4ggnrtmftdgyjzm43tioi  # inspect model info, model name: mnist, version:hbtdenjxgm4ggnrtmftdgyjzm43tioi
    swcli model remove mnist/version/hbtdenj # short version
    swcli model info mnist # inspect mnist model info
    swcli model run mnist --runtime pytorch-mnist --dataset mnist # use the default latest tag

    Job URI

    • format: [<Project URI>/job/]<job id>.
    • If the project URI is not specified, the default project will be used.

    Example:

    swcli job info mezdayjzge3w   # Inspect mezdayjzge3w version in default instance and default project
    swcli job info local/project/self/job/mezday # Inspect the local instance, self project, with short job id:mezday

    The default instance

    When the instance part of a project URI is omitted, the default instance is used instead. The default instance is the one selected by the swcli instance login or swcli instance use command.

    The default project

    When the project parts of Model/Dataset/Runtime/Evaluation URIs are omitted, the default project is used instead. The default project is the one selected by the swcli project use command.

    - - + + \ No newline at end of file diff --git a/0.6.0/cloud/billing/bills/index.html b/0.6.0/cloud/billing/bills/index.html index 58b085be7..3242c898a 100644 --- a/0.6.0/cloud/billing/bills/index.html +++ b/0.6.0/cloud/billing/bills/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/cloud/billing/index.html b/0.6.0/cloud/billing/index.html index 064170089..a3dfbc364 100644 --- a/0.6.0/cloud/billing/index.html +++ b/0.6.0/cloud/billing/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/cloud/billing/recharge/index.html b/0.6.0/cloud/billing/recharge/index.html index 2c091d2d1..7f4b5e688 100644 --- a/0.6.0/cloud/billing/recharge/index.html +++ b/0.6.0/cloud/billing/recharge/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/cloud/billing/refund/index.html b/0.6.0/cloud/billing/refund/index.html index 7e4975255..5f118bebe 100644 --- a/0.6.0/cloud/billing/refund/index.html +++ b/0.6.0/cloud/billing/refund/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/cloud/billing/voucher/index.html b/0.6.0/cloud/billing/voucher/index.html index 1bfff0fc9..15e587c20 100644 --- a/0.6.0/cloud/billing/voucher/index.html +++ b/0.6.0/cloud/billing/voucher/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/cloud/index.html b/0.6.0/cloud/index.html index e917f78a2..9e73d5bb3 100644 --- a/0.6.0/cloud/index.html +++ b/0.6.0/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Cloud User Guide

    Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is https://cloud.starwhale.cn.

    - - + + \ No newline at end of file diff --git a/0.6.0/community/contribute/index.html b/0.6.0/community/contribute/index.html index 730d80c33..bcfa23ba5 100644 --- a/0.6.0/community/contribute/index.html +++ b/0.6.0/community/contribute/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Contribute to Starwhale

    Getting Involved/Contributing

    We welcome and encourage all contributions to Starwhale, including and not limited to:

    • Describe the problems encountered during use.
    • Submit feature request.
    • Discuss in Slack and Github Issues.
    • Code Review.
    • Improve docs, tutorials and examples.
    • Fix Bug.
    • Add Test Case.
    • Improve code readability and code comments.
    • Develop new features.
    • Write enhancement proposal.

    You can get involved, get updates and contact Starwhale developers in the following ways:

    Starwhale Resources

    Code Structure

    • client: swcli and Python SDK with Pure Python3, which includes all Standalone Instance features.
      • api: Python SDK.
      • cli: Command Line Interface entrypoint.
      • base: Python base abstract.
      • core: Starwhale core concepts, including Dataset, Model, Runtime, Project, Job, Evaluation, etc.
      • utils: Python utilities lib.
    • console: frontend with React + TypeScript.
    • server: Starwhale Controller in Java, which includes all Starwhale Cloud Instance backend APIs.
    • docker: Helm Charts and Dockerfiles.
    • docs: Official Starwhale documentation.
    • example: Example code.
    • scripts: Bash and Python scripts for E2E testing, software releases, etc.

    Fork and clone the repository

    You will need to fork the code of Starwhale repository and clone it to your local machine.

    • Fork the Starwhale repository: Fork the Starwhale GitHub repo. For more usage details, please refer to: Fork a repo

    • Install Git-LFS: Git-LFS

       git lfs install
    • Clone code to local machine

      git clone https://github.com/${your username}/starwhale.git

    Development environment for Standalone Instance

    Standalone Instance is written in Python3. When you want to modify swcli and the SDK, you need to set up the development environment.

    Standalone development environment prerequisites

    • OS: Linux or macOS
    • Python: 3.7~3.11
    • Docker: >=19.03(optional)
    • Python isolated env tools: Python venv, virtualenv, conda, etc.

    Building from source code

    Based on the previous step, the code has been cloned to the local directory starwhale; enter the client subdirectory:

    cd starwhale/client

    Create an isolated python environment with conda:

    conda create -n starwhale-dev python=3.8 -y
    conda activate starwhale-dev

    Install client package and python dependencies into the starwhale-dev environment:

    make install-sw
    make install-dev-req

    Validate with the swcli --version command. In the development environment, the version is 0.0.0.dev0:

    ❯ swcli --version
    swcli, version 0.0.0.dev0

    ❯ which swcli
    /home/username/anaconda3/envs/starwhale-dev/bin/swcli

    Modifying the code

    When you modify the code, you do not need to install the Python package again (i.e., run the make install-sw command). The .editorconfig file will be recognized by most IDEs and code editors, which helps maintain consistent coding styles across developers.

    Lint and Test

    Run unit tests, E2E tests, mypy lint, flake8 lint and isort checks in the starwhale directory.

    make client-all-check

    Development environment for Cloud Instance

    Cloud Instance is written in Java(backend) and React+TypeScript(frontend).

    Development environment for Console

    Development environment for Server

    • Language: Java
    • Build tool: Maven
    • Development framework: Spring Boot + MyBatis
    • Unit test framework: JUnit 5
      • Mockito used for mocking
      • Hamcrest used for assertion
      • Testcontainers used for providing lightweight, throwaway instances of common databases, Selenium web browsers that can run in a Docker container.
    • Code style check tool: maven-checkstyle-plugin

    Server development environment prerequisites

    • OS: Linux, macOS or Windows
    • Docker: >=19.03
    • JDK: >=11
    • Maven: >=3.8.1
    • Mysql: >=8.0.29
    • Minio
    • Kubernetes cluster/Minikube(If you don't have a k8s cluster, you can use Minikube as an alternative for development and debugging)

    Modify the code and add unit tests

    Now you can enter the corresponding module to modify and adjust the code on the server side. The main business code directory is src/main/java, and the unit test directory is src/test/java.

    Execute code check and run unit tests

    cd starwhale/server
    mvn clean test

    Deploy the server at local machine

    • Dependent services that need to be deployed

      • Minikube (optional: Minikube can be used when there is no k8s cluster; see the installation doc: Minikube)

        minikube start
        minikube addons enable ingress
        minikube addons enable ingress-dns
      • Mysql

        docker run --name sw-mysql -d \
        -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=starwhale \
        -e MYSQL_USER=starwhale \
        -e MYSQL_PASSWORD=starwhale \
        -e MYSQL_DATABASE=starwhale \
        mysql:latest
      • Minio

        docker run --name minio -d \
        -p 9000:9000 --publish 9001:9001 \
        -e MINIO_DEFAULT_BUCKETS='starwhale' \
        -e MINIO_ROOT_USER="minioadmin" \
        -e MINIO_ROOT_PASSWORD="minioadmin" \
        bitnami/minio:latest
    • Package server program

      If you need to deploy the front-end together with the server, you can run the front-end build first and then execute mvn clean package; the compiled front-end files will be packaged automatically.

      Use the following command to package the program

        cd starwhale/server
      mvn clean package
    • Specify the environment required for server startup

      # Minio env
      export SW_STORAGE_ENDPOINT=http://${Minio IP,default is:127.0.0.1}:9000
      export SW_STORAGE_BUCKET=${Minio bucket,default is:starwhale}
      export SW_STORAGE_ACCESSKEY=${Minio accessKey,default is:starwhale}
      export SW_STORAGE_SECRETKEY=${Minio secretKey,default is:starwhale}
      export SW_STORAGE_REGION=${Minio region,default is:local}
      # kubernetes env
      export KUBECONFIG=${the '.kube' file path}\.kube\config

      export SW_INSTANCE_URI=http://${Server IP}:8082
      export SW_METADATA_STORAGE_IP=${Mysql IP,default: 127.0.0.1}
      export SW_METADATA_STORAGE_PORT=${Mysql port,default: 3306}
      export SW_METADATA_STORAGE_DB=${Mysql dbname,default: starwhale}
      export SW_METADATA_STORAGE_USER=${Mysql user,default: starwhale}
      export SW_METADATA_STORAGE_PASSWORD=${user password,default: starwhale}
    • Deploy server service

      You can use the IDE or the command to deploy.

      java -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar
    • Debug

      There are two ways to debug the modified function:

      • Use swagger-ui for interface debugging: visit /swagger-ui/index.html to find the corresponding API
      • Debug the corresponding function directly in the ui (provided that the front-end code has been built in advance according to the instructions when packaging)
    - - + + \ No newline at end of file diff --git a/0.6.0/concepts/index.html b/0.6.0/concepts/index.html index 6d9b2e926..a0747024c 100644 --- a/0.6.0/concepts/index.html +++ b/0.6.0/concepts/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/concepts/names/index.html b/0.6.0/concepts/names/index.html index 28b151518..7e709305a 100644 --- a/0.6.0/concepts/names/index.html +++ b/0.6.0/concepts/names/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Names in Starwhale

    Names refer to project names, model names, dataset names, runtime names, and tag names.

    Names Limitation

    • Names are case-insensitive.
    • A name MUST only consist of letters A-Z a-z, digits 0-9, the hyphen character -, the dot character ., and the underscore character _.
    • A name should always start with a letter or the _ character.
    • The maximum length of a name is 80 characters.

    Names uniqueness requirement

    • The resource name should be a unique string within its owner. For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project.
    • The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.
    • Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.
    • Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".
    • Garbage-collected resources' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".
    - - + + \ No newline at end of file diff --git a/0.6.0/concepts/project/index.html b/0.6.0/concepts/project/index.html index be9188961..b9c25acb2 100644 --- a/0.6.0/concepts/project/index.html +++ b/0.6.0/concepts/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Project in Starwhale

    "Project" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.

    Starwhale Server/Cloud projects are grouped by accounts. Starwhale Standalone does not have accounts, so you will not see any account name prefix in Starwhale Standalone projects. Starwhale Server/Cloud projects can be either "public" or "private". A public project means that all users on the same instance are assigned the "guest" role to the project by default. For more information about roles, see Roles and permissions in Starwhale.

    A self project is created automatically and configured as the default project in Starwhale Standalone.

    - - + + \ No newline at end of file diff --git a/0.6.0/concepts/roles-permissions/index.html b/0.6.0/concepts/roles-permissions/index.html index b29ce481b..4428871cd 100644 --- a/0.6.0/concepts/roles-permissions/index.html +++ b/0.6.0/concepts/roles-permissions/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Roles and permissions in Starwhale

    Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions; Starwhale Standalone does not. The Administrator role is automatically created and assigned to the user "admin". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.

    Projects have three roles:

    • Admin - Project administrators can read and write project data and assign project roles to users.
    • Maintainer - Project maintainers can read and write project data.
    • Guest - Project guests can only read project data.
    The permissions of each role are as follows:

    • Manage project members: Admin
    • Edit project: Admin, Maintainer
    • View project: Admin, Maintainer, Guest
    • Create evaluations: Admin, Maintainer
    • Remove evaluations: Admin, Maintainer
    • View evaluations: Admin, Maintainer, Guest
    • Create datasets: Admin, Maintainer
    • Update datasets: Admin, Maintainer
    • Remove datasets: Admin, Maintainer
    • View datasets: Admin, Maintainer, Guest
    • Create models: Admin, Maintainer
    • Update models: Admin, Maintainer
    • Remove models: Admin, Maintainer
    • View models: Admin, Maintainer, Guest
    • Create runtimes: Admin, Maintainer
    • Update runtimes: Admin, Maintainer
    • Remove runtimes: Admin, Maintainer
    • View runtimes: Admin, Maintainer, Guest

    The user who creates a project becomes the first project administrator. They can assign roles to other users later.

    - - + + \ No newline at end of file diff --git a/0.6.0/concepts/versioning/index.html b/0.6.0/concepts/versioning/index.html index 9d579821a..0299c56ea 100644 --- a/0.6.0/concepts/versioning/index.html +++ b/0.6.0/concepts/versioning/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Resource versioning in Starwhale

    • Starwhale manages the history of all models, datasets, and runtimes. Every update to a specific resource appends a new version of the history.
    • Versions are identified by a version id which is a random string generated automatically by Starwhale and are ordered by their creation time.
    • Versions can have tags. Starwhale uses version tags to provide a human-friendly representation of versions. By default, Starwhale attaches a default tag to each version. The default tag is the letter "v", followed by a number. For each versioned resource, the first version tag is always tagged with "v0", the second version is tagged with "v1", and so on. And there is a special tag "latest" that always points to the last version. When a version is removed, its default tag will not be reused. For example, there is a model with tags "v0, v1, v2". When "v2" is removed, tags will be "v0, v1". And the following tag will be "v3" instead of "v2" again. You can attach your own tags to any version and remove them at any time.
    • Starwhale uses a linear history model. There is neither branch nor cycle in history.
    • History cannot be rolled back. When a version is reverted, Starwhale clones it and appends it as a new version to the end of the history. Versions in history can be manually removed and recovered.
    - - + + \ No newline at end of file diff --git a/0.6.0/dataset/index.html b/0.6.0/dataset/index.html index 3b9794848..2a515cce0 100644 --- a/0.6.0/dataset/index.html +++ b/0.6.0/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Dataset User Guide

    overview

    Design Overview

    Starwhale Dataset Positioning

    The Starwhale Dataset contains three core stages: data construction, data loading, and data visualization. It is a data management tool for the ML/DL field. Starwhale Dataset can directly use the environment built by Starwhale Runtime, and can be seamlessly integrated with Starwhale Model and Starwhale Evaluation. It is an important part of the Starwhale MLOps toolchain.

    According to the classification of MLOps Roles in Machine Learning Operations (MLOps): Overview, Definition, and Architecture, the three stages of Starwhale Dataset target the following user groups:

    • Data construction: Data Engineer, Data Scientist
    • Data loading: Data Scientist, ML Developer
    • Data visualization: Data Engineer, Data Scientist, ML Developer

    mlops-users

    Core Functions

    • Efficient loading: The original dataset files are stored in external storage such as OSS or NAS, and are loaded on demand without having to save to disk.
    • Simple construction: Supports one-click dataset construction from Image/Video/Audio directories, json files and Huggingface datasets, and also supports writing Python code to build completely custom datasets.
    • Versioning: Can perform version tracking, data append and other operations, and avoid duplicate data storage through the internally abstracted ObjectStore.
    • Sharing: Implement bidirectional dataset sharing between Standalone instances and Cloud/Server instances through the swcli dataset copy command.
    • Visualization: The web interface of Cloud/Server instances can present multi-dimensional, multi-type data visualization of datasets.
    • Artifact storage: The Standalone instance can store locally built or distributed swds series files, while the Cloud/Server instance uses object storage to provide centralized swds artifact storage.
    • Seamless Starwhale integration: Starwhale Dataset can use the runtime environment built by Starwhale Runtime to build datasets. Starwhale Evaluation and Starwhale Model can directly specify the dataset through the --dataset parameter to complete automatic data loading, which facilitates scenarios such as inference and model evaluation.

    Key Elements

    • swds virtual package file: swds is different from swmp and swrt. It is not a single packaged file, but a virtual concept that specifically refers to a directory that contains dataset-related files for a version of the Starwhale dataset, including _manifest.yaml, dataset.yaml, dataset build Python scripts, and data file links, etc. You can use the swcli dataset info command to view where the swds is located. swds is the abbreviation of Starwhale Dataset.

    swds-tree.png

    • swcli dataset command line: A set of dataset-related commands, including construction, distribution and management functions. See CLI Reference for details.
    • dataset.yaml configuration file: Describes the dataset construction process. It can be completely omitted and specified through swcli dataset build parameters. dataset.yaml can be considered as a configuration file representation of the swcli dataset build command line parameters. swcli dataset build parameters take precedence over dataset.yaml.
    • Dataset Python SDK: Includes data construction, data loading, and several predefined data types. See Python SDK for details.
    • Python scripts for dataset construction: A series of scripts written using the Starwhale Python SDK to build datasets.

    Best Practices

    The construction of Starwhale Dataset is performed independently. If third-party libraries need to be introduced when writing construction scripts, using Starwhale Runtime can simplify Python dependency management and ensure reproducible dataset construction. The Starwhale platform will provide as many built-in open-source datasets as possible, which users can copy for immediate use.

    Command Line Grouping

    The Starwhale Dataset command line can be divided into the following stages from the perspective of usage phases:

    • Construction phase
      • swcli dataset build
    • Visualization phase
      • swcli dataset diff
      • swcli dataset head
    • Distribution phase
      • swcli dataset copy
    • Basic management
      • swcli dataset tag
      • swcli dataset info
      • swcli dataset history
      • swcli dataset list
      • swcli dataset summary
      • swcli dataset remove
      • swcli dataset recover

    Starwhale Dataset Viewer

    Currently, the Web UI in the Cloud/Server instance can visually display the dataset. Only the DataTypes from the Python SDK can be correctly interpreted by the frontend, with mappings as follows:

    • Image: Display thumbnails, enlarged images, MASK type images, support image/png, image/jpeg, image/webp, image/svg+xml, image/gif, image/apng, image/avif formats.
    • Audio: Displayed as an audio wave graph, playable, supports audio/mp3 and audio/wav formats.
    • Video: Displayed as a video, playable, supports video/mp4, video/avi and video/webm formats.
    • GrayscaleImage: Display grayscale images, support x/grayscale format.
    • Text: Display text, support text/plain format, set encoding format, default is utf-8.
    • Binary and Bytes: Not supported for display currently.
    • Link: The above multimedia types all support specifying links as storage paths.

    Starwhale Dataset Data Format

    The dataset consists of multiple rows, each row being a sample, each sample containing several features. The features have a dict-like structure with some simple restrictions [L]:

    • The dict keys must be str type.
    • The dict values must be Python basic types like int/float/bool/str/bytes/dict/list/tuple, or Starwhale built-in data types.
    • For the same key across different samples, the value types do not need to stay the same.
    • If the value is a list or tuple, the element data types must be consistent.
    • For dict values, the restrictions are the same as [L].

    Example:

    {
        "img": GrayscaleImage(
            link=Link(
                "123",
                offset=32,
                size=784,
                _swds_bin_offset=0,
                _swds_bin_size=8160,
            )
        ),
        "label": 0,
    }

    File Data Handling

    Starwhale Dataset handles file type data in a special way. You can ignore this section if you don't care about Starwhale's implementation.

    According to actual usage scenarios, Starwhale Dataset has two ways of handling file-type data, both based on the base class starwhale.BaseArtifact:

    • swds-bin: Starwhale merges the data into several large files in its own binary format (swds-bin), which can efficiently perform indexing, slicing and loading.
    • remote-link: If the user's original data is stored in some external storage such as OSS or NAS, with a lot of original data that is inconvenient to move or has already been encapsulated by some internal dataset implementation, then you only need to use links in the data to establish indexes.

    In the same Starwhale dataset, two types of data can be included simultaneously.
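
    As a rough Python sketch (the dataset name, the byte content, and the S3 path below are made up for illustration): raw bytes appended to a dataset are packed into swds-bin files, while values wrapped in starwhale.Link only record an index to the external object.

    from starwhale import dataset, Link

    ds = dataset("mixed-demo")  # hypothetical dataset name
    # swds-bin: the raw bytes are merged into Starwhale's own binary files
    ds.append({"img": b"<raw image bytes>", "label": 0})
    # remote-link: only an index to the external object is stored
    ds.append({"img": Link("s3://10.131.0.1:9000/users/cat.png"), "label": 1})
    ds.commit()
    ds.close()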

    - - + + \ No newline at end of file diff --git a/0.6.0/dataset/yaml/index.html b/0.6.0/dataset/yaml/index.html index 2c3f0cd58..18280bafc 100644 --- a/0.6.0/dataset/yaml/index.html +++ b/0.6.0/dataset/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    The dataset.yaml Specification

    tip

    dataset.yaml is optional for the swcli dataset build command.

    Building Starwhale Dataset uses dataset.yaml. Omitting dataset.yaml allows describing related configurations in swcli dataset build command line parameters. dataset.yaml can be considered as a file-based representation of the build command line configuration.

    YAML Field Descriptions

    • name (String, required): Name of the Starwhale Dataset.
    • handler (String, required): Importable address of a class that inherits starwhale.SWDSBinBuildExecutor, starwhale.UserRawBuildExecutor or starwhale.BuildExecutor, or of a function that returns a Generator or iterable object. The format is {module path}:{class name|function name}.
    • desc (String, optional, default: ""): Dataset description.
    • version (String, optional, default: 1.0): dataset.yaml format version; currently only "1.0" is supported.
    • attr (Dict, optional): Dataset build parameters.
    • attr.volume_size (Int or Str, optional, default: 64MB): Size of each data file in the swds-bin dataset. Can be a number in bytes, or a number plus a unit like 64M, 1GB, etc.
    • attr.alignment_size (Int or Str, optional, default: 128): Data alignment size of each data block in the swds-bin dataset. If set to 4k and a data block is 7.9K, 0.1K of padding will be added to make the block size a multiple of alignment_size, improving page size and read efficiency.

    Examples

    Simplest Example

    name: helloworld
    handler: dataset:ExampleProcessExecutor

    The helloworld dataset uses the ExampleProcessExecutor class in dataset.py (located in the same directory as dataset.yaml) to build data.

    MNIST Dataset Build Example

    name: mnist
    handler: mnist.dataset:DatasetProcessExecutor
    desc: MNIST data and label test dataset
    attr:
      alignment_size: 128
      volume_size: 4M

    Example with handler as a generator function

    dataset.yaml contents:

    name: helloworld
    handler: dataset:iter_item

    dataset.py contents:

    def iter_item():
        for i in range(10):
            yield {"img": f"image-{i}".encode(), "label": i}
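
    For reference, a roughly equivalent sketch using the Starwhale Python SDK (assuming the starwhale.dataset API), which appends the same ten samples without any dataset.yaml:

    from starwhale import dataset

    ds = dataset("helloworld")
    for i in range(10):
        ds.append({"img": f"image-{i}".encode(), "label": i})
    ds.commit()
    ds.close()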
    - - + + \ No newline at end of file diff --git a/0.6.0/evaluation/heterogeneous/node-able/index.html b/0.6.0/evaluation/heterogeneous/node-able/index.html index 993f8073f..500535d87 100644 --- a/0.6.0/evaluation/heterogeneous/node-able/index.html +++ b/0.6.0/evaluation/heterogeneous/node-able/index.html @@ -10,8 +10,8 @@ - - + +
    @@ -23,7 +23,7 @@ Refer to the link.

    Take v0.13.0-rc.1 as an example:

    kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0-rc.1/nvidia-device-plugin.yml

    Note: This operation will run the NVIDIA device plugin on all Kubernetes nodes. If it has been configured before, it will be updated. Please evaluate the image version used carefully.

  • Confirm the GPU can be discovered and used in the cluster. Refer to the command below: if nvidia.com/gpu appears in the Capacity of the Jetson node, the GPU is recognized normally by the Kubernetes cluster.

    # kubectl describe node orin | grep -A15 Capacity
    Capacity:
    cpu: 12
    ephemeral-storage: 59549612Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 31357608Ki
    nvidia.com/gpu: 1
    pods: 110
  • Build and Use Custom Images

    The l4t-jetpack image mentioned earlier covers our general use. If we need a more streamlined image or one with more features, we can build a custom image based on l4t-base. For relevant Dockerfiles, refer to the image Starwhale made for mnist.

    - - + + \ No newline at end of file diff --git a/0.6.0/evaluation/heterogeneous/virtual-node/index.html b/0.6.0/evaluation/heterogeneous/virtual-node/index.html index df398281d..84c67cf5a 100644 --- a/0.6.0/evaluation/heterogeneous/virtual-node/index.html +++ b/0.6.0/evaluation/heterogeneous/virtual-node/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Virtual Kubelet as Kubernetes nodes

    Introduction

    Virtual Kubelet is an open source framework that can simulate a K8s node by mimicking the communication between kubelet and the K8s cluster.

    This solution is widely used by major cloud vendors for serverless container cluster solutions, such as Alibaba Cloud's ASK, Amazon's AWS Fargate, etc.

    Principles

    The virtual kubelet framework implements the related interfaces of kubelet for Node. With simple configuration, it can simulate a node.

    We only need to implement the PodLifecycleHandler interface to support:

    • Create, update, delete Pod
    • Get Pod status
    • Get Container logs

    Adding Devices to the Cluster

    If our device cannot serve as a K8s node due to resource constraints or other situations, we can manage these devices by using virtual kubelet to simulate a proxy node.

    The control flow between Starwhale Controller and the device is as follows:


    ┌──────────────────────┐ ┌────────────────┐ ┌─────────────────┐ ┌────────────┐
    │ Starwhale Controller ├─────►│ K8s API Server ├────►│ virtual kubelet ├────►│ Our device │
    └──────────────────────┘ └────────────────┘ └─────────────────┘ └────────────┘

    Virtual kubelet converts the Pod orchestration information sent by Starwhale Controller into control behaviors for the device, such as executing a command via ssh on the device, or sending a message via USB or serial port.

    Below is an example of using virtual kubelet to control an SSH-enabled device that has not joined the cluster:

    1. Prepare certificates
    • Create the OpenSSL config file csr.conf (used by the openssl command below) with the following content:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name

    [req_distinguished_name]

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names

    [alt_names]
    IP = 1.2.3.4
    • Generate the certificate:
    openssl genrsa -out vklet-key.pem 2048
    openssl req -new -key vklet-key.pem -out vklet.csr -subj '/CN=system:node:1.2.3.4;/C=US/O=system:nodes' -config ./csr.conf
    • Submit the certificate:
    cat vklet.csr| base64 | tr -d "\n" # output as content of spec.request in csr.yaml

    csr.yaml:

    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: vklet
    spec:
      request: ******************
      signerName: kubernetes.io/kube-apiserver-client
      expirationSeconds: 1086400
      usages:
      - client auth
    kubectl apply -f csr.yaml
    kubectl certificate approve vklet
    kubectl get csr vklet -o jsonpath='{.status.certificate}'| base64 -d > vklet-cert.pem

    Now we have vklet-cert.pem.

    • Compile virtual kubelet:
    git clone https://github.com/virtual-kubelet/virtual-kubelet
    cd virtual-kubelet && make build

    Create the node configuration file mock.json:

    {
      "virtual-kubelet": {
        "cpu": "100",
        "memory": "100Gi",
        "pods": "100"
      }
    }

    Start virtual kubelet:

    export APISERVER_CERT_LOCATION=/path/to/vklet-cert.pem
    export APISERVER_KEY_LOCATION=/path/to/vklet-key.pem
    export KUBECONFIG=/path/to/kubeconfig
    virtual-kubelet --provider mock --provider-config /path/to/mock.json

    Now we have simulated a node with 100 cores + 100GB memory using virtual kubelet.

    • Add a PodLifecycleHandler implementation to convert the important information in Pod orchestration into ssh command execution, and to collect logs for the Starwhale Controller to retrieve.

    See ssh executor for a concrete implementation.

    - - + + \ No newline at end of file diff --git a/0.6.0/evaluation/index.html b/0.6.0/evaluation/index.html index 6fbc160bb..a465da107 100644 --- a/0.6.0/evaluation/index.html +++ b/0.6.0/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Model Evaluation

    Design Overview

    Starwhale Evaluation Positioning

    The goal of Starwhale Evaluation is to provide end-to-end management for model evaluation, including creating Jobs, distributing Tasks, viewing model evaluation reports and basic management. Starwhale Evaluation is a specific application of Starwhale Model, Starwhale Dataset, and Starwhale Runtime in the model evaluation scenario. Starwhale Evaluation is part of the MLOps toolchain built by Starwhale. More applications like Starwhale Model Serving, Starwhale Training will be included in the future.

    Core Features

    • Visualization: Both swcli and the Web UI provide visualization of model evaluation results, supporting comparison of multiple results. Users can also customize logging of intermediate processes.

    • Multi-scenario Adaptation: Whether it's a notebook, desktop or distributed cluster environment, the same commands, Python scripts, artifacts and operations can be used for model evaluation. This satisfies different computational power and data volume requirements.

    • Seamless Starwhale Integration: Leverage Starwhale Runtime for the runtime environment, Starwhale Dataset as data input, and run models from Starwhale Model. Configuration is simple whether using swcli, Python SDK or Cloud/Server instance Web UI.

    Key Elements

    • swcli model run: Command line for bulk offline model evaluation.
    • swcli model serve: Command line for online model evaluation.

    Best Practices

    Command Line Grouping

    From the perspective of completing an end-to-end Starwhale Evaluation workflow, commands can be grouped as:

    • Preparation Stage
      • swcli dataset build or Starwhale Dataset Python SDK
      • swcli model build or Starwhale Model Python SDK
      • swcli runtime build
    • Evaluation Stage
      • swcli model run
      • swcli model serve
    • Results Stage
      • swcli job info
    • Basic Management
      • swcli job list
      • swcli job remove
      • swcli job recover

    Abstraction job-step-task

    • job: A model evaluation task is a job, which contains one or more steps.

    • step: A step corresponds to a stage in the evaluation process. With the default PipelineHandler, steps are predict and evaluate. For custom evaluation processes using @handler, @evaluation.predict, @evaluation.evaluate decorators, steps are the decorated functions. Steps can have dependencies, forming a DAG. A step contains one or more tasks. Tasks in the same step have the same logic but different inputs. A common approach is to split the dataset into multiple parts, with each part passed to a task. Tasks can run in parallel.

    • task: A task is the final running entity. In Cloud/Server instances, a task is a container in a Pod. In Standalone instances, a task is a Python Thread.

    The job-step-task abstraction is the basis for implementing distributed runs in Starwhale Evaluation.
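
    For a custom evaluation process, a minimal sketch of the decorated steps might look like the following. The function bodies are placeholders; only the decorator names come from the description above, and the needs argument is an assumption about how a step dependency is declared.

    from starwhale import evaluation

    @evaluation.predict
    def predict(data):
        # a "predict" step; its tasks each receive a slice of the dataset samples
        ...

    @evaluation.evaluate(needs=[predict])
    def evaluate(predict_results):
        # an "evaluate" step that runs after the predict step finishes and aggregates its outputs
        ...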

    - - + + \ No newline at end of file diff --git a/0.6.0/faq/index.html b/0.6.0/faq/index.html index c305af21d..42d53752b 100644 --- a/0.6.0/faq/index.html +++ b/0.6.0/faq/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/getting-started/cloud/index.html b/0.6.0/getting-started/cloud/index.html index 87326cd82..2842398d7 100644 --- a/0.6.0/getting-started/cloud/index.html +++ b/0.6.0/getting-started/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Getting started with Starwhale Cloud

    Starwhale Cloud is hosted on Aliyun with the domain name https://cloud.starwhale.cn. In the future, we will launch the service on AWS with the domain name https://cloud.starwhale.ai. It's important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.

    You need to install the Starwhale Client (swcli) at first.

    Sign Up for Starwhale Cloud and create your first project

    You can either directly log in with your GitHub or Weixin account or sign up for an account. You will be asked for an account name if you log in with your GitHub or Weixin account.

    Then you can create a new project. In this tutorial, we will use the name demo for the project name.

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Login to the cloud instance

    swcli instance login --username <your account name> --password <your password> --alias swcloud https://cloud.starwhale.cn

    Copy the dataset, model, and runtime to the cloud instance

    swcli model copy mnist swcloud/project/<your account name>:demo
    swcli dataset copy mnist swcloud/project/<your account name>:demo
    swcli runtime copy pytorch swcloud/project/<your account name>:demo

    Run an evaluation with the web UI

    console-create-job.gif

    Congratulations! You have completed the Starwhale Cloud Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.0/getting-started/index.html b/0.6.0/getting-started/index.html index 16f2155cb..29bb5cc89 100644 --- a/0.6.0/getting-started/index.html +++ b/0.6.0/getting-started/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Getting started

    First, you need to install the Starwhale Client (swcli), which can be done by running the following command:

    python3 -m pip install starwhale

    For more information, see the swcli installation guide.

    Depending on your instance type, there are three getting-started guides available for you:

    • Getting started with Starwhale Standalone - This guide helps you run an MNIST evaluation on your desktop PC/laptop. It is the fastest and simplest way to get started with Starwhale.
    • Getting started with Starwhale Server - This guide helps you install Starwhale Server in your private data center and run an MNIST evaluation. At the end of the tutorial, you will have a Starwhale Server instance where you can run model evaluations on and manage your datasets and models.
    • Getting started with Starwhale Cloud - This guide helps you create an account on Starwhale Cloud and run an MNIST evaluation. It is the easiest way to experience all Starwhale features.
    - - + + \ No newline at end of file diff --git a/0.6.0/getting-started/runtime/index.html b/0.6.0/getting-started/runtime/index.html index 5f0148ef9..6bbbed3d0 100644 --- a/0.6.0/getting-started/runtime/index.html +++ b/0.6.0/getting-started/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Getting Started with Starwhale Runtime

    This article demonstrates how to build a Starwhale Runtime of the Pytorch environment and how to use it. This runtime can meet the dependency requirements of the six examples in Starwhale: mnist, speech commands, nmt, cifar10, ag_news, and PennFudan. Links to relevant code: example/runtime/pytorch.

    You can learn the following things from this tutorial:

    • How to build a Starwhale Runtime.
    • How to use a Starwhale Runtime in different scenarios.
    • How to release a Starwhale Runtime.

    Prerequisites

    Run the following command to clone the example code:

    git clone https://github.com/star-whale/starwhale.git
    cd starwhale/example/runtime/pytorch # for users in the mainland of China, use pytorch-cn-mirror instead.

    Build Starwhale Runtime

    ❯ swcli -vvv runtime build --yaml runtime.yaml

    Use Starwhale Runtime in the standalone instance

    Use Starwhale Runtime in the shell

    # Activate the runtime
    swcli runtime activate pytorch

    swcli runtime activate will download all python dependencies of the runtime, which may take a long time.

    All dependencies are ready in your python environment when the runtime is activated. It is similar to source venv/bin/activate of virtualenv or the conda activate command of conda. If you close the shell or switch to another shell, you need to reactivate the runtime.

    Use Starwhale Runtime in swcli

    # Use the runtime when building a Starwhale Model
    swcli model build . --runtime pytorch
    # Use the runtime when building a Starwhale Dataset
    swcli dataset build --yaml /path/to/dataset.yaml --runtime pytorch
    # Run a model evaluation with the runtime
    swcli model run --uri mnist/version/v0 --dataset mnist --runtime pytorch

    Copy Starwhale Runtime to another instance

    You can copy the runtime to a server/cloud instance, which can then be used in the server/cloud instance or downloaded by other users.

    # Copy the runtime to a server instance named 'pre-k8s'
    ❯ swcli runtime copy pytorch cloud://pre-k8s/project/starwhale
    - - + + \ No newline at end of file diff --git a/0.6.0/getting-started/server/index.html b/0.6.0/getting-started/server/index.html index 31c44f0db..0ecce8d9d 100644 --- a/0.6.0/getting-started/server/index.html +++ b/0.6.0/getting-started/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Getting started with Starwhale Server

    Install Starwhale Server

    To install Starwhale Server, see the installation guide.

    Create your first project

    Login to the server

    Open your browser and enter your server's URL in the address bar. Log in with your username (starwhale) and password (abcd1234).

    console-artifacts.gif

    Create a new project

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Copy the dataset, the model, and the runtime to the server

    swcli instance login --username <your username> --password <your password> --alias server <Your Server URL>

    swcli model copy mnist server/project/demo
    swcli dataset copy mnist server/project/demo
    swcli runtime copy pytorch server/project/demo

    Use the Web UI to run an evaluation

    Navigate to the "demo" project in your browser and create a new evaluation job.

    console-create-job.gif

    Congratulations! You have completed the Starwhale Server Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.0/getting-started/standalone/index.html b/0.6.0/getting-started/standalone/index.html index 531c95bf1..fab31eb57 100644 --- a/0.6.0/getting-started/standalone/index.html +++ b/0.6.0/getting-started/standalone/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Getting started with Starwhale Standalone

    When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.

    We also provide a Jupyter Notebook example; you can try it in Google Colab or in your local VS Code/JupyterLab.

    Downloading Examples

    Download Starwhale examples by cloning the Starwhale project via:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/star-whale/starwhale.git --depth 1
    cd starwhale

    To save time when downloading the examples, we skip git-lfs and other commit info. We will use the MNIST ML/DL "hello world" code to start your Starwhale journey. The following steps are all performed in the starwhale directory.

    Core Workflow

    Building a Pytorch Runtime

    Runtime example codes are in the example/runtime/pytorch directory.

    • Build the Starwhale runtime bundle:

      swcli runtime build --yaml example/runtime/pytorch/runtime.yaml
      tip

      When you build the runtime for the first time, creating an isolated Python environment and downloading Python dependencies can take a long time. The command execution time depends on the machine's network environment and the number of packages in runtime.yaml. Using a suitable PyPI mirror and cache config in the ~/.pip/pip.conf file is a recommended practice.

      For users in the mainland of China, the following conf file is an option:

      [global]
      cache-dir = ~/.cache/pip
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple
      extra-index-url = https://mirrors.aliyun.com/pypi/simple/
    • Check your local Starwhale Runtime:

      swcli runtime list
      swcli runtime info pytorch

    Building a Model

    Model example codes are in the example/mnist directory.

    • Download the pre-trained model file:

      cd example/mnist
      make download-model
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-model
      cd -
    • Build a Starwhale model:

      swcli model build example/mnist --runtime pytorch
    • Check your local Starwhale models:

      swcli model list
      swcli model info mnist

    Building a Dataset

    Dataset example codes are in the example/mnist directory.

    • Download the MNIST raw data:

      cd example/mnist
      make download-data
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-data
      cd -
    • Build a Starwhale dataset:

      swcli dataset build --yaml example/mnist/dataset.yaml
    • Check your local Starwhale dataset:

      swcli dataset list
      swcli dataset info mnist
      swcli dataset head mnist

    Running an Evaluation Job

    • Create an evaluation job:

      swcli -vvv model run --uri mnist --dataset mnist --runtime pytorch
    • Check the evaluation result

      swcli job list
      swcli job info $(swcli job list | grep mnist | grep success | awk '{print $1}' | head -n 1)

    Congratulations! You have completed the Starwhale Standalone Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.0/index.html b/0.6.0/index.html index 9f4fba034..5e5d549aa 100644 --- a/0.6.0/index.html +++ b/0.6.0/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    What is Starwhale

    Overview

    Starwhale is an MLOps/LLMOps platform that makes your model creation, evaluation and publication much easier. It aims to create a handy tool for data scientists and machine learning engineers.

    Starwhale helps you:

    • Keep track of your training/testing dataset history including data items and their labels, so that you can easily access them.
    • Manage your model packages that you can share across your team.
    • Run your models in different environments, either on an NVIDIA GPU server or on an embedded device like Cherry Pi.
    • Create an online service with an interactive Web UI for your models.

    Starwhale is designed to be an open platform. You can create your own plugins to meet your requirements.

    Deployment options

    Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).

    You can start using Starwhale with one of the following instance types:

    • Starwhale Standalone - Rather than a running service, Starwhale Standalone is actually a repository that resides in your local file system. It is created and managed by the Starwhale Client (swcli). You only need to install swcli to use it. Currently, each user on a single machine can have only ONE Starwhale Standalone instance. We recommend you use the Starwhale Standalone to build and test your datasets, runtime, and models before pushing them to Starwhale Server/Cloud instances.
    • Starwhale Server - Starwhale Server is a service deployed on your local server. Besides text-only results from the Starwhale Client (swcli), Starwhale Server provides Web UI for you to manage your datasets and models, evaluate your models in your local Kubernetes cluster, and review the evaluation results.
    • Starwhale Cloud - Starwhale Cloud is a managed service hosted on public clouds. By registering an account on https://cloud.starwhale.cn, you are ready to use Starwhale without needing to install, operate, and maintain your own instances. Starwhale Cloud also provides public resources for you to download, like datasets, runtimes, and models. Check the "starwhale/public" project on Starwhale Cloud for more details.

    When choosing which instance type to use, consider the following:

    • Starwhale Standalone: deployed on your laptop or any server in your data center; no maintenance required; command-line interface; not scalable.
    • Starwhale Server: deployed in your data center; maintained by yourself; Web UI and command line; scalable, depending on your Kubernetes cluster.
    • Starwhale Cloud: deployed on public cloud, like AWS or Aliyun; maintained by the Starwhale team; Web UI and command line; scalable, but currently limited by the freely available resources on the cloud.
    - - + + \ No newline at end of file diff --git a/0.6.0/model/index.html b/0.6.0/model/index.html index 0ba59c261..7c68df40a 100644 --- a/0.6.0/model/index.html +++ b/0.6.0/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Model

    overview

    A Starwhale Model is a standard format for packaging machine learning models that can be used for various purposes, like model fine-tuning, model evaluation, and online serving. A Starwhale Model contains the model file, inference codes, configuration files, and any other files required to run the model.

    Create a Starwhale Model

    There are two ways to create a Starwhale Model: by swcli or by Python SDK.

    Create a Starwhale Model by swcli

    To create a Starwhale Model by swcli, you need to define a model.yaml, which describes some required information about the model package, and run the following command:

    swcli model build . --model-yaml /path/to/model.yaml

    For more information about the command and model.yaml, see the swcli reference. model.yaml is optional for model building.

    Create a Starwhale Model by Python SDK

from starwhale import model, predict

@predict
def predict_img(data):
    ...

model.build(name="mnist", modules=[predict_img])

    Model Management

    Model Management by swcli

Command | Description
swcli model list | List all Starwhale Models in a project
swcli model info | Show detail information about a Starwhale Model
swcli model copy | Copy a Starwhale Model to another location
swcli model remove | Remove a Starwhale Model
swcli model recover | Recover a previously removed Starwhale Model

    Model Management by WebUI

    Model History

    Starwhale Models are versioned. The general rules about versions are described in Resource versioning in Starwhale.

    Model History Management by swcli

Command | Description
swcli model history | List all versions of a Starwhale Model
swcli model info | Show detail information about a Starwhale Model version
swcli model diff | Compare two versions of a Starwhale Model
swcli model copy | Copy a Starwhale Model version to a new one
swcli model remove | Remove a Starwhale Model version
swcli model recover | Recover a previously removed Starwhale Model version

    Model Evaluation

    Model Evaluation by swcli

Command | Description
swcli model run | Create an evaluation with a Starwhale Model

    The Storage Format

    The Starwhale Model is a tarball file that contains the source directory.

    - - + + \ No newline at end of file diff --git a/0.6.0/model/yaml/index.html b/0.6.0/model/yaml/index.html index f4c19e2fa..308de3e0b 100644 --- a/0.6.0/model/yaml/index.html +++ b/0.6.0/model/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    The model.yaml Specification

    tip

    model.yaml is optional for swcli model build.

    When building a Starwhale Model using the swcli model build command, you can specify a yaml file that follows a specific format via the --model-yaml parameter to simplify specifying build parameters.

    Even without specifying the --model-yaml parameter, swcli model build will automatically look for a model.yaml file under the ${workdir} directory and extract parameters from it. Parameters specified on the swcli model build command line take precedence over equivalent configurations in model.yaml, so you can think of model.yaml as a file-based representation of the build command line.

    When building a Starwhale Model using the Python SDK, the model.yaml file does not take effect.

    YAML Field Descriptions

Field | Description | Required | Type | Default
name | Name of the Starwhale Model, equivalent to the --name parameter. | No | String |
run.modules | Python modules searched during model build; multiple entry points for model execution can be specified, in Python importable path format. Equivalent to the --module parameter. | Yes | List[String] |
run.handler | Deprecated alias of run.modules; can only specify one entry point. | No | String |
version | model.yaml format version, currently only supports "1.0". | No | String | 1.0
desc | Model description, equivalent to the --desc parameter. | No | String |

    Example


name: helloworld

run:
  modules:
    - src.evaluator

desc: "example yaml"

This builds a Starwhale Model named helloworld. The build searches src/evaluator.py under the ${WORKDIR} of the swcli model build command for functions decorated with @evaluation.predict, @evaluation.evaluate or @handler, or for classes inheriting from PipelineHandler. These functions or classes are added to the list of runnable entry points for the Starwhale Model. When running the model via swcli model run or the Web UI, select the corresponding entry point (handler) to run.

    model.yaml is optional, parameters defined in yaml can also be specified via swcli command line parameters.


    swcli model build . --model-yaml model.yaml

    Is equivalent to:


    swcli model build . --name helloworld --module src.evaluator --desc "example yaml"

    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/dataset/index.html b/0.6.0/reference/sdk/dataset/index.html index c3029024c..1592158f5 100644 --- a/0.6.0/reference/sdk/dataset/index.html +++ b/0.6.0/reference/sdk/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Dataset SDK

    dataset

Gets a starwhale.Dataset object by creating a new dataset or loading an existing one.

@classmethod
def dataset(
    cls,
    uri: t.Union[str, Resource],
    create: str = _DatasetCreateMode.auto,
    readonly: bool = False,
) -> Dataset:

    Parameters

    • uri: (str or Resource, required)
      • The dataset uri or Resource object.
    • create: (str, optional)
      • The mode of dataset creating. The options are auto, empty and forbid.
        • auto mode: If the dataset already exists, creation is ignored. If it does not exist, the dataset is created automatically.
        • empty mode: If the dataset already exists, an Exception is raised; If it does not exist, an empty dataset is created. This mode ensures the creation of a new, empty dataset.
  • forbid mode: If the dataset already exists, nothing is done. If it does not exist, an Exception is raised. This mode ensures the existence of the dataset.
      • The default is auto.
    • readonly: (bool, optional)
      • For an existing dataset, you can specify the readonly=True argument to ensure the dataset is in readonly mode.
      • Default is False.

    Examples

    from starwhale import dataset, Image

    # create a new dataset named mnist, and add a row into the dataset
    # dataset("mnist") is equal to dataset("mnist", create="auto")
    ds = dataset("mnist")
ds.exists() # returns False because the "mnist" dataset does not exist yet
    ds.append({"img": Image(), "label": 1})
    ds.commit()
    ds.close()

    # load a cloud instance dataset in readonly mode
    ds = dataset("cloud://remote-instance/project/starwhale/dataset/mnist", readonly=True)
labels = [row.features.label for row in ds]
    ds.close()

    # load a read/write dataset with a specified version
    ds = dataset("mnist/version/mrrdczdbmzsw")
    ds[0].features.label = 1
    ds.commit()
    ds.close()

    # create an empty dataset
    ds = dataset("mnist-empty", create="empty")

    # ensure the dataset existence
    ds = dataset("mnist-existed", create="forbid")

    class starwhale.Dataset

    starwhale.Dataset implements the abstraction of a Starwhale dataset, and can operate on datasets in Standalone/Server/Cloud instances.

    from_huggingface

    from_huggingface is a classmethod that can convert a Huggingface dataset into a Starwhale dataset.

def from_huggingface(
    cls,
    name: str,
    repo: str,
    subset: str | None = None,
    split: str | None = None,
    revision: str = "main",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    cache: bool = True,
    tags: t.List[str] | None = None,
) -> Dataset:

    Parameters

    • name: (str, required)
      • dataset name.
• repo: (str, required)
  • The huggingface dataset repo name.
    • subset: (str, optional)
      • The subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
    • split: (str, optional)
  • The split name. If not specified, all splits of the dataset will be built.
    • revision: (str, optional)
  • The huggingface datasets revision. The default value is main.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • cache: (bool, optional)
      • Whether to use huggingface dataset cache(download + local hf dataset).
      • The default value is True.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_huggingface("mnist", "mnist")
print(myds[0])

from starwhale import Dataset
    myds = Dataset.from_huggingface("mmlu", "cais/mmlu", subset="anatomy", split="auxiliary_train", revision="7456cfb")

    from_json

    from_json is a classmethod that can convert a json text into a Starwhale dataset.

@classmethod
def from_json(
    cls,
    name: str,
    json_text: str,
    field_selector: str = "",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
) -> Dataset:

    Parameters

    • name: (str, required)
      • Dataset name.
    • json_text: (str, required)
      • A json string. The from_json function deserializes this string into Python objects to start building the Starwhale dataset.
    • field_selector: (str, optional)
  • The field from which you would like to extract dataset array items.
  • The default value is "", which indicates that the json object is an array containing all the items.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
myds = Dataset.from_json(
    name="translation",
    json_text='[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
)
print(myds[0].features.en)

from starwhale import Dataset

myds = Dataset.from_json(
    name="translation",
    json_text='{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}',
    field_selector="content.child_content"
)
print(myds[0].features["zh-cn"])

    from_folder

    from_folder is a classmethod that can read Image/Video/Audio data from a specified directory and automatically convert them into a Starwhale dataset. This function supports the following features:

    • It can recursively search the target directory and its subdirectories
    • Supports extracting three types of files:
      • image: Supports png/jpg/jpeg/webp/svg/apng image types. Image files will be converted to Starwhale.Image type.
      • video: Supports mp4/webm/avi video types. Video files will be converted to Starwhale.Video type.
      • audio: Supports mp3/wav audio types. Audio files will be converted to Starwhale.Audio type.
    • Each file corresponds to one record in the dataset, with the file stored in the file field.
    • If auto_label=True, the parent directory name will be used as the label for that record, stored in the label field. Files in the root directory will not be labeled.
    • If a txt file with the same name as an image/video/audio file exists, its content will be stored as the caption field in the dataset.
    • If metadata.csv or metadata.jsonl exists in the root directory, their content will be read automatically and associated to records by file path as meta information in the dataset.
      • metadata.csv and metadata.jsonl are mutually exclusive. An exception will be thrown if both exist.
      • Each record in metadata.csv and metadata.jsonl must contain a file_name field pointing to the file path.
      • metadata.csv and metadata.jsonl are optional for dataset building.

@classmethod
def from_folder(
    cls,
    folder: str | Path,
    kind: str | DatasetFolderSourceType,
    name: str | Resource = "",
    auto_label: bool = True,
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
) -> Dataset:

    Parameters

    • folder: (str|Path, required)
      • The folder path from which you would like to create this dataset.
    • kind: (str|DatasetFolderSourceType, required)
      • The dataset source type you would like to use, the choices are: image, video and audio.
  • Files of the specified kind are searched recursively in the folder; other file types are ignored.
    • name: (str|Resource, optional)
      • The dataset name you would like to use.
      • If not specified, the name is the folder name.
    • auto_label: (bool, optional)
      • Whether to auto label by the sub-folder name.
      • The default value is True.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

Examples

    • Example for the normal function calling

from starwhale import Dataset

# create a my-image-dataset dataset from /path/to/image folder.
ds = Dataset.from_folder(
    folder="/path/to/image",
    kind="image",
    name="my-image-dataset"
)
    • Example for caption

      folder/dog/1.png
      folder/dog/1.txt

      1.txt content will be used as the caption of 1.png.

    • Example for metadata

      metadata.csv:

      file_name, caption
      1.png, dog
      2.png, cat

      metadata.jsonl:

      {"file_name": "1.png", "caption": "dog"}
      {"file_name": "2.png", "caption": "cat"}
    • Example for auto-labeling

      The following structure will create a dataset with 2 labels: "cat" and "dog", 4 images in total.

      folder/dog/1.png
      folder/cat/2.png
      folder/dog/3.png
      folder/cat/4.png

    __iter__

__iter__ is a method that iterates over the dataset rows.

    from starwhale import dataset

    ds = dataset("mnist")

for item in ds:
    print(item.index)
    print(item.features.label)  # label and img are the features of mnist.
    print(item.features.img)

    batch_iter

batch_iter is a method that iterates over the dataset rows in batches.

def batch_iter(
    self, batch_size: int = 1, drop_not_full: bool = False
) -> t.Iterator[t.List[DataRow]]:

    Parameters

    • batch_size: (int, optional)
      • batch size. The default value is 1.
    • drop_not_full: (bool, optional)
  • Whether to discard the last batch when its size is smaller than batch_size.
      • The default value is False.

    Examples

    from starwhale import dataset

    ds = dataset("mnist")
for batch_rows in ds.batch_iter(batch_size=2):
    assert len(batch_rows) == 2
    print(batch_rows[0].features)

    __getitem__

    __getitem__ is a method that allows retrieving certain rows of data from the dataset, with usage similar to Python dict and list types.

    from starwhale import dataset

    ds = dataset("mock-int-index")

    # if the index type is string
    ds["str_key"] # get the DataRow by the "str_key" string key
    ds["start":"end"] # get a slice of the dataset by the range ("start", "end")

    ds = dataset("mock-str-index")
    # if the index type is int
    ds[1] # get the DataRow by the 1 int key
    ds[1:10:2] # get a slice of the dataset by the range (1, 10), step is 2

    __setitem__

    __setitem__ is a method that allows updating rows of data in the dataset, with usage similar to Python dicts. __setitem__ supports multi-threaded parallel data insertion.

def __setitem__(
    self, key: t.Union[str, int], value: t.Union[DataRow, t.Tuple, t.Dict]
) -> None:

    Parameters

    • key: (int|str, required)
      • key is the index for each row in the dataset. The type is int or str, but a dataset only accepts one type.
    • value: (DataRow|tuple|dict, required)
      • value is the features for each row in the dataset, using a Python dict is generally recommended.

    Examples

    • Normal insertion

Insert two rows into the test dataset, with index test and test2 respectively:

from starwhale import dataset

with dataset("test") as ds:
    ds["test"] = {"txt": "abc", "int": 1}
    ds["test2"] = {"txt": "bcd", "int": 2}
    ds.commit()
    • Parallel insertion
from starwhale import dataset, Binary
from concurrent.futures import as_completed, ThreadPoolExecutor

ds = dataset("test")

def _do_append(_start: int) -> None:
    for i in range(_start, 100):
        ds.append((i, {"data": Binary(), "label": i}))

pool = ThreadPoolExecutor(max_workers=10)
tasks = [pool.submit(_do_append, i * 10) for i in range(0, 9)]
# wait for all insertion threads to finish before committing
for task in as_completed(tasks):
    task.result()

ds.commit()
ds.close()

    __delitem__

    __delitem__ is a method to delete certain rows of data from the dataset.

def __delitem__(self, key: _ItemType) -> None:

from starwhale import dataset

    ds = dataset("existed-ds")
    del ds[6:9]
    del ds[0]
    ds.commit()
    ds.close()

    append

    append is a method to append data to a dataset, similar to the append method for Python lists.

• When appending only a features dict, each row is automatically indexed with an int starting from 0 and incrementing.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
      for i in range(0, 100):
      ds.append({"label": i, "image": Image(f"folder/{i}.png")})
      ds.commit()
• When appending a tuple of index and features dict, the index of each data row in the dataset will not be handled automatically.

from starwhale import dataset, Image

with dataset("new-ds") as ds:
    for i in range(0, 100):
        ds.append((f"index-{i}", {"label": i, "image": Image(f"folder/{i}.png")}))

    ds.commit()

    extend

    extend is a method to bulk append data to a dataset, similar to the extend method for Python lists.

    from starwhale import dataset, Text

    ds = dataset("new-ds")
ds.extend([
    (f"label-{i}", {"text": Text(), "label": i}) for i in range(0, 10)
])
    ds.commit()
    ds.close()

    commit

    commit is a method that flushes the current cached data to storage when called, and generates a dataset version. This version can then be used to load the corresponding dataset content afterwards.

    For a dataset, if some data is added without calling commit, but close is called or the process exits directly instead, the data will still be written to the dataset, just without generating a new version.

@_check_readonly
def commit(
    self,
    tags: t.Optional[t.List[str]] = None,
    message: str = "",
    force_add_tags: bool = False,
    ignore_add_tags_errors: bool = False,
) -> str:

    Parameters

    • tags: (list(str), optional)
      • tag as a list
    • message: (str, optional)
      • commit message. The default value is empty.
    • force_add_tags: (bool, optional)
  • For server/cloud instances, when adding tags to this version, if a tag has already been applied to another dataset version, you can use the force_add_tags=True parameter to forcibly move the tag to this version; otherwise an exception will be thrown.
      • The default is False.
    • ignore_add_tags_errors: (bool, optional)
  • Ignore any exceptions thrown when adding tags.
      • The default is False.

    Examples

    from starwhale import dataset
    with dataset("mnist") as ds:
    ds.append({"label": 1})
    ds.commit(message="init commit")

    readonly

    readonly is a property attribute indicating if the dataset is read-only, it returns a bool value.

    from starwhale import dataset
    ds = dataset("mnist", readonly=True)
    assert ds.readonly

    loading_version

    loading_version is a property attribute, string type.

    • When loading an existing dataset, the loading_version is the related dataset version.
    • When creating a non-existed dataset, the loading_version is equal to the pending_commit_version.

    pending_commit_version

pending_commit_version is a property attribute, string type. When you call the commit function, the pending_commit_version will be recorded in the Standalone, Server, or Cloud instance.

    committed_version

committed_version is a property attribute, string type. After the commit function is called, the committed_version becomes available and is equal to the pending_commit_version. Accessing this attribute before calling commit raises an exception.
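
A minimal sketch of how these version attributes relate, assuming a writable local dataset:

from starwhale import dataset

ds = dataset("version-demo")
ds.append({"label": 1})
version = ds.commit()
# after commit, the committed version equals the pending commit version
assert version == ds.committed_version == ds.pending_commit_version
ds.close()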

    remove

    remove is a method equivalent to the swcli dataset remove command, it can delete a dataset.

    def remove(self, force: bool = False) -> None:

    recover

recover is a method equivalent to the swcli dataset recover command, it can recover a soft-deleted dataset on which garbage collection has not yet run.

    def recover(self, force: bool = False) -> None:

    summary

    summary is a method equivalent to the swcli dataset summary command, it returns summary information of the dataset.

    def summary(self) -> t.Optional[DatasetSummary]:

    history

    history is a method equivalent to the swcli dataset history command, it returns the history records of the dataset.

    def history(self) -> t.List[t.Dict]:
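
A brief usage sketch of these management methods, assuming an existing "mnist" dataset (calling recover on the same object right after remove is an assumption here):

from starwhale import dataset

ds = dataset("mnist")
print(ds.summary())   # DatasetSummary statistics of the dataset
print(ds.history())   # list of version history records

ds.remove()           # soft-delete the dataset
ds.recover()          # restore it, as long as garbage collection has not run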

    flush

    flush is a method that flushes temporarily cached data from memory to persistent storage. The commit and close methods will automatically call flush.

    close

    close is a method that closes opened connections related to the dataset. Dataset also implements contextmanager, so datasets can be automatically closed using with syntax without needing to explicitly call close.

    from starwhale import dataset

    ds = dataset("mnist")
    ds.close()

    with dataset("mnist") as ds:
    print(ds[0])

head

head is a method to show the first n rows of a dataset, equivalent to the swcli dataset head command.

    def head(self, n: int = 5, skip_fetch_data: bool = False) -> List[DataRow]:
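
A short usage sketch:

from starwhale import dataset

ds = dataset("mnist")
rows = ds.head(n=2)                              # first two DataRow objects, with data fetched
meta_only = ds.head(n=2, skip_fetch_data=True)   # same rows, but blob data is not fetched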

    fetch_one

    fetch_one is a method to get the first record in a dataset, similar to head(n=1)[0].

    list

    list is a class method to list Starwhale datasets under a project URI, equivalent to the swcli dataset list command.

@classmethod
def list(
    cls,
    project_uri: Union[str, Project] = "",
    fullname: bool = False,
    show_removed: bool = False,
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
) -> Tuple[DatasetListType, Dict[str, Any]]:
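
A short usage sketch (the "self" project URI is only illustrative):

from starwhale import Dataset

datasets, pager = Dataset.list("self", show_removed=False, page_size=10)
for d in datasets:
    print(d)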

    copy

    copy is a method to copy a dataset to another instance, equivalent to the swcli dataset copy command.

def copy(
    self,
    dest_uri: str,
    dest_local_project_uri: str = "",
    force: bool = False,
    mode: str = DatasetChangeMode.PATCH.value,
    ignore_tags: t.List[str] | None = None,
) -> None:

    Parameters

    • dest_uri: (str, required)
      • Dataset URI
    • dest_local_project_uri: (str, optional)
  • When copying a remote dataset to a local instance, this parameter can be set to specify the destination Project URI.
    • force: (bool, optional)
      • Whether to forcibly overwrite the dataset if there is already one with the same version on the target instance.
      • The default value is False.
  • If the tags are already used by another dataset version on the destination instance, you should use the force option or adjust the tags.
    • mode: (str, optional)
      • Dataset copy mode, default is 'patch'. Mode choices are: 'patch', 'overwrite'.
      • patch: Patch mode, only update the changed rows and columns for the remote dataset.
      • overwrite: Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • ignore_tags (List[str], optional)
      • Ignore tags when copying.
  • By default, the dataset is copied with all user custom tags.
  • latest and ^v\d+$ are system built-in tags; they are ignored automatically.

    Examples

    from starwhale import dataset
    ds = dataset("mnist")
    ds.copy("cloud://remote-instance/project/starwhale")

    to_pytorch

    to_pytorch is a method that can convert a Starwhale dataset to a Pytorch torch.utils.data.Dataset, which can then be passed to torch.utils.data.DataLoader for use.

    It should be noted that the to_pytorch function returns a Pytorch IterableDataset.

def to_pytorch(
    self,
    transform: t.Optional[t.Callable] = None,
    drop_index: bool = True,
    skip_default_transform: bool = False,
) -> torch.utils.data.Dataset:

    Parameters

    • transform: (callable, optional)
      • A transform function for input data.
    • drop_index: (bool, optional)
      • Whether to drop the index column.
    • skip_default_transform: (bool, optional)
      • If transform is not set, by default the built-in Starwhale transform function will be used to transform the data. This can be disabled with the skip_default_transform parameter.

    Examples

    import torch.utils.data as tdata
    from starwhale import dataset

    ds = dataset("mnist")

    torch_ds = ds.to_pytorch()
    torch_loader = tdata.DataLoader(torch_ds, batch_size=2)

import typing as t

import torch
import torch.utils.data as tdata
from starwhale import dataset, Text

with dataset("mnist") as ds:
    for i in range(0, 10):
        ds.append({"txt": Text(f"data-{i}"), "label": i})
    ds.commit()

def _custom_transform(data: t.Any) -> t.Any:
    data = data.copy()
    txt = data["txt"].to_str()
    data["txt"] = f"custom-{txt}"
    return data

torch_loader = tdata.DataLoader(
    dataset(ds.uri).to_pytorch(transform=_custom_transform), batch_size=1
)
item = next(iter(torch_loader))
assert isinstance(item["label"], torch.Tensor)
assert item["txt"][0] in ("custom-data-0", "custom-data-1")

    to_tensorflow

    to_tensorflow is a method that can convert a Starwhale dataset to a Tensorflow tensorflow.data.Dataset.

    def to_tensorflow(self, drop_index: bool = True) -> tensorflow.data.Dataset:

    Parameters

    • drop_index: (bool, optional)
      • Whether to drop the index column.

    Examples

    from starwhale import dataset
    import tensorflow as tf

    ds = dataset("mnist")
    tf_ds = ds.to_tensorflow(drop_index=True)
    assert isinstance(tf_ds, tf.data.Dataset)

    with_builder_blob_config

    with_builder_blob_config is a method to set blob-related attributes in a Starwhale dataset. It needs to be called before making data changes.

def with_builder_blob_config(
    self,
    volume_size: int | str | None = D_FILE_VOLUME_SIZE,
    alignment_size: int | str | None = D_ALIGNMENT_SIZE,
) -> Dataset:

    Parameters

    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.

    Examples

    from starwhale import dataset, Binary

    ds = dataset("mnist").with_builder_blob_config(volume_size="32M", alignment_size=128)
    ds.append({"data": Binary(b"123")})
    ds.commit()
    ds.close()

    with_loader_config

    with_loader_config is a method to set parameters for the Starwhale dataset loader process.

def with_loader_config(
    self,
    num_workers: t.Optional[int] = None,
    cache_size: t.Optional[int] = None,
    field_transformer: t.Optional[t.Dict] = None,
) -> Dataset:

    Parameters

    • num_workers: (int, optional)
      • The workers number for loading dataset.
      • The default value is 2.
    • cache_size: (int, optional)
      • Prefetched data rows.
      • The default value is 20.
    • field_transformer: (dict, optional)
      • features name transform dict.

    Examples

    from starwhale import Dataset, dataset
Dataset.from_json(
    "translation",
    '[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
)
myds = dataset("translation").with_loader_config(field_transformer={"en": "en-us"})
assert myds[0].features["en-us"] == myds[0].features["en"]

from starwhale import Dataset, dataset

Dataset.from_json(
    "translation2",
    '[{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}]'
)
myds = dataset("translation2").with_loader_config(field_transformer={"content.child_content[0].en": "en-us"})
assert myds[0].features["en-us"] == myds[0].features["content"]["child_content"][0]["en"]
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/evaluation/index.html b/0.6.0/reference/sdk/evaluation/index.html index 7e6390524..38fc76fc6 100644 --- a/0.6.0/reference/sdk/evaluation/index.html +++ b/0.6.0/reference/sdk/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Model Evaluation SDK

    @evaluation.predict

    The @evaluation.predict decorator defines the inference process in the Starwhale Model Evaluation, similar to the map phase in MapReduce. It contains the following core features:

• On Server instances, request the resources needed to run.
    • Automatically read the local or remote datasets, and pass the data in the datasets one by one or in batches to the function decorated by evaluation.predict.
    • By the replicas setting, implement distributed dataset consumption to horizontally scale and shorten the time required for the model evaluation tasks.
    • Automatically store the return values of the function and the input features of the dataset into the results table, for display in the Web UI and further use in the evaluate phase.
    • The decorated function is called once for each single piece of data or each batch, to complete the inference process.

    Parameters

    • resources: (dict, optional)
      • Defines the resources required by each predict task when running on the Server instance, including memory, cpu, and nvidia.com/gpu.
      • memory: The unit is Bytes, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"memory": {"request": 100 * 1024, "limit": 200 * 1024}}.
        • If only a single number is set, the Python SDK will automatically set request and limit to the same value, e.g. resources={"memory": 100 * 1024} is equivalent to resources={"memory": {"request": 100 * 1024, "limit": 100 * 1024}}.
      • cpu: The unit is the number of CPU cores, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"cpu": {"request": 1, "limit": 2}}.
        • If only a single number is set, the SDK will automatically set request and limit to the same value, e.g. resources={"cpu": 1.5} is equivalent to resources={"cpu": {"request": 1.5, "limit": 1.5}}.
      • nvidia.com/gpu: The unit is the number of GPUs, int type is supported.
        • nvidia.com/gpu does not support setting request and limit, only a single number is supported.
      • Note: The resources parameter currently only takes effect on the Server instances. For the Cloud instances, the same can be achieved by selecting the corresponding resource pool when submitting the evaluation task. Standalone instances do not support this feature at all.
    • replicas: (int, optional)
      • The number of replicas to run predict.
      • predict defines a Step, in which there are multiple equivalent Tasks. Each Task runs on a Pod in Cloud/Server instances, and a Thread in Standalone instances.
      • When multiple replicas are specified, they are equivalent and will jointly consume the selected dataset to achieve distributed dataset consumption. It can be understood that a row in the dataset will only be read by one predict replica.
      • The default is 1.
    • batch_size: (int, optional)
      • Batch size for passing data from the dataset into the function.
      • The default is 1.
    • fail_on_error: (bool, optional)
      • Whether to interrupt the entire model evaluation when the decorated function throws an exception. If you expect some "exceptional" data to cause evaluation failures but don't want to interrupt the overall evaluation, you can set fail_on_error=False.
      • The default is True.
    • auto_log: (bool, optional)
      • Whether to automatically log the return values of the function and the input features of the dataset to the results table.
      • The default is True.
    • log_mode: (str, optional)
      • When auto_log=True, you can set log_mode to define logging the return values in plain or pickle format.
      • The default is pickle.
    • log_dataset_features: (List[str], optional)
      • When auto_log=True, you can selectively log certain features from the dataset via this parameter.
      • By default, all features will be logged.
    • needs: (List[Callable], optional)
      • Defines the prerequisites for this task to run, can use the needs syntax to implement DAG.
      • needs accepts functions decorated by @evaluation.predict, @evaluation.evaluate, and @handler.
      • The default is empty, i.e. does not depend on any other tasks.

    Input

    The decorated functions need to define some input parameters to accept dataset data, etc. They contain the following patterns:

    • data:

      • data is a dict type that can read the features of the dataset.
      • When batch_size=1 or batch_size is not set, the label feature can be read through data['label'] or data.label.
      • When batch_size is set to > 1, data is a list.
from starwhale import evaluation

@evaluation.predict
def predict(data):
    print(data['label'])
    print(data.label)
    • data + external:

      • data is a dict type that can read the features of the dataset.
      • external is also a dict, including: index, index_with_dataset, dataset_info, context and dataset_uri keys. The attributes can be used for the further fine-grained processing.
        • index: The index of the dataset row.
        • index_with_dataset: The index with the dataset info.
        • dataset_info: starwhale.core.dataset.tabular.TabularDatasetInfo Class.
        • context: starwhale.Context Class.
    • dataset_uri: starwhale.base.uri.resource.Resource Class.
from starwhale import evaluation

@evaluation.predict
def predict(data, external):
    print(data['label'])
    print(data.label)
    print(external["context"])
    print(external["dataset_uri"])
    • data + **kw:

      • data is a dict type that can read the features of the dataset.
      • kw is a dict that contains external.
from starwhale import evaluation

@evaluation.predict
def predict(data, **kw):
    print(kw["external"]["context"])
    print(kw["external"]["dataset_uri"])
    • *args + **kwargs:

      • The first argument of args list is data.
from starwhale import evaluation

@evaluation.predict
def predict(*args, **kw):
    print(args[0].label)
    print(args[0]["label"])
    print(kw["external"]["context"])
    • **kwargs:

from starwhale import evaluation

@evaluation.predict
def predict(**kw):
    print(kw["data"].label)
    print(kw["data"]["label"])
    print(kw["external"]["context"])
    • *args:

      • *args does not contain external.
from starwhale import evaluation

@evaluation.predict
def predict(*args):
    print(args[0].label)
    print(args[0]["label"])

    Examples

from starwhale import evaluation

@evaluation.predict
def predict_image(data):
    ...

@evaluation.predict(
    dataset="mnist/version/latest",
    batch_size=32,
    replicas=4,
    needs=[predict_image],
)
def predict_batch_images(batch_data):
    ...

@evaluation.predict(
    resources={
        "nvidia.com/gpu": 1,
        "cpu": {"request": 1, "limit": 2},
        "memory": 200 * 1024 * 1024,  # 200MB
    },
    log_mode="plain",
)
def predict_with_resources(data):
    ...

@evaluation.predict(
    replicas=1,
    log_mode="plain",
    log_dataset_features=["txt", "img", "label"],
)
def predict_with_selected_features(data):
    ...

    @evaluation.evaluate

    @evaluation.evaluate is a decorator that defines the evaluation process in the Starwhale Model evaluation, similar to the reduce phase in MapReduce. It contains the following core features:

• On Server instances, request the required resources.
    • Read the data recorded in the results table automatically during the predict phase, and pass it into the function as an iterator.
    • The evaluate phase will only run one replica, and cannot define the replicas parameter like the predict phase.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
      • In the common case, it will depend on a function decorated by @evaluation.predict.
    • use_predict_auto_log: (bool, optional)
      • Defaults to True, passes an iterator that can traverse the predict results to the function.

    Input

    • When use_predict_auto_log=True (default), pass an iterator that can traverse the predict results into the function.
      • The iterated object is a dictionary containing two keys: output and input.
        • output is the element returned by the predict stage function.
        • input is the features of the corresponding dataset during the inference process, which is a dictionary type.
    • When use_predict_auto_log=False, do not pass any parameters into the function.

    Examples

from starwhale import evaluation

@evaluation.evaluate(needs=[predict_image])
def evaluate_results(predict_result_iter):
    ...

@evaluation.evaluate(
    use_predict_auto_log=False,
    needs=[predict_image],
)
def evaluate_results():
    ...
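
A sketch of consuming the iterator inside the evaluate function, assuming the dataset rows carry a label feature and predict_image is defined as above:

from starwhale import evaluation

@evaluation.evaluate(needs=[predict_image])
def evaluate_results(predict_result_iter):
    for item in predict_result_iter:
        prediction = item["output"]      # return value of the predict function
        label = item["input"]["label"]   # original dataset feature
        ...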

    class Evaluation

    starwhale.Evaluation implements the abstraction for Starwhale Model Evaluation, and can perform operations like logging and scanning for Model Evaluation on Standalone/Server/Cloud instances, to record and retrieve metrics.

    __init__

    __init__ function initializes Evaluation object.

class Evaluation:
    def __init__(self, id: str, project: Project | str) -> None:

    Parameters

    • id: (str, required)
      • The UUID of Model Evaluation that is generated by Starwhale automatically.
    • project: (Project|str, required)
      • Project object or Project URI str.

    Example

    from starwhale import Evaluation

    standalone_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="self")
    server_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="cloud://server/project/starwhale:starwhale")
    cloud_e = Evaluation("2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/project/starwhale:llm-leaderboard")

    from_context

    from_context is a classmethod that obtains the Evaluation object under the current Context. from_context can only take effect under the task runtime environment. Calling this method in a non-task runtime environment will raise a RuntimeError exception, indicating that the Starwhale Context has not been properly set.

    @classmethod
    def from_context(cls) -> Evaluation:

    Example

    from starwhale import Evaluation

with Evaluation.from_context() as e:
    e.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})

    log

    log is a method that logs evaluation metrics to a specific table, which can then be viewed on the Server/Cloud instance's web page or through the scan method.

def log(
    self, category: str, id: t.Union[str, int], metrics: t.Dict[str, t.Any]
) -> None:

    Parameters

    • category: (str, required)
      • The category of the logged metrics, which will be used as the suffix of the Starwhale Datastore table name.
      • Each category corresponds to a Starwhale Datastore table. These tables will be isolated by the evaluation task ID and will not affect each other.
    • id: (str|int, required)
      • The ID of the logged record, unique within the table.
      • For the same table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • A dict to log metrics in key-value format.
      • Keys are of str type.
      • Values can be constant types like int, float, str, bytes, bool, or compound types like tuple, list, dict. It also supports logging Artifacts types like Starwhale.Image, Starwhale.Video, Starwhale.Audio, Starwhale.Text, Starwhale.Binary.
        • When the value contains dict type, the Starwhale SDK will automatically flatten the dict for better visualization and metric comparison.
        • For example, if metrics is {"test": {"loss": 0.99, "prob": [0.98,0.99]}, "image": [Image, Image]}, it will be stored as {"test/loss": 0.99, "test/prob": [0.98, 0.99], "image/0": Image, "image/1": Image} after flattening.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation.from_context()

    evaluation_store.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log("ppl", "1", {"a": "test", "b": 1})

    scan

    scan is a method that returns an iterator for reading data from certain model evaluation tables.

def scan(
    self,
    category: str,
    start: t.Any = None,
    end: t.Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
) -> t.Iterator:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    results = [data for data in evaluation_store.scan("label/0")]

    flush

    flush is a method that can immediately flush the metrics logged by the log method to the datastore and oss storage. If the flush method is not called, Evaluation will automatically flush data to storage when it is finally closed.

def flush(self, category: str, artifacts_flush: bool = True) -> None:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.

    log_result

log_result is a method that logs evaluation metrics to the results table, equivalent to calling the log method with category set to results. The results table is generally used to store inference results. By default, @evaluation.predict stores the return value of the decorated function in the results table; you can also store results manually using log_result.

    def log_result(self, id: t.Union[str, int], metrics: t.Dict[str, t.Any]) -> None:

    Parameters

    • id: (str|int, required)
      • The ID of the record, unique within the results table.
      • For the results table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • Same definition as the metrics parameter in the log method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})

    scan_results

    scan_results is a method that returns an iterator for reading data from the results table.

def scan_results(
    self,
    start: t.Any = None,
    end: t.Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
) -> t.Iterator:

    Parameters

    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
      • Same definition as the start parameter in the scan method.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
      • Same definition as the end parameter in the scan method.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
      • Same definition as the keep_none parameter in the scan method.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.
      • Same definition as the end_inclusive parameter in the scan method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")

    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})
    results = [data for data in evaluation_store.scan_results()]

    flush_results

flush_results is a method that can immediately flush the metrics logged by the log_result method to the datastore and oss storage. If the flush_results method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_results(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    log_summary

    log_summary is a method that logs certain metrics to the summary table. The evaluation page on Server/Cloud instances displays data from the summary table.

Each time it is called, Starwhale automatically updates the table row whose ID is the unique ID of this evaluation. This function can be called multiple times during one evaluation to update different columns.

    Each project has one summary table. All evaluation tasks under that project will write summary information to this table for easy comparison between evaluations of different models.

    def log_summary(self, *args: t.Any, **kw: t.Any) -> None:

    Same as log method, log_summary will automatically flatten the dict.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")

    evaluation_store.log_summary(loss=0.99)
    evaluation_store.log_summary(loss=0.99, accuracy=0.99)
    evaluation_store.log_summary({"loss": 0.99, "accuracy": 0.99})

    get_summary

    get_summary is a method that returns the information logged by log_summary.

    def get_summary(self) -> t.Dict:
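
A short sketch pairing log_summary with get_summary:

from starwhale import Evaluation

with Evaluation.from_context() as e:
    e.log_summary(loss=0.99, accuracy=0.98)
    print(e.get_summary())  # includes the logged "loss" and "accuracy" columns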

    flush_summary

flush_summary is a method that can immediately flush the metrics logged by the log_summary method to the datastore and oss storage. If the flush_summary method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_summary(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    flush_all

    flush_all is a method that can immediately flush the metrics logged by log, log_results, log_summary methods to the datastore and oss storage. If the flush_all method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_all(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    get_tables

    get_tables is a method that returns the names of all tables generated during model evaluation. Note that this function does not return the summary table name.

    def get_tables(self) -> t.List[str]:
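
A short sketch; the exact table names depend on the categories that have been logged:

from starwhale import Evaluation

e = Evaluation.from_context()
e.log("label/1", 1, {"loss": 0.99})
print(e.get_tables())  # names of the metric tables created by log(); the summary table is not included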

    close

    close is a method to close the Evaluation object. close will automatically flush data to storage when called. Evaluation also implements __enter__ and __exit__ methods, which can simplify manual close calls using with syntax.

    def close(self) -> None:

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    evaluation_store.log_summary(loss=0.99)
    evaluation_store.close()

    # auto close when the with-context exits.
with Evaluation.from_context() as e:
    e.log_summary(loss=0.99)

    @handler

    @handler is a decorator that provides the following functionalities:

    • On a Server instance, it requests the required resources to run.
    • It can control the number of replicas.
    • Multiple handlers can form a DAG through dependency relationships to control the execution workflow.
    • It can expose ports externally to run like a web handler.

@fine_tune, @evaluation.predict and @evaluation.evaluate can be considered applications of @handler in certain specific areas. @handler is the underlying implementation of these decorators and is more fundamental and flexible.

@classmethod
def handler(
    cls,
    resources: t.Optional[t.Dict[str, t.Any]] = None,
    replicas: int = 1,
    needs: t.Optional[t.List[t.Callable]] = None,
    name: str = "",
    expose: int = 0,
    require_dataset: bool = False,
) -> t.Callable:

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
    • replicas: (int, optional)
      • Consistent with the replicas parameter definition in @evaluation.predict.
    • name: (str, optional)
      • The name displayed for the handler.
      • If not specified, use the decorated function's name.
    • expose: (int, optional)
      • The port exposed externally. When running a web handler, the exposed port needs to be declared.
      • The default is 0, meaning no port is exposed.
      • Currently only one port can be exposed.
    • require_dataset: (bool, optional)
      • Defines whether this handler requires a dataset when running.
  • If require_dataset=True, the user is required to input a dataset when creating an evaluation task on the Server/Cloud instance web page. If require_dataset=False, the user does not need to specify a dataset on the web page.
      • The default is False.

    Examples

from starwhale import handler
import gradio

@handler(resources={"cpu": 1, "nvidia.com/gpu": 1}, replicas=3)
def my_handler():
    ...

@handler(needs=[my_handler])
def my_another_handler():
    ...

@handler(expose=7860)
def chatbot():
    with gradio.Blocks() as server:
        ...
        server.launch(server_name="0.0.0.0", server_port=7860)

    @fine_tune

    fine_tune is a decorator that defines the fine-tuning process for model training.

    Some restrictions and usage suggestions:

    • fine_tune has only one replica.
    • fine_tune requires dataset input.
    • Generally, the dataset is obtained through Context.get_runtime_context() at the start of fine_tune.
    • Generally, at the end of fine_tune, the fine-tuned Starwhale model package is generated through starwhale.model.build, which will be automatically copied to the corresponding evaluation project.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.

    Examples

from starwhale import dataset, fine_tune, Context
from starwhale import model as starwhale_model

@fine_tune(resources={"nvidia.com/gpu": 1})
def llama_fine_tuning():
    ctx = Context.get_runtime_context()

    if len(ctx.dataset_uris) == 2:
        # TODO: use more graceful way to get train and eval dataset
        train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
        eval_dataset = dataset(ctx.dataset_uris[1], readonly=True, create="forbid")
    elif len(ctx.dataset_uris) == 1:
        train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
        eval_dataset = None
    else:
        raise ValueError("Only support 1 or 2 datasets(train and eval dataset) for now")

    # user training code
    train_llama(
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )

    model_name = get_model_name()
    starwhale_model.build(name=f"llama-{model_name}-qlora-ft")

    @multi_classification

    The @multi_classification decorator uses the sklearn lib to analyze results for multi-classification problems, outputting the confusion matrix, ROC, AUC etc., and writing them to related tables in the Starwhale Datastore.

    When using it, certain requirements are placed on the return value of the decorated function, which should be (label, result) or (label, result, probability_matrix).

def multi_classification(
    confusion_matrix_normalize: str = "all",
    show_hamming_loss: bool = True,
    show_cohen_kappa_score: bool = True,
    show_roc_auc: bool = True,
    all_labels: t.Optional[t.List[t.Any]] = None,
) -> t.Any:

    Parameters

    • confusion_matrix_normalize: (str, optional)
      • Accepts three parameters:
        • true: rows
        • pred: columns
        • all: rows+columns
    • show_hamming_loss: (bool, optional)
      • Whether to calculate the Hamming loss.
      • The default is True.
    • show_cohen_kappa_score: (bool, optional)
      • Whether to calculate the Cohen kappa score.
      • The default is True.
    • show_roc_auc: (bool, optional)
      • Whether to calculate ROC/AUC. To calculate, the function needs to return a (label, result, probability_matrix) tuple, otherwise a (label, result) tuple is sufficient.
      • The default is True.
    • all_labels: (List, optional)
      • Defines all the labels.

    Examples


@multi_classification(
    confusion_matrix_normalize="all",
    show_hamming_loss=True,
    show_cohen_kappa_score=True,
    show_roc_auc=True,
    all_labels=[i for i in range(0, 10)],
)
def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
    label, result, probability_matrix = [], [], []
    return label, result, probability_matrix

@multi_classification(
    confusion_matrix_normalize="all",
    show_hamming_loss=True,
    show_cohen_kappa_score=True,
    show_roc_auc=False,
    all_labels=[i for i in range(0, 10)],
)
def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int]]:
    label, result = [], []
    return label, result

    PipelineHandler

    The PipelineHandler class provides a default model evaluation workflow definition that requires users to implement the predict and evaluate functions.

    The PipelineHandler is equivalent to using the @evaluation.predict and @evaluation.evaluate decorators together - the usage looks different but the underlying model evaluation process is the same.

    Note that PipelineHandler currently does not support defining resources parameters.

    Users need to implement the following functions:

    • predict: Defines the inference process, equivalent to a function decorated with @evaluation.predict.

    • evaluate: Defines the evaluation process, equivalent to a function decorated with @evaluation.evaluate.

    import typing as t
    from typing import Any, Iterator
    from abc import ABCMeta, abstractmethod

    class PipelineHandler(metaclass=ABCMeta):
        def __init__(
            self,
            predict_batch_size: int = 1,
            ignore_error: bool = False,
            predict_auto_log: bool = True,
            predict_log_mode: str = PredictLogMode.PICKLE.value,
            predict_log_dataset_features: t.Optional[t.List[str]] = None,
            **kwargs: t.Any,
        ) -> None:
            self.context = Context.get_runtime_context()
            ...

        def predict(self, data: Any, **kw: Any) -> Any:
            raise NotImplementedError

        def evaluate(self, ppl_result: Iterator) -> Any:
            raise NotImplementedError

    Parameters

    • predict_batch_size: (int, optional)
      • Equivalent to the batch_size parameter in @evaluation.predict.
      • Default is 1.
    • ignore_error: (bool, optional)
      • Equivalent to the fail_on_error parameter in @evaluation.predict.
      • Default is False.
    • predict_auto_log: (bool, optional)
      • Equivalent to the auto_log parameter in @evaluation.predict.
      • Default is True.
    • predict_log_mode: (str, optional)
      • Equivalent to the log_mode parameter in @evaluation.predict.
      • Default is pickle.
    • predict_log_dataset_features: (bool, optional)
      • Equivalent to the log_dataset_features parameter in @evaluation.predict.
      • Default is None, which records all features.

    PipelineHandler.run Decorator

    The PipelineHandler.run decorator can be used to describe resources for the predict and evaluate methods, supporting definitions of replicas and resources:

    • The PipelineHandler.run decorator can only decorate predict and evaluate methods in subclasses inheriting from PipelineHandler.
    • The predict method can set the replicas parameter. The replicas value for the evaluate method is always 1.
    • The resources parameter is defined and used in the same way as the resources parameter in @evaluation.predict or @evaluation.evaluate.
    • The PipelineHandler.run decorator is optional.
    • The PipelineHandler.run decorator only takes effect on Server and Cloud instances, not on Standalone instances, which do not support resource definitions.

    @classmethod
    def run(
        cls, resources: t.Optional[t.Dict[str, t.Any]] = None, replicas: int = 1
    ) -> t.Callable:

    Examples

    import typing as t

    import torch
    from starwhale import Image, PipelineHandler

    class Example(PipelineHandler):
        def __init__(self) -> None:
            super().__init__()
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = self._load_model(self.device)

        @PipelineHandler.run(replicas=4, resources={"memory": 1 * 1024 * 1024 * 1024, "nvidia.com/gpu": 1})  # 1G Memory, 1 GPU
        def predict(self, data: t.Dict):
            data_tensor = self._pre(data.img)
            output = self.model(data_tensor)
            return self._post(output)

        @PipelineHandler.run(resources={"memory": 1 * 1024 * 1024 * 1024})  # 1G Memory
        def evaluate(self, ppl_result):
            result, label, pr = [], [], []
            for _data in ppl_result:
                label.append(_data["input"]["label"])
                result.extend(_data["output"][0])
                pr.extend(_data["output"][1])
            return label, result, pr

        def _pre(self, input: Image) -> torch.Tensor:
            ...

        def _post(self, input):
            ...

        def _load_model(self, device):
            ...

    Context

    The context information passed during model evaluation, including Project, Task ID, etc. The Context content is automatically injected and can be used in the following ways:

    • Inherit the PipelineHandler class and use the self.context object.
    • Get it through Context.get_runtime_context().

    Note that Context can only be used during model evaluation, otherwise the program will throw an exception.

    Currently Context can get the following values:

    • project: str
      • Project name.
    • version: str
      • Unique ID of model evaluation.
    • step: str
      • Step name.
    • total: int
      • Total number of Tasks under the Step.
    • index: int
      • Task index number, starting from 0.
    • dataset_uris: List[str]
      • List of Starwhale dataset URIs.

    Examples


    import typing as t

    from starwhale import Context, PipelineHandler

    def func():
        ctx = Context.get_runtime_context()
        print(ctx.project)
        print(ctx.version)
        print(ctx.step)
        ...

    class Example(PipelineHandler):
        def predict(self, data: t.Dict):
            print(self.context.project)
            print(self.context.version)
            print(self.context.step)

    @starwhale.api.service.api

    @starwhale.api.service.api is a decorator that provides a simple, Gradio-based Web Handler input definition. When a Web Service is launched with the swcli model serve command, the decorated handler accepts external requests and returns inference results to the user, enabling online evaluation.

    Examples

    import typing as t

    import gradio
    from starwhale import Image
    from starwhale.api.service import api

    def predict_image(img):
        ...

    @api(gradio.File(), gradio.Label())
    def predict_view(file: t.Any) -> t.Any:
        with open(file.name, "rb") as f:
            data = Image(f.read(), shape=(28, 28, 1))
        _, prob = predict_image({"img": data})
        return {i: p for i, p in enumerate(prob)}

    starwhale.api.service.Service

    If you want to customize the web service implementation, you can subclass Service and override the serve method.

    class CustomService(Service):
        def serve(self, addr: str, port: int, handler_list: t.List[str] = None) -> None:
            ...

    svc = CustomService()

    @svc.api(...)
    def handler(data):
        ...

    Notes:

    • Handlers added with PipelineHandler.add_api, the api decorator, or Service.api can work together.
    • If you use a custom Service, you need to instantiate the custom Service class in the model code.

    Custom Request and Response

    Request and Response are handler preprocessing and postprocessing classes for receiving user requests and returning results. They can be simply understood as pre and post logic for the handler.

    Starwhale provides built-in Request implementations for Dataset types and Json Response. Users can also customize the logic as follows:

    import typing as t

    from starwhale.api.service import (
        Request,
        Service,
        Response,
    )

    class CustomInput(Request):
        def load(self, req: t.Any) -> t.Any:
            return req

    class CustomOutput(Response):
        def __init__(self, prefix: str) -> None:
            self.prefix = prefix

        def dump(self, req: str) -> bytes:
            return f"{self.prefix} {req}".encode("utf-8")

    svc = Service()

    @svc.api(request=CustomInput(), response=CustomOutput("hello"))
    def foo(data: t.Any) -> t.Any:
        ...
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/job/index.html b/0.6.0/reference/sdk/job/index.html index a79295a51..fc16f8a50 100644 --- a/0.6.0/reference/sdk/job/index.html +++ b/0.6.0/reference/sdk/job/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Job SDK

    job

    Get a starwhale.Job object through the Job URI parameter, which represents a Job on Standalone/Server/Cloud instances.

    @classmethod
    def job(
        cls,
        uri: str,
    ) -> Job:

    Parameters

    • uri: (str, required)
      • Job URI format.

    Usage Example

    from starwhale import job

    # get job object of uri=https://server/job/1
    j1 = job("https://server/job/1")

    # get job from standalone instance
    j2 = job("local/project/self/job/xm5wnup")
    j3 = job("xm5wnup")

    class starwhale.Job

    starwhale.Job abstracts Starwhale Job and enables some information retrieval operations on the job.

    list

    list is a classmethod that can list the jobs under a project.

    @classmethod
    def list(
        cls,
        project: str = "",
        page_index: int = DEFAULT_PAGE_IDX,
        page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Job], Dict]:

    Parameters

    • project: (str, optional)
      • Project URI; it can be a project on a Standalone/Server/Cloud instance.
      • If project is not specified, the project selected by swcli project select will be used.
    • page_index: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the page number.
        • Default is 1.
        • Page numbers start from 1.
      • Standalone instances do not support paging. This parameter has no effect.
    • page_size: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the number of jobs returned per page.
        • Default is DEFAULT_PAGE_SIZE.
      • Standalone instances do not support paging. This parameter has no effect.

    Usage Example

    from starwhale import Job

    # list jobs of current selected project
    jobs, pagination_info = Job.list()

    # list jobs of starwhale/public project in the cloud.starwhale.cn instance
    jobs, pagination_info = Job.list("https://cloud.starwhale.cn/project/starwhale:public")

    # list jobs of id=1 project in the server instance, page index is 2, page size is 10
    jobs, pagination_info = Job.list("https://server/project/1", page_index=2, page_size=10)

    get

    get is a classmethod that gets information about a specific job and returns a starwhale.Job object. It has the same functionality and parameter definitions as the starwhale.job function.

    Usage Example

    from starwhale import Job

    # get job object of uri=https://server/job/1
    j1 = Job.get("https://server/job/1")

    # get job from standalone instance
    j2 = Job.get("local/project/self/job/xm5wnup")
    j3 = Job.get("xm5wnup")

    summary

    summary is a property that returns the data written to the summary table during the job execution, in dict type.

    @property
    def summary(self) -> Dict[str, Any]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.summary)

    tables

    tables is a property that returns the names of tables created during the job execution (not including the summary table, which is created automatically at the project level), in list type.

    @property
    def tables(self) -> List[str]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.tables)

    get_table_rows

    get_table_rows is a method that returns records from a data table according to the table name and other parameters, in iterator type.

    def get_table_rows(
        self,
        name: str,
        start: Any = None,
        end: Any = None,
        keep_none: bool = False,
        end_inclusive: bool = False,
    ) -> Iterator[Dict[str, Any]]:

    Parameters

    • name: (str, required)
      • Datastore table name. Any of the table names obtained through the tables property works.
    • start: (Any, optional)
      • The starting ID value of the returned records.
      • Default is None, meaning start from the beginning of the table.
    • end: (Any, optional)
      • The ending ID value of the returned records.
      • Default is None, meaning until the end of the table.
      • If both start and end are None, all records in the table will be returned as an iterator.
    • keep_none: (bool, optional)
      • Whether to return records with None values.
      • Default is False.
    • end_inclusive: (bool, optional)
      • When end is set, whether the iteration includes the end record.
      • Default is False.

    Usage Example

    from starwhale import job

    j = job("local/project/self/job/xm5wnup")

    table_name = j.tables[0]

    for row in j.get_table_rows(table_name):
        print(row)

    rows = list(j.get_table_rows(table_name, start=0, end=100))

    # return the first record from the results table
    result = list(j.get_table_rows('results', start=0, end=1))[0]

    status

    status is a property that returns the current real-time state of the Job as a string. The possible states are CREATED, READY, PAUSED, RUNNING, CANCELLING, CANCELED, SUCCESS, FAIL, and UNKNOWN.

    @property
    def status(self) -> str:
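
    Usage Example

    A minimal usage sketch; the job URI below is only illustrative and must exist on your instance:

    from starwhale import job

    j = job("https://server/job/1")
    # one of CREATED, READY, PAUSED, RUNNING, CANCELLING, CANCELED, SUCCESS, FAIL or UNKNOWN
    print(j.status)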

    create

    create is a classmethod that can create tasks on a Standalone instance or Server/Cloud instance, including tasks for Model Evaluation, Fine-tuning, Online Serving, and Developing. The function returns a Job object.

    • create determines which instance the generated task runs on through the project parameter, including Standalone and Server/Cloud instances.
    • On a Standalone instance, create creates a synchronously executed task.
    • On a Server/Cloud instance, create creates an asynchronously executed task.

    @classmethod
    def create(
        cls,
        project: Project | str,
        model: Resource | str,
        run_handler: str,
        datasets: t.List[str | Resource] | None = None,
        runtime: Resource | str | None = None,
        resource_pool: str = DEFAULT_RESOURCE_POOL,
        ttl: int = 0,
        dev_mode: bool = False,
        dev_mode_password: str = "",
        dataset_head: int = 0,
        overwrite_specs: t.Dict[str, t.Any] | None = None,
    ) -> Job:

    Parameters

    Parameters apply to all instances:

    • project: (Project|str, required)
      • A Project object or Project URI string.
    • model: (Resource|str, required)
      • Model URI string or Resource object of Model type, representing the Starwhale model package to run.
    • run_handler: (str, required)
      • The name of the runnable handler in the Starwhale model package, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
    • datasets: (List[str | Resource], optional)
      • Datasets required for the Starwhale model package to run; optional.

    Parameters only effective for Standalone instances:

    • dataset_head: (int, optional)
      • Generally used for debugging: only the first N rows of the dataset are consumed by the Starwhale model.

    Parameters only effective for Server/Cloud instances:

    • runtime: (Resource | str, optional)
      • Runtime URI string or Resource object of Runtime type, representing the Starwhale runtime required to run the task.
      • When not specified, it will try to use the built-in runtime of the Starwhale model package.
      • When creating tasks under a Standalone instance, the Python interpreter environment used by the Python script is used as its own runtime. Specifying a runtime via the runtime parameter is not supported. If you need to specify a runtime, you can use the swcli model run command.
    • resource_pool: (str, optional)
      • Specify which resource pool the task runs in, default to the default resource pool.
    • ttl: (int, optional)
      • Maximum lifetime of the task, will be killed after timeout.
      • The unit is seconds.
      • By default, ttl is 0, meaning no timeout limit, and the task will run as expected.
      • When ttl is less than 0, it also means no timeout limit.
    • dev_mode: (bool, optional)
      • Whether to set debug mode. After turning on this mode, you can enter the related environment through VSCode Web.
      • Debug mode is off by default.
    • dev_mode_password: (str, optional)
      • Login password for VSCode Web in debug mode.
      • Default is empty, in which case the task's UUID will be used as the password, which can be obtained via job.info().job.uuid.
    • overwrite_specs: (Dict[str, Any], optional)
      • Support setting the replicas and resources fields of the handler.
      • If empty, use the values set in the corresponding handler of the model package.
      • The key of overwrite_specs is the name of the handler, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
      • The value of overwrite_specs is the set value, in dictionary format, supporting settings for replicas and resources, e.g. {"replicas": 1, "resources": {"memory": "1GiB"}}.

    Examples

    • create a Cloud Instance job

    from starwhale import Job

    project = "https://cloud.starwhale.cn/project/starwhale:public"
    job = Job.create(
        project=project,
        model=f"{project}/model/mnist/version/v0",
        run_handler="mnist.evaluator:MNISTInference.evaluate",
        datasets=[f"{project}/dataset/mnist/version/v0"],
        runtime=f"{project}/runtime/pytorch",
        overwrite_specs={
            "mnist.evaluator:MNISTInference.evaluate": {"resources": "4GiB"},
            "mnist.evaluator:MNISTInference.predict": {"resources": "8GiB", "replicas": 10},
        },
    )
    print(job.status)
    • create a Standalone Instance job

    from starwhale import Job

    job = Job.create(
        project="self",
        model="mnist",
        run_handler="mnist.evaluator:MNISTInference.evaluate",
        datasets=["mnist"],
    )
    print(job.status)
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/model/index.html b/0.6.0/reference/sdk/model/index.html index d4f059e29..47c1d609c 100644 --- a/0.6.0/reference/sdk/model/index.html +++ b/0.6.0/reference/sdk/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Model SDK

    model.build

    model.build is a function that can build the Starwhale model, equivalent to the swcli model build command.

    def build(
        modules: t.Optional[t.List[t.Any]] = None,
        workdir: t.Optional[_path_T] = None,
        name: t.Optional[str] = None,
        project_uri: str = "",
        desc: str = "",
        remote_project_uri: t.Optional[str] = None,
        add_all: bool = False,
        tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • modules: (List[str|object], optional)
      • The search modules support objects (function, class or module) or strings (e.g. "to.path.module", "to.path.module:object").
      • If the argument is not specified, the imported modules are used as the search modules.
    • name: (str, optional)
      • Starwhale Model name.
      • The default is the current work dir (cwd) name.
    • workdir: (str, Pathlib.Path, optional)
      • The path of the rootdir. The default workdir is the current working dir.
      • All files in the workdir will be packaged. If you want to ignore some files, you can add a .swignore file in the workdir.
    • project_uri: (str, optional)
      • The project uri of the Starwhale Model.
      • If the argument is not specified, the project_uri is the value configured by the swcli project select command.
    • desc: (str, optional)
      • The description of the Starwhale Model.
    • remote_project_uri: (str, optional)
      • Project URI of a remote instance. After the Starwhale model is built, it will be automatically copied to the remote instance.
    • add_all: (bool, optional)
      • Add all files in the working directory to the model package (when disabled, python cache files and virtual environment files are excluded). The .swignore file still takes effect.
      • The default value is False.
    • tags: (List[str], optional)
      • The tags for the model version.
      • latest and ^v\d+$ tags are reserved tags.

    Examples

    from starwhale import model

    # class search handlers
    from .user.code.evaluator import ExamplePipelineHandler
    model.build([ExamplePipelineHandler])

    # function search handlers
    from .user.code.evaluator import predict_image
    model.build([predict_image])

    # module handlers, @handler decorates function in this module
    from .user.code import evaluator
    model.build([evaluator])

    # str search handlers
    model.build(["user.code.evaluator:ExamplePipelineHandler"])
    model.build(["user.code1", "user.code2"])

    # no search handlers, use imported modules
    model.build()

    # add user custom tags
    model.build(tags=["t1", "t2"])
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/other/index.html b/0.6.0/reference/sdk/other/index.html index 0e23c4c9c..aeecf60af 100644 --- a/0.6.0/reference/sdk/other/index.html +++ b/0.6.0/reference/sdk/other/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Other SDK

    __version__

    Version of Starwhale Python SDK and swcli, string constant.

    >>> from starwhale import __version__
    >>> print(__version__)
    0.5.7

    init_logger

    Initialize the Starwhale logger and traceback depth. The default verbosity is 0.

    • 0: show only errors, traceback only shows 1 frame.
    • 1: show errors + warnings, traceback shows 5 frames.
    • 2: show errors + warnings + info, traceback shows 10 frames.
    • 3: show errors + warnings + info + debug, traceback shows 100 frames.
    • >=4: show errors + warnings + info + debug + trace, traceback shows 1000 frames.
    def init_logger(verbose: int = 0) -> None:
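
    Examples

    A minimal sketch of setting the log level:

    from starwhale import init_logger

    # show errors + warnings + info, traceback shows 10 frames
    init_logger(verbose=2)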

    login

    Log in to a Server/Cloud instance. It is equivalent to running the swcli instance login command. Logging in to a Standalone instance is meaningless.

    def login(
        instance: str,
        alias: str = "",
        username: str = "",
        password: str = "",
        token: str = "",
    ) -> None:

    Parameters

    • instance: (str, required)
      • The http url of the server/cloud instance.
    • alias: (str, optional)
      • An alias for the instance to simplify the instance part of the Starwhale URI.
      • If not specified, the hostname part of the instance http url will be used.
    • username: (str, optional)
    • password: (str, optional)
    • token: (str, optional)
      • You can only choose one of username + password or token to login to the instance.

    Examples

    from starwhale import login

    # login to Starwhale Cloud instance by token
    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")

    # login to Starwhale Server instance by username and password
    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")

    logout

    Log out of a Server/Cloud instance. It is equivalent to running the swcli instance logout command. Logging out of a Standalone instance is meaningless.

    def logout(instance: str) -> None:

    Examples

    from starwhale import login, logout

    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")
    # logout by the alias
    logout("cloud-cn")

    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")
    # logout by the instance http url
    logout("http://controller.starwhale.svc")
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/overview/index.html b/0.6.0/reference/sdk/overview/index.html index f5bd0d4e7..fbef79652 100644 --- a/0.6.0/reference/sdk/overview/index.html +++ b/0.6.0/reference/sdk/overview/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Python SDK Overview

    Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.

    Classes

    • PipelineHandler: Provides default model evaluation process definition, requires implementation of predict and evaluate methods.
    • Context: Passes context information during model evaluation, including Project, Task ID etc.
    • class Dataset: Starwhale Dataset class.
    • class starwhale.api.service.Service: The base class of online evaluation.
    • class Job: Starwhale Job class.
    • class Evaluation: Starwhale Evaluation class.

    Functions

    • @multi_classification: Decorator for multi-class problems to simplify evaluate result calculation and storage for better evaluation presentation.
    • @handler: Decorator to define a running entity with resource attributes (mem/cpu/gpu). You can control replica count. Handlers can form DAGs through dependencies to control execution flow.
    • @evaluation.predict: Decorator to define inference process in model evaluation, similar to map phase in MapReduce.
    • @evaluation.evaluate: Decorator to define evaluation process in model evaluation, similar to reduce phase in MapReduce.
    • model.build: Build Starwhale model.
    • @fine_tune: Decorator to define model fine-tuning process.
    • init_logger: Set log level, implement 5-level logging.
    • dataset: Get starwhale.Dataset object, by creating new datasets or loading existing datasets.
    • @starwhale.api.service.api: Decorator to provide a simple Web Handler input definition based on Gradio.
    • login: Log in to the server/cloud instance.
    • logout: Log out of the server/cloud instance.
    • job: Get starwhale.Job object by the Job URI.
    • @PipelineHandler.run: Decorator to define the resources for the predict and evaluate methods in PipelineHandler subclasses.

    Data Types

    • COCOObjectAnnotation: Provides COCO format definitions.
    • BoundingBox: Bounding box type, currently in LTWH format - left_x, top_y, width and height.
    • ClassLabel: Describes the number and types of labels.
    • Image: Image type.
    • GrayscaleImage: Grayscale image type, e.g. MNIST digit images, a special case of Image type.
    • Audio: Audio type.
    • Video: Video type.
    • Text: Text type, default utf-8 encoding, for storing large texts.
    • Binary: Binary type, stored in bytes, for storing large binary content.
    • Line: Line type.
    • Point: Point type.
    • Polygon: Polygon type.
    • Link: Link type, for creating remote-link data.
    • MIMEType: Describes multimedia types supported by Starwhale, used in mime_type attribute of Image, Video etc for better Dataset Viewer.

    Other

    • __version__: Version of Starwhale Python SDK and swcli, string constant.

    Further reading

    - - + + \ No newline at end of file diff --git a/0.6.0/reference/sdk/type/index.html b/0.6.0/reference/sdk/type/index.html index ef3053516..22e96fee4 100644 --- a/0.6.0/reference/sdk/type/index.html +++ b/0.6.0/reference/sdk/type/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Data Types

    COCOObjectAnnotation

    It provides definitions following the COCO format.

    COCOObjectAnnotation(
        id: int,
        image_id: int,
        category_id: int,
        segmentation: Union[t.List, t.Dict],
        area: Union[float, int],
        bbox: Union[BoundingBox, t.List[float]],
        iscrowd: int,
    )
    Parameter | Description
    id | Object id, usually a globally incrementing id
    image_id | Image id, usually the id of the image
    category_id | Category id, usually the id of the class in object detection
    segmentation | Object contour representation, Polygon (polygon vertices) or RLE format
    area | Object area
    bbox | Represents the bounding box, can be BoundingBox type or a list of floats
    iscrowd | 0 indicates a single object, 1 indicates two unseparated objects

    Examples

    def _make_coco_annotations(
        self, mask_fpath: Path, image_id: int
    ) -> t.List[COCOObjectAnnotation]:
        mask_img = PILImage.open(str(mask_fpath))

        mask = np.array(mask_img)
        object_ids = np.unique(mask)[1:]
        binary_mask = mask == object_ids[:, None, None]
        # TODO: tune permute without pytorch
        binary_mask_tensor = torch.as_tensor(binary_mask, dtype=torch.uint8)
        binary_mask_tensor = (
            binary_mask_tensor.permute(0, 2, 1).contiguous().permute(0, 2, 1)
        )

        coco_annotations = []
        for i in range(0, len(object_ids)):
            _pos = np.where(binary_mask[i])
            _xmin, _ymin = float(np.min(_pos[1])), float(np.min(_pos[0]))
            _xmax, _ymax = float(np.max(_pos[1])), float(np.max(_pos[0]))
            _bbox = BoundingBox(
                x=_xmin, y=_ymin, width=_xmax - _xmin, height=_ymax - _ymin
            )

            rle: t.Dict = coco_mask.encode(binary_mask_tensor[i].numpy())  # type: ignore
            rle["counts"] = rle["counts"].decode("utf-8")

            coco_annotations.append(
                COCOObjectAnnotation(
                    id=self.object_id,
                    image_id=image_id,
                    category_id=1,  # PennFudan Dataset only has one class: PASPersonStanding
                    segmentation=rle,
                    area=_bbox.width * _bbox.height,
                    bbox=_bbox,
                    iscrowd=0,  # suppose all instances are not crowd
                )
            )
            self.object_id += 1

        return coco_annotations

    GrayscaleImage

    GrayscaleImage provides a grayscale image type. It is a special case of the Image type, for example the digit images in MNIST.

    GrayscaleImage(
        fp: _TArtifactFP = "",
        display_name: str = "",
        shape: Optional[_TShape] = None,
        as_mask: bool = False,
        mask_uri: str = "",
    )
    Parameter | Description
    fp | Image path, IO object, or file content bytes
    display_name | Display name shown in Dataset Viewer
    shape | Image width and height, the default channel is 1
    as_mask | Whether used as a mask image
    mask_uri | URI of the original image for the mask

    Examples

    for i in range(0, min(data_number, label_number)):
        _data = data_file.read(image_size)
        _label = struct.unpack(">B", label_file.read(1))[0]
        yield GrayscaleImage(
            _data,
            display_name=f"{i}",
            shape=(height, width, 1),
        ), {"label": _label}

    GrayscaleImage Functions

    GrayscaleImage.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    GrayscaleImage.carry_raw_data

    carry_raw_data() -> GrayscaleImage

    GrayscaleImage.astype

    astype() -> Dict[str, t.Any]

    BoundingBox

    BoundingBox provides a bounding box type, currently in LTWH format:

    • left_x: x-coordinate of left edge
    • top_y: y-coordinate of top edge
    • width: width of bounding box
    • height: height of bounding box

    So it represents a bounding box by the coordinates of its left and top edges together with its width and height. This is a common format for specifying bounding boxes in computer vision tasks.

    BoundingBox(
        x: float,
        y: float,
        width: float,
        height: float
    )
    Parameter | Description
    x | x-coordinate of the left edge (left_x)
    y | y-coordinate of the top edge (top_y)
    width | Width of the bounding box
    height | Height of the bounding box
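
    Examples

    A minimal sketch based on the constructor above; the coordinates are arbitrary illustrative values:

    from starwhale import BoundingBox

    # a 100x50 box whose top-left corner is at (10, 20)
    bbox = BoundingBox(x=10.0, y=20.0, width=100.0, height=50.0)
    area = bbox.width * bbox.height  # attribute access, as in the COCOObjectAnnotation example above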

    ClassLabel

    Describes the number and types of labels.

    ClassLabel(
        names: List[Union[int, float, str]]
    )
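
    Examples

    A minimal sketch based on the constructor above; the label values are arbitrary illustrative examples:

    from starwhale import ClassLabel

    # numeric labels, e.g. for an MNIST-like dataset
    digits = ClassLabel(names=list(range(0, 10)))

    # string labels are also accepted
    pets = ClassLabel(names=["cat", "dog"])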

    Image

    Image Type.

    Image(
        fp: _TArtifactFP = "",
        display_name: str = "",
        shape: Optional[_TShape] = None,
        mime_type: Optional[MIMEType] = None,
        as_mask: bool = False,
        mask_uri: str = "",
    )
    Parameter | Description
    fp | Image path, IO object, or file content bytes
    display_name | Display name shown in Dataset Viewer
    shape | Image width, height and channels
    mime_type | MIMEType supported types
    as_mask | Whether used as a mask image
    mask_uri | URI of the original image for the mask

    The main difference from GrayscaleImage is that Image supports multi-channel RGB images by specifying shape as (W, H, C).

    Examples

    import io
    import pickle
    import typing as t
    from pathlib import Path

    from PIL import Image as PILImage
    from starwhale import Image, MIMEType

    def _iter_item(paths: t.List[Path]) -> t.Generator[t.Tuple[t.Any, t.Dict], None, None]:
        for path in paths:
            with path.open("rb") as f:
                content = pickle.load(f, encoding="bytes")
                for data, label, filename in zip(
                    content[b"data"], content[b"labels"], content[b"filenames"]
                ):
                    annotations = {
                        "label": label,
                        "label_display_name": dataset_meta["label_names"][label],
                    }

                    image_array = data.reshape(3, 32, 32).transpose(1, 2, 0)
                    image_bytes = io.BytesIO()
                    PILImage.fromarray(image_array).save(image_bytes, format="PNG")

                    yield Image(
                        fp=image_bytes.getvalue(),
                        display_name=filename.decode(),
                        shape=image_array.shape,
                        mime_type=MIMEType.PNG,
                    ), annotations

    Image Functions

    Image.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Image.carry_raw_data

    carry_raw_data() -> Image

    Image.astype

    astype() -> Dict[str, t.Any]

    Video

    Video type.

    Video(
        fp: _TArtifactFP = "",
        display_name: str = "",
        mime_type: Optional[MIMEType] = None,
    )
    Parameter | Description
    fp | Video path, IO object, or file content bytes
    display_name | Display name shown in Dataset Viewer
    mime_type | MIMEType supported types

    Examples

    import typing as t
    from pathlib import Path

    from starwhale import Video, MIMEType

    root_dir = Path(__file__).parent.parent
    dataset_dir = root_dir / "data" / "UCF-101"
    test_ds_path = [root_dir / "data" / "test_list.txt"]

    def iter_ucf_item() -> t.Generator:
        for path in test_ds_path:
            with path.open() as f:
                for line in f.readlines():
                    _, label, video_sub_path = line.split()

                    data_path = dataset_dir / video_sub_path
                    data = Video(
                        data_path,
                        display_name=video_sub_path,
                        shape=(1,),
                        mime_type=MIMEType.WEBM,
                    )

                    yield f"{label}_{video_sub_path}", {
                        "video": data,
                        "label": label,
                    }

    Audio

    Audio type.

    Audio(
        fp: _TArtifactFP = "",
        display_name: str = "",
        mime_type: Optional[MIMEType] = None,
    )
    Parameter | Description
    fp | Audio path, IO object, or file content bytes
    display_name | Display name shown in Dataset Viewer
    mime_type | MIMEType supported types

    Examples

    import typing as t
    from starwhale import Audio, MIMEType

    def iter_item() -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        for path in validation_ds_paths:
            with path.open() as f:
                for item in f.readlines():
                    item = item.strip()
                    if not item:
                        continue

                    data_path = dataset_dir / item
                    data = Audio(
                        data_path, display_name=item, shape=(1,), mime_type=MIMEType.WAV
                    )

                    speaker_id, utterance_num = data_path.stem.split("_nohash_")
                    annotations = {
                        "label": data_path.parent.name,
                        "speaker_id": speaker_id,
                        "utterance_num": int(utterance_num),
                    }
                    yield data, annotations

    Audio Functions

    Audio.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Audio.carry_raw_data

    carry_raw_data() -> Audio

    Audio.astype

    astype() -> Dict[str, t.Any]

    Text

    Text type; the default encoding is utf-8.

    Text(
        content: str,
        encoding: str = "utf-8",
    )
    Parameter | Description
    content | The text content
    encoding | Encoding format of the text

    Examples

    import typing as t
    from pathlib import Path
    from starwhale import Text

    def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        root_dir = Path(__file__).parent.parent / "data"

        with (root_dir / "fra-test.txt").open("r") as f:
            for line in f.readlines():
                line = line.strip()
                if not line or line.startswith("CC-BY"):
                    continue

                _data, _label, *_ = line.split("\t")
                data = Text(_data, encoding="utf-8")
                annotations = {"label": _label}
                yield data, annotations

    Text Functions

    Text.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Text.carry_raw_data

    carry_raw_data() -> Text

    Text.astype

    astype() -> Dict[str, t.Any]

    Text.to_str

    to_str() -> str

    Binary

    Binary provides a binary data type, stored as bytes.

    Binary(
        fp: _TArtifactFP = "",
        mime_type: MIMEType = MIMEType.UNDEFINED,
    )
    Parameter | Description
    fp | Path, IO object, or file content bytes
    mime_type | MIMEType supported types
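
    Examples

    A minimal sketch that stores raw bytes in a dataset row; the dataset name and field name are arbitrary illustrative values:

    from starwhale import dataset, Binary

    with dataset("binary-example") as ds:
        ds.append({"payload": Binary(b"\x00\x01\x02")})
        ds.commit()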

    Binary Functions

    Binary.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Binary.carry_raw_data

    carry_raw_data() -> Binary

    Binary.astype

    astype() -> Dict[str, t.Any]

    Link

    Link provides a link type to create remote-link datasets in Starwhale.

    Link(
        uri: str,
        auth: Optional[LinkAuth] = DefaultS3LinkAuth,
        offset: int = 0,
        size: int = -1,
        data_type: Optional[BaseArtifact] = None,
    )
    Parameter | Description
    uri | URI of the original data, currently supports localFS and S3 protocols
    auth | Link auth information
    offset | Data offset relative to the file pointed to by uri
    size | Data size
    data_type | Actual data type pointed to by the link, currently supports Binary, Image, Text, Audio and Video
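
    Examples

    A minimal sketch based on the parameters above; the S3 URI is only illustrative, and the data_type simply declares that the linked content is a PNG image:

    from starwhale import Link, Image, MIMEType

    remote_image = Link(
        uri="s3://my-bucket/path/to/cat.png",
        offset=0,
        size=-1,
        data_type=Image(mime_type=MIMEType.PNG),
    )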

    Link.astype

    astype() -> Dict[str, t.Any]

    MIMEType

    MIMEType describes the multimedia types supported by Starwhale, implemented using Python Enum. It is used in the mime_type attribute of Image, Video etc to enable better Dataset Viewer support.

    class MIMEType(Enum):
        PNG = "image/png"
        JPEG = "image/jpeg"
        WEBP = "image/webp"
        SVG = "image/svg+xml"
        GIF = "image/gif"
        APNG = "image/apng"
        AVIF = "image/avif"
        PPM = "image/x-portable-pixmap"
        MP4 = "video/mp4"
        AVI = "video/avi"
        WEBM = "video/webm"
        WAV = "audio/wav"
        MP3 = "audio/mp3"
        PLAIN = "text/plain"
        CSV = "text/csv"
        HTML = "text/html"
        GRAYSCALE = "x/grayscale"
        UNDEFINED = "x/undefined"

    Line

    from starwhale import dataset, Point, Line

    with dataset("collections") as ds:
        line_points = [
            Point(x=0.0, y=1.0),
            Point(x=0.0, y=100.0),
        ]
        ds.append({"line": Line(line_points)})
        ds.commit()

    Point

    from starwhale import dataset, Point

    with dataset("collections") as ds:
        ds.append(Point(x=0.0, y=100.0))
        ds.commit()

    Polygon

    from starwhale import dataset, Point, Polygon

    with dataset("collections") as ds:
        polygon_points = [
            Point(x=0.0, y=1.0),
            Point(x=0.0, y=100.0),
            Point(x=2.0, y=1.0),
            Point(x=2.0, y=100.0),
        ]
        ds.append({"polygon": Polygon(polygon_points)})
        ds.commit()
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/dataset/index.html b/0.6.0/reference/swcli/dataset/index.html index a6da5269f..5796c4b07 100644 --- a/0.6.0/reference/swcli/dataset/index.html +++ b/0.6.0/reference/swcli/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    swcli dataset

    Overview

    swcli [GLOBAL OPTIONS] dataset [OPTIONS] <SUBCOMMAND> [ARGS]...

    The dataset command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • head
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • summary
    • tag

    swcli dataset build

    swcli [GLOBAL OPTIONS] dataset build [OPTIONS]

    Build a Starwhale Dataset. This command only supports building datasets on the Standalone instance.

    Options

    • Data source options:

    Option | Required | Type | Defaults | Description
    -if or --image or --image-folder | N | String | | Build dataset from an image folder; the folder should contain the image files.
    -af or --audio or --audio-folder | N | String | | Build dataset from an audio folder; the folder should contain the audio files.
    -vf or --video or --video-folder | N | String | | Build dataset from a video folder; the folder should contain the video files.
    -h or --handler or --python-handler | N | String | | Build dataset from a python executor handler; the handler format is [module path]:[class or func name].
    -f or --yaml or --dataset-yaml | N | | dataset.yaml in cwd | Build dataset from a dataset.yaml file. Default uses the dataset.yaml in the work directory (cwd).
    -jf or --json | N | String | | Build dataset from a json or jsonl file; the option value is a json file path or an http download url. The json content structure should be a list[dict] or tuple[dict].
    -hf or --huggingface | N | String | | Build dataset from a huggingface dataset; the option value is a huggingface repo name.
    -c or --csv | N | String | | Build dataset from csv files; the option value is a csv file path, dir path or an http download url. The option can be used multiple times.

    Data source options are mutually exclusive; only one option is accepted. If none is set, the swcli dataset build command uses the dataset yaml mode to build the dataset with the dataset.yaml in the cwd.

    • Other options:
    Option | Required | Scope | Type | Defaults | Description
    -pt or --patch | one of --patch and --overwrite | Global | Boolean | True | Patch mode, only update the changed rows and columns for the existed dataset.
    -ow or --overwrite | one of --patch and --overwrite | Global | Boolean | False | Overwrite mode, update records and delete extraneous rows from the existed dataset.
    -n or --name | N | Global | String | | Dataset name
    -p or --project | N | Global | String | Default project | Project URI, the default is the currently selected project. The dataset will be stored in the specified project.
    -d or --desc | N | Global | String | | Dataset description
    -as or --alignment-size | N | Global | String | 128B | swds-bin format dataset: alignment size
    -vs or --volume-size | N | Global | String | 64MB | swds-bin format dataset: volume size
    -r or --runtime | N | Global | String | | Runtime URI
    -w or --workdir | N | Python Handler Mode | String | cwd | Work dir to search handlers.
    --auto-label/--no-auto-label | N | Image/Video/Audio Folder Mode | Boolean | True | Whether to auto label by the sub-folder name.
    --field-selector | N | JSON File Mode | String | | The field from which to extract dataset array items. The field is split by the dot(.) symbol.
    --subset | N | Huggingface Mode | String | | Huggingface dataset subset name. If the subset name is not specified, all subsets will be built.
    --split | N | Huggingface Mode | String | | Huggingface dataset split name. If the split name is not specified, all splits will be built.
    --revision | N | Huggingface Mode | String | main | Version of the dataset script to load. Defaults to 'main'. The option value accepts a tag name, branch name or commit hash.
    --add-hf-info/--no-add-hf-info | N | Huggingface Mode | Boolean | True | Whether to add huggingface dataset info to the dataset rows; currently supports adding subset and split into the dataset rows. Subset uses the _hf_subset field name, split uses the _hf_split field name.
    --cache/--no-cache | N | Huggingface Mode | Boolean | True | Whether to use the huggingface dataset cache (download + local hf dataset).
    -t or --tag | N | Global | String | | Dataset tags, the option can be used multiple times.
    --encoding | N | CSV/JSON/JSONL Mode | String | | File encoding.
    --dialect | N | CSV Mode | String | excel | The csv file dialect, the default is excel. Currently supports excel, excel-tab and unix formats.
    --delimiter | N | CSV Mode | String | , | A one-character string used to separate fields in the csv file.
    --quotechar | N | CSV Mode | String | " | A one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or which contain new-line characters.
    --skipinitialspace/--no-skipinitialspace | N | CSV Mode | Bool | False | Whether to skip spaces after the delimiter for the csv file.
    --strict/--no-strict | N | CSV Mode | Bool | False | When True, raise an exception if the csv is not well formed.

    Examples for dataset building

    #- from dataset.yaml
    swcli dataset build # build dataset from dataset.yaml in the current work directory(pwd)
    swcli dataset build --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, all the involved files are related to the dataset.yaml file.
    swcli dataset build --overwrite --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, and overwrite the existed dataset.
    swcli dataset build --tag tag1 --tag tag2

    #- from handler
    swcli dataset build --handler mnist.dataset:iter_mnist_item # build dataset from mnist.dataset:iter_mnist_item handler, the workdir is the current work directory(pwd).
    # build dataset from mnist.dataset:LinkRawDatasetProcessExecutor handler, the workdir is example/mnist
    swcli dataset build --handler mnist.dataset:LinkRawDatasetProcessExecutor --workdir example/mnist

    #- from image folder
    swcli dataset build --image-folder /path/to/image/folder # build dataset from /path/to/image/folder, search all image type files.

    #- from audio folder
    swcli dataset build --audio-folder /path/to/audio/folder # build dataset from /path/to/audio/folder, search all audio type files.

    #- from video folder
    swcli dataset build --video-folder /path/to/video/folder # build dataset from /path/to/video/folder, search all video type files.

    #- from json/jsonl file
    swcli dataset build --json /path/to/example.json
    swcli dataset build --json http://example.com/example.json
    swcli dataset build --json /path/to/example.json --field-selector a.b.c # extract the json_content["a"]["b"]["c"] field from the json file.
    swcli dataset build --name qald9 --json https://raw.githubusercontent.com/ag-sc/QALD/master/9/data/qald-9-test-multilingual.json --field-selector questions
    swcli dataset build --json /path/to/test01.jsonl --json /path/to/test02.jsonl
    swcli dataset build --json https://modelscope.cn/api/v1/datasets/damo/100PoisonMpts/repo\?Revision\=master\&FilePath\=train.jsonl

    #- from huggingface dataset
    swcli dataset build --huggingface mnist
    swcli dataset build -hf mnist --no-cache
    swcli dataset build -hf cais/mmlu --subset anatomy --split auxiliary_train --revision 7456cfb

    #- from csv files
    swcli dataset build --csv /path/to/example.csv
    swcli dataset build --csv /path/to/example.csv --csv /path/to/example2.csv
    swcli dataset build --csv /path/to/csv-dir
    swcli dataset build --csv http://example.com/example.csv
    swcli dataset build --name product-desc-modelscope --csv https://modelscope.cn/api/v1/datasets/lcl193798/product_description_generation/repo\?Revision\=master\&FilePath\=test.csv --encoding=utf-8-sig

    swcli dataset copy

    swcli [GLOBAL OPTIONS] dataset copy [OPTIONS] <SRC> <DEST>

    dataset copy copies from SRC to DEST.

    SRC and DEST are both dataset URIs.

    When copying a Starwhale Dataset, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if tags carried during the copy have already been used by other versions, this option can be used to forcibly update those tags to this version.
    -p or --patch | one of --patch and --overwrite | Boolean | True | Patch mode, only update the changed rows and columns for the remote dataset.
    -o or --overwrite | one of --patch and --overwrite | Boolean | False | Overwrite mode, update records and delete extraneous rows from the remote dataset.
    -i or --ignore-tag | N | String | | Ignore tags to copy. The option can be used multiple times.

    Examples for dataset copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a new dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp --patch cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with a dataset name 'mnist-local'
    swcli dataset cp --overwrite cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with a new dataset name 'mnist-cloud'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli dataset cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp local/project/myproject/dataset/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli dataset cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1 --force

    swcli dataset diff

    swcli [GLOBAL OPTIONS] dataset diff [OPTIONS] <DATASET VERSION> <DATASET VERSION>

    dataset diff compares the difference between two versions of the same dataset.

    DATASET VERSION is a dataset URI.

    Option | Required | Type | Defaults | Description
    --show-details | N | Boolean | False | If true, outputs the detail information.

    swcli dataset head

    swcli [GLOBAL OPTIONS] dataset head [OPTIONS] <DATASET VERSION>

    Print the first n rows of the dataset. DATASET VERSION is a dataset URI.

    Option | Required | Type | Defaults | Description
    -n or --rows | N | Int | 5 | Print the first NUM rows of the dataset.
    -srd or --show-raw-data | N | Boolean | False | Fetch raw data content from the objectstore.
    -st or --show-types | N | Boolean | False | Show data types.

    Examples for dataset head

    #- print the first 5 rows of the mnist dataset
    swcli dataset head -n 5 mnist

    #- print the first 10 rows of the mnist(v0 version) dataset and show raw data
    swcli dataset head -n 10 mnist/v0 --show-raw-data

    #- print the data types of the mnist dataset
    swcli dataset head mnist --show-types

    #- print the remote cloud dataset's first 5 rows
    swcli dataset head cloud://cloud-cn/project/test/dataset/mnist -n 5

    #- print the first 5 rows in the json format
    swcli -o json dataset head -n 5 mnist

    swcli dataset history

    swcli [GLOBAL OPTIONS] dataset history [OPTIONS] <DATASET>

    dataset history outputs all history versions of the specified Starwhale Dataset.

    DATASET is a dataset URI.

    Option | Required | Type | Defaults | Description
    --fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli dataset info

    swcli [GLOBAL OPTIONS] dataset info [OPTIONS] <DATASET>

    dataset info outputs detailed information about the specified Starwhale Dataset version.

    DATASET is a dataset URI.

    swcli dataset list

    swcli [GLOBAL OPTIONS] dataset list [OPTIONS]

    dataset list shows all Starwhale Datasets.

    Option | Required | Type | Defaults | Description
    --project | N | String | | The URI of the project to list. Use the default project if not specified.
    --fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
    --show-removed or -sr | N | Boolean | False | If true, include datasets that are removed but not garbage collected.
    --page | N | Integer | 1 | The starting page number. Server and cloud instances only.
    --size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
    --filter or -fl | N | String | | Show only Starwhale Datasets that match the specified filters. This option can be used multiple times in one command.

    Filter | Type | Description | Example
    name | Key-Value | The name prefix of datasets | --filter name=mnist
    owner | Key-Value | The dataset owner name | --filter owner=starwhale
    latest | Flag | If specified, it shows only the latest version. | --filter latest

    swcli dataset recover

    swcli [GLOBAL OPTIONS] dataset recover [OPTIONS] <DATASET>

    dataset recover recovers previously removed Starwhale Datasets or versions.

    DATASET is a dataset URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Datasets or versions cannot be recovered, nor can those removed with the --force option.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, overwrite the Starwhale Dataset or version with the same name or version id.

    swcli dataset remove

    swcli [GLOBAL OPTIONS] dataset remove [OPTIONS] <DATASET>

    dataset remove removes the specified Starwhale Dataset or version.

    DATASET is a dataset URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Datasets or versions can be recovered by swcli dataset recover before garbage collection. Use the --force option to persistently remove a Starwhale Dataset or version.

    Removed Starwhale Datasets or versions can be listed by swcli dataset list --show-removed.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, persistently delete the Starwhale Dataset or version. It can not be recovered.

    swcli dataset summary

    swcli [GLOBAL OPTIONS]  dataset summary <DATASET>

    Show dataset summary. DATASET is a dataset URI.
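
    Examples for dataset summary

    Illustrative invocations; the mnist dataset name is only an example:

    #- show the summary of the mnist dataset
    swcli dataset summary mnist

    #- show the summary of a specific dataset version
    swcli dataset summary mnist/version/latest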

    swcli dataset tag

    swcli [GLOBAL OPTIONS] dataset tag [OPTIONS] <DATASET> [TAGS]...

    dataset tag attaches a tag to a specified Starwhale Dataset version. The tag command also supports listing and removing tags. The tag can be used in a dataset URI instead of the version id.

    DATASET is a dataset URI.

    Each dataset version can have any number of tags, but duplicated tag names are not allowed in the same dataset.

    dataset tag only works for the Standalone Instance.

    Option | Required | Type | Defaults | Description
    --remove or -r | N | Boolean | False | Remove the tag if true.
    --quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
    --force-add or -f | N | Boolean | False | When adding tags on Server/Cloud instances, an error is raised if the tag is already used by another dataset version. In this case, you can force the update with the --force-add option.

    Examples for dataset tag

    #- list tags of the mnist dataset
    swcli dataset tag mnist

    #- add tags for the mnist dataset
    swcli dataset tag mnist t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist/version/latest t1 --force-add
    swcli dataset tag mnist t1 --quiet

    #- remove tags for the mnist dataset
    swcli dataset tag mnist -r t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/index.html b/0.6.0/reference/swcli/index.html index fb6bc59ef..93d1ea081 100644 --- a/0.6.0/reference/swcli/index.html +++ b/0.6.0/reference/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Overview

    Usage

    swcli [OPTIONS] <COMMAND> [ARGS]...
    note

    sw and starwhale are aliases for swcli.

    Global Options

    Option | Description
    --version | Show the Starwhale Client version.
    -v or --verbose | Show verbose logs; -v can be repeated, and more -v flags produce more logs.
    --help | Show the help message.
    caution

    Global options must be put immediately after swcli, and before any command.

    Commands

    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/instance/index.html b/0.6.0/reference/swcli/instance/index.html index af5a8e22b..dd24d07e3 100644 --- a/0.6.0/reference/swcli/instance/index.html +++ b/0.6.0/reference/swcli/instance/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    swcli instance

    Overview

    swcli [GLOBAL OPTIONS] instance [OPTIONS] <SUBCOMMAND> [ARGS]

    The instance command includes the following subcommands:

    • info
    • list (ls)
    • login
    • logout
    • use (select)

    swcli instance info

    swcli [GLOBAL OPTIONS] instance info [OPTIONS] <INSTANCE>

    instance info outputs detailed information about the specified Starwhale Instance.

    INSTANCE is an instance URI.
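
    Illustrative invocations; "local" is the Standalone instance, and "dev" is assumed to be a previously logged-in alias:

    #- show information about the local standalone instance
    swcli instance info local

    #- show information about a server instance by its alias
    swcli instance info dev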

    swcli instance list

    swcli [GLOBAL OPTIONS] instance list [OPTIONS]

    instance list shows all Starwhale Instances.

    swcli instance login

    swcli [GLOBAL OPTIONS] instance login [OPTIONS] <INSTANCE>

    instance login connects to a Server/Cloud instance and makes the specified instance default.

    INSTANCE is an instance URI.

    Option | Required | Type | Defaults | Description
    --username | N | String | | The login username.
    --password | N | String | | The login password.
    --token | N | String | | The login token.
    --alias | Y | String | | The alias of the instance. You can use it anywhere that requires an instance URI.

    --username and --password can not be used together with --token.
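
    Illustrative invocations; the urls, credentials and aliases mirror the SDK login examples above and are not real accounts:

    #- login to a Starwhale Server instance with username and password
    swcli instance login --username starwhale --password abcd1234 --alias dev http://controller.starwhale.svc

    #- login to Starwhale Cloud with a token
    swcli instance login --token xxx --alias cloud-cn https://cloud.starwhale.cn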

    swcli instance logout

    swcli [GLOBAL OPTIONS] instance logout [INSTANCE]

    instance logout disconnects from the Server/Cloud instance, and clears information stored in the local storage.

INSTANCE is an instance URI. If it is omitted, the default instance is used instead.

    swcli instance use

    swcli [GLOBAL OPTIONS] instance use <INSTANCE>

instance use makes the specified instance the default.

    INSTANCE is an instance URI.

    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/job/index.html b/0.6.0/reference/swcli/job/index.html index 352cf4e23..0efe62265 100644 --- a/0.6.0/reference/swcli/job/index.html +++ b/0.6.0/reference/swcli/job/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    swcli job

    Overview

    swcli [GLOBAL OPTIONS] job [OPTIONS] <SUBCOMMAND> [ARGS]...

    The job command includes the following subcommands:

    • cancel
    • info
    • list(ls)
    • pause
    • recover
    • remove(rm)
    • resume

    swcli job cancel

    swcli [GLOBAL OPTIONS] job cancel [OPTIONS] <JOB>

job cancel stops the specified job. On a Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, kill the Starwhale Job by force.
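
A hedged example; xxxx stands for the id of a job in the default project:

#- cancel the job
swcli job cancel xxxx
#- kill the job by force
swcli job cancel xxxx --force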

    swcli job info

    swcli [GLOBAL OPTIONS] job info [OPTIONS] <JOB>

    job info outputs detailed information about the specified Starwhale Job.

    JOB is a job URI.

    swcli job list

    swcli [GLOBAL OPTIONS] job list [OPTIONS]

    job list shows all Starwhale Jobs.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--show-removed or -sr | N | Boolean | False | If true, include jobs that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
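
A hedged example of listing jobs:

#- list jobs of the default project
swcli job list
#- list jobs of the self project, including removed but not garbage collected ones
swcli job list --project self --show-removed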

    swcli job pause

    swcli [GLOBAL OPTIONS] job pause [OPTIONS] <JOB>

job pause pauses the specified job. Paused jobs can be resumed by job resume. On a Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

From Starwhale's perspective, pause is almost the same as cancel, except that the job reuses the old Job id when resumed. It is the job developer's responsibility to save all data periodically and load it when the job is resumed. The job id is usually used as the key of the checkpoint.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, kill the Starwhale Job by force.

    swcli job resume

    swcli [GLOBAL OPTIONS] job resume [OPTIONS] <JOB>

job resume resumes the specified job. On a Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/model/index.html b/0.6.0/reference/swcli/model/index.html index f5e8de17a..256f32b92 100644 --- a/0.6.0/reference/swcli/model/index.html +++ b/0.6.0/reference/swcli/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    swcli model

    Overview

    swcli [GLOBAL OPTIONS] model [OPTIONS] <SUBCOMMAND> [ARGS]...

    The model command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • run
    • serve
    • tag

    swcli model build

    swcli [GLOBAL OPTIONS] model build [OPTIONS] <WORKDIR>

    model build will put the whole WORKDIR into the model, except files that match patterns defined in .swignore.

    model build will import modules specified by --module to generate the required configurations to run the model. If your module depends on third-party libraries, we strongly recommend you use the --runtime option; otherwise, you need to ensure that the python environment used by swcli has these libraries installed.

Option | Required | Type | Defaults | Description
--project or -p | N | String | the default project | The project URI.
--model-yaml or -f | N | String | ${workdir}/model.yaml | The path of the model.yaml file. model.yaml is optional for model build.
--module or -m | N | String | | Python modules to be imported during the build process. Starwhale will export model handlers from these modules into the model package. This option can be set multiple times.
--runtime or -r | N | String | | The URI of the Starwhale Runtime to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
--name or -n | N | String | | The model package name.
--desc or -d | N | String | | The model package description.
--package-runtime or --no-package-runtime | N | Boolean | True | When the --runtime option is used, the corresponding Starwhale Runtime becomes the built-in runtime of the Starwhale Model by default. This behavior can be disabled with --no-package-runtime.
--add-all | N | Boolean | False | Add all files in the working directory to the model package. When this option is disabled, Python cache files and virtual environment files are excluded. The .swignore file still takes effect.
-t or --tag | N | String | | Model tags; the option can be used multiple times (global scope).

    Examples for model build

    # build by the model.yaml in current directory and model package will package all the files from the current directory.
    swcli model build .
    # search model run decorators from mnist.evaluate, mnist.train and mnist.predict modules, then package all the files from the current directory to model package.
    swcli model build . --module mnist.evaluate --module mnist.train --module mnist.predict
    # build model package in the Starwhale Runtime environment.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1
    # forbid to package Starwhale Runtime into the model.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1 --no-package-runtime
    # build model package with tags.
    swcli model build . --tag tag1 --tag tag2

    swcli model copy

    swcli [GLOBAL OPTIONS] model copy [OPTIONS] <SRC> <DEST>

    model copy copies from SRC to DEST for Starwhale Model sharing.

    SRC and DEST are both model URIs.

When copying a Starwhale Model, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if a tag carried by the copy is already used by another version, this option forcibly moves the tag to the copied version.
-i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for model copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a new model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with a new model name 'mnist-cloud'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli model cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp local/project/myproject/model/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli model cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli model diff

    swcli [GLOBAL OPTIONS] model diff [OPTIONS] <MODEL VERSION> <MODEL VERSION>

    model diff compares the difference between two versions of the same model.

    MODEL VERSION is a model URI.

Option | Required | Type | Defaults | Description
--show-details | N | Boolean | False | If true, outputs the detailed information.
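
A hedged example; the version names are placeholders:

#- compare two versions of the mnist model and output the details
swcli model diff mnist/version/v0 mnist/version/v1 --show-details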

    swcli model extract

    swcli [GLOBAL OPTIONS] model extract [OPTIONS] <MODEL> <TARGET_DIR>

    The model extract command can extract a Starwhale model to a specified directory for further customization.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If this option is used, existing extracted model files in the target directory will be forcibly overwritten.

    Examples for model extract

    #- extract mnist model package to current directory
    swcli model extract mnist/version/xxxx .

    #- extract mnist model package to current directory and force to overwrite the files
    swcli model extract mnist/version/xxxx . -f

    swcli model history

    swcli [GLOBAL OPTIONS] model history [OPTIONS] <MODEL>

    model history outputs all history versions of the specified Starwhale Model.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli model info

    swcli [GLOBAL OPTIONS] model info [OPTIONS] <MODEL>

    model info outputs detailed information about the specified Starwhale Model version.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--output-filter or -of | N | Choice of [basic/model_yaml/manifest/files/handlers/all] | basic | Filter the output content. Only the standalone instance supports this option.

    Examples for model info

    swcli model info mnist # show basic info from the latest version of model
    swcli model info mnist/version/v0 # show basic info from the v0 version of model
    swcli model info mnist/version/latest --output-filter=all # show all info
    swcli model info mnist -of basic # show basic info
    swcli model info mnist -of model_yaml # show model.yaml
    swcli model info mnist -of handlers # show model runnable handlers info
    swcli model info mnist -of files # show model package files tree
    swcli -o json model info mnist -of all # show all info in json format

    swcli model list

    swcli [GLOBAL OPTIONS] model list [OPTIONS]

    model list shows all Starwhale Models.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed | N | Boolean | False | If true, include packages that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Models that match the specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of models | --filter name=mnist
owner | Key-Value | The model owner name | --filter owner=starwhale
latest | Flag | If specified, only the latest version is shown. | --filter latest
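
A hedged example that combines the filters above:

#- show only the latest versions of models whose name starts with mnist and that are owned by starwhale
swcli model list --filter name=mnist --filter owner=starwhale --filter latest
#- page through the models of a cloud project (server and cloud instances only)
swcli model list --project cloud://cloud.starwhale.cn/project/starwhale:public --page 1 --size 10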

    swcli model recover

    swcli [GLOBAL OPTIONS] model recover [OPTIONS] <MODEL>

    model recover recovers previously removed Starwhale Models or versions.

    MODEL is a model URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Models or versions can not be recovered, nor can those removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Model or version with the same name or version id.

    swcli model remove

    swcli [GLOBAL OPTIONS] model remove [OPTIONS] <MODEL>

    model remove removes the specified Starwhale Model or version.

    MODEL is a model URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Models or versions can be recovered by swcli model recover before garbage collection. Use the --force option to persistently remove a Starwhale Model or version.

    Removed Starwhale Models or versions can be listed by swcli model list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Model or version. It can not be recovered.
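
A hedged example of the remove/recover workflow:

#- soft-remove all versions of the mnist model, then recover them before garbage collection
swcli model remove mnist
swcli model recover mnist
#- permanently remove one version; it can not be recovered afterwards
swcli model remove mnist/version/v0 --force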

    swcli model run

    swcli [GLOBAL OPTIONS] model run [OPTIONS]

model run executes a model handler. It supports two modes: model URI mode, which requires a pre-built Starwhale Model package, and local development mode, which only needs the model source directory.

Option | Required | Type | Defaults | Description
--workdir or -w | N | String | | For local development mode, the path of the model source directory.
--uri or -u | N | String | | For model URI mode, the model URI string.
--handler or -h | N | String | | Runnable handler index or name. The default is None, which uses the first handler.
--module or -m | N | String | | The name of the Python module to import. This option can be set multiple times.
--runtime or -r | N | String | | The Starwhale Runtime URI to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
--model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to the model.yaml. model.yaml is optional for model run.
--run-project or -p | N | String | Default project | Project URI; the model run results will be stored in the corresponding project.
--dataset or -d | N | String | | Dataset URI, the Starwhale dataset required for model running. This option can be set multiple times.
--dataset-head or -dh | N | Integer | 0 | [ONLY STANDALONE] For debugging purposes, every prediction task consumes at most the first n rows from every dataset. When the value is less than or equal to 0, all samples are used.
--in-container | N | Boolean | False | Use a docker container to run the model. This option is only available for standalone instances. For server and cloud instances, a docker image is always used. If the runtime is a docker image, this option is always implied.
--forbid-snapshot or -fs | N | Boolean | False | In model URI mode, each model run uses a new snapshot directory. Setting this option makes the run use the model's workdir directly as the run directory. In local development mode, this option does not take effect; each run happens in the directory specified by --workdir.
-- --user-arbitrary-args | N | String | | Specify the args you defined in your handlers after --.

    Examples for model run

    # --> run by model uri
    # run the first handler from model uri
    swcli model run -u mnist/version/latest
    # run index id(1) handler from model uri
    swcli model run --uri mnist/version/latest --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from model uri
    swcli model run --uri mnist/version/latest --handler mnist.evaluator:MNISTInference.cmp

    # --> run by the working directory, which does not build model package yet. Make local debug happy.
    # run the first handler from the working directory, use the model.yaml in the working directory
    swcli model run -w .
    # run index id(1) handler from the working directory, search mnist.evaluator module and model.yaml handlers(if existed) to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from the working directory, search mnist.evaluator module to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler mnist.evaluator:MNISTInference.cmp
    # run the f handler in th.py from the working directory with the args defined in th:f
    # @handler()
    # def f(
    # x=ListInput(IntInput()),
    # y=2,
    # mi=MyInput(),
    # ds=DatasetInput(required=True),
    # ctx=ContextInput(),
    # )
    swcli model run -w . -m th --handler th:f -- -x 2 -x=1 --mi=blab-la --ds mnist

    # --> run with dataset of head 10
    swcli model run --uri mnist --dataset-head 10 --dataset mnist

    swcli model serve

    swcli [GLOBAL OPTIONS] model serve [OPTIONS]

    The model serve command can run the model as a web server, and provide a simple web interaction interface.

Option | Required | Type | Defaults | Description
--workdir or -w | N | String | | In local development mode, the directory of the model code.
--uri or -u | N | String | | In model URI mode, the model URI.
--runtime or -r | N | String | | The URI of the Starwhale Runtime to use when running this command. If specified, the command runs in the isolated Python environment defined in the Starwhale Runtime; otherwise it runs directly in swcli's current Python environment.
--model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to the model.yaml. model.yaml is optional for model serve.
--module or -m | N | String | | Name of the Python module to import. This option can be set multiple times.
--host | N | String | 127.0.0.1 | The address for the service to listen on.
--port | N | Integer | 8080 | The port for the service to listen on.

    Examples for model serve

    swcli model serve -u mnist
    swcli model serve --uri mnist/version/latest --runtime pytorch/version/latest

    swcli model serve --workdir . --runtime pytorch/version/v0
    swcli model serve --workdir . --runtime pytorch/version/v1 --host 0.0.0.0 --port 8080
    swcli model serve --workdir . --runtime pytorch --module mnist.evaluator

    swcli model tag

    swcli [GLOBAL OPTIONS] model tag [OPTIONS] <MODEL> [TAGS]...

    model tag attaches a tag to a specified Starwhale Model version. At the same time, tag command also supports list and remove tags. The tag can be used in a model URI instead of the version id.

    MODEL is a model URI.

    Each model version can have any number of tags, but duplicated tag names are not allowed in the same model.

    model tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is reported if the tag is already used by another model version. In this case, you can force the update with the --force-add option.

    Examples for model tag

    #- list tags of the mnist model
    swcli model tag mnist

    #- add tags for the mnist model
    swcli model tag mnist t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist/version/latest t1 --force-add
    swcli model tag mnist t1 --quiet

    #- remove tags for the mnist model
    swcli model tag mnist -r t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/project/index.html b/0.6.0/reference/swcli/project/index.html index aa22e34f6..9f2648224 100644 --- a/0.6.0/reference/swcli/project/index.html +++ b/0.6.0/reference/swcli/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    swcli project

    Overview

    swcli [GLOBAL OPTIONS] project [OPTIONS] <SUBCOMMAND> [ARGS]...

    The project command includes the following subcommands:

    • create(add, new)
    • info
    • list(ls)
    • recover
• remove(rm)
    • use(select)

    swcli project create

    swcli [GLOBAL OPTIONS] project create <PROJECT>

    project create creates a new project.

    PROJECT is a project URI.

    swcli project info

    swcli [GLOBAL OPTIONS] project info [OPTIONS] <PROJECT>

    project info outputs detailed information about the specified Starwhale Project.

    PROJECT is a project URI.

    swcli project list

    swcli [GLOBAL OPTIONS] project list [OPTIONS]

    project list shows all Starwhale Projects.

Option | Required | Type | Defaults | Description
--instance | N | String | | The URI of the instance to list. If this option is omitted, use the default instance.
--show-removed | N | Boolean | False | If true, include projects that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli project recover

    swcli [GLOBAL OPTIONS] project recover [OPTIONS] <PROJECT>

    project recover recovers previously removed Starwhale Projects.

    PROJECT is a project URI.

Garbage-collected Starwhale Projects can not be recovered, nor can those removed with the --force option.

    swcli project remove

    swcli [GLOBAL OPTIONS] project remove [OPTIONS] <PROJECT>

    project remove removes the specified Starwhale Project.

    PROJECT is a project URI.

    Removed Starwhale Projects can be recovered by swcli project recover before garbage collection. Use the --force option to persistently remove a Starwhale Project.

Removed Starwhale Projects can be listed by swcli project list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Project. It can not be recovered.

    swcli project use

    swcli [GLOBAL OPTIONS] project use <PROJECT>

project use makes the specified project the default. You must log in first to use a project on a Server/Cloud instance.

    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/runtime/index.html b/0.6.0/reference/swcli/runtime/index.html index 545f4d95a..8593fd03f 100644 --- a/0.6.0/reference/swcli/runtime/index.html +++ b/0.6.0/reference/swcli/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    swcli runtime

    Overview

    swcli [GLOBAL OPTIONS] runtime [OPTIONS] <SUBCOMMAND> [ARGS]...

    The runtime command includes the following subcommands:

    • activate(actv)
    • build
    • copy(cp)
    • dockerize
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • tag

    swcli runtime activate

    swcli [GLOBAL OPTIONS] runtime activate [OPTIONS] <RUNTIME>

Like source venv/bin/activate or conda activate xxx, runtime activate sets up a new python environment according to the settings of the specified runtime. When the current shell is closed or switched to another one, you need to reactivate the runtime. RUNTIME is a Runtime URI.

If you want to quit the activated runtime environment, run deactivate in the venv environment or conda deactivate in the conda environment.

The runtime activate command builds an isolated Python environment and downloads the relevant Python packages according to the Starwhale runtime definition when activating the environment for the first time. This process may take a long time.
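
A hedged example; pytorch is assumed to be an existing runtime in the default project:

#- activate the latest version of the pytorch runtime
swcli runtime activate pytorch
#- quit the activated environment
deactivate        # venv mode
conda deactivate  # conda mode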

    swcli runtime build

    swcli [GLOBAL OPTIONS] runtime build [OPTIONS]

The runtime build command can build a shareable and reproducible runtime environment suitable for ML/DL from various environments or from a runtime.yaml file.

    Parameters

    • Parameters related to runtime building methods:
Option | Required | Type | Defaults | Description
-c or --conda | N | String | | Find the corresponding conda environment by the conda env name and export its Python dependencies to generate the Starwhale runtime.
-cp or --conda-prefix | N | String | | Find the corresponding conda environment by the conda env prefix path and export its Python dependencies to generate the Starwhale runtime.
-v or --venv | N | String | | Find the corresponding venv environment by the venv directory and export its Python dependencies to generate the Starwhale runtime.
-s or --shell | N | String | | Export the Python dependencies of the current shell environment to generate the Starwhale runtime.
-y or --yaml | N | | runtime.yaml in the cwd directory | Build the Starwhale runtime according to a user-defined runtime.yaml.
-d or --docker | N | String | | Use the docker image as the Starwhale runtime.

The runtime building method options are mutually exclusive; only one can be specified. If none is specified, the --yaml method is used and runtime.yaml in the current working directory is read to build the Starwhale runtime.

    • Other parameters:
Option | Required | Scope | Type | Defaults | Description
--project or -p | N | Global | String | Default project | The project URI.
-del or --disable-env-lock | N | runtime.yaml mode | Boolean | False | Whether to install the dependencies in runtime.yaml and lock the version information of the related dependencies. The dependencies are locked by default.
-nc or --no-cache | N | runtime.yaml mode | Boolean | False | Whether to delete the isolated environment and install the related dependencies from scratch. By default, dependencies are installed in the existing isolated environment.
--cuda | N | conda/venv/shell mode | Choice[11.3/11.4/11.5/11.6/11.7/] | | CUDA version. CUDA will not be used by default.
--cudnn | N | conda/venv/shell mode | Choice[8/] | | cuDNN version. cuDNN will not be used by default.
--arch | N | conda/venv/shell mode | Choice[amd64/arm64/noarch] | noarch | Architecture.
-dpo or --dump-pip-options | N | Global | Boolean | False | Dump pip config options from the ~/.pip/pip.conf file.
-dcc or --dump-condarc | N | Global | Boolean | False | Dump conda config from the ~/.condarc file.
-t or --tag | N | Global | String | | Runtime tags; the option can be used multiple times.

    Examples for Starwhale Runtime building

    #- from runtime.yaml:
    swcli runtime build # use the current directory as the workdir and use the default runtime.yaml file
    swcli runtime build -y example/pytorch/runtime.yaml # use example/pytorch/runtime.yaml as the runtime.yaml file
    swcli runtime build --yaml runtime.yaml # use runtime.yaml at the current directory as the runtime.yaml file
    swcli runtime build --tag tag1 --tag tag2

    #- from conda name:
    swcli runtime build -c pytorch # lock pytorch conda environment and use `pytorch` as the runtime name
    swcli runtime build --conda pytorch --name pytorch-runtime # use `pytorch-runtime` as the runtime name
    swcli runtime build --conda pytorch --cuda 11.4 # specify the cuda version
    swcli runtime build --conda pytorch --arch noarch # specify the system architecture

    #- from conda prefix path:
    swcli runtime build --conda-prefix /home/starwhale/anaconda3/envs/pytorch # get conda prefix path by `conda info --envs` command

    #- from venv prefix path:
    swcli runtime build -v /home/starwhale/.virtualenvs/pytorch
    swcli runtime build --venv /home/starwhale/.local/share/virtualenvs/pytorch --arch amd64

    #- from docker image:
    swcli runtime build --docker pytorch/pytorch:1.9.0-cuda11.1-cudnn8-runtime # use the docker image as the runtime directly

    #- from shell:
    swcli runtime build -s --cuda 11.4 --cudnn 8 # specify the cuda and cudnn version
    swcli runtime build --shell --name pytorch-runtime # lock the current shell environment and use `pytorch-runtime` as the runtime name

    swcli runtime copy

    swcli [GLOBAL OPTIONS] runtime copy [OPTIONS] <SRC> <DEST>

    runtime copy copies from SRC to DEST. SRC and DEST are both Runtime URIs.

When copying a Starwhale Runtime, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are built-in Starwhale system tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if a tag carried by the copy is already used by another version, this option forcibly moves the tag to the copied version.
-i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for Starwhale Runtime copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a new runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with a new runtime name 'mnist-cloud'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli runtime cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp local/project/myproject/runtime/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli runtime cp pytorch cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli runtime dockerize

    swcli [GLOBAL OPTIONS] runtime dockerize [OPTIONS] <RUNTIME>

    runtime dockerize generates a docker image based on the specified runtime. Starwhale uses docker buildx to create the image. Docker 19.03 or later is required to run this command.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--tag or -t | N | String | | The tag of the docker image. This option can be repeated multiple times.
--push | N | Boolean | False | If true, push the image to the docker registry.
--platform | N | String | amd64 | The target platform, either amd64 or arm64. This option can be repeated multiple times to create a multi-platform image.
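
A hedged example; the image name and registry are placeholders:

#- build a local docker image for the latest pytorch runtime
swcli runtime dockerize pytorch/version/latest --tag mycustom.com/star-whale/pytorch:latest
#- build a multi-platform image and push it to the registry
swcli runtime dockerize pytorch/version/latest --tag mycustom.com/star-whale/pytorch:latest --platform amd64 --platform arm64 --push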

    swcli runtime extract

swcli [GLOBAL OPTIONS] runtime extract [OPTIONS] <RUNTIME>

Starwhale runtimes are distributed as compressed packages. The runtime extract command extracts the runtime package for further customization and modification.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | Whether to delete and re-extract if there is already an extracted Starwhale runtime in the target directory.
--target-dir | N | String | | Custom extraction directory. If not specified, the runtime is extracted to the default Starwhale runtime workdir. The command log shows the directory location.
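
Examples for runtime extract (hedged; pytorch is a placeholder runtime name):

#- extract the latest pytorch runtime into the default Starwhale workdir
swcli runtime extract pytorch
#- extract into a custom directory and overwrite a previous extraction
swcli runtime extract pytorch/version/latest --target-dir ./pytorch-runtime --force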

    swcli runtime history

    swcli [GLOBAL OPTIONS] runtime history [OPTIONS] <RUNTIME>

    runtime history outputs all history versions of the specified Starwhale Runtime.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli runtime info

    swcli [GLOBAL OPTIONS] runtime info [OPTIONS] <RUNTIME>

    runtime info outputs detailed information about a specified Starwhale Runtime version.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--output-filter or -of | N | Choice of [basic/runtime_yaml/manifest/lock/all] | basic | Filter the output content. Only the standalone instance supports this option.

    Examples for Starwhale Runtime info

    swcli runtime info pytorch # show basic info from the latest version of runtime
    swcli runtime info pytorch/version/v0 # show basic info
    swcli runtime info pytorch/version/v0 --output-filter basic # show basic info
    swcli runtime info pytorch/version/v1 -of runtime_yaml # show runtime.yaml content
    swcli runtime info pytorch/version/v1 -of lock # show auto lock file content
    swcli runtime info pytorch/version/v1 -of manifest # show _manifest.yaml content
    swcli runtime info pytorch/version/v1 -of all # show all info of the runtime

    swcli runtime list

    swcli [GLOBAL OPTIONS] runtime list [OPTIONS]

    runtime list shows all Starwhale Runtimes.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed or -sr | N | Boolean | False | If true, include runtimes that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Runtimes that match the specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of runtimes | --filter name=pytorch
owner | Key-Value | The runtime owner name | --filter owner=starwhale
latest | Flag | If specified, only the latest version is shown. | --filter latest

    swcli runtime recover

    swcli [GLOBAL OPTIONS] runtime recover [OPTIONS] <RUNTIME>

    runtime recover can recover previously removed Starwhale Runtimes or versions.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Runtimes or versions can not be recovered, nor can those removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Runtime or version with the same name or version id.

    swcli runtime remove

    swcli [GLOBAL OPTIONS] runtime remove [OPTIONS] <RUNTIME>

    runtime remove removes the specified Starwhale Runtime or version.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all versions are removed.

Removed Starwhale Runtimes or versions can be recovered by swcli runtime recover before garbage collection. Use the --force option to persistently remove a Starwhale Runtime or version.

    Removed Starwhale Runtimes or versions can be listed by swcli runtime list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Runtime or version. It can not be recovered.

    swcli runtime tag

    swcli [GLOBAL OPTIONS] runtime tag [OPTIONS] <RUNTIME> [TAGS]...

    runtime tag attaches a tag to a specified Starwhale Runtime version. At the same time, tag command also supports list and remove tags. The tag can be used in a runtime URI instead of the version id.

    RUNTIME is a Runtime URI.

    Each runtime version can have any number of tags, but duplicated tag names are not allowed in the same runtime.

    runtime tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is reported if the tag is already used by another runtime version. In this case, you can force the update with the --force-add option.

    Examples for runtime tag

    #- list tags of the pytorch runtime
    swcli runtime tag pytorch

    #- add tags for the pytorch runtime
swcli runtime tag pytorch t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch/version/latest t1 --force-add
swcli runtime tag pytorch t1 --quiet

    #- remove tags for the pytorch runtime
swcli runtime tag pytorch -r t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.0/reference/swcli/utilities/index.html b/0.6.0/reference/swcli/utilities/index.html index 192299d41..835be8311 100644 --- a/0.6.0/reference/swcli/utilities/index.html +++ b/0.6.0/reference/swcli/utilities/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Utility Commands

    swcli gc

    swcli [GLOBAL OPTIONS] gc [OPTIONS]

    gc clears removed projects, models, datasets, and runtimes according to the internal garbage collection policy.

Option | Required | Type | Defaults | Description
--dry-run | N | Boolean | False | If true, outputs the objects to be removed instead of clearing them.
--yes | N | Boolean | False | Bypass confirmation prompts.
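
A hedged example; previewing with --dry-run first is usually safer:

#- show what would be removed
swcli gc --dry-run
#- run garbage collection without the confirmation prompt
swcli gc --yes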

    swcli check

    swcli [GLOBAL OPTIONS] check

Check whether the external dependencies of the swcli command meet the requirements. Currently it mainly checks Docker and Conda.

    swcli completion install

    swcli [GLOBAL OPTIONS] completion install <SHELL_NAME>

    Install autocompletion for swcli commands. Currently supports bash, zsh and fish. If SHELL_NAME is not specified, it will try to automatically detect the current shell type.
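
A hedged example:

#- let swcli detect the current shell automatically
swcli completion install
#- install completion for zsh explicitly
swcli completion install zsh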

    swcli config edit

    swcli [GLOBAL OPTIONS] config edit

    Edit the Starwhale configuration file at ~/.config/starwhale/config.yaml.

    swcli ui

    swcli [GLOBAL OPTIONS] ui <INSTANCE>

    Open the web page for the corresponding instance.
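
A hedged example; server-foo is a placeholder for an instance alias created by swcli instance login:

swcli ui server-foo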

    - - + + \ No newline at end of file diff --git a/0.6.0/runtime/index.html b/0.6.0/runtime/index.html index e872b7440..4d03aecc6 100644 --- a/0.6.0/runtime/index.html +++ b/0.6.0/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Runtime

Overview

    Starwhale Runtime aims to provide a reproducible and sharable running environment for python programs. You can easily share your working environment with your teammates or outsiders, and vice versa. Furthermore, you can run your programs on Starwhale Server or Starwhale Cloud without bothering with the dependencies.

    Starwhale works well with virtualenv, conda, and docker. If you are using one of them, it is straightforward to create a Starwhale Runtime based on your current environment.

Multiple Starwhale Runtimes on your local machine can be switched freely by one command. You can work on different projects without messing up the environment.

Starwhale Runtime consists of two parts: the base image and the dependencies.

    The base image

    The base is a docker image with Python, CUDA, and cuDNN installed. Starwhale provides various base images for you to choose from; see the following list:

    • Computer system architecture:
      • X86 (amd64)
      • Arm (aarch64)
    • Operating system:
      • Ubuntu 20.04 LTS (ubuntu:20.04)
    • Python:
      • 3.7
      • 3.8
      • 3.9
      • 3.10
      • 3.11
    • CUDA:
      • CUDA 11.3 + cuDNN 8.4
      • CUDA 11.4 + cuDNN 8.4
      • CUDA 11.5 + cuDNN 8.4
      • CUDA 11.6 + cuDNN 8.4
      • CUDA 11.7
    - - + + \ No newline at end of file diff --git a/0.6.0/runtime/yaml/index.html b/0.6.0/runtime/yaml/index.html index da3c2d54a..f465ff76c 100644 --- a/0.6.0/runtime/yaml/index.html +++ b/0.6.0/runtime/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    The runtime.yaml Specification

    runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. runtime.yaml is required for the yaml mode of the swcli runtime build command.

    Examples

    The simplest example

dependencies:
  - pip:
      - numpy
name: simple-test

This example defines a Starwhale Runtime that uses venv as the Python virtual environment for package isolation and installs the numpy dependency.

    The llama2 example

name: llama2
mode: venv
environment:
  arch: noarch
  os: ubuntu:20.04
  cuda: 11.7
  python: "3.10"
dependencies:
  - pip:
      - torch
      - fairscale
      - fire
      - sentencepiece
      - gradio >= 3.37.0
      # external starwhale dependencies
      - starwhale[serve] >= 0.5.5

    The full definition example

# [required] The name of Starwhale Runtime
name: demo
# [optional] The mode of Starwhale Runtime: venv or conda. Default is venv.
mode: venv
# [optional] The configurations of pip and conda.
configs:
  # If you do not use conda, ignore this field.
  conda:
    condarc: # custom condarc config file
      channels:
        - defaults
      show_channel_urls: true
      default_channels:
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
        - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
      custom_channels:
        conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
        nvidia: https://mirrors.aliyun.com/anaconda/cloud
      ssl_verify: false
      default_threads: 10
  pip:
    # pip config set global.index-url
    index_url: https://example.org/
    # pip config set global.extra-index-url
    extra_index_url: https://another.net/
    # pip config set install.trusted-host
    trusted_host:
      - example.org
      - another.net
# [optional] The definition of the environment.
environment:
  # Now it must be ubuntu:20.04
  os: ubuntu:20.04
  # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
  cuda: 11.4
  # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
  python: 3.8
  # Define your custom base image
  docker:
    image: mycustom.com/docker/image:tag
# [required] The dependencies of the Starwhale Runtime.
dependencies:
  # If this item is present, conda env create -f conda.yaml will be executed
  - conda.yaml
  # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
  - requirements.txt
  # Packages to be installed with conda. venv mode will ignore the conda field.
  - conda:
      - numpy
      - requests
  # Packages to be installed with pip. The format is the same as requirements.txt
  - pip:
      - pillow
      - numpy
      - deepspeed==0.9.0
      - safetensors==0.3.0
      - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
      - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
      - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
  # Additional wheels packages to be installed when restoring the runtime
  - wheels:
      - dummy-0.0.0-py3-none-any.whl
  # Additional files to be included in the runtime
  - files:
      - dest: bin/prepare.sh
        name: prepare
        src: scripts/prepare.sh
  # Run some custom commands
  - commands:
      - apt-get install -y libgl1
      - touch /tmp/runtime-command-run.flag
    - - + + \ No newline at end of file diff --git a/0.6.0/server/guides/server_admin/index.html b/0.6.0/server/guides/server_admin/index.html index 105911580..f107dd6cb 100644 --- a/0.6.0/server/guides/server_admin/index.html +++ b/0.6.0/server/guides/server_admin/index.html @@ -10,14 +10,14 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Controller Admin Settings

    Superuser Password Reset

In case you forget the superuser's password, you can use the SQL below to reset the password to abcd1234:

    update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1
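
One hedged way to apply the statement when the server is installed by the Starwhale Helm chart; the service name, credentials, and database name (starwhale) below are assumptions, adjust them to your deployment:

# in a separate terminal, expose the bundled MySQL service
kubectl port-forward --namespace starwhale svc/mysql 3306:3306
# run the reset statement
mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale starwhale -e "update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1"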

After that, you can log in to the console and change the password to whatever you want.

    System Settings

You can customize the system to make it easier to use by leveraging the System Settings. Here is an example:

dockerSetting:
  registryForPull: "docker-registry.starwhale.cn/star-whale"
  registryForPush: ""
  userName: ""
  password: ""
  insecure: true
pypiSetting:
  indexUrl: ""
  extraIndexUrl: ""
  trustedHost: ""
  retries: 10
  timeout: 90
imageBuild:
  resourcePool: ""
  image: ""
  clientVersion: ""
  pythonVersion: ""
datasetBuild:
  resourcePool: ""
  image: ""
  clientVersion: ""
  pythonVersion: ""
resourcePoolSetting:
  - name: "default"
    nodeSelector: null
    resources:
      - name: "cpu"
        max: null
        min: null
        defaults: 5.0
      - name: "memory"
        max: null
        min: null
        defaults: 3145728.0
      - name: "nvidia.com/gpu"
        max: null
        min: null
        defaults: null
    tolerations: null
    metadata: null
    isPrivate: null
    visibleUserIds: null
storageSetting:
  - type: "minio"
    tokens:
      bucket: "users"
      ak: "starwhale"
      sk: "starwhale"
      endpoint: "http://10.131.0.1:9000"
      region: "local"
      hugeFileThreshold: "10485760"
      hugeFilePartSize: "5242880"
  - type: "s3"
    tokens:
      bucket: "users"
      ak: "starwhale"
      sk: "starwhale"
      endpoint: "http://10.131.0.1:9000"
      region: "local"
      hugeFileThreshold: "10485760"
      hugeFilePartSize: "5242880"

    Image Registry

Tasks dispatched by the server are based on docker images. Pulling these images can be slow if your internet connection is poor. Starwhale Server supports custom image registries through dockerSetting.registryForPull and dockerSetting.registryForPush.

    Resource Pool

The resourcePoolSetting allows you to manage your cluster in a group manner. It is currently implemented with the K8S nodeSelector: you can label the machines in your K8S cluster and group them into a resourcePool in Starwhale.

    Remote Storage

The storageSetting allows you to manage the storage services the server can access.

storageSetting:
  - type: s3
    tokens:
      - bucket: starwhale # required
        ak: access_key # required
        sk: secret_key # required
        endpoint: http://s3.region.amazonaws.com # optional
        region: region of the service # required when endpoint is empty
        hugeFileThreshold: 10485760 # files bigger than 10MB use multipart upload
        hugeFilePartSize: 5242880 # part size for multipart upload
  - type: minio
    tokens:
      - bucket: starwhale # required
        ak: access_key # required
        sk: secret_key # required
        endpoint: http://10.131.0.1:9000 # required
        region: local # optional
        hugeFileThreshold: 10485760 # files bigger than 10MB use multipart upload
        hugeFilePartSize: 5242880 # part size for multipart upload
  - type: aliyun
    tokens:
      - bucket: starwhale # required
        ak: access_key # required
        sk: secret_key # required
        endpoint: http://10.131.0.2:9000 # required
        region: local # optional
        hugeFileThreshold: 10485760 # files bigger than 10MB use multipart upload
        hugeFilePartSize: 5242880 # part size for multipart upload

Every storageSetting item has a corresponding implementation of the StorageAccessService interface. Starwhale has four built-in implementations:

    • StorageAccessServiceAliyun matches type in (aliyun,oss)
    • StorageAccessServiceMinio matches type in (minio)
    • StorageAccessServiceS3 matches type in (s3)
    • StorageAccessServiceFile matches type in (fs, file)

Each implementation has different requirements for tokens. endpoint is required when type is in (aliyun, minio); region is required when type is s3 and endpoint is empty. The fs/file type requires tokens named rootDir and serviceProvider. Please refer to the code for more details.

    - - + + \ No newline at end of file diff --git a/0.6.0/server/index.html b/0.6.0/server/index.html index 673b1852c..f2be635dc 100644 --- a/0.6.0/server/index.html +++ b/0.6.0/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/server/installation/docker-compose/index.html b/0.6.0/server/installation/docker-compose/index.html index 1a9a9c2e6..1901327a3 100644 --- a/0.6.0/server/installation/docker-compose/index.html +++ b/0.6.0/server/installation/docker-compose/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Install Starwhale Server with Docker Compose

    Prerequisites

    Usage

    Start up the server

    wget https://raw.githubusercontent.com/star-whale/starwhale/main/docker/compose/compose.yaml
    GLOBAL_IP=${your_accessible_ip_for_server} ; docker compose up

GLOBAL_IP is the IP address of the Controller, which must be accessible by all swcli clients, both inside docker containers and on other user machines.

compose.yaml contains the Starwhale Controller/MySQL/MinIO services. A compose.override.yaml file, as its name implies, can contain configuration overrides for compose.yaml. The available configurations are specified here.
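
A minimal hedged sketch of a compose.override.yaml; the service name (server) and the host port are assumptions, check compose.yaml for the actual service names and options:

services:
  server:
    ports:
      - "8083:8082"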

    - - + + \ No newline at end of file diff --git a/0.6.0/server/installation/docker/index.html b/0.6.0/server/installation/docker/index.html index 2c39fa76a..126a58451 100644 --- a/0.6.0/server/installation/docker/index.html +++ b/0.6.0/server/installation/docker/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Install Starwhale Server with Docker

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
• An S3-compatible object storage to save datasets, models, and other artifacts.

    Please make sure pods on the Kubernetes cluster can access the port exposed by the Starwhale Server installation.

    Prepare an env file for Docker

    Starwhale Server can be configured by environment variables.

    An env file template for Docker is here. You may create your own env file by modifying the template.

    Prepare a kubeconfig file [Optional][SW_SCHEDULER=k8s]

    The kubeconfig file is used for accessing the Kubernetes cluster. For more information about kubeconfig files, see the Official Kubernetes Documentation.

    If you have a local kubectl command-line tool installed, you can run kubectl config view to see your current configuration.

    Run the Docker image

    docker run -it -d --name starwhale-server -p 8082:8082 \
    --restart unless-stopped \
    --mount type=bind,source=<path to your kubeconfig file>,destination=/root/.kube/config,readonly \
    --env-file <path to your env file> \
    ghcr.io/star-whale/server:0.5.6

    For users in the mainland of China, use docker image: docker-registry.starwhale.cn/star-whale/server.

    - - + + \ No newline at end of file diff --git a/0.6.0/server/installation/helm-charts/index.html b/0.6.0/server/installation/helm-charts/index.html index 6025fcfbe..c734c220b 100644 --- a/0.6.0/server/installation/helm-charts/index.html +++ b/0.6.0/server/installation/helm-charts/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Install Starwhale Server with Helm

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
• An S3-compatible object storage system to save datasets, models, and other artifacts.
    • Helm 3.2.0+.

The Starwhale Helm Charts include MySQL and MinIO as dependencies. If you do not have your own MySQL instance or any S3-compatible object storage available, the Helm Charts can install them for you. Please check Installation Options to learn how to install Starwhale Server with MySQL and MinIO.

    Create a service account on Kubernetes for Starwhale Server

If Kubernetes RBAC is enabled (in Kubernetes 1.6+, RBAC is enabled by default), Starwhale Server can not work properly unless it is started by a service account with at least the following permissions:

Resource | API Group | Get | List | Watch | Create | Delete
jobs | batch | Y | Y | Y | Y | Y
pods | core | Y | Y | Y | |
nodes | core | Y | Y | Y | |
events | "" | Y | | | |

    Example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
    name: starwhale-role
    rules:
    - apiGroups:
    - ""
    resources:
    - pods
    - nodes
    verbs:
    - get
    - list
    - watch
    - apiGroups:
    - "batch"
    resources:
    - jobs
    verbs:
    - create
    - get
    - list
    - watch
    - delete
    - apiGroups:
    - ""
    resources:
    - events
    verbs:
    - get
    - watch
    - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
    name: starwhale-binding
    roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: starwhale-role
    subjects:
    - kind: ServiceAccount
    name: starwhale

    Downloading Starwhale Helm Charts

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update

    Installing Starwhale Server

    helm install starwhale-server starwhale/starwhale-server -n starwhale --create-namespace

    If you have a local kubectl command-line tool installed, you can run kubectl get pods -n starwhale to check if all pods are running.

    Updating Starwhale Server

    helm repo update
    helm upgrade starwhale-server starwhale/starwhale-server

    Uninstalling Starwhale Server

    helm delete starwhale-server
    - - + + \ No newline at end of file diff --git a/0.6.0/server/installation/index.html b/0.6.0/server/installation/index.html index b9d24f243..36a8d32a8 100644 --- a/0.6.0/server/installation/index.html +++ b/0.6.0/server/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.0/server/installation/minikube/index.html b/0.6.0/server/installation/minikube/index.html index 69429ced5..66cfc4617 100644 --- a/0.6.0/server/installation/minikube/index.html +++ b/0.6.0/server/installation/minikube/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Install Starwhale Server with Minikube

    Prerequisites

    Starting Minikube

    minikube start --addons ingress --kubernetes-version=1.25.3

For users in the mainland of China, please add the --image-mirror-country=cn parameter. If there is no kubectl binary on your machine, you may use minikube kubectl or the alias kubectl="minikube kubectl --" command.

    Installing Starwhale Server

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update
    helm pull starwhale/starwhale --untar --untardir ./charts

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.global.yaml

For users in mainland China, use values.minikube.cn.yaml:

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.cn.yaml

    After the installation is successful, the following prompt message appears:

        Release "starwhale" has been upgraded. Happy Helming!
    NAME: starwhale
    LAST DEPLOYED: Tue Feb 14 16:25:03 2023
    NAMESPACE: starwhale
    STATUS: deployed
    REVISION: 14
    NOTES:
    ******************************************
    Chart Name: starwhale
    Chart Version: 0.5.6
    App Version: latest
    Starwhale Image:
    - server: ghcr.io/star-whale/server:latest

    ******************************************
    Controller:
    - visit: http://controller.starwhale.svc
    Minio:
    - web visit: http://minio.starwhale.svc
    - admin visit: http://minio-admin.starwhale.svc
    MySQL:
    - port-forward:
    - run: kubectl port-forward --namespace starwhale svc/mysql 3306:3306
    - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale
    Please run the following command for the domains searching:
    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc " | sudo tee -a /etc/hosts
    ******************************************
    Login Info:
    - starwhale: u:starwhale, p:abcd1234
    - minio admin: u:minioadmin, p:minioadmin

    *_* Enjoy to use Starwhale Platform. *_*

    Checking Starwhale Server status

Keep checking the minikube service status until all deployments are running (this usually takes 3~5 minutes):

    kubectl get deployments -n starwhale
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
controller   1/1     1            1           5m
minio        1/1     1            1           5m
mysql        1/1     1            1           5m

    Visiting for local

    Make the Starwhale controller accessible locally with the following command:

    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc  minio-admin.starwhale.svc " | sudo tee -a /etc/hosts

    Then you can visit http://controller.starwhale.svc in your local web browser.

    Visiting for others

    • Step 1: in the Starwhale Server machine

      for temporary use with socat command:

      # install socat at first, ref: https://howtoinstall.co/en/socat
      sudo socat TCP4-LISTEN:80,fork,reuseaddr,bind=0.0.0.0 TCP4:`minikube ip`:80

When you kill the socat process, the shared access will no longer be available. iptables may be a better choice for long-term use; see the sketch after this list.

    • Step 2: in the other machines

      # for macOSX or Linux environment, run the command in the shell.
echo "${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc" | sudo tee -a /etc/hosts

      # for Windows environment, run the command in the PowerShell with administrator permission.
      Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "`n${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc"
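For the long-term iptables option mentioned in Step 1, a minimal sketch on a Linux host, assuming minikube ip returns 192.168.49.2 (substitute your actual value):

# forward incoming traffic on port 80 to the Minikube node
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.49.2:80
sudo iptables -t nat -A POSTROUTING -p tcp -d 192.168.49.2 --dport 80 -j MASQUERADE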
    - - + + \ No newline at end of file diff --git a/0.6.0/server/installation/starwhale_env/index.html b/0.6.0/server/installation/starwhale_env/index.html index d574da8b4..a5a7c755e 100644 --- a/0.6.0/server/installation/starwhale_env/index.html +++ b/0.6.0/server/installation/starwhale_env/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Server Environment Example

    ################################################################################
    # *** Required ***
    # The external Starwhale server URL. For example: https://cloud.starwhale.ai
    SW_INSTANCE_URI=

    # The listening port of Starwhale Server
    SW_CONTROLLER_PORT=8082

# The maximum upload file size. This setting affects dataset and model uploads when copying from outside.
    SW_UPLOAD_MAX_FILE_SIZE=20480MB
    ################################################################################
    # The base URL of the Python Package Index to use when creating a runtime environment.
    SW_PYPI_INDEX_URL=http://10.131.0.1/repository/pypi-hosted/simple/

    # Extra URLs of package indexes to use in addition to the base url.
    SW_PYPI_EXTRA_INDEX_URL=

    # Space separated hostnames. When any host specified in the base URL or extra URLs does not have a valid SSL
    # certification, use this option to trust it anyway.
    SW_PYPI_TRUSTED_HOST=
    ################################################################################
    # The JWT token expiration time. When the token expires, the server will request the user to login again.
    SW_JWT_TOKEN_EXPIRE_MINUTES=43200

    # *** Required ***
    # The JWT secret key. All strings are valid, but we strongly recommend you to use a random string with at least 16 characters.
    SW_JWT_SECRET=
    ################################################################################
    # The scheduler controller to use. Valid values are:
    # docker: Controller schedule jobs by leveraging docker
    # k8s: Controller schedule jobs by leveraging Kubernetes
    SW_SCHEDULER=k8s

    # The Kubernetes namespace to use when running a task when SW_SCHEDULER is k8s
    SW_K8S_NAME_SPACE=default

    # The path on the Kubernetes host node's filesystem to cache Python packages. Use the setting only if you have
    # the permission to use host node's filesystem. The runtime environment setup process may be accelerated when the host
    # path cache is used. Leave it blank if you do not want to use it.
    SW_K8S_HOST_PATH_FOR_CACHE=

    # The ip for the containers created by Controller when SW_SCHEDULER is docker
    SW_DOCKER_CONTAINER_NODE_IP=127.0.0.1
    ###############################################################################
    # *** Required ***
    # The object storage system type. Valid values are:
    # s3: [AWS S3](https://aws.amazon.com/s3) or other s3-compatible object storage systems
    # aliyun: [Aliyun OSS](https://www.alibabacloud.com/product/object-storage-service)
    # minio: [MinIO](https://min.io)
    # file: Local filesystem
    SW_STORAGE_TYPE=

    # The path prefix for all data saved on the storage system.
    SW_STORAGE_PREFIX=
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is file.

    # The root directory to save data.
    # This setting is only used when SW_STORAGE_TYPE is file.
    SW_STORAGE_FS_ROOT_DIR=/usr/local/starwhale
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is not file.

    # *** Required ***
    # The name of the bucket to save data.
    SW_STORAGE_BUCKET=

    # *** Required ***
    # The endpoint URL of the object storage service.
    # This setting is only used when SW_STORAGE_TYPE is s3 or aliyun.
    SW_STORAGE_ENDPOINT=

    # *** Required ***
    # The access key used to access the object storage system.
    SW_STORAGE_ACCESSKEY=

    # *** Required ***
    # The secret access key used to access the object storage system.
    SW_STORAGE_SECRETKEY=

    # *** Optional ***
    # The region of the object storage system.
    SW_STORAGE_REGION=

    # Starwhale Server will use multipart upload when uploading a large file. This setting specifies the part size.
    SW_STORAGE_PART_SIZE=5MB
    ################################################################################
    # MySQL settings

    # *** Required ***
    # The hostname/IP of the MySQL server.
    SW_METADATA_STORAGE_IP=

    # The port of the MySQL server.
    SW_METADATA_STORAGE_PORT=3306

    # *** Required ***
    # The database used by Starwhale Server
    SW_METADATA_STORAGE_DB=starwhale

    # *** Required ***
    # The username of the MySQL server.
    SW_METADATA_STORAGE_USER=

    # *** Required ***
    # The password of the MySQL server.
    SW_METADATA_STORAGE_PASSWORD=
    ################################################################################

    # The cache directory for the WAL files. Point it to a mounted volume or host path with enough space.
    # If not set, the WAL files will be saved in the docker runtime layer, and will be lost when the container is restarted.
    SW_DATASTORE_WAL_LOCAL_CACHE_DIR=
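If you run Starwhale Server as a container, one way to supply the settings above is an env file. A minimal sketch, assuming the variables are saved to ./starwhale.env and using the server image shown in the Minikube guide (the tag, port mapping and any extra mounts are assumptions to adjust):

docker run -d --name starwhale-server \
  -p 8082:8082 \
  --env-file ./starwhale.env \
  ghcr.io/star-whale/server:latest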
    - - + + \ No newline at end of file diff --git a/0.6.0/server/project/index.html b/0.6.0/server/project/index.html index 7b3ad579e..490e71c17 100644 --- a/0.6.0/server/project/index.html +++ b/0.6.0/server/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    How to Organize and Manage Resources with Starwhale Projects

    Project is the basic unit for organizing and managing resources (such as models, datasets, runtime environments, etc.). You can create and manage projects based on your needs. For example, you can create projects by business team, product line, or models. One user can create and participate in one or more projects.

    Project type

    There are two types of projects:

    • Private project: The project (and related resources in the project) is only visible to project members with permission. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    • Public project: The project (and related resources in the project) is visible to all Starwhale users. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    Create a project

    1. Click the Create button in the upper right corner of the project list page;
    2. Enter a name for the project. Pay attention to avoiding duplicate names. For more information, please see Names in Starwhale
3. Select the Project Type, which defaults to private and can be set to public as needed;
    4. Fill in the description content;
5. To finish, click the Submit button.

    Edit a project

    The name, privacy and description of a project can be edited.

    1. Go to the project list page and find the project that needs to be edited by searching for the project name, then click the Edit Project button;
    2. Edit the items that need to be edited;
    3. Click Submit to save the edited content;
    4. If you're editing multiple projects, repeat steps 1 through 3.

    View a project

    My projects

On the project list page, only my projects are displayed by default. My projects are the projects in which the current user participates as a project member or owner.

    Project sorting

On the project list page, projects can be sorted by "Recently visited", "Project creation time from new to old", or "Project creation time from old to new", according to your needs.

    Delete a project

    Once a project is deleted, all related resources (such as datasets, models, runtimes, evaluations, etc.) will be deleted and cannot be restored.

    1. Enter the project list page and search for the project name to find the project that needs to be deleted. Hover your mouse over the project you want to delete, then click the Delete button;
    2. Follow the prompts, enter the relevant information, click Confirm to delete the project, or click Cancel to cancel the deletion;
    3. If you are deleting multiple projects, repeat the above steps.

    Manage project member

Only users with the admin role can assign members to the project. The project owner has the project owner role by default.

    Add a member

    1. Click Manage Members to go to the project member list page;
    2. Click the Add Member button in the upper right corner.
    3. Enter the Username you want to add, select a project role for the user in the project.
    4. Click submit to complete.
    5. If you're adding multiple members, repeat steps 1 through 4.

    Remove a member

    1. On the project list page or project overview tab, click Manage Members to go to the project member list page.
    2. Search for the username you want to delete, then click the Delete button.
    3. Click Yes to delete the user from this project, click No to cancel the deletion.
    4. If you're removing multiple members, repeat steps 1 through 3.

    Edit a member's role

    1. Hover your mouse over the project you want to edit, then click Manage Members to go to the project member list page.
    2. Find the username you want to adjust through searching, click the Project Role drop-down menu, and select a new project role. For more information on roles, please take a look at Roles and permissions in Starwhale.
    - - + + \ No newline at end of file diff --git a/0.6.0/swcli/config/index.html b/0.6.0/swcli/config/index.html index 271c63257..b3b2ab2dc 100644 --- a/0.6.0/swcli/config/index.html +++ b/0.6.0/swcli/config/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Configuration

Standalone Instance is installed on the user's laptop or development server, providing isolation at the level of Linux/macOS users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.

The ~/.config/starwhale/config.yaml file has permissions set to 0o600 to ensure security, as it contains sensitive information such as encryption keys. Users are advised not to change the file permissions. You can customize your swcli with swcli config edit:

    swcli config edit

    config.yaml example

    The typical config.yaml file is as follows:

    • The default instance is local.
    • cloud-cn/cloud-k8s/pre-k8s are the server/cloud instances, local is the standalone instance.
    • The local storage root directory for the Standalone Instance is set to /home/liutianwei/.starwhale.
current_instance: local
instances:
  cloud-cn:
    sw_token: ${TOKEN}
    type: cloud
    updated_at: 2022-09-28 18:41:05 CST
    uri: https://cloud.starwhale.cn
    user_name: starwhale
    user_role: normal
  cloud-k8s:
    sw_token: ${TOKEN}
    type: cloud
    updated_at: 2022-09-19 16:10:01 CST
    uri: http://cloud.pre.intra.starwhale.ai
    user_name: starwhale
    user_role: normal
  local:
    current_project: self
    type: standalone
    updated_at: 2022-06-09 16:14:02 CST
    uri: local
    user_name: liutianwei
  pre-k8s:
    sw_token: ${TOKEN}
    type: cloud
    updated_at: 2022-09-19 18:06:50 CST
    uri: http://console.pre.intra.starwhale.ai
    user_name: starwhale
    user_role: normal
link_auths:
  - ak: starwhale
    bucket: users
    connect_timeout: 10.0
    endpoint: http://10.131.0.1:9000
    read_timeout: 100.0
    sk: starwhale
    type: s3
storage:
  root: /home/liutianwei/.starwhale
version: '2.0'

    config.yaml definition

Parameter | Description | Type | Default Value | Required
current_instance | The name of the default instance to use. It is usually set using the swcli instance select command. | String | self | Yes
instances | Managed instances, including Standalone, Server and Cloud Instances. There must be at least one Standalone Instance named "local" and one or more Server/Cloud Instances. You can log in to a new instance with swcli instance login and log out from an instance with swcli instance logout. | Dict | Standalone Instance named "local" | Yes
instances.{instance-alias-name}.sw_token | Login token for Server/Cloud Instances. It is only effective for Server/Cloud Instances. Subsequent swcli operations on Server/Cloud Instances will use this token. Note that tokens have an expiration time, typically set to one month, which can be configured within the Server/Cloud Instance. | String | - | Cloud - Yes, Standalone - No
instances.{instance-alias-name}.type | Type of the instance, currently can only be "cloud" or "standalone". | Choice[string] | - | Yes
instances.{instance-alias-name}.uri | For Server/Cloud Instances, the URI is an http/https address. For Standalone Instances, the URI is set to "local". | String | - | Yes
instances.{instance-alias-name}.user_name | User's name | String | - | Yes
instances.{instance-alias-name}.current_project | Default Project under the current instance. It will be used to fill the "project" field in the URI representation by default. You can set it using the swcli project select command. | String | - | Yes
instances.{instance-alias-name}.user_role | User's role. | String | normal | Yes
instances.{instance-alias-name}.updated_at | The last updated time for this instance configuration. | Time format string | - | Yes
storage | Settings related to local storage. | Dict | - | Yes
storage.root | The root directory for Standalone Instance's local storage. Typically, if there is insufficient space in the home directory and you manually move data files to another location, you can modify this field. | String | ~/.starwhale | Yes
version | The version of config.yaml, currently only supports 2.0. | String | 2.0 | Yes

You can use starwhale.Link to point to your assets. The URI in the Link can be whatever you need (only S3-like and HTTP URIs are currently implemented), such as s3://10.131.0.1:9000/users/path. However, Links may need to be authenticated; you can configure the auth info in link_auths.

link_auths:
  - type: s3
    ak: starwhale
    bucket: users
    region: local
    connect_timeout: 10.0
    endpoint: http://10.131.0.1:9000
    read_timeout: 100.0
    sk: starwhale

Items in link_auths will match the URIs in Links automatically. An s3-typed link_auth matches Links by looking up the bucket and endpoint.
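For illustration, a minimal Python sketch of such a Link (the object path is hypothetical; the bucket "users" and the endpoint match the s3-typed link_auth above, so no credentials need to be embedded in code):

from starwhale import Link

img = Link("s3://10.131.0.1:9000/users/path/img-0.png")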

    - - + + \ No newline at end of file diff --git a/0.6.0/swcli/index.html b/0.6.0/swcli/index.html index ffe33d57f..f89c30f21 100644 --- a/0.6.0/swcli/index.html +++ b/0.6.0/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Client (swcli) User Guide

The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure Python 3 (requires Python 3.7 ~ 3.11), so it can be easily installed with the pip command. Currently, swcli only supports Linux and macOS; Windows support is coming soon.

    - - + + \ No newline at end of file diff --git a/0.6.0/swcli/installation/index.html b/0.6.0/swcli/installation/index.html index 69e73d761..fb2de6b89 100644 --- a/0.6.0/swcli/installation/index.html +++ b/0.6.0/swcli/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Installation Guide

We can use swcli to complete all tasks for Starwhale Instances. swcli is written in pure Python 3 and can be installed easily with the pip command. Here are some installation tips to help you get a clean swcli Python environment with no dependency conflicts.

    Installing Advice

DO NOT install Starwhale in your system's global Python environment. It may cause Python dependency conflicts.

    Prerequisites

    • Python 3.7 ~ 3.11
    • Linux or macOS
    • Conda (optional)

    In the Ubuntu system, you can run the following commands:

    sudo apt-get install python3 python3-venv python3-pip

    #If you want to install multi python versions
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.7 python3.8 python3.9 python3-pip python3-venv python3.8-venv python3.7-venv python3.9-venv

swcli works on macOS. If you run into issues with the default system Python3 on macOS, try installing Python3 through Homebrew:

    brew install python3

    Install swcli

    Install with venv

    python3 -m venv ~/.cache/venv/starwhale
    source ~/.cache/venv/starwhale/bin/activate
    python3 -m pip install starwhale

    swcli --version

    sudo rm -rf /usr/local/bin/swcli
    sudo ln -s `which swcli` /usr/local/bin/

    Install with conda

    conda create --name starwhale --yes  python=3.9
    conda activate starwhale
    python3 -m pip install starwhale

    swcli --version

    sudo rm -rf /usr/local/bin/swcli
    sudo ln -s `which swcli` /usr/local/bin/

    👏 Now, you can use swcli in the global environment.

    Install for the special scenarios

    # for Audio processing
    python -m pip install starwhale[audio]

    # for Image processing
    python -m pip install starwhale[pillow]

    # for swcli model server command
    python -m pip install starwhale[server]

    # for built-in online serving
    python -m pip install starwhale[online-serve]

    # install all dependencies
    python -m pip install starwhale[all]

    Update swcli

    #for venv
    python3 -m pip install --upgrade starwhale

    #for conda
    conda run -n starwhale python3 -m pip install --upgrade starwhale

    Uninstall swcli

python3 -m pip uninstall starwhale

    rm -rf ~/.config/starwhale
    rm -rf ~/.starwhale
    - - + + \ No newline at end of file diff --git a/0.6.0/swcli/swignore/index.html b/0.6.0/swcli/swignore/index.html index a94b96aa4..b42ecb9f0 100644 --- a/0.6.0/swcli/swignore/index.html +++ b/0.6.0/swcli/swignore/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    About the .swignore file

The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore file is mainly used in the Starwhale Model building process. By default, the swcli model build command or the starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.

    PATTERN FORMAT

    • Each line in a swignore file specifies a pattern, which matches files and directories.
    • A blank line matches no files, so it can serve as a separator for readability.
    • An asterisk * matches anything except a slash.
    • A line starting with # serves as a comment.
• Wildcard expressions are supported, for example: *.jpg, *.png.

Auto Ignored files or dirs

If you want to include the automatically ignored files or dirs, you can add the --add-all option to the swcli model build command (see the sketch after the list below).

    • __pycache__/
    • *.py[cod]
    • *$py.class
    • venv installation dir
    • conda installation dir
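A minimal sketch of such a build, assuming the model project lives in the current directory (other build options omitted):

swcli model build . --add-all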

    Example

    Here is the .swignore file used in the MNIST example:

    venv/*
    .git/*
    .history*
    .vscode/*
    .venv/*
    data/*
    .idea/*
    *.py[cod]
    - - + + \ No newline at end of file diff --git a/0.6.0/swcli/uri/index.html b/0.6.0/swcli/uri/index.html index 04116fd50..577e80875 100644 --- a/0.6.0/swcli/uri/index.html +++ b/0.6.0/swcli/uri/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.0

    Starwhale Resources URI

    tip

    Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.

    concepts-org.jpg

    Instance URI

    Instance URI can be either:

    • local: standalone instance.
    • [http(s)://]<hostname or ip>[:<port>]: cloud instance with HTTP address.
    • [cloud://]<cloud alias>: cloud or server instance with an alias name, which can be configured in the instance login phase.
    caution

    "local" is different from "localhost". The former means the local standalone instance without a controller, while the latter implies a controller listening at the default port 8082 on the localhost.

    Example:

    # log in Starwhale Cloud; the alias is swcloud
    swcli instance login --username <your account name> --password <your password> https://cloud.starwhale.ai --alias swcloud

    # copy a model from the local instance to the cloud instance
    swcli model copy mnist/version/latest swcloud/project/<your account name>:demo

    # copy a runtime to a Starwhale Server instance: http://localhost:8081
    swcli runtime copy pytorch/version/v1 http://localhost:8081/project/<your account name>:demo

    Project URI

    Project URI is in the format [<Instance URI>/project/]<project name>. If the instance URI is not specified, use the current instance instead.

    Example:

    swcli project select self   # select the self project in the current instance
    swcli project info local/project/self # inspect self project info in the local instance

    Model/Dataset/Runtime URI

    • Model URI: [<Project URI>/model/]<model name>[/version/<version id|tag>].
    • Dataset URI: [<Project URI>/dataset/]<dataset name>[/version/<version id|tag>].
    • Runtime URI: [<Project URI>/runtime/]<runtime name>[/version/<version id|tag>].
    tip
    • swcli supports human-friendly short version id. You can type the first few characters of the version id, provided it is at least four characters long and unambiguous. However, the recover command must use the complete version id.
    • If the project URI is not specified, the default project will be used.
    • You can always use the version tag instead of the version id.

    Example:

    swcli model info mnist/version/hbtdenjxgm4ggnrtmftdgyjzm43tioi  # inspect model info, model name: mnist, version:hbtdenjxgm4ggnrtmftdgyjzm43tioi
    swcli model remove mnist/version/hbtdenj # short version
    swcli model info mnist # inspect mnist model info
    swcli model run mnist --runtime pytorch-mnist --dataset mnist # use the default latest tag

    Job URI

    • format: [<Project URI>/job/]<job id>.
    • If the project URI is not specified, the default project will be used.

    Example:

    swcli job info mezdayjzge3w   # Inspect mezdayjzge3w version in default instance and default project
    swcli job info local/project/self/job/mezday # Inspect the local instance, self project, with short job id:mezday

    The default instance

    When the instance part of a project URI is omitted, the default instance is used instead. The default instance is the one selected by the swcli instance login or swcli instance use command.

    The default project

    When the project parts of Model/Dataset/Runtime/Evaluation URIs are omitted, the default project is used instead. The default project is the one selected by the swcli project use command.

    - - + + \ No newline at end of file diff --git a/0.6.4/cloud/billing/bills/index.html b/0.6.4/cloud/billing/bills/index.html index 93e8fbde5..609d7c0de 100644 --- a/0.6.4/cloud/billing/bills/index.html +++ b/0.6.4/cloud/billing/bills/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/cloud/billing/index.html b/0.6.4/cloud/billing/index.html index 5028793aa..9028ce9d8 100644 --- a/0.6.4/cloud/billing/index.html +++ b/0.6.4/cloud/billing/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/cloud/billing/recharge/index.html b/0.6.4/cloud/billing/recharge/index.html index 3e16b69e2..e277d290d 100644 --- a/0.6.4/cloud/billing/recharge/index.html +++ b/0.6.4/cloud/billing/recharge/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/cloud/billing/refund/index.html b/0.6.4/cloud/billing/refund/index.html index 8104912a2..14f328069 100644 --- a/0.6.4/cloud/billing/refund/index.html +++ b/0.6.4/cloud/billing/refund/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/cloud/billing/voucher/index.html b/0.6.4/cloud/billing/voucher/index.html index d2d059ea2..8442ef514 100644 --- a/0.6.4/cloud/billing/voucher/index.html +++ b/0.6.4/cloud/billing/voucher/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/cloud/index.html b/0.6.4/cloud/index.html index 1de16e198..e6ef07d3f 100644 --- a/0.6.4/cloud/index.html +++ b/0.6.4/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Starwhale Cloud User Guide

    Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is https://cloud.starwhale.cn.

    - - + + \ No newline at end of file diff --git a/0.6.4/community/contribute/index.html b/0.6.4/community/contribute/index.html index f2db07400..479c19e95 100644 --- a/0.6.4/community/contribute/index.html +++ b/0.6.4/community/contribute/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Contribute to Starwhale

    Getting Involved/Contributing

    We welcome and encourage all contributions to Starwhale, including and not limited to:

    • Describe the problems encountered during use.
    • Submit feature request.
    • Discuss in Slack and Github Issues.
    • Code Review.
    • Improve docs, tutorials and examples.
    • Fix Bug.
    • Add Test Case.
• Improve code readability and code comments.
    • Develop new features.
    • Write enhancement proposal.

    You can get involved, get updates and contact Starwhale developers in the following ways:

    Starwhale Resources

    Code Structure

    • client: swcli and Python SDK with Pure Python3, which includes all Standalone Instance features.
      • api: Python SDK.
      • cli: Command Line Interface entrypoint.
      • base: Python base abstract.
  • core: Starwhale core concepts, including Dataset, Model, Runtime, Project, Job, Evaluation, etc.
      • utils: Python utilities lib.
    • console: frontend with React + TypeScript.
• server: Starwhale Controller in Java, which includes all Starwhale Cloud Instance backend APIs.
• docker: Helm Charts and Dockerfiles.
• docs: Official Starwhale documentation.
• example: Example code.
• scripts: Bash and Python scripts for E2E testing, software releases, etc.

    Fork and clone the repository

    You will need to fork the code of Starwhale repository and clone it to your local machine.

• Fork the Starwhale repository: Fork Starwhale Github Repo. For more usage details, please refer to: Fork a repo

• Install Git-LFS: Git-LFS

       git lfs install
    • Clone code to local machine

      git clone https://github.com/${your username}/starwhale.git

    Development environment for Standalone Instance

    Standalone Instance is written in Python3. When you want to modify swcli and sdk, you need to build the development environment.

    Standalone development environment prerequisites

    • OS: Linux or macOS
    • Python: 3.7~3.11
• Docker: >=19.03 (optional)
• Python isolated env tools: venv, virtualenv or conda, etc.

    Building from source code

    Based on the previous step, clone to the local directory: starwhale, and enter the client subdirectory:

    cd starwhale/client

    Create an isolated python environment with conda:

    conda create -n starwhale-dev python=3.8 -y
    conda activate starwhale-dev

    Install client package and python dependencies into the starwhale-dev environment:

    make install-sw
    make install-dev-req

    Validate with the swcli --version command. In the development environment, the version is 0.0.0.dev0:

    ❯ swcli --version
    swcli, version 0.0.0.dev0

❯ which swcli
    /home/username/anaconda3/envs/starwhale-dev/bin/swcli

    Modifying the code

When you modify the code, you do not need to install the Python package again (no need to re-run the make install-sw command). The .editorconfig file is recognized by most IDEs and code editors, which helps maintain consistent coding styles across developers.

    Lint and Test

    Run unit test, E2E test, mypy lint, flake lint and isort check in the starwhale directory.

    make client-all-check

    Development environment for Cloud Instance

    Cloud Instance is written in Java(backend) and React+TypeScript(frontend).

    Development environment for Console

    Development environment for Server

    • Language: Java
    • Build tool: Maven
    • Development framework: Spring Boot+Mybatis
• Unit test framework: JUnit5
      • Mockito used for mocking
      • Hamcrest used for assertion
  • Testcontainers used for providing lightweight, throwaway instances of common databases and Selenium web browsers that can run in a Docker container.
• Check style tool: maven-checkstyle-plugin

    Server development environment prerequisites

    • OS: Linux, macOS or Windows
    • Docker: >=19.03
    • JDK: >=11
    • Maven: >=3.8.1
    • Mysql: >=8.0.29
    • Minio
    • Kubernetes cluster/Minikube(If you don't have a k8s cluster, you can use Minikube as an alternative for development and debugging)

    Modify the code and add unit tests

    Now you can enter the corresponding module to modify and adjust the code on the server side. The main business code directory is src/main/java, and the unit test directory is src/test/java.

    Execute code check and run unit tests

    cd starwhale/server
    mvn clean test

    Deploy the server at local machine

    • Dependent services that need to be deployed

• Minikube (optional; Minikube can be used when there is no k8s cluster. Installation doc: Minikube)

        minikube start
        minikube addons enable ingress
        minikube addons enable ingress-dns
      • Mysql

        docker run --name sw-mysql -d \
        -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=starwhale \
        -e MYSQL_USER=starwhale \
        -e MYSQL_PASSWORD=starwhale \
        -e MYSQL_DATABASE=starwhale \
        mysql:latest
      • Minio

        docker run --name minio -d \
        -p 9000:9000 --publish 9001:9001 \
        -e MINIO_DEFAULT_BUCKETS='starwhale' \
        -e MINIO_ROOT_USER="minioadmin" \
        -e MINIO_ROOT_PASSWORD="minioadmin" \
        bitnami/minio:latest
    • Package server program

      If you need to deploy the front-end at the same time when deploying the server, you can execute the build command of the front-end part first, and then execute 'mvn clean package', and the compiled front-end files will be automatically packaged.

      Use the following command to package the program

        cd starwhale/server
      mvn clean package
    • Specify the environment required for server startup

      # Minio env
export SW_STORAGE_ENDPOINT=http://${Minio IP,default is:127.0.0.1}:9000
      export SW_STORAGE_BUCKET=${Minio bucket,default is:starwhale}
      export SW_STORAGE_ACCESSKEY=${Minio accessKey,default is:starwhale}
      export SW_STORAGE_SECRETKEY=${Minio secretKey,default is:starwhale}
      export SW_STORAGE_REGION=${Minio region,default is:local}
      # kubernetes env
      export KUBECONFIG=${the '.kube' file path}\.kube\config

      export SW_INSTANCE_URI=http://${Server IP}:8082
      export SW_METADATA_STORAGE_IP=${Mysql IP,default: 127.0.0.1}
      export SW_METADATA_STORAGE_PORT=${Mysql port,default: 3306}
      export SW_METADATA_STORAGE_DB=${Mysql dbname,default: starwhale}
      export SW_METADATA_STORAGE_USER=${Mysql user,default: starwhale}
      export SW_METADATA_STORAGE_PASSWORD=${user password,default: starwhale}
    • Deploy server service

You can use the IDE or the command line to deploy; a debug-agent sketch follows this list.

      java -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar
    • Debug

There are two ways to debug the modified function:

      • Use swagger-ui for interface debugging, visit /swagger-ui/index.html to find the corresponding api
      • Debug the corresponding function directly in the ui (provided that the front-end code has been built in advance according to the instructions when packaging)
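For the command-line deployment above, a hedged sketch of attaching an IDE debugger through the standard JVM JDWP agent (not Starwhale-specific; port 5005 is an assumption):

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 \
  -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar

Then attach a remote-debug session from your IDE to port 5005.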
    - - + + \ No newline at end of file diff --git a/0.6.4/concepts/index.html b/0.6.4/concepts/index.html index 3d6d7806a..299469ad3 100644 --- a/0.6.4/concepts/index.html +++ b/0.6.4/concepts/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/concepts/names/index.html b/0.6.4/concepts/names/index.html index 593751687..fecc2e265 100644 --- a/0.6.4/concepts/names/index.html +++ b/0.6.4/concepts/names/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Names in Starwhale

    Names mean project names, model names, dataset names, runtime names, and tag names.

    Names Limitation

    • Names are case-insensitive.
    • A name MUST only consist of letters A-Z a-z, digits 0-9, the hyphen character -, the dot character ., and the underscore character _.
    • A name should always start with a letter or the _ character.
    • The maximum length of a name is 80.
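For illustration only, a minimal Python sketch of these rules (not Starwhale's own validator):

import re

# starts with a letter or "_", then letters, digits, "-", "." or "_", at most 80 characters in total
NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]{0,79}$")

print(bool(NAME_RE.match("mnist-demo_v1.0")))  # True
print(bool(NAME_RE.match("1-bad-name")))       # False: must not start with a digit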

    Names uniqueness requirement

    • The resource name should be a unique string within its owner. For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project.
    • The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.
    • Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.
    • Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".
    • Garbage-collected resources' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".
    - - + + \ No newline at end of file diff --git a/0.6.4/concepts/project/index.html b/0.6.4/concepts/project/index.html index 2e8a9d933..54cb3832c 100644 --- a/0.6.4/concepts/project/index.html +++ b/0.6.4/concepts/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Project in Starwhale

    "Project" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.

Starwhale Server/Cloud projects are grouped by accounts. Starwhale Standalone does not have accounts, so you will not see any account name prefix in Starwhale Standalone projects. Starwhale Server/Cloud projects can be either "public" or "private". For a public project, all users on the same instance are assigned a "guest" role to the project by default. For more information about roles, see Roles and permissions in Starwhale.

    A self project is created automatically and configured as the default project in Starwhale Standalone.

    - - + + \ No newline at end of file diff --git a/0.6.4/concepts/roles-permissions/index.html b/0.6.4/concepts/roles-permissions/index.html index a1d35b615..1373fbf77 100644 --- a/0.6.4/concepts/roles-permissions/index.html +++ b/0.6.4/concepts/roles-permissions/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Roles and permissions in Starwhale

Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions; Starwhale Standalone does not. The Administrator role is automatically created and assigned to the user "admin". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.

    Projects have three roles:

    • Admin - Project administrators can read and write project data and assign project roles to users.
    • Maintainer - Project maintainers can read and write project data.
    • Guest - Project guests can only read project data.
Action | Admin | Maintainer | Guest
Manage project members | Yes | - | -
Edit project | Yes | Yes | -
View project | Yes | Yes | Yes
Create evaluations | Yes | Yes | -
Remove evaluations | Yes | Yes | -
View evaluations | Yes | Yes | Yes
Create datasets | Yes | Yes | -
Update datasets | Yes | Yes | -
Remove datasets | Yes | Yes | -
View datasets | Yes | Yes | Yes
Create models | Yes | Yes | -
Update models | Yes | Yes | -
Remove models | Yes | Yes | -
View models | Yes | Yes | Yes
Create runtimes | Yes | Yes | -
Update runtimes | Yes | Yes | -
Remove runtimes | Yes | Yes | -
View runtimes | Yes | Yes | Yes

    The user who creates a project becomes the first project administrator. They can assign roles to other users later.

    - - + + \ No newline at end of file diff --git a/0.6.4/concepts/versioning/index.html b/0.6.4/concepts/versioning/index.html index 59209c478..01cccaaa0 100644 --- a/0.6.4/concepts/versioning/index.html +++ b/0.6.4/concepts/versioning/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Resource versioning in Starwhale

    • Starwhale manages the history of all models, datasets, and runtimes. Every update to a specific resource appends a new version of the history.
    • Versions are identified by a version id which is a random string generated automatically by Starwhale and are ordered by their creation time.
    • Versions can have tags. Starwhale uses version tags to provide a human-friendly representation of versions. By default, Starwhale attaches a default tag to each version. The default tag is the letter "v", followed by a number. For each versioned resource, the first version tag is always tagged with "v0", the second version is tagged with "v1", and so on. And there is a special tag "latest" that always points to the last version. When a version is removed, its default tag will not be reused. For example, there is a model with tags "v0, v1, v2". When "v2" is removed, tags will be "v0, v1". And the following tag will be "v3" instead of "v2" again. You can attach your own tags to any version and remove them at any time.
    • Starwhale uses a linear history model. There is neither branch nor cycle in history.
• History cannot be rolled back. When a version is to be reverted, Starwhale clones the version and appends it as a new version to the end of the history. Versions in history can be manually removed and recovered.
    - - + + \ No newline at end of file diff --git a/0.6.4/dataset/index.html b/0.6.4/dataset/index.html index 98371768d..62c87da71 100644 --- a/0.6.4/dataset/index.html +++ b/0.6.4/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Starwhale Dataset User Guide

    overview

    Design Overview

    Starwhale Dataset Positioning

    The Starwhale Dataset contains three core stages: data construction, data loading, and data visualization. It is a data management tool for the ML/DL field. Starwhale Dataset can directly use the environment built by Starwhale Runtime, and can be seamlessly integrated with Starwhale Model and Starwhale Evaluation. It is an important part of the Starwhale MLOps toolchain.

    According to the classification of MLOps Roles in Machine Learning Operations (MLOps): Overview, Definition, and Architecture, the three stages of Starwhale Dataset target the following user groups:

    • Data construction: Data Engineer, Data Scientist
    • Data loading: Data Scientist, ML Developer
    • Data visualization: Data Engineer, Data Scientist, ML Developer

    mlops-users

    Core Functions

    • Efficient loading: The original dataset files are stored in external storage such as OSS or NAS, and are loaded on demand without having to save to disk.
    • Simple construction: Supports one-click dataset construction from Image/Video/Audio directories, json files and Huggingface datasets, and also supports writing Python code to build completely custom datasets.
    • Versioning: Can perform version tracking, data append and other operations, and avoid duplicate data storage through the internally abstracted ObjectStore.
• Sharing: Implement bidirectional dataset sharing between Standalone instances and Cloud/Server instances through the swcli dataset copy command (a sketch follows this list).
    • Visualization: The web interface of Cloud/Server instances can present multi-dimensional, multi-type data visualization of datasets.
    • Artifact storage: The Standalone instance can store locally built or distributed swds series files, while the Cloud/Server instance uses object storage to provide centralized swds artifact storage.
    • Seamless Starwhale integration: Starwhale Dataset can use the runtime environment built by Starwhale Runtime to build datasets. Starwhale Evaluation and Starwhale Model can directly specify the dataset through the --dataset parameter to complete automatic data loading, which facilitates inference, model evaluation and other environments.
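A minimal sketch of such sharing, modeled on the copy examples in the swcli URI guide (the dataset name, instance alias and project are illustrative):

# push a local dataset version to a logged-in Server/Cloud instance aliased swcloud
swcli dataset copy mnist/version/latest swcloud/project/<your account name>:demo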

    Key Elements

    • swds virtual package file: swds is different from swmp and swrt. It is not a single packaged file, but a virtual concept that specifically refers to a directory that contains dataset-related files for a version of the Starwhale dataset, including _manifest.yaml, dataset.yaml, dataset build Python scripts, and data file links, etc. You can use the swcli dataset info command to view where the swds is located. swds is the abbreviation of Starwhale Dataset.

    swds-tree.png

    • swcli dataset command line: A set of dataset-related commands, including construction, distribution and management functions. See CLI Reference for details.
    • dataset.yaml configuration file: Describes the dataset construction process. It can be completely omitted and specified through swcli dataset build parameters. dataset.yaml can be considered as a configuration file representation of the swcli dataset build command line parameters. swcli dataset build parameters take precedence over dataset.yaml.
    • Dataset Python SDK: Includes data construction, data loading, and several predefined data types. See Python SDK for details.
    • Python scripts for dataset construction: A series of scripts written using the Starwhale Python SDK to build datasets.

    Best Practices

    The construction of Starwhale Dataset is performed independently. If third-party libraries need to be introduced when writing construction scripts, using Starwhale Runtime can simplify Python dependency management and ensure reproducible dataset construction. The Starwhale platform will build in as many open source datasets as possible for users to copy datasets for immediate use.

    Command Line Grouping

    The Starwhale Dataset command line can be divided into the following stages from the perspective of usage phases:

    • Construction phase
      • swcli dataset build
    • Visualization phase
      • swcli dataset diff
      • swcli dataset head
    • Distribution phase
      • swcli dataset copy
    • Basic management
      • swcli dataset tag
      • swcli dataset info
      • swcli dataset history
      • swcli dataset list
      • swcli dataset summary
      • swcli dataset remove
      • swcli dataset recover

    Starwhale Dataset Viewer

Currently, the Web UI in the Cloud/Server instance can visually display the dataset. Only DataTypes defined with the Python SDK can be correctly interpreted by the frontend, with mappings as follows:

    • Image: Display thumbnails, enlarged images, MASK type images, support image/png, image/jpeg, image/webp, image/svg+xml, image/gif, image/apng, image/avif formats.
    • Audio: Displayed as an audio wave graph, playable, supports audio/mp3 and audio/wav formats.
    • Video: Displayed as a video, playable, supports video/mp4, video/avi and video/webm formats.
    • GrayscaleImage: Display grayscale images, support x/grayscale format.
    • Text: Display text, support text/plain format, set encoding format, default is utf-8.
    • Binary and Bytes: Not supported for display currently.
    • Link: The above multimedia types all support specifying links as storage paths.

    Starwhale Dataset Data Format

    The dataset consists of multiple rows, each row being a sample, each sample containing several features. The features have a dict-like structure with some simple restrictions [L]:

    • The dict keys must be str type.
    • The dict values must be Python basic types like int/float/bool/str/bytes/dict/list/tuple, or Starwhale built-in data types.
    • For the same key across different samples, the value types do not need to stay the same.
    • If the value is a list or tuple, the element data types must be consistent.
    • For dict values, the restrictions are the same as [L].

    Example:

{
    "img": GrayscaleImage(
        link=Link(
            "123",
            offset=32,
            size=784,
            _swds_bin_offset=0,
            _swds_bin_size=8160,
        )
    ),
    "label": 0,
}

    File Data Handling

    Starwhale Dataset handles file type data in a special way. You can ignore this section if you don't care about Starwhale's implementation.

According to actual usage scenarios, Starwhale Dataset has two ways of handling file-type data, both based on the base class starwhale.BaseArtifact:

    • swds-bin: Starwhale merges the data into several large files in its own binary format (swds-bin), which can efficiently perform indexing, slicing and loading.
    • remote-link: If the user's original data is stored in some external storage such as OSS or NAS, with a lot of original data that is inconvenient to move or has already been encapsulated by some internal dataset implementation, then you only need to use links in the data to establish indexes.

    In the same Starwhale dataset, two types of data can be included simultaneously.

    - - + + \ No newline at end of file diff --git a/0.6.4/dataset/yaml/index.html b/0.6.4/dataset/yaml/index.html index f82df8bdb..1c96792af 100644 --- a/0.6.4/dataset/yaml/index.html +++ b/0.6.4/dataset/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    The dataset.yaml Specification

    tip

    dataset.yaml is optional for the swcli dataset build command.

    Building Starwhale Dataset uses dataset.yaml. Omitting dataset.yaml allows describing related configurations in swcli dataset build command line parameters. dataset.yaml can be considered as a file-based representation of the build command line configuration.

    YAML Field Descriptions

Field | Description | Required | Type | Default
name | Name of the Starwhale Dataset | Yes | String | -
handler | Importable address of a class that inherits starwhale.SWDSBinBuildExecutor, starwhale.UserRawBuildExecutor or starwhale.BuildExecutor, or a function that returns a Generator or iterable object. Format is {module path}:{class name\|function name} | Yes | String | -
desc | Dataset description | No | String | ""
version | dataset.yaml format version, currently only "1.0" is supported | No | String | 1.0
attr | Dataset build parameters | No | Dict | -
attr.volume_size | Size of each data file in the swds-bin dataset. Can be a number in bytes, or a number plus unit like 64M, 1GB etc. | No | Int or Str | 64MB
attr.alignment_size | Data alignment size of each data block in the swds-bin dataset. If set to 4k, and a data block is 7.9K, 0.1K padding will be added to make the block size a multiple of alignment_size, improving page size and read efficiency. | No | Integer or String | 128

    Examples

    Simplest Example

    name: helloworld
    handler: dataset:ExampleProcessExecutor

The helloworld dataset uses the ExampleProcessExecutor class in dataset.py, located in the same directory as dataset.yaml, to build data.

    MNIST Dataset Build Example

name: mnist
handler: mnist.dataset:DatasetProcessExecutor
desc: MNIST data and label test dataset
attr:
  alignment_size: 128
  volume_size: 4M

    Example with handler as a generator function

    dataset.yaml contents:

    name: helloworld
    handler: dataset:iter_item

    dataset.py contents:

def iter_item():
    for i in range(10):
        yield {"img": f"image-{i}".encode(), "label": i}
    - - + + \ No newline at end of file diff --git a/0.6.4/evaluation/heterogeneous/node-able/index.html b/0.6.4/evaluation/heterogeneous/node-able/index.html index 41e77e248..6e4f1ae6b 100644 --- a/0.6.4/evaluation/heterogeneous/node-able/index.html +++ b/0.6.4/evaluation/heterogeneous/node-able/index.html @@ -10,8 +10,8 @@ - - + +
    @@ -23,7 +23,7 @@ Refer to the link.

    Take v0.13.0-rc.1 as an example:

    kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0-rc.1/nvidia-device-plugin.yml

Note: This operation will run the NVIDIA device plugin on all Kubernetes nodes. If it was configured before, it will be updated. Please evaluate the image version used carefully.

  • Confirm GPU can be discovered and used in the cluster. Refer to the command below. Check that nvidia.com/gpu is in the Capacity of the Jetson node. The GPU is then recognized normally by the Kubernetes cluster.

    # kubectl describe node orin | grep -A15 Capacity
    Capacity:
    cpu: 12
    ephemeral-storage: 59549612Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 31357608Ki
    nvidia.com/gpu: 1
    pods: 110
  • Build and Use Custom Images

The l4t-jetpack image mentioned earlier can cover general use. If we need a more streamlined image or one with more features, we can build a custom image based on l4t-base. For relevant Dockerfiles, refer to the image Starwhale made for mnist.

    - - + + \ No newline at end of file diff --git a/0.6.4/evaluation/heterogeneous/virtual-node/index.html b/0.6.4/evaluation/heterogeneous/virtual-node/index.html index 57a922334..ab4ae42f6 100644 --- a/0.6.4/evaluation/heterogeneous/virtual-node/index.html +++ b/0.6.4/evaluation/heterogeneous/virtual-node/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Virtual Kubelet as Kubernetes nodes

    Introduction

    Virtual Kubelet is an open source framework that can simulate a K8s node by mimicking the communication between kubelet and the K8s cluster.

    This solution is widely used by major cloud vendors for serverless container cluster solutions, such as Alibaba Cloud's ASK, Amazon's AWS Fargate, etc.

    Principles

    The virtual kubelet framework implements the related interfaces of kubelet for Node. With simple configuration, it can simulate a node.

    We only need to implement the PodLifecycleHandler interface to support:

    • Create, update, delete Pod
    • Get Pod status
    • Get Container logs

    Adding Devices to the Cluster

    If our device cannot serve as a K8s node due to resource constraints or other situations, we can manage these devices by using virtual kubelet to simulate a proxy node.

    The control flow between Starwhale Controller and the device is as follows:


    ┌──────────────────────┐ ┌────────────────┐ ┌─────────────────┐ ┌────────────┐
    │ Starwhale Controller ├─────►│ K8s API Server ├────►│ virtual kubelet ├────►│ Our device │
    └──────────────────────┘ └────────────────┘ └─────────────────┘ └────────────┘

    Virtual kubelet converts the Pod orchestration information sent by Starwhale Controller into control behaviors for the device, such as executing a command via ssh on the device, or sending a message via USB or serial port.

Below is an example of using virtual kubelet to control an SSH-enabled device that has not joined the cluster:

    1. Prepare certificates
• Create file csr.conf with the following content:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name

    [req_distinguished_name]

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names

    [alt_names]
    IP = 1.2.3.4
    • Generate the certificate:
    openssl genrsa -out vklet-key.pem 2048
    openssl req -new -key vklet-key.pem -out vklet.csr -subj '/CN=system:node:1.2.3.4;/C=US/O=system:nodes' -config ./csr.conf
    • Submit the certificate:
cat vklet.csr | base64 | tr -d "\n"  # use the output as the content of spec.request in csr.yaml

    csr.yaml:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: vklet
spec:
  request: ******************
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 1086400
  usages:
    - client auth
    kubectl apply -f csr.yaml
    kubectl certificate approve vklet
    kubectl get csr vklet -o jsonpath='{.status.certificate}'| base64 -d > vklet-cert.pem

    Now we have vklet-cert.pem.

    • Compile virtual kubelet:
    git clone https://github.com/virtual-kubelet/virtual-kubelet
    cd virtual-kubelet && make build

    Create the node configuration file mock.json:

    {
      "virtual-kubelet": {
        "cpu": "100",
        "memory": "100Gi",
        "pods": "100"
      }
    }

    Start virtual kubelet:

    export APISERVER_CERT_LOCATION=/path/to/vklet-cert.pem
    export APISERVER_KEY_LOCATION=/path/to/vklet-key.pem
    export KUBECONFIG=/path/to/kubeconfig
    virtual-kubelet --provider mock --provider-config /path/to/mock.json

    Now we have simulated a node with 100 cores + 100GB memory using virtual kubelet.

    • Add a PodLifecycleHandler implementation that converts the relevant information in the Pod orchestration into ssh command execution on the device, and reports the collected logs back to the Starwhale Controller.

    See ssh executor for a concrete implementation.
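
    To make the idea concrete, here is a simplified Python sketch of that translation step (illustration only; the actual providers, including the ssh executor above, are implemented in Go, and the host and command below are placeholders):

    import subprocess

    def run_pod_on_device(host: str, container_command: list) -> str:
        # Translate a container command from the Pod spec into a remote
        # ssh invocation on the device and return its output.
        result = subprocess.run(
            ["ssh", host, " ".join(container_command)],
            capture_output=True,
            text=True,
            check=True,
        )
        return result.stdout

    # e.g. run_pod_on_device("user@1.2.3.4", ["uname", "-a"])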

    - - + + \ No newline at end of file diff --git a/0.6.4/evaluation/index.html b/0.6.4/evaluation/index.html index 58e75b5e4..bf6425b83 100644 --- a/0.6.4/evaluation/index.html +++ b/0.6.4/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Starwhale Model Evaluation

    Design Overview

    Starwhale Evaluation Positioning

    The goal of Starwhale Evaluation is to provide end-to-end management for model evaluation, including creating Jobs, distributing Tasks, viewing model evaluation reports and basic management. Starwhale Evaluation is a specific application of Starwhale Model, Starwhale Dataset, and Starwhale Runtime in the model evaluation scenario. Starwhale Evaluation is part of the MLOps toolchain built by Starwhale. More applications like Starwhale Model Serving, Starwhale Training will be included in the future.

    Core Features

    • Visualization: Both swcli and the Web UI provide visualization of model evaluation results, supporting comparison of multiple results. Users can also customize logging of intermediate processes.

    • Multi-scenario Adaptation: Whether it's a notebook, desktop or distributed cluster environment, the same commands, Python scripts, artifacts and operations can be used for model evaluation. This satisfies different computational power and data volume requirements.

    • Seamless Starwhale Integration: Leverage Starwhale Runtime for the runtime environment, Starwhale Dataset as data input, and run models from Starwhale Model. Configuration is simple whether using swcli, Python SDK or Cloud/Server instance Web UI.

    Key Elements

    • swcli model run: Command line for bulk offline model evaluation.
    • swcli model serve: Command line for online model evaluation.

    Best Practices

    Command Line Grouping

    From the perspective of completing an end-to-end Starwhale Evaluation workflow, commands can be grouped as:

    • Preparation Stage
      • swcli dataset build or Starwhale Dataset Python SDK
      • swcli model build or Starwhale Model Python SDK
      • swcli runtime build
    • Evaluation Stage
      • swcli model run
      • swcli model serve
    • Results Stage
      • swcli job info
    • Basic Management
      • swcli job list
      • swcli job remove
      • swcli job recover

    Abstraction job-step-task

    • job: A model evaluation task is a job, which contains one or more steps.

    • step: A step corresponds to a stage in the evaluation process. With the default PipelineHandler, steps are predict and evaluate. For custom evaluation processes using @handler, @evaluation.predict, @evaluation.evaluate decorators, steps are the decorated functions. Steps can have dependencies, forming a DAG. A step contains one or more tasks. Tasks in the same step have the same logic but different inputs. A common approach is to split the dataset into multiple parts, with each part passed to a task. Tasks can run in parallel.

    • task: A task is the final running entity. In Cloud/Server instances, a task is a container in a Pod. In Standalone instances, a task is a Python Thread.

    The job-step-task abstraction is the basis for implementing distributed runs in Starwhale Evaluation.
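
    As an illustration only, here is a minimal sketch of a two-step job using the SDK decorators documented later in this reference (the model logic is a placeholder and the metric is purely illustrative): predict is one step whose two replicas become two tasks, and evaluate is a second step that depends on it.

    from starwhale import evaluation

    # One step with two tasks that jointly consume the dataset.
    @evaluation.predict(replicas=2)
    def predict(data):
        # placeholder "model": echo the label back as the prediction
        return data["label"]

    # A second step, with a single task, that runs after predict finishes.
    @evaluation.evaluate(needs=[predict])
    def evaluate(predict_result_iter):
        total = correct = 0
        for result in predict_result_iter:
            total += 1
            correct += int(result["output"] == result["input"]["label"])
        print({"accuracy": correct / max(total, 1)})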

    - - + + \ No newline at end of file diff --git a/0.6.4/faq/index.html b/0.6.4/faq/index.html index 273ae332b..4904e77b2 100644 --- a/0.6.4/faq/index.html +++ b/0.6.4/faq/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.4/getting-started/cloud/index.html b/0.6.4/getting-started/cloud/index.html index ae4b35ea9..b01750146 100644 --- a/0.6.4/getting-started/cloud/index.html +++ b/0.6.4/getting-started/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Getting started with Starwhale Cloud

    Starwhale Cloud is hosted on Aliyun with the domain name https://cloud.starwhale.cn. In the future, we will launch the service on AWS with the domain name https://cloud.starwhale.ai. It's important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.

    You need to install the Starwhale Client (swcli) at first.

    Sign Up for Starwhale Cloud and create your first project

    You can either directly log in with your GitHub or Weixin account or sign up for an account. You will be asked for an account name if you log in with your GitHub or Weixin account.

    Then you can create a new project. In this tutorial, we will use the name demo for the project name.

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Login to the cloud instance

    swcli instance login --username <your account name> --password <your password> --alias swcloud https://cloud.starwhale.cn

    Copy the dataset, model, and runtime to the cloud instance

    swcli model copy mnist swcloud/project/<your account name>:demo
    swcli dataset copy mnist swcloud/project/<your account name>:demo
    swcli runtime copy pytorch swcloud/project/<your account name>:demo

    Run an evaluation with the web UI

    console-create-job.gif

    Congratulations! You have completed the Starwhale Cloud Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.4/getting-started/index.html b/0.6.4/getting-started/index.html index 44898933d..98fdabee1 100644 --- a/0.6.4/getting-started/index.html +++ b/0.6.4/getting-started/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Getting started

    First, you need to install the Starwhale Client (swcli), which can be done by running the following command:

    python3 -m pip install starwhale

    For more information, see the swcli installation guide.

    Depending on your instance type, there are three getting-started guides available for you:

    • Getting started with Starwhale Standalone - This guide helps you run an MNIST evaluation on your desktop PC/laptop. It is the fastest and simplest way to get started with Starwhale.
    • Getting started with Starwhale Server - This guide helps you install Starwhale Server in your private data center and run an MNIST evaluation. At the end of the tutorial, you will have a Starwhale Server instance where you can run model evaluations on and manage your datasets and models.
    • Getting started with Starwhale Cloud - This guide helps you create an account on Starwhale Cloud and run an MNIST evaluation. It is the easiest way to experience all Starwhale features.
    - - + + \ No newline at end of file diff --git a/0.6.4/getting-started/runtime/index.html b/0.6.4/getting-started/runtime/index.html index 7a9a95929..4d7926ec7 100644 --- a/0.6.4/getting-started/runtime/index.html +++ b/0.6.4/getting-started/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Getting Started with Starwhale Runtime

    This article demonstrates how to build a Starwhale Runtime of the Pytorch environment and how to use it. This runtime can meet the dependency requirements of the six examples in Starwhale: mnist, speech commands, nmt, cifar10, ag_news, and PennFudan. Links to relevant code: example/runtime/pytorch.

    You can learn the following things from this tutorial:

    • How to build a Starwhale Runtime.
    • How to use a Starwhale Runtime in different scenarios.
    • How to release a Starwhale Runtime.

    Prerequisites

    Run the following command to clone the example code:

    git clone https://github.com/star-whale/starwhale.git
    cd starwhale/example/runtime/pytorch # for users in the mainland of China, use pytorch-cn-mirror instead.

    Build Starwhale Runtime

    ❯ swcli -vvv runtime build --yaml runtime.yaml

    Use Starwhale Runtime in the standalone instance

    Use Starwhale Runtime in the shell

    # Activate the runtime
    swcli runtime activate pytorch

    swcli runtime activate will download all python dependencies of the runtime, which may take a long time.

    All dependencies are ready in your python environment when the runtime is activated. It is similar to source venv/bin/activate of virtualenv or the conda activate command of conda. If you close the shell or switch to another shell, you need to reactivate the runtime.

    Use Starwhale Runtime in swcli

    # Use the runtime when building a Starwhale Model
    swcli model build . --runtime pytorch
    # Use the runtime when building a Starwhale Dataset
    swcli dataset build --yaml /path/to/dataset.yaml --runtime pytorch
    # Run a model evaluation with the runtime
    swcli model run --uri mnist/version/v0 --dataset mnist --runtime pytorch

    Copy Starwhale Runtime to another instance

    You can copy the runtime to a server/cloud instance, which can then be used in the server/cloud instance or downloaded by other users.

    # Copy the runtime to a server instance named 'pre-k8s'
    ❯ swcli runtime copy pytorch cloud://pre-k8s/project/starwhale
    - - + + \ No newline at end of file diff --git a/0.6.4/getting-started/server/index.html b/0.6.4/getting-started/server/index.html index 26962f615..d8362312f 100644 --- a/0.6.4/getting-started/server/index.html +++ b/0.6.4/getting-started/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Getting started with Starwhale Server

    Install Starwhale Server

    To install Starwhale Server, see the installation guide.

    Create your first project

    Login to the server

    Open your browser and enter your server's URL in the address bar. Log in with your username (starwhale) and password (abcd1234).

    console-artifacts.gif

    Create a new project

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named mnist
    • a Starwhale dataset named mnist
    • a Starwhale runtime named pytorch

    Copy the dataset, the model, and the runtime to the server

    swcli instance login --username <your username> --password <your password> --alias server <Your Server URL>

    swcli model copy mnist server/project/demo
    swcli dataset copy mnist server/project/demo
    swcli runtime copy pytorch server/project/demo

    Use the Web UI to run an evaluation

    Navigate to the "demo" project in your browser and create a new evaluation.

    console-create-job.gif

    Congratulations! You have completed the Starwhale Server Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.4/getting-started/standalone/index.html b/0.6.4/getting-started/standalone/index.html index 5c59687af..3041ced8c 100644 --- a/0.6.4/getting-started/standalone/index.html +++ b/0.6.4/getting-started/standalone/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Getting started with Starwhale Standalone

    When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.

    We also provide a Jupyter Notebook example; you can try it in Google Colab or in your local vscode/jupyterlab.

    Downloading Examples

    Download Starwhale examples by cloning the Starwhale project via:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/star-whale/starwhale.git --depth 1
    cd starwhale

    To save time downloading the examples, we skip git-lfs and fetch only the latest commit. We will use the ML/DL HelloWorld code MNIST to start your Starwhale journey. The following steps are all performed in the starwhale directory.

    Core Workflow

    Building a Pytorch Runtime

    Runtime example codes are in the example/runtime/pytorch directory.

    • Build the Starwhale runtime bundle:

      swcli runtime build --yaml example/runtime/pytorch/runtime.yaml
      tip

      When you first build the runtime, creating an isolated python environment and downloading python dependencies will take a lot of time. The command execution time depends on the network environment of the machine and the number of packages in the runtime.yaml. Using a suitable pypi mirror and cache config in the ~/.pip/pip.conf file is a recommended practice.

      For users in the mainland of China, the following conf file is an option:

      [global]
      cache-dir = ~/.cache/pip
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple
      extra-index-url = https://mirrors.aliyun.com/pypi/simple/
    • Check your local Starwhale Runtime:

      swcli runtime list
      swcli runtime info pytorch

    Building a Model

    Model example codes are in the example/mnist directory.

    • Download the pre-trained model file:

      cd example/mnist
      make download-model
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-model
      cd -
    • Build a Starwhale model:

      swcli model build example/mnist --runtime pytorch
    • Check your local Starwhale models:

      swcli model list
      swcli model info mnist

    Building a Dataset

    Dataset example codes are in the example/mnist directory.

    • Download the MNIST raw data:

      cd example/mnist
      make download-data
      # For users in the mainland of China, please add `CN=1` environment for make command:
      # CN=1 make download-data
      cd -
    • Build a Starwhale dataset:

      swcli dataset build --yaml example/mnist/dataset.yaml
    • Check your local Starwhale dataset:

      swcli dataset list
      swcli dataset info mnist
      swcli dataset head mnist

    Running an Evaluation Job

    • Create an evaluation job:

      swcli -vvv model run --uri mnist --dataset mnist --runtime pytorch
    • Check the evaluation result

      swcli job list
      swcli job info $(swcli job list | grep mnist | grep success | awk '{print $1}' | head -n 1)

    Congratulations! You have completed the Starwhale Standalone Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.4/index.html b/0.6.4/index.html index aeb5e6905..9f38e7a5e 100644 --- a/0.6.4/index.html +++ b/0.6.4/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    What is Starwhale

    Overview

    Starwhale is an MLOps/LLMOps platform that makes your model creation, evaluation and publication much easier. It aims to create a handy tool for data scientists and machine learning engineers.

    Starwhale helps you:

    • Keep track of your training/testing dataset history including data items and their labels, so that you can easily access them.
    • Manage your model packages that you can share across your team.
    • Run your models in different environments, either on an NVIDIA GPU server or on an embedded device like Cherry Pi.
    • Create an online service with an interactive Web UI for your models.

    Starwhale is designed to be an open platform. You can create your own plugins to meet your requirements.

    Deployment options

    Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).

    You can start using Starwhale with one of the following instance types:

    • Starwhale Standalone - Rather than a running service, Starwhale Standalone is actually a repository that resides in your local file system. It is created and managed by the Starwhale Client (swcli). You only need to install swcli to use it. Currently, each user on a single machine can have only ONE Starwhale Standalone instance. We recommend you use the Starwhale Standalone to build and test your datasets, runtime, and models before pushing them to Starwhale Server/Cloud instances.
    • Starwhale Server - Starwhale Server is a service deployed on your local server. Besides text-only results from the Starwhale Client (swcli), Starwhale Server provides Web UI for you to manage your datasets and models, evaluate your models in your local Kubernetes cluster, and review the evaluation results.
    • Starwhale Cloud - Starwhale Cloud is a managed service hosted on public clouds. By registering an account on https://cloud.starwhale.cn, you are ready to use Starwhale without needing to install, operate, and maintain your own instances. Starwhale Cloud also provides public resources for you to download, like datasets, runtimes, and models. Check the "starwhale/public" project on Starwhale Cloud for more details.

    When choosing which instance type to use, consider the following:

    Instance Type | Deployment location | Maintained by | User Interface | Scalability
    Starwhale Standalone | Your laptop or any server in your data center | Not required | Command line | Not scalable
    Starwhale Server | Your data center | Yourself | Web UI and command line | Scalable, depends on your Kubernetes cluster
    Starwhale Cloud | Public cloud, like AWS or Aliyun | the Starwhale Team | Web UI and command line | Scalable, but currently limited by the freely available resource on the cloud
    - - + + \ No newline at end of file diff --git a/0.6.4/model/index.html b/0.6.4/model/index.html index 7972c878c..4e88db98b 100644 --- a/0.6.4/model/index.html +++ b/0.6.4/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Starwhale Model

    Overview

    A Starwhale Model is a standard format for packaging machine learning models that can be used for various purposes, like model fine-tuning, model evaluation, and online serving. A Starwhale Model contains the model file, inference codes, configuration files, and any other files required to run the model.

    Create a Starwhale Model

    There are two ways to create a Starwhale Model: by swcli or by Python SDK.

    Create a Starwhale Model by swcli

    To create a Starwhale Model by swcli, you need to define a model.yaml, which describes some required information about the model package, and run the following command:

    swcli model build . --model-yaml /path/to/model.yaml

    For more information about the command and model.yaml, see the swcli reference. model.yaml is optional for model building.

    Create a Starwhale Model by Python SDK

    from starwhale import model, predict

    @predict
    def predict_img(data):
        ...

    model.build(name="mnist", modules=[predict_img])

    Model Management

    Model Management by swcli

    Command | Description
    swcli model list | List all Starwhale Models in a project
    swcli model info | Show detail information about a Starwhale Model
    swcli model copy | Copy a Starwhale Model to another location
    swcli model remove | Remove a Starwhale Model
    swcli model recover | Recover a previously removed Starwhale Model

    Model Management by WebUI

    Model History

    Starwhale Models are versioned. The general rules about versions are described in Resource versioning in Starwhale.

    Model History Management by swcli

    Command | Description
    swcli model history | List all versions of a Starwhale Model
    swcli model info | Show detail information about a Starwhale Model version
    swcli model diff | Compare two versions of a Starwhale model
    swcli model copy | Copy a Starwhale Model version to a new one
    swcli model remove | Remove a Starwhale Model version
    swcli model recover | Recover a previously removed Starwhale Model version

    Model Evaluation

    Model Evaluation by swcli

    Command | Description
    swcli model run | Create an evaluation with a Starwhale Model

    The Storage Format

    The Starwhale Model is a tarball file that contains the source directory.

    - - + + \ No newline at end of file diff --git a/0.6.4/model/yaml/index.html b/0.6.4/model/yaml/index.html index 381c6b6a5..a149d0a5e 100644 --- a/0.6.4/model/yaml/index.html +++ b/0.6.4/model/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    The model.yaml Specification

    tip

    model.yaml is optional for swcli model build.

    When building a Starwhale Model using the swcli model build command, you can specify a yaml file that follows a specific format via the --model-yaml parameter to simplify specifying build parameters.

    Even without specifying the --model-yaml parameter, swcli model build will automatically look for a model.yaml file under the ${workdir} directory and extract parameters from it. Parameters specified on the swcli model build command line take precedence over equivalent configurations in model.yaml, so you can think of model.yaml as a file-based representation of the build command line.

    When building a Starwhale Model using the Python SDK, the model.yaml file does not take effect.

    YAML Field Descriptions

    Field | Description | Required | Type | Default
    name | Name of the Starwhale Model, equivalent to the --name parameter. | No | String |
    run.modules | Python Modules searched during model build; can specify multiple entry points for model execution. The format is a Python importable path. Equivalent to the --module parameter. | Yes | List[String] |
    run.handler | Deprecated alias of run.modules; can only specify one entry point. | No | String |
    version | model.yaml format version; currently only "1.0" is supported. | No | String | 1.0
    desc | Model description, equivalent to the --desc parameter. | No | String |

    Example


    name: helloworld

    run:
      modules:
        - src.evaluator

    desc: "example yaml"

    A Starwhale model named helloworld, searches for functions decorated with @evaluation.predict, @evaluation.evaluate or @handler, or classes inheriting from PipelineHandler in src/evaluator.py under ${WORKDIR} of the swcli model build command. These functions or classes will be added to the list of runnable entry points for the Starwhale model. When running the model via swcli model run or Web UI, select the corresponding entry point (handler) to run.

    model.yaml is optional; parameters defined in the yaml can also be specified via swcli command line parameters.


    swcli model build . --model-yaml model.yaml

    Is equivalent to:


    swcli model build . --name helloworld --module src.evaluator --desc "example yaml"

    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/dataset/index.html b/0.6.4/reference/sdk/dataset/index.html index f31b54e4d..fc4ea9711 100644 --- a/0.6.4/reference/sdk/dataset/index.html +++ b/0.6.4/reference/sdk/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Starwhale Dataset SDK

    dataset

    Get a starwhale.Dataset object by creating a new dataset or loading an existing one.

    @classmethod
    def dataset(
    cls,
    uri: t.Union[str, Resource],
    create: str = _DatasetCreateMode.auto,
    readonly: bool = False,
    ) -> Dataset:

    Parameters

    • uri: (str or Resource, required)
      • The dataset uri or Resource object.
    • create: (str, optional)
      • The mode of dataset creation. The options are auto, empty and forbid.
        • auto mode: If the dataset already exists, creation is ignored. If it does not exist, the dataset is created automatically.
        • empty mode: If the dataset already exists, an Exception is raised; if it does not exist, an empty dataset is created. This mode ensures the creation of a new, empty dataset.
        • forbid mode: If the dataset already exists, nothing is done; if it does not exist, an Exception is raised. This mode ensures the existence of the dataset.
      • The default is auto.
    • readonly: (bool, optional)
      • For an existing dataset, you can specify the readonly=True argument to ensure the dataset is in readonly mode.
      • Default is False.

    Examples

    from starwhale import dataset, Image

    # create a new dataset named mnist, and add a row into the dataset
    # dataset("mnist") is equal to dataset("mnist", create="auto")
    ds = dataset("mnist")
    ds.exists() # returns False because the "mnist" dataset does not exist yet.
    ds.append({"img": Image(), "label": 1})
    ds.commit()
    ds.close()

    # load a cloud instance dataset in readonly mode
    ds = dataset("cloud://remote-instance/project/starwhale/dataset/mnist", readonly=True)
    labels = [row.features.label for row in ds]
    ds.close()

    # load a read/write dataset with a specified version
    ds = dataset("mnist/version/mrrdczdbmzsw")
    ds[0].features.label = 1
    ds.commit()
    ds.close()

    # create an empty dataset
    ds = dataset("mnist-empty", create="empty")

    # ensure the dataset existence
    ds = dataset("mnist-existed", create="forbid")

    class starwhale.Dataset

    starwhale.Dataset implements the abstraction of a Starwhale dataset, and can operate on datasets in Standalone/Server/Cloud instances.

    from_huggingface

    from_huggingface is a classmethod that can convert a Huggingface dataset into a Starwhale dataset.

    def from_huggingface(
    cls,
    name: str,
    repo: str,
    subset: str | None = None,
    split: str | None = None,
    revision: str = "main",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    cache: bool = True,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • dataset name.
    • repo: (str, required)
      • The huggingface dataset repo, e.g. mnist or cais/mmlu.
    • subset: (str, optional)
      • The subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
    • split: (str, optional)
      • The split name. If the split name is not specified, all splits of the dataset will be built.
    • revision: (str, optional)
      • The huggingface datasets revision. The default value is main.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • cache: (bool, optional)
      • Whether to use the huggingface dataset cache (download + local hf dataset).
      • The default value is True.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_huggingface("mnist", "mnist")
    print(myds[0])
    from starwhale import Dataset
    myds = Dataset.from_huggingface("mmlu", "cais/mmlu", subset="anatomy", split="auxiliary_train", revision="7456cfb")

    from_json

    from_json is a classmethod that can convert a json text into a Starwhale dataset.

    @classmethod
    def from_json(
    cls,
    name: str,
    json_text: str,
    field_selector: str = "",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • Dataset name.
    • json_text: (str, required)
      • A json string. The from_json function deserializes this string into Python objects to start building the Starwhale dataset.
    • field_selector: (str, optional)
      • The field from which you would like to extract dataset array items.
      • The default value is "" which indicates that the json object is an array containing all the items.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_json(
    name="translation",
    json_text='[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    print(myds[0].features.en)
    from starwhale import Dataset
    myds = Dataset.from_json(
    name="translation",
    json_text='{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}',
    field_selector="content.child_content"
    )
    print(myds[0].features["zh-cn"])

    from_folder

    from_folder is a classmethod that can read Image/Video/Audio data from a specified directory and automatically convert them into a Starwhale dataset. This function supports the following features:

    • It can recursively search the target directory and its subdirectories
    • Supports extracting three types of files:
      • image: Supports png/jpg/jpeg/webp/svg/apng image types. Image files will be converted to Starwhale.Image type.
      • video: Supports mp4/webm/avi video types. Video files will be converted to Starwhale.Video type.
      • audio: Supports mp3/wav audio types. Audio files will be converted to Starwhale.Audio type.
    • Each file corresponds to one record in the dataset, with the file stored in the file field.
    • If auto_label=True, the parent directory name will be used as the label for that record, stored in the label field. Files in the root directory will not be labeled.
    • If a txt file with the same name as an image/video/audio file exists, its content will be stored as the caption field in the dataset.
    • If metadata.csv or metadata.jsonl exists in the root directory, their content will be read automatically and associated to records by file path as meta information in the dataset.
      • metadata.csv and metadata.jsonl are mutually exclusive. An exception will be thrown if both exist.
      • Each record in metadata.csv and metadata.jsonl must contain a file_name field pointing to the file path.
      • metadata.csv and metadata.jsonl are optional for dataset building.
    @classmethod
    def from_folder(
    cls,
    folder: str | Path,
    kind: str | DatasetFolderSourceType,
    name: str | Resource = "",
    auto_label: bool = True,
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • folder: (str|Path, required)
      • The folder path from which you would like to create this dataset.
    • kind: (str|DatasetFolderSourceType, required)
      • The dataset source type you would like to use, the choices are: image, video and audio.
      • Recursively searching for files of the specified kind in folder. Other file types will be ignored.
    • name: (str|Resource, optional)
      • The dataset name you would like to use.
      • If not specified, the name is the folder name.
    • auto_label: (bool, optional)
      • Whether to auto label by the sub-folder name.
      • The default value is True.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    • Example for the normal function calling

      from starwhale import Dataset

      # create a my-image-dataset dataset from /path/to/image folder.
      ds = Dataset.from_folder(
      folder="/path/to/image",
      kind="image",
      name="my-image-dataset"
      )
    • Example for caption

      folder/dog/1.png
      folder/dog/1.txt

      1.txt content will be used as the caption of 1.png.

    • Example for metadata

      metadata.csv:

      file_name, caption
      1.png, dog
      2.png, cat

      metadata.jsonl:

      {"file_name": "1.png", "caption": "dog"}
      {"file_name": "2.png", "caption": "cat"}
    • Example for auto-labeling

      The following structure will create a dataset with 2 labels: "cat" and "dog", 4 images in total.

      folder/dog/1.png
      folder/cat/2.png
      folder/dog/3.png
      folder/cat/4.png

    __iter__

    __iter__ is a method that iterates over the dataset rows.

    from starwhale import dataset

    ds = dataset("mnist")

    for item in ds:
        print(item.index)
        print(item.features.label) # label and img are the features of mnist.
        print(item.features.img)

    batch_iter

    batch_iter is a method that iterates over the dataset rows in batches.

    def batch_iter(
    self, batch_size: int = 1, drop_not_full: bool = False
    ) -> t.Iterator[t.List[DataRow]]:

    Parameters

    • batch_size: (int, optional)
      • batch size. The default value is 1.
    • drop_not_full: (bool, optional)
      • Whether to discard the last batch of data when its size is smaller than batch_size.
      • The default value is False.

    Examples

    from starwhale import dataset

    ds = dataset("mnist")
    for batch_rows in ds.batch_iter(batch_size=2):
        assert len(batch_rows) == 2
        print(batch_rows[0].features)

    __getitem__

    __getitem__ is a method that allows retrieving certain rows of data from the dataset, with usage similar to Python dict and list types.

    from starwhale import dataset

    ds = dataset("mock-int-index")

    # if the index type is string
    ds["str_key"] # get the DataRow by the "str_key" string key
    ds["start":"end"] # get a slice of the dataset by the range ("start", "end")

    ds = dataset("mock-str-index")
    # if the index type is int
    ds[1] # get the DataRow by the 1 int key
    ds[1:10:2] # get a slice of the dataset by the range (1, 10), step is 2

    __setitem__

    __setitem__ is a method that allows updating rows of data in the dataset, with usage similar to Python dicts. __setitem__ supports multi-threaded parallel data insertion.

    def __setitem__(
    self, key: t.Union[str, int], value: t.Union[DataRow, t.Tuple, t.Dict]
    ) -> None:

    Parameters

    • key: (int|str, required)
      • key is the index for each row in the dataset. The type is int or str, but a dataset only accepts one type.
    • value: (DataRow|tuple|dict, required)
      • value is the features for each row in the dataset, using a Python dict is generally recommended.

    Examples

    • Normal insertion

    Insert two rows into the test dataset, with index test and test2 respectively:

    from starwhale import dataset

    with dataset("test") as ds:
    ds["test"] = {"txt": "abc", "int": 1}
    ds["test2"] = {"txt": "bcd", "int": 2}
    ds.commit()
    • Parallel insertion
    from starwhale import dataset, Binary
    from concurrent.futures import as_completed, ThreadPoolExecutor

    ds = dataset("test")

    def _do_append(_start: int) -> None:
        for i in range(_start, 100):
            ds.append((i, {"data": Binary(), "label": i}))

    pool = ThreadPoolExecutor(max_workers=10)
    tasks = [pool.submit(_do_append, i * 10) for i in range(0, 9)]
    for task in as_completed(tasks):
        task.result()  # wait for all insert threads to finish before committing

    ds.commit()
    ds.close()

    __delitem__

    __delitem__ is a method to delete certain rows of data from the dataset.

    def __delitem__(self, key: _ItemType) -> None:
    from starwhale import dataset

    ds = dataset("existed-ds")
    del ds[6:9]
    del ds[0]
    ds.commit()
    ds.close()

    append

    append is a method to append data to a dataset, similar to the append method for Python lists.

    • Adding features dict, each row is automatically indexed with int starting from 0 and incrementing.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
      for i in range(0, 100):
      ds.append({"label": i, "image": Image(f"folder/{i}.png")})
      ds.commit()
    • By appending the index and features dictionary, the index of each data row in the dataset will not be handled automatically.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
          for i in range(0, 100):
              ds.append((f"index-{i}", {"label": i, "image": Image(f"folder/{i}.png")}))

          ds.commit()

    extend

    extend is a method to bulk append data to a dataset, similar to the extend method for Python lists.

    from starwhale import dataset, Text

    ds = dataset("new-ds")
    ds.extend([
    (f"label-{i}", {"text": Text(), "label": i}) for i in range(0, 10)
    ])
    ds.commit()
    ds.close()

    commit

    commit is a method that flushes the current cached data to storage when called, and generates a dataset version. This version can then be used to load the corresponding dataset content afterwards.

    For a dataset, if some data is added without calling commit, but close is called or the process exits directly instead, the data will still be written to the dataset, just without generating a new version.

    @_check_readonly
    def commit(
    self,
    tags: t.Optional[t.List[str]] = None,
    message: str = "",
    force_add_tags: bool = False,
    ignore_add_tags_errors: bool = False,
    ) -> str:

    Parameters

    • tags: (list(str), optional)
      • tag as a list
    • message: (str, optional)
      • commit message. The default value is empty.
    • force_add_tags: (bool, optional)
      • For server/cloud instances, when adding tags to this version, if a tag has already been applied to other dataset versions, you can use the force_add_tags=True parameter to forcibly add the tag to this version, otherwise an exception will be thrown.
      • The default is False.
    • ignore_add_tags_errors: (bool, optional)
      • Ignore any exceptions thrown when adding tags.
      • The default is False.

    Examples

    from starwhale import dataset
    with dataset("mnist") as ds:
    ds.append({"label": 1})
    ds.commit(message="init commit")

    readonly

    readonly is a property attribute indicating if the dataset is read-only, it returns a bool value.

    from starwhale import dataset
    ds = dataset("mnist", readonly=True)
    assert ds.readonly

    loading_version

    loading_version is a property attribute, string type.

    • When loading an existing dataset, the loading_version is the related dataset version.
    • When creating a non-existed dataset, the loading_version is equal to the pending_commit_version.

    pending_commit_version

    pending_commit_version is a property attribute, string type. When you call the commit function, the pending_commit_version will be recorded in the Standalone, Server, or Cloud instance.

    committed_version

    committed_version is a property attribute, string type. After the commit function is called, the committed_version becomes available and is equal to the pending_commit_version. Accessing this attribute without calling commit first will raise an exception.
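
    A small sketch of how these version attributes relate to each other (assuming commit returns the new version string, per the signature above):

    from starwhale import dataset

    ds = dataset("mnist")
    ds.append({"label": 1})
    print(ds.loading_version)          # version being loaded, or the pending one for a new dataset
    print(ds.pending_commit_version)   # version that the next commit will create
    version = ds.commit()
    assert version == ds.committed_version
    ds.close()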

    remove

    remove is a method equivalent to the swcli dataset remove command, it can delete a dataset.

    def remove(self, force: bool = False) -> None:

    recover

    recover is a method equivalent to the swcli dataset recover command, it can recover a soft-deleted dataset that has not yet been garbage collected.

    def recover(self, force: bool = False) -> None:

    summary

    summary is a method equivalent to the swcli dataset summary command, it returns summary information of the dataset.

    def summary(self) -> t.Optional[DatasetSummary]:

    history

    history is a method equivalent to the swcli dataset history command, it returns the history records of the dataset.

    def history(self) -> t.List[t.Dict]:
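
    A brief usage sketch of these management methods (assuming an existing mnist dataset and that garbage collection has not run yet):

    from starwhale import dataset

    ds = dataset("mnist")
    print(ds.summary())   # summary information about the dataset
    print(ds.history())   # list of historical version records
    ds.remove()           # soft-delete the dataset
    ds.recover()          # restore it, since it has not been garbage collected yet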

    flush

    flush is a method that flushes temporarily cached data from memory to persistent storage. The commit and close methods will automatically call flush.
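
    A minimal sketch of flushing cached rows without creating a version:

    from starwhale import dataset

    ds = dataset("mnist")
    ds.append({"label": 1})
    ds.flush()    # persist cached rows to storage without generating a new version
    ds.commit()   # commit still creates the version as usual
    ds.close()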

    close

    close is a method that closes opened connections related to the dataset. Dataset also implements contextmanager, so datasets can be automatically closed using with syntax without needing to explicitly call close.

    from starwhale import dataset

    ds = dataset("mnist")
    ds.close()

    with dataset("mnist") as ds:
    print(ds[0])

    head

    head is a method to show the first n rows of a dataset, equivalent to the swcli dataset head command.

    def head(self, n: int = 5, skip_fetch_data: bool = False) -> List[DataRow]:

    fetch_one

    fetch_one is a method to get the first record in a dataset, similar to head(n=1)[0].
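
    A short usage sketch for head and fetch_one (assuming the mnist dataset exists locally):

    from starwhale import dataset

    ds = dataset("mnist")
    first_two = ds.head(n=2)   # the first two DataRow objects
    first = ds.fetch_one()     # same as ds.head(n=1)[0]
    print(first.index, first.features)
    ds.close()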

    list

    list is a class method to list Starwhale datasets under a project URI, equivalent to the swcli dataset list command.

    @classmethod
    def list(
    cls,
    project_uri: Union[str, Project] = "",
    fullname: bool = False,
    show_removed: bool = False,
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[DatasetListType, Dict[str, Any]]:
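
    A hedged sketch of listing datasets in the local default project ("self"); the exact shape of the returned items is an SDK detail:

    from starwhale import Dataset

    datasets, pagination = Dataset.list("self")
    for item in datasets:
        print(item)
    print(pagination)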

    copy

    copy is a method to copy a dataset to another instance, equivalent to the swcli dataset copy command.

    def copy(
    self,
    dest_uri: str,
    dest_local_project_uri: str = "",
    force: bool = False,
    mode: str = DatasetChangeMode.PATCH.value,
    ignore_tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • dest_uri: (str, required)
      • Dataset URI
    • dest_local_project_uri: (str, optional)
      • When copying a remote dataset to the local instance, this parameter sets the destination project URI.
    • force: (bool, optional)
      • Whether to forcibly overwrite the dataset if there is already one with the same version on the target instance.
      • The default value is False.
      • When the tags are already used by another dataset version in the destination instance, you should use the force option or adjust the tags.
    • mode: (str, optional)
      • Dataset copy mode, default is 'patch'. Mode choices are: 'patch', 'overwrite'.
      • patch: Patch mode, only update the changed rows and columns for the remote dataset.
      • overwrite: Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • ignore_tags (List[str], optional)
      • Ignore tags when copying.
      • By default, the dataset is copied with all user custom tags.
      • latest and ^v\d+$ are the system builtin tags, they are ignored automatically.

    Examples

    from starwhale import dataset
    ds = dataset("mnist")
    ds.copy("cloud://remote-instance/project/starwhale")

    to_pytorch

    to_pytorch is a method that can convert a Starwhale dataset to a Pytorch torch.utils.data.Dataset, which can then be passed to torch.utils.data.DataLoader for use.

    It should be noted that the to_pytorch function returns a Pytorch IterableDataset.

    def to_pytorch(
    self,
    transform: t.Optional[t.Callable] = None,
    drop_index: bool = True,
    skip_default_transform: bool = False,
    ) -> torch.utils.data.Dataset:

    Parameters

    • transform: (callable, optional)
      • A transform function for input data.
    • drop_index: (bool, optional)
      • Whether to drop the index column.
    • skip_default_transform: (bool, optional)
      • If transform is not set, by default the built-in Starwhale transform function will be used to transform the data. This can be disabled with the skip_default_transform parameter.

    Examples

    import torch.utils.data as tdata
    from starwhale import dataset

    ds = dataset("mnist")

    torch_ds = ds.to_pytorch()
    torch_loader = tdata.DataLoader(torch_ds, batch_size=2)
    import typing as t

    import torch
    import torch.utils.data as tdata
    from starwhale import dataset, Text

    with dataset("mnist") as ds:
        for i in range(0, 10):
            ds.append({"txt": Text(f"data-{i}"), "label": i})

        ds.commit()

    def _custom_transform(data: t.Any) -> t.Any:
        data = data.copy()
        txt = data["txt"].to_str()
        data["txt"] = f"custom-{txt}"
        return data

    torch_loader = tdata.DataLoader(
        dataset(ds.uri).to_pytorch(transform=_custom_transform), batch_size=1
    )
    item = next(iter(torch_loader))
    assert isinstance(item["label"], torch.Tensor)
    assert item["txt"][0] in ("custom-data-0", "custom-data-1")

    to_tensorflow

    to_tensorflow is a method that can convert a Starwhale dataset to a Tensorflow tensorflow.data.Dataset.

    def to_tensorflow(self, drop_index: bool = True) -> tensorflow.data.Dataset:

    Parameters

    • drop_index: (bool, optional)
      • Whether to drop the index column.

    Examples

    from starwhale import dataset
    import tensorflow as tf

    ds = dataset("mnist")
    tf_ds = ds.to_tensorflow(drop_index=True)
    assert isinstance(tf_ds, tf.data.Dataset)

    with_builder_blob_config

    with_builder_blob_config is a method to set blob-related attributes in a Starwhale dataset. It needs to be called before making data changes.

    def with_builder_blob_config(
    self,
    volume_size: int | str | None = D_FILE_VOLUME_SIZE,
    alignment_size: int | str | None = D_ALIGNMENT_SIZE,
    ) -> Dataset:

    Parameters

    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.

    Examples

    from starwhale import dataset, Binary

    ds = dataset("mnist").with_builder_blob_config(volume_size="32M", alignment_size=128)
    ds.append({"data": Binary(b"123")})
    ds.commit()
    ds.close()

    with_loader_config

    with_loader_config is a method to set parameters for the Starwhale dataset loader process.

    def with_loader_config(
    self,
    num_workers: t.Optional[int] = None,
    cache_size: t.Optional[int] = None,
    field_transformer: t.Optional[t.Dict] = None,
    ) -> Dataset:

    Parameters

    • num_workers: (int, optional)
      • The number of workers for loading the dataset.
      • The default value is 2.
    • cache_size: (int, optional)
      • Prefetched data rows.
      • The default value is 20.
    • field_transformer: (dict, optional)
      • A dict for renaming dataset feature names.

    Examples

    from starwhale import Dataset, dataset
    Dataset.from_json(
    "translation",
    '[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    myds = dataset("translation").with_loader_config(field_transformer={"en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["en"]
    from starwhale import Dataset, dataset
    Dataset.from_json(
    "translation2",
    '[{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}]'
    )
    myds = dataset("translation2").with_loader_config(field_transformer={"content.child_content[0].en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["content"]["child_content"][0]["en"]
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/evaluation/index.html b/0.6.4/reference/sdk/evaluation/index.html index d70af8e11..ea4458511 100644 --- a/0.6.4/reference/sdk/evaluation/index.html +++ b/0.6.4/reference/sdk/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Starwhale Model Evaluation SDK

    @evaluation.predict

    The @evaluation.predict decorator defines the inference process in the Starwhale Model Evaluation, similar to the map phase in MapReduce. It contains the following core features:

    • On the Server instance, request the resources needed to run.
    • Automatically read the local or remote datasets, and pass the data in the datasets one by one or in batches to the function decorated by evaluation.predict.
    • Through the replicas setting, distribute dataset consumption across multiple replicas to scale horizontally and shorten the time required for the model evaluation tasks.
    • Automatically store the return values of the function and the input features of the dataset into the results table, for display in the Web UI and further use in the evaluate phase.
    • The decorated function is called once for each single piece of data or each batch, to complete the inference process.

    Parameters

    • resources: (dict, optional)
      • Defines the resources required by each predict task when running on the Server instance, including memory, cpu, and nvidia.com/gpu.
      • memory: The unit is Bytes, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"memory": {"request": 100 * 1024, "limit": 200 * 1024}}.
        • If only a single number is set, the Python SDK will automatically set request and limit to the same value, e.g. resources={"memory": 100 * 1024} is equivalent to resources={"memory": {"request": 100 * 1024, "limit": 100 * 1024}}.
      • cpu: The unit is the number of CPU cores, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"cpu": {"request": 1, "limit": 2}}.
        • If only a single number is set, the SDK will automatically set request and limit to the same value, e.g. resources={"cpu": 1.5} is equivalent to resources={"cpu": {"request": 1.5, "limit": 1.5}}.
      • nvidia.com/gpu: The unit is the number of GPUs, int type is supported.
        • nvidia.com/gpu does not support setting request and limit, only a single number is supported.
      • Note: The resources parameter currently only takes effect on the Server instances. For the Cloud instances, the same can be achieved by selecting the corresponding resource pool when submitting the evaluation task. Standalone instances do not support this feature at all.
    • replicas: (int, optional)
      • The number of replicas to run predict.
      • predict defines a Step, in which there are multiple equivalent Tasks. Each Task runs on a Pod in Cloud/Server instances, and a Thread in Standalone instances.
      • When multiple replicas are specified, they are equivalent and will jointly consume the selected dataset to achieve distributed dataset consumption. It can be understood that a row in the dataset will only be read by one predict replica.
      • The default is 1.
    • batch_size: (int, optional)
      • Batch size for passing data from the dataset into the function.
      • The default is 1.
    • fail_on_error: (bool, optional)
      • Whether to interrupt the entire model evaluation when the decorated function throws an exception. If you expect some "exceptional" data to cause evaluation failures but don't want to interrupt the overall evaluation, you can set fail_on_error=False.
      • The default is True.
    • auto_log: (bool, optional)
      • Whether to automatically log the return values of the function and the input features of the dataset to the results table.
      • The default is True.
    • log_mode: (str, optional)
      • When auto_log=True, you can set log_mode to define logging the return values in plain or pickle format.
      • The default is pickle.
    • log_dataset_features: (List[str], optional)
      • When auto_log=True, you can selectively log certain features from the dataset via this parameter.
      • By default, all features will be logged.
    • needs: (List[Callable], optional)
      • Defines the prerequisites for this task to run, can use the needs syntax to implement DAG.
      • needs accepts functions decorated by @evaluation.predict, @evaluation.evaluate, and @handler.
      • The default is empty, i.e. does not depend on any other tasks.

    Input

    The decorated function needs to define input parameters to accept the dataset data. The following patterns are supported:

    • data:

      • data is a dict type that can read the features of the dataset.
      • When batch_size=1 or batch_size is not set, the label feature can be read through data['label'] or data.label.
      • When batch_size is set to > 1, data is a list.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data):
          print(data['label'])
          print(data.label)
    • data + external:

      • data is a dict type that can read the features of the dataset.
      • external is also a dict, including: index, index_with_dataset, dataset_info, context and dataset_uri keys. The attributes can be used for the further fine-grained processing.
        • index: The index of the dataset row.
        • index_with_dataset: The index with the dataset info.
        • dataset_info: starwhale.core.dataset.tabular.TabularDatasetInfo Class.
        • context: starwhale.Context Class.
        • dataset_uri: starwhale.base.uri.resource.Resource Class.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, external):
          print(data['label'])
          print(data.label)
          print(external["context"])
          print(external["dataset_uri"])
    • data + **kw:

      • data is a dict type that can read the features of the dataset.
      • kw is a dict that contains external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, **kw):
          print(kw["external"]["context"])
          print(kw["external"]["dataset_uri"])
    • *args + **kwargs:

      • The first argument of args list is data.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args, **kw):
          print(args[0].label)
          print(args[0]["label"])
          print(kw["external"]["context"])
    • **kwargs:

      from starwhale import evaluation

      @evaluation.predict
      def predict(**kw):
          print(kw["data"].label)
          print(kw["data"]["label"])
          print(kw["external"]["context"])
    • *args:

      • *args does not contain external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args):
          print(args[0].label)
          print(args[0]["label"])

    Examples

    from starwhale import evaluation

    @evaluation.predict
    def predict_image(data):
        ...

    @evaluation.predict(
        dataset="mnist/version/latest",
        batch_size=32,
        replicas=4,
        needs=[predict_image],
    )
    def predict_batch_images(batch_data):
        ...

    @evaluation.predict(
        resources={
            "nvidia.com/gpu": 1,
            "cpu": {"request": 1, "limit": 2},
            "memory": 200 * 1024,  # 200MB
        },
        log_mode="plain",
    )
    def predict_with_resources(data):
        ...

    @evaluation.predict(
        replicas=1,
        log_mode="plain",
        log_dataset_features=["txt", "img", "label"],
    )
    def predict_with_selected_features(data):
        ...

    @evaluation.evaluate

    @evaluation.evaluate is a decorator that defines the evaluation process in the Starwhale Model evaluation, similar to the reduce phase in MapReduce. It contains the following core features:

    • On the Server instance, request the resources needed to run.
    • Read the data recorded in the results table automatically during the predict phase, and pass it into the function as an iterator.
    • The evaluate phase will only run one replica, and cannot define the replicas parameter like the predict phase.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
      • In the common case, it will depend on a function decorated by @evaluation.predict.
    • use_predict_auto_log: (bool, optional)
      • Defaults to True, passes an iterator that can traverse the predict results to the function.

    Input

    • When use_predict_auto_log=True (default), pass an iterator that can traverse the predict results into the function.
      • The iterated object is a dictionary containing two keys: output and input.
        • output is the element returned by the predict stage function.
        • input is the features of the corresponding dataset during the inference process, which is a dictionary type.
    • When use_predict_auto_log=False, do not pass any parameters into the function.

    Examples

    from starwhale import evaluation

    @evaluation.evaluate(needs=[predict_image])
    def evaluate_results(predict_result_iter):
        ...

    @evaluation.evaluate(
        use_predict_auto_log=False,
        needs=[predict_image],
    )
    def evaluate_results():
        ...
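
    Building on the Input description above, the following sketch shows one way the predict-result iterator could be consumed when use_predict_auto_log=True; the "label" feature name and the accuracy metric are illustrative only:

    from starwhale import Evaluation, evaluation

    @evaluation.evaluate(needs=[predict_image])
    def evaluate_accuracy(predict_result_iter):
        total, correct = 0, 0
        for item in predict_result_iter:
            # each item is a dict with "output" (the predict return value) and "input" (the dataset features)
            total += 1
            if item["output"] == item["input"]["label"]:
                correct += 1
        # log_summary is documented in the Evaluation class below
        Evaluation.from_context().log_summary(accuracy=correct / max(total, 1))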

    class Evaluation

    starwhale.Evaluation implements the abstraction for Starwhale Model Evaluation, and can perform operations like logging and scanning for Model Evaluation on Standalone/Server/Cloud instances, to record and retrieve metrics.

    __init__

    __init__ function initializes Evaluation object.

    class Evaluation:
        def __init__(self, id: str, project: Project | str) -> None:

    Parameters

    • id: (str, required)
      • The UUID of Model Evaluation that is generated by Starwhale automatically.
    • project: (Project|str, required)
      • Project object or Project URI str.

    Example

    from starwhale import Evaluation

    standalone_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="self")
    server_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="cloud://server/project/starwhale:starwhale")
    cloud_e = Evaluation("2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/project/starwhale:llm-leaderboard")

    from_context

    from_context is a classmethod that obtains the Evaluation object under the current Context. from_context can only take effect under the task runtime environment. Calling this method in a non-task runtime environment will raise a RuntimeError exception, indicating that the Starwhale Context has not been properly set.

    @classmethod
    def from_context(cls) -> Evaluation:

    Example

    from starwhale import Evaluation

    with Evaluation.from_context() as e:
        e.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})

    log

    log is a method that logs evaluation metrics to a specific table, which can then be viewed on the Server/Cloud instance's web page or through the scan method.

    def log(
    self, category: str, id: t.Union[str, int], metrics: t.Dict[str, t.Any]
    ) -> None:

    Parameters

    • category: (str, required)
      • The category of the logged metrics, which will be used as the suffix of the Starwhale Datastore table name.
      • Each category corresponds to a Starwhale Datastore table. These tables will be isolated by the evaluation task ID and will not affect each other.
    • id: (str|int, required)
      • The ID of the logged record, unique within the table.
      • For the same table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • A dict to log metrics in key-value format.
      • Keys are of str type.
      • Values can be constant types like int, float, str, bytes, bool, or compound types like tuple, list, dict. It also supports logging Artifact types like starwhale.Image, starwhale.Video, starwhale.Audio, starwhale.Text and starwhale.Binary.
        • When the value contains dict type, the Starwhale SDK will automatically flatten the dict for better visualization and metric comparison.
        • For example, if metrics is {"test": {"loss": 0.99, "prob": [0.98,0.99]}, "image": [Image, Image]}, it will be stored as {"test/loss": 0.99, "test/prob": [0.98, 0.99], "image/0": Image, "image/1": Image} after flattening.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation.from_context()

    evaluation_store.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log("ppl", "1", {"a": "test", "b": 1})

    scan

    scan is a method that returns an iterator for reading data from certain model evaluation tables.

    def scan(
    self,
    category: str,
    start: t.Any = None,
    end: t.Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
    ) -> t.Iterator:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    results = [data for data in evaluation_store.scan("label/0")]
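
    The optional range parameters can be combined as in the following sketch (the category name and key range are illustrative only):

    from starwhale import Evaluation

    e = Evaluation.from_context()
    # iterate rows with ids from 1 up to and including 100, keeping None-valued columns
    for row in e.scan("label/0", start=1, end=100, keep_none=True, end_inclusive=True):
        print(row)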

    flush

    flush is a method that can immediately flush the metrics logged by the log method to the datastore and oss storage. If the flush method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush(self, category: str, artifacts_flush: bool = True) -> None

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
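
    Example

    A minimal sketch of an explicit flush; the category and metric names are illustrative only:

    from starwhale import Evaluation

    e = Evaluation.from_context()
    e.log("label/1", 1, {"loss": 0.99})
    # persist the "label/1" table now instead of waiting for close
    e.flush("label/1")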

    log_result

    log_result is a method that logs evaluation metrics to the results table, equivalent to calling the log method with category set to results. The results table is generally used to store inference results. By default, @evaluation.predict stores the return value of the decorated function in the results table; you can also store results manually with log_result.

    def log_result(self, id: t.Union[str, int], metrics: t.Dict[str, t.Any]) -> None:

    Parameters

    • id: (str|int, required)
      • The ID of the record, unique within the results table.
      • For the results table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • Same definition as the metrics parameter in the log method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})

    scan_results

    scan_results is a method that returns an iterator for reading data from the results table.

    def scan_results(
    self,
    start: t.Any = None,
    end: t.Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
    ) -> t.Iterator:

    Parameters

    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
      • Same definition as the start parameter in the scan method.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
      • Same definition as the end parameter in the scan method.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
      • Same definition as the keep_none parameter in the scan method.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.
      • Same definition as the end_inclusive parameter in the scan method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")

    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})
    results = [data for data in evaluation_store.scan_results()]

    flush_results

    flush_results is a method that can immediately flush the metrics logged by the log_result method to the datastore and oss storage. If the flush_results method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_results(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    log_summary

    log_summary is a method that logs certain metrics to the summary table. The evaluation page on Server/Cloud instances displays data from the summary table.

    Each time it is called, Starwhale will automatically update with the unique ID of this evaluation as the row ID of the table. This function can be called multiple times during one evaluation to update different columns.

    Each project has one summary table. All evaluation tasks under that project will write summary information to this table for easy comparison between evaluations of different models.

    def log_summary(self, *args: t.Any, **kw: t.Any) -> None:

    Like the log method, log_summary automatically flattens the dict.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")

    evaluation_store.log_summary(loss=0.99)
    evaluation_store.log_summary(loss=0.99, accuracy=0.99)
    evaluation_store.log_summary({"loss": 0.99, "accuracy": 0.99})

    get_summary

    get_summary is a method that returns the information logged by log_summary.

    def get_summary(self) -> t.Dict:
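
    Example

    A minimal sketch, assuming an evaluation context is available:

    from starwhale import Evaluation

    e = Evaluation.from_context()
    e.log_summary(loss=0.99, accuracy=0.98)
    # returns the metrics written to the summary table, e.g. {"loss": 0.99, "accuracy": 0.98, ...}
    print(e.get_summary())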

    flush_summary

    flush_summary is a method that can immediately flush the metrics logged by the log_summary method to the datastore and oss storage. If the flush_summary method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_summary(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    flush_all

    flush_all is a method that can immediately flush the metrics logged by the log, log_result and log_summary methods to the datastore and oss storage. If the flush_all method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_all(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.
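
    Example

    A minimal sketch; the category and metric names are illustrative only:

    from starwhale import Evaluation

    e = Evaluation.from_context()
    e.log("label/1", 1, {"loss": 0.99})
    e.log_result(1, {"pred": 1})
    e.log_summary(accuracy=0.98)
    # persist everything logged so far in one call
    e.flush_all()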

    get_tables

    get_tables is a method that returns the names of all tables generated during model evaluation. Note that this function does not return the summary table name.

    def get_tables(self) -> t.List[str]:
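
    Example

    A minimal sketch; the logged categories are illustrative only:

    from starwhale import Evaluation

    e = Evaluation.from_context()
    e.log("label/1", 1, {"loss": 0.99})
    e.log_result(1, {"pred": 1})
    # prints the evaluation table names; the summary table is not included
    print(e.get_tables())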

    close

    close is a method to close the Evaluation object. close will automatically flush data to storage when called. Evaluation also implements __enter__ and __exit__ methods, which can simplify manual close calls using with syntax.

    def close(self) -> None:

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    evaluation_store.log_summary(loss=0.99)
    evaluation_store.close()

    # auto close when the with-context exits.
    with Evaluation.from_context() as e:
        e.log_summary(loss=0.99)

    @handler

    @handler is a decorator that provides the following functionalities:

    • On a Server instance, it requests the required resources to run.
    • It can control the number of replicas.
    • Multiple handlers can form a DAG through dependency relationships to control the execution workflow.
    • It can expose ports externally to run like a web handler.

    @fine_tune, @evaluation.predict and @evaluation.evaluate can be considered applications of @handler in certain specific areas. @handler is the underlying implementation of these decorators and is more fundamental and flexible.

    @classmethod
    def handler(
    cls,
    resources: t.Optional[t.Dict[str, t.Any]] = None,
    replicas: int = 1,
    needs: t.Optional[t.List[t.Callable]] = None,
    name: str = "",
    expose: int = 0,
    require_dataset: bool = False,
    ) -> t.Callable:

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
    • replicas: (int, optional)
      • Consistent with the replicas parameter definition in @evaluation.predict.
    • name: (str, optional)
      • The name displayed for the handler.
      • If not specified, use the decorated function's name.
    • expose: (int, optional)
      • The port exposed externally. When running a web handler, the exposed port needs to be declared.
      • The default is 0, meaning no port is exposed.
      • Currently only one port can be exposed.
    • require_dataset: (bool, optional)
      • Defines whether this handler requires a dataset when running.
      • If require_dataset=True, the user is required to input a dataset when creating an evaluation task on the Server/Cloud instance web page. If require_dataset=False, the user does not need to specify a dataset on the web page.
      • The default is False.

    Examples

    from starwhale import handler
    import gradio

    @handler(resources={"cpu": 1, "nvidia.com/gpu": 1}, replicas=3)
    def my_handler():
        ...

    @handler(needs=[my_handler])
    def my_another_handler():
        ...

    @handler(expose=7860)
    def chatbot():
        with gradio.Blocks() as server:
            ...
            server.launch(server_name="0.0.0.0", server_port=7860)

    @fine_tune

    fine_tune is a decorator that defines the fine-tuning process for model training.

    Some restrictions and usage suggestions:

    • fine_tune has only one replica.
    • fine_tune requires dataset input.
    • Generally, the dataset is obtained through Context.get_runtime_context() at the start of fine_tune.
    • Generally, at the end of fine_tune, the fine-tuned Starwhale model package is generated through starwhale.model.build, which will be automatically copied to the corresponding evaluation project.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.

    Examples

    from starwhale import dataset, fine_tune, Context
    from starwhale import model as starwhale_model

    @fine_tune(resources={"nvidia.com/gpu": 1})
    def llama_fine_tuning():
        ctx = Context.get_runtime_context()

        if len(ctx.dataset_uris) == 2:
            # TODO: use more graceful way to get train and eval dataset
            train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
            eval_dataset = dataset(ctx.dataset_uris[1], readonly=True, create="forbid")
        elif len(ctx.dataset_uris) == 1:
            train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
            eval_dataset = None
        else:
            raise ValueError("Only support 1 or 2 datasets(train and eval dataset) for now")

        # user training code
        train_llama(
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
        )

        model_name = get_model_name()
        starwhale_model.build(name=f"llama-{model_name}-qlora-ft")

    @multi_classification

    The @multi_classification decorator uses the sklearn lib to analyze results for multi-classification problems, outputting the confusion matrix, ROC, AUC etc., and writing them to related tables in the Starwhale Datastore.

    When using it, certain requirements are placed on the return value of the decorated function, which should be (label, result) or (label, result, probability_matrix).

    def multi_classification(
    confusion_matrix_normalize: str = "all",
    show_hamming_loss: bool = True,
    show_cohen_kappa_score: bool = True,
    show_roc_auc: bool = True,
    all_labels: t.Optional[t.List[t.Any]] = None,
    ) -> t.Any:

    Parameters

    • confusion_matrix_normalize: (str, optional)
      • Accepts three parameters:
        • true: rows
        • pred: columns
        • all: rows+columns
    • show_hamming_loss: (bool, optional)
      • Whether to calculate the Hamming loss.
      • The default is True.
    • show_cohen_kappa_score: (bool, optional)
      • Whether to calculate the Cohen kappa score.
      • The default is True.
    • show_roc_auc: (bool, optional)
      • Whether to calculate ROC/AUC. To calculate, the function needs to return a (label, result, probability_matrix) tuple, otherwise a (label, result) tuple is sufficient.
      • The default is True.
    • all_labels: (List, optional)
      • Defines all the labels.

    Examples


    import typing as t

    from starwhale import multi_classification

    @multi_classification(
        confusion_matrix_normalize="all",
        show_hamming_loss=True,
        show_cohen_kappa_score=True,
        show_roc_auc=True,
        all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
        label, result, probability_matrix = [], [], []
        return label, result, probability_matrix

    @multi_classification(
        confusion_matrix_normalize="all",
        show_hamming_loss=True,
        show_cohen_kappa_score=True,
        show_roc_auc=False,
        all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int]]:
        label, result = [], []
        return label, result

    PipelineHandler

    The PipelineHandler class provides a default model evaluation workflow definition that requires users to implement the predict and evaluate functions.

    The PipelineHandler is equivalent to using the @evaluation.predict and @evaluation.evaluate decorators together - the usage looks different but the underlying model evaluation process is the same.

    Note that PipelineHandler currently does not support defining resources parameters.

    Users need to implement the following functions:

    • predict: Defines the inference process, equivalent to a function decorated with @evaluation.predict.

    • evaluate: Defines the evaluation process, equivalent to a function decorated with @evaluation.evaluate.

    import typing as t
    from typing import Any, Iterator
    from abc import ABCMeta, abstractmethod

    class PipelineHandler(metaclass=ABCMeta):
        def __init__(
            self,
            predict_batch_size: int = 1,
            ignore_error: bool = False,
            predict_auto_log: bool = True,
            predict_log_mode: str = PredictLogMode.PICKLE.value,
            predict_log_dataset_features: t.Optional[t.List[str]] = None,
            **kwargs: t.Any,
        ) -> None:
            self.context = Context.get_runtime_context()
            ...

        def predict(self, data: Any, **kw: Any) -> Any:
            raise NotImplementedError

        def evaluate(self, ppl_result: Iterator) -> Any:
            raise NotImplementedError

    Parameters

    • predict_batch_size: (int, optional)
      • Equivalent to the batch_size parameter in @evaluation.predict.
      • Default is 1.
    • ignore_error: (bool, optional)
      • Equivalent to the fail_on_error parameter in @evaluation.predict.
      • Default is False.
    • predict_auto_log: (bool, optional)
      • Equivalent to the auto_log parameter in @evaluation.predict.
      • Default is True.
    • predict_log_mode: (str, optional)
      • Equivalent to the log_mode parameter in @evaluation.predict.
      • Default is pickle.
    • predict_log_dataset_features: (List[str], optional)
      • Equivalent to the log_dataset_features parameter in @evaluation.predict.
      • Default is None, which records all features.

    PipelineHandler.run Decorator

    The PipelineHandler.run decorator can be used to describe resources for the predict and evaluate methods, supporting definitions of replicas and resources:

    • The PipelineHandler.run decorator can only decorate predict and evaluate methods in subclasses inheriting from PipelineHandler.
    • The predict method can set the replicas parameter. The replicas value for the evaluate method is always 1.
    • The resources parameter is defined and used in the same way as the resources parameter in @evaluation.predict or @evaluation.evaluate.
    • The PipelineHandler.run decorator is optional.
    • The PipelineHandler.run decorator only takes effect on Server and Cloud instances; it has no effect on Standalone instances, which do not support resource definitions.

    @classmethod
    def run(
    cls, resources: t.Optional[t.Dict[str, t.Any]] = None, replicas: int = 1
    ) -> t.Callable:

    Examples

    import typing as t

    import torch
    from starwhale import Image, PipelineHandler

    class Example(PipelineHandler):
        def __init__(self) -> None:
            super().__init__()
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = self._load_model(self.device)

        @PipelineHandler.run(replicas=4, resources={"memory": 1 * 1024 * 1024 * 1024, "nvidia.com/gpu": 1})  # 1G Memory, 1 GPU
        def predict(self, data: t.Dict):
            data_tensor = self._pre(data.img)
            output = self.model(data_tensor)
            return self._post(output)

        @PipelineHandler.run(resources={"memory": 1 * 1024 * 1024 * 1024})  # 1G Memory
        def evaluate(self, ppl_result):
            result, label, pr = [], [], []
            for _data in ppl_result:
                label.append(_data["input"]["label"])
                result.extend(_data["output"][0])
                pr.extend(_data["output"][1])
            return label, result, pr

        def _pre(self, input: Image) -> torch.Tensor:
            ...

        def _post(self, input):
            ...

        def _load_model(self, device):
            ...

    Context

    The context information passed during model evaluation, including Project, Task ID, etc. The Context content is automatically injected and can be used in the following ways:

    • Inherit the PipelineHandler class and use the self.context object.
    • Get it through Context.get_runtime_context().

    Note that Context can only be used during model evaluation, otherwise the program will throw an exception.

    Currently Context can get the following values:

    • project: str
      • Project name.
    • version: str
      • Unique ID of model evaluation.
    • step: str
      • Step name.
    • total: int
      • Total number of Tasks under the Step.
    • index: int
      • Task index number, starting from 0.
    • dataset_uris: List[str]
      • List of Starwhale dataset URIs.

    Examples


    import typing as t

    from starwhale import Context, PipelineHandler

    def func():
        ctx = Context.get_runtime_context()
        print(ctx.project)
        print(ctx.version)
        print(ctx.step)
        ...

    class Example(PipelineHandler):
        def predict(self, data: t.Dict):
            print(self.context.project)
            print(self.context.version)
            print(self.context.step)

    @starwhale.api.service.api

    @starwhale.api.service.api is a decorator that provides a simple Web Handler input definition based on Gradio. When a Web Service is launched with the swcli model serve command, the decorated function accepts external requests and returns inference results to the user, enabling online evaluation.

    Examples

    import typing as t

    import gradio
    from starwhale import Image
    from starwhale.api.service import api

    def predict_image(img):
        ...

    @api(gradio.File(), gradio.Label())
    def predict_view(file: t.Any) -> t.Any:
        with open(file.name, "rb") as f:
            data = Image(f.read(), shape=(28, 28, 1))
        _, prob = predict_image({"img": data})
        return {i: p for i, p in enumerate(prob)}

    starwhale.api.service.Service

    If you want to customize the web service implementation, you can subclass Service and override the serve method.

    class CustomService(Service):
        def serve(self, addr: str, port: int, handler_list: t.List[str] = None) -> None:
            ...

    svc = CustomService()

    @svc.api(...)
    def handler(data):
        ...

    Notes:

    • Handlers added with PipelineHandler.add_api, the api decorator, or Service.api can work together.
    • If you use a custom Service, you need to instantiate the custom Service class in the model code.

    Custom Request and Response

    Request and Response are handler preprocessing and postprocessing classes for receiving user requests and returning results. They can be simply understood as pre and post logic for the handler.

    Starwhale provides built-in Request implementations for Dataset types and Json Response. Users can also customize the logic as follows:

    import typing as t

    from starwhale.api.service import (
        Request,
        Service,
        Response,
    )

    class CustomInput(Request):
        def load(self, req: t.Any) -> t.Any:
            return req

    class CustomOutput(Response):
        def __init__(self, prefix: str) -> None:
            self.prefix = prefix

        def dump(self, req: str) -> bytes:
            return f"{self.prefix} {req}".encode("utf-8")

    svc = Service()

    @svc.api(request=CustomInput(), response=CustomOutput("hello"))
    def foo(data: t.Any) -> t.Any:
        ...
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/job/index.html b/0.6.4/reference/sdk/job/index.html index 00c1e9bef..bb2a874ce 100644 --- a/0.6.4/reference/sdk/job/index.html +++ b/0.6.4/reference/sdk/job/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.4

    Starwhale Job SDK

    job

    Get a starwhale.Job object through the Job URI parameter, which represents a Job on Standalone/Server/Cloud instances.

    @classmethod
    def job(
    cls,
    uri: str,
    ) -> Job:

    Parameters

    • uri: (str, required)
      • Job URI format.

    Usage Example

    from starwhale import job

    # get job object of uri=https://server/job/1
    j1 = job("https://server/job/1")

    # get job from standalone instance
    j2 = job("local/project/self/job/xm5wnup")
    j3 = job("xm5wnup")

    class starwhale.Job

    starwhale.Job abstracts Starwhale Job and enables some information retrieval operations on the job.

    list

    list is a classmethod that can list the jobs under a project.

    @classmethod
    def list(
    cls,
    project: str = "",
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Job], Dict]:

    Parameters

    • project: (str, optional)
      • Project URI, can be projects on Standalone/Server/Cloud instances.
      • If project is not specified, the project selected by swcli project select will be used.
    • page_index: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the page number.
        • Default is 1.
        • Page numbers start from 1.
      • Standalone instances do not support paging. This parameter has no effect.
    • page_size: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the number of jobs returned per page.
        • Default is DEFAULT_PAGE_SIZE.
      • Standalone instances do not support paging. This parameter has no effect.

    Usage Example

    from starwhale import Job

    # list jobs of current selected project
    jobs, pagination_info = Job.list()

    # list jobs of starwhale/public project in the cloud.starwhale.cn instance
    jobs, pagination_info = Job.list("https://cloud.starwhale.cn/project/starwhale:public")

    # list jobs of id=1 project in the server instance, page index is 2, page size is 10
    jobs, pagination_info = Job.list("https://server/project/1", page_index=2, page_size=10)

    get

    get is a classmethod that gets information about a specific job and returns a starwhale.Job object. It has the same functionality and parameter definitions as the starwhale.job function.

    Usage Example

    from starwhale import Job

    # get job object of uri=https://server/job/1
    j1 = Job.get("https://server/job/1")

    # get job from standalone instance
    j2 = Job.get("local/project/self/job/xm5wnup")
    j3 = Job.get("xm5wnup")

    summary

    summary is a property that returns the data written to the summary table during the job execution, in dict type.

    @property
    def summary(self) -> Dict[str, Any]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.summary)

    tables

    tables is a property that returns the names of tables created during the job execution (not including the summary table, which is created automatically at the project level), in list type.

    @property
    def tables(self) -> List[str]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.tables)

    get_table_rows

    get_table_rows is a method that returns records from a data table according to the table name and other parameters, in iterator type.

    def get_table_rows(
    self,
    name: str,
    start: Any = None,
    end: Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
    ) -> Iterator[Dict[str, Any]]:

    Parameters

    • name: (str, required)
      • Datastore table name. Any of the table names obtained through the tables property can be used.
    • start: (Any, optional)
      • The starting ID value of the returned records.
      • Default is None, meaning start from the beginning of the table.
    • end: (Any, optional)
      • The ending ID value of the returned records.
      • Default is None, meaning until the end of the table.
      • If both start and end are None, all records in the table will be returned as an iterator.
    • keep_none: (bool, optional)
      • Whether to return records with None values.
      • Default is False.
    • end_inclusive: (bool, optional)
      • When end is set, whether the iteration includes the end record.
      • Default is False.

    Usage Example

    from starwhale import job

    j = job("local/project/self/job/xm5wnup")

    table_name = j.tables[0]

    for row in j.get_table_rows(table_name):
        print(row)

    rows = list(j.get_table_rows(table_name, start=0, end=100))

    # return the first record from the results table
    result = list(j.get_table_rows('results', start=0, end=1))[0]

    status

    status is a property that returns the current real-time state of the Job as a string. The possible states are CREATED, READY, PAUSED, RUNNING, CANCELLING, CANCELED, SUCCESS, FAIL, and UNKNOWN.

    @property
    def status(self) -> str:
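
    Usage Example

    A minimal polling sketch; the job URI and sleep interval are illustrative only:

    import time

    from starwhale import job

    j = job("https://server/job/1")
    # status reflects the real-time state, so it can be polled until the job finishes
    while j.status in ("CREATED", "READY", "RUNNING"):
        time.sleep(10)
    print(j.status)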

    create

    create is a classmethod that can create tasks on a Standalone instance or Server/Cloud instance, including tasks for Model Evaluation, Fine-tuning, Online Serving, and Developing. The function returns a Job object.

    • create determines which instance the generated task runs on through the project parameter, including Standalone and Server/Cloud instances.
    • On a Standalone instance, create creates a synchronously executed task.
    • On a Server/Cloud instance, create creates an asynchronously executed task.

    @classmethod
    def create(
    cls,
    project: Project | str,
    model: Resource | str,
    run_handler: str,
    datasets: t.List[str | Resource] | None = None,
    runtime: Resource | str | None = None,
    resource_pool: str = DEFAULT_RESOURCE_POOL,
    ttl: int = 0,
    dev_mode: bool = False,
    dev_mode_password: str = "",
    dataset_head: int = 0,
    overwrite_specs: t.Dict[str, t.Any] | None = None,
    ) -> Job:

    Parameters

    Parameters apply to all instances:

    • project: (Project|str, required)
      • A Project object or Project URI string.
    • model: (Resource|str, required)
      • Model URI string or Resource object of Model type, representing the Starwhale model package to run.
    • run_handler: (str, required)
      • The name of the runnable handler in the Starwhale model package, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
    • datasets: (List[str | Resource], optional)
      • Datasets consumed by the Starwhale model package when it runs; this parameter is optional.

    Parameters only effective for Standalone instances:

    • dataset_head: (int, optional)
      • Generally used for debugging scenarios, only uses the first N data in the dataset for the Starwhale model to consume.

    Parameters only effective for Server/Cloud instances:

    • runtime: (Resource | str, optional)
      • Runtime URI string or Resource object of Runtime type, representing the Starwhale runtime required to run the task.
      • When not specified, it will try to use the built-in runtime of the Starwhale model package.
      • When creating tasks under a Standalone instance, the Python interpreter environment used by the Python script is used as its own runtime. Specifying a runtime via the runtime parameter is not supported. If you need to specify a runtime, you can use the swcli model run command.
    • resource_pool: (str, optional)
      • Specify which resource pool the task runs in, default to the default resource pool.
    • ttl: (int, optional)
      • Maximum lifetime of the task, will be killed after timeout.
      • The unit is seconds.
      • By default, ttl is 0, meaning no timeout limit, and the task will run as expected.
      • When ttl is less than 0, it also means no timeout limit.
    • dev_mode: (bool, optional)
      • Whether to set debug mode. After turning on this mode, you can enter the related environment through VSCode Web.
      • Debug mode is off by default.
    • dev_mode_password: (str, optional)
      • Login password for VSCode Web in debug mode.
      • Default is empty, in which case the task's UUID will be used as the password, which can be obtained via job.info().job.uuid.
    • overwrite_specs: (Dict[str, Any], optional)
      • Support setting the replicas and resources fields of the handler.
      • If empty, use the values set in the corresponding handler of the model package.
      • The key of overwrite_specs is the name of the handler, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
      • The value of overwrite_specs is the set value, in dictionary format, supporting settings for replicas and resources, e.g. {"replicas": 1, "resources": {"memory": "1GiB"}}.

    Examples

    • create a Cloud Instance job
    from starwhale import Job

    project = "https://cloud.starwhale.cn/project/starwhale:public"
    job = Job.create(
        project=project,
        model=f"{project}/model/mnist/version/v0",
        run_handler="mnist.evaluator:MNISTInference.evaluate",
        datasets=[f"{project}/dataset/mnist/version/v0"],
        runtime=f"{project}/runtime/pytorch",
        overwrite_specs={"mnist.evaluator:MNISTInference.evaluate": {"resources": "4GiB"},
                         "mnist.evaluator:MNISTInference.predict": {"resources": "8GiB", "replicas": 10}},
    )
    print(job.status)

    • create a Standalone Instance job

    from starwhale import Job

    job = Job.create(
        project="self",
        model="mnist",
        run_handler="mnist.evaluator:MNISTInference.evaluate",
        datasets=["mnist"],
    )
    print(job.status)
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/model/index.html b/0.6.4/reference/sdk/model/index.html index e8e3d1166..349681173 100644 --- a/0.6.4/reference/sdk/model/index.html +++ b/0.6.4/reference/sdk/model/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.4

    Starwhale Model SDK

    model.build

    model.build is a function that can build the Starwhale model, equivalent to the swcli model build command.

    def build(
    modules: t.Optional[t.List[t.Any]] = None,
    workdir: t.Optional[_path_T] = None,
    name: t.Optional[str] = None,
    project_uri: str = "",
    desc: str = "",
    remote_project_uri: t.Optional[str] = None,
    add_all: bool = False,
    tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • modules: (List[str|object], optional)
      • The search modules support objects (function, class or module) or strings (e.g. "to.path.module", "to.path.module:object").
      • If the argument is not specified, the imported modules are used as the search modules.
    • name: (str, optional)
      • Starwhale Model name.
      • The default is the current work dir (cwd) name.
    • workdir: (str, Pathlib.Path, optional)
      • The path of the rootdir. The default workdir is the current working dir.
      • All files in the workdir will be packaged. If you want to ignore some files, you can add .swignore file in the workdir.
    • project_uri: (str, optional)
      • The project uri of the Starwhale Model.
      • If the argument is not specified, the project_uri is the config value of swcli project select command.
    • desc: (str, optional)
      • The description of the Starwhale Model.
    • remote_project_uri: (str, optional)
      • Project URI of a remote instance. After the Starwhale model is built, it will be automatically copied to the remote instance.
    • add_all: (bool, optional)
      • Add all files in the working directory to the model package. When disabled, Python cache files and virtual environment files are excluded. The .swignore file still takes effect.
      • The default value is False.
    • tags: (List[str], optional)
      • The tags for the model version.
      • latest and ^v\d+$ tags are reserved tags.

    Examples

    from starwhale import model

    # class search handlers
    from .user.code.evaluator import ExamplePipelineHandler
    model.build([ExamplePipelineHandler])

    # function search handlers
    from .user.code.evaluator import predict_image
    model.build([predict_image])

    # module handlers, @handler decorates function in this module
    from .user.code import evaluator
    model.build([evaluator])

    # str search handlers
    model.build(["user.code.evaluator:ExamplePipelineHandler"])
    model.build(["user.code1", "user.code2"])

    # no search handlers, use imported modules
    model.build()

    # add user custom tags
    model.build(tags=["t1", "t2"])
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/other/index.html b/0.6.4/reference/sdk/other/index.html index 4bbecd85c..7fce32b97 100644 --- a/0.6.4/reference/sdk/other/index.html +++ b/0.6.4/reference/sdk/other/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.4

    Other SDK

    __version__

    Version of Starwhale Python SDK and swcli, string constant.

    >>> from starwhale import __version__
    >>> print(__version__)
    0.5.7

    init_logger

    Initialize the Starwhale logger and traceback. The default verbose value is 0.

    • 0: show only errors, traceback only shows 1 frame.
    • 1: show errors + warnings, traceback shows 5 frames.
    • 2: show errors + warnings + info, traceback shows 10 frames.
    • 3: show errors + warnings + info + debug, traceback shows 100 frames.
    • >=4: show errors + warnings + info + debug + trace, traceback shows 1000 frames.
    def init_logger(verbose: int = 0) -> None:
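
    Examples

    A minimal usage sketch:

    from starwhale import init_logger

    # show errors + warnings + info + debug, traceback shows 100 frames
    init_logger(3)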

    login

    Log in to a Server/Cloud instance. It is equivalent to running the swcli instance login command. Logging in to a Standalone instance is meaningless.

    def login(
    instance: str,
    alias: str = "",
    username: str = "",
    password: str = "",
    token: str = "",
    ) -> None:

    Parameters

    • instance: (str, required)
      • The http url of the server/cloud instance.
    • alias: (str, optional)
      • An alias for the instance to simplify the instance part of the Starwhale URI.
      • If not specified, the hostname part of the instance http url will be used.
    • username: (str, optional)
    • password: (str, optional)
    • token: (str, optional)
      • You can only choose one of username + password or token to login to the instance.

    Examples

    from starwhale import login

    # login to Starwhale Cloud instance by token
    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")

    # login to Starwhale Server instance by username and password
    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")

    logout

    Log out of a Server/Cloud instance. It is equivalent to running the swcli instance logout command. Logging out of a Standalone instance is meaningless.

    def logout(instance: str) -> None:

    Examples

    from starwhale import login, logout

    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")
    # logout by the alias
    logout("cloud-cn")

    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")
    # logout by the instance http url
    logout("http://controller.starwhale.svc")
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/overview/index.html b/0.6.4/reference/sdk/overview/index.html index d54670283..09576b726 100644 --- a/0.6.4/reference/sdk/overview/index.html +++ b/0.6.4/reference/sdk/overview/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.4

    Python SDK Overview

    Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.

    Classes

    • PipelineHandler: Provides default model evaluation process definition, requires implementation of predict and evaluate methods.
    • Context: Passes context information during model evaluation, including Project, Task ID etc.
    • class Dataset: Starwhale Dataset class.
    • class starwhale.api.service.Service: The base class of online evaluation.
    • class Job: Starwhale Job class.
    • class Evaluation: Starwhale Evaluation class.

    Functions

    • @multi_classification: Decorator for multi-class problems to simplify evaluate result calculation and storage for better evaluation presentation.
    • @handler: Decorator to define a running entity with resource attributes (mem/cpu/gpu). You can control replica count. Handlers can form DAGs through dependencies to control execution flow.
    • @evaluation.predict: Decorator to define inference process in model evaluation, similar to map phase in MapReduce.
    • @evaluation.evaluate: Decorator to define evaluation process in model evaluation, similar to reduce phase in MapReduce.
    • model.build: Build Starwhale model.
    • @fine_tune: Decorator to define model fine-tuning process.
    • init_logger: Set log level, implement 5-level logging.
    • dataset: Get starwhale.Dataset object, by creating new datasets or loading existing datasets.
    • @starwhale.api.service.api: Decorator to provide a simple Web Handler input definition based on Gradio.
    • login: Log in to the server/cloud instance.
    • logout: Log out of the server/cloud instance.
    • job: Get starwhale.Job object by the Job URI.
    • @PipelineHandler.run: Decorator to define the resources for the predict and evaluate methods in PipelineHandler subclasses.

    Data Types

    • COCOObjectAnnotation: Provides COCO format definitions.
    • BoundingBox: Bounding box type, currently in LTWH format - left_x, top_y, width and height.
    • ClassLabel: Describes the number and types of labels.
    • Image: Image type.
    • GrayscaleImage: Grayscale image type, e.g. MNIST digit images, a special case of Image type.
    • Audio: Audio type.
    • Video: Video type.
    • Text: Text type, default utf-8 encoding, for storing large texts.
    • Binary: Binary type, stored in bytes, for storing large binary content.
    • Line: Line type.
    • Point: Point type.
    • Polygon: Polygon type.
    • Link: Link type, for creating remote-link data.
    • MIMEType: Describes multimedia types supported by Starwhale, used in mime_type attribute of Image, Video etc for better Dataset Viewer.

    Other

    • __version__: Version of Starwhale Python SDK and swcli, string constant.

    Further reading

    - - + + \ No newline at end of file diff --git a/0.6.4/reference/sdk/type/index.html b/0.6.4/reference/sdk/type/index.html index 9eb966597..4c02c9cc7 100644 --- a/0.6.4/reference/sdk/type/index.html +++ b/0.6.4/reference/sdk/type/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.4

    Starwhale Data Types

    COCOObjectAnnotation

    It provides definitions following the COCO format.

    COCOObjectAnnotation(
    id: int,
    image_id: int,
    category_id: int,
    segmentation: Union[t.List, t.Dict],
    area: Union[float, int],
    bbox: Union[BoundingBox, t.List[float]],
    iscrowd: int,
    )
    • id: Object id, usually a globally incrementing id.
    • image_id: Image id, usually the id of the image.
    • category_id: Category id, usually the id of the class in object detection.
    • segmentation: Object contour representation, Polygon (polygon vertices) or RLE format.
    • area: Object area.
    • bbox: The bounding box, can be a BoundingBox object or a list of floats.
    • iscrowd: 0 indicates a single object, 1 indicates two unseparated objects.

    Examples

    def _make_coco_annotations(
        self, mask_fpath: Path, image_id: int
    ) -> t.List[COCOObjectAnnotation]:
        mask_img = PILImage.open(str(mask_fpath))

        mask = np.array(mask_img)
        object_ids = np.unique(mask)[1:]
        binary_mask = mask == object_ids[:, None, None]
        # TODO: tune permute without pytorch
        binary_mask_tensor = torch.as_tensor(binary_mask, dtype=torch.uint8)
        binary_mask_tensor = (
            binary_mask_tensor.permute(0, 2, 1).contiguous().permute(0, 2, 1)
        )

        coco_annotations = []
        for i in range(0, len(object_ids)):
            _pos = np.where(binary_mask[i])
            _xmin, _ymin = float(np.min(_pos[1])), float(np.min(_pos[0]))
            _xmax, _ymax = float(np.max(_pos[1])), float(np.max(_pos[0]))
            _bbox = BoundingBox(
                x=_xmin, y=_ymin, width=_xmax - _xmin, height=_ymax - _ymin
            )

            rle: t.Dict = coco_mask.encode(binary_mask_tensor[i].numpy())  # type: ignore
            rle["counts"] = rle["counts"].decode("utf-8")

            coco_annotations.append(
                COCOObjectAnnotation(
                    id=self.object_id,
                    image_id=image_id,
                    category_id=1,  # PennFudan Dataset only has one class: PASPersonStanding
                    segmentation=rle,
                    area=_bbox.width * _bbox.height,
                    bbox=_bbox,
                    iscrowd=0,  # suppose all instances are not crowd
                )
            )
            self.object_id += 1

        return coco_annotations

    GrayscaleImage

    GrayscaleImage provides a grayscale image type. It is a special case of the Image type, for example the digit images in MNIST.

    GrayscaleImage(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
    • fp: Image path, IO object, or file content bytes.
    • display_name: Display name shown in Dataset Viewer.
    • shape: Image width and height; the default channel is 1.
    • as_mask: Whether the image is used as a mask image.
    • mask_uri: URI of the original image for the mask.

    Examples

    for i in range(0, min(data_number, label_number)):
        _data = data_file.read(image_size)
        _label = struct.unpack(">B", label_file.read(1))[0]
        yield GrayscaleImage(
            _data,
            display_name=f"{i}",
            shape=(height, width, 1),
        ), {"label": _label}

    GrayscaleImage Functions

    GrayscaleImage.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    GrayscaleImage.carry_raw_data

    carry_raw_data() -> GrayscaleImage

    GrayscaleImage.astype

    astype() -> Dict[str, t.Any]

    BoundingBox

    BoundingBox provides a bounding box type, currently in LTWH format:

    • left_x: x-coordinate of left edge
    • top_y: y-coordinate of top edge
    • width: width of bounding box
    • height: height of bounding box

    So it represents the bounding box using the coordinates of its left and top edges together with its width and height. This is a common format for specifying bounding boxes in computer vision tasks.

    BoundingBox(
    x: float,
    y: float,
    width: float,
    height: float
    )
    • x: x-coordinate of the left edge (left_x).
    • y: y-coordinate of the top edge (top_y).
    • width: Width of the bounding box.
    • height: Height of the bounding box.
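
    Examples

    A minimal sketch of attaching a BoundingBox to a dataset row; the dataset name, coordinates and feature names are illustrative only:

    from starwhale import dataset, BoundingBox

    with dataset("detection-demo") as ds:
        ds.append({
            "label": "person",
            "bbox": BoundingBox(x=10.0, y=20.0, width=50.0, height=80.0),
        })
        ds.commit()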

    ClassLabel

    Describe labels.

    ClassLabel(
    names: List[Union[int, float, str]]
    )
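
    Examples

    A minimal sketch; the label values are illustrative only:

    from starwhale import ClassLabel

    # numeric labels, e.g. the ten digit classes
    digits = ClassLabel(names=list(range(10)))
    # string labels are also accepted
    animals = ClassLabel(names=["cat", "dog"])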

    Image

    Image Type.

    Image(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    mime_type: Optional[MIMEType] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
    • fp: Image path, IO object, or file content bytes.
    • display_name: Display name shown in Dataset Viewer.
    • shape: Image width, height and channels.
    • mime_type: MIMEType supported types.
    • as_mask: Whether the image is used as a mask image.
    • mask_uri: URI of the original image for the mask.

    The main difference from GrayscaleImage is that Image supports multi-channel RGB images by specifying shape as (W, H, C).

    Examples

    import io
    import pickle
    import typing as t
    from pathlib import Path

    from PIL import Image as PILImage
    from starwhale import Image, MIMEType

    def _iter_item(paths: t.List[Path]) -> t.Generator[t.Tuple[t.Any, t.Dict], None, None]:
        for path in paths:
            with path.open("rb") as f:
                content = pickle.load(f, encoding="bytes")
                for data, label, filename in zip(
                    content[b"data"], content[b"labels"], content[b"filenames"]
                ):
                    annotations = {
                        "label": label,
                        "label_display_name": dataset_meta["label_names"][label],
                    }

                    image_array = data.reshape(3, 32, 32).transpose(1, 2, 0)
                    image_bytes = io.BytesIO()
                    PILImage.fromarray(image_array).save(image_bytes, format="PNG")

                    yield Image(
                        fp=image_bytes.getvalue(),
                        display_name=filename.decode(),
                        shape=image_array.shape,
                        mime_type=MIMEType.PNG,
                    ), annotations

    Image Functions

    Image.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Image.carry_raw_data

    carry_raw_data() -> Image

    Image.astype

    astype() -> Dict[str, t.Any]

    Video

    Video type.

    Video(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
    )
    • fp: Video path, IO object, or file content bytes.
    • display_name: Display name shown in Dataset Viewer.
    • mime_type: MIMEType supported types.

    Examples

    import typing as t
    from pathlib import Path

    from starwhale import Video, MIMEType

    root_dir = Path(__file__).parent.parent
    dataset_dir = root_dir / "data" / "UCF-101"
    test_ds_path = [root_dir / "data" / "test_list.txt"]

    def iter_ucf_item() -> t.Generator:
        for path in test_ds_path:
            with path.open() as f:
                for line in f.readlines():
                    _, label, video_sub_path = line.split()

                    data_path = dataset_dir / video_sub_path
                    data = Video(
                        data_path,
                        display_name=video_sub_path,
                        shape=(1,),
                        mime_type=MIMEType.WEBM,
                    )

                    yield f"{label}_{video_sub_path}", {
                        "video": data,
                        "label": label,
                    }

    Audio

    Audio type.

    Audio(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
    )
    • fp: Audio path, IO object, or file content bytes.
    • display_name: Display name shown in Dataset Viewer.
    • mime_type: MIMEType supported types.

    Examples

    import typing as t
    from starwhale import Audio, MIMEType

    def iter_item() -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        for path in validation_ds_paths:
            with path.open() as f:
                for item in f.readlines():
                    item = item.strip()
                    if not item:
                        continue

                    data_path = dataset_dir / item
                    data = Audio(
                        data_path, display_name=item, shape=(1,), mime_type=MIMEType.WAV
                    )

                    speaker_id, utterance_num = data_path.stem.split("_nohash_")
                    annotations = {
                        "label": data_path.parent.name,
                        "speaker_id": speaker_id,
                        "utterance_num": int(utterance_num),
                    }
                    yield data, annotations

    Audio Functions

    Audio.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Audio.carry_raw_data

    carry_raw_data() -> Audio

    Audio.astype

    astype() -> Dict[str, t.Any]

    Text

    Text type, the default encode type is utf-8.

    Text(
    content: str,
    encoding: str = "utf-8",
    )
    • content: The text content.
    • encoding: Encoding format of the text.

    Examples

    import typing as t
    from pathlib import Path
    from starwhale import Text

    def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        root_dir = Path(__file__).parent.parent / "data"

        with (root_dir / "fra-test.txt").open("r") as f:
            for line in f.readlines():
                line = line.strip()
                if not line or line.startswith("CC-BY"):
                    continue

                _data, _label, *_ = line.split("\t")
                data = Text(_data, encoding="utf-8")
                annotations = {"label": _label}
                yield data, annotations

    Text Functions

    Text.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Text.carry_raw_data

    carry_raw_data() -> Text

    Text.astype

    astype() -> Dict[str, t.Any]

    Text.to_str

    to_str() -> str

    Binary

    Binary provides a binary data type, stored as bytes.

    Binary(
    fp: _TArtifactFP = "",
    mime_type: MIMEType = MIMEType.UNDEFINED,
    )
    • fp: Path, IO object, or file content bytes.
    • mime_type: MIMEType supported types.
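
    Examples

    A minimal sketch of storing raw bytes in a dataset row; the dataset and feature names are illustrative only:

    from starwhale import dataset, Binary, MIMEType

    with dataset("binary-demo") as ds:
        ds.append({"blob": Binary(b"raw bytes content", mime_type=MIMEType.UNDEFINED)})
        ds.commit()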

    Binary Functions

    Binary.to_bytes

    to_bytes(encoding: str= "utf-8") -> bytes

    Binary.carry_raw_data

    carry_raw_data() -> Binary

    Binary.astype

    astype() -> Dict[str, t.Any]

    Link

    Link provides a link type to create remote-link datasets in Starwhale.

    Link(
    uri: str,
    auth: Optional[LinkAuth] = DefaultS3LinkAuth,
    offset: int = 0,
    size: int = -1,
    data_type: Optional[BaseArtifact] = None,
    )
    • uri: URI of the original data; currently supports localFS and S3 protocols.
    • auth: Link auth information.
    • offset: Data offset relative to the file pointed to by uri.
    • size: Data size.
    • data_type: Actual data type pointed to by the link; currently supports Binary, Image, Text, Audio and Video.
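
    Examples

    A minimal sketch of a remote-link dataset row; the S3 uri, dataset name and feature name are illustrative only:

    from starwhale import dataset, Link, Image, MIMEType

    with dataset("link-demo") as ds:
        ds.append({
            "image": Link(
                uri="s3://bucket/path/to/img.png",
                data_type=Image(display_name="img.png", mime_type=MIMEType.PNG),
            )
        })
        ds.commit()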

    Link.astype

    astype() -> Dict[str, t.Any]

    MIMEType

    MIMEType describes the multimedia types supported by Starwhale, implemented using Python Enum. It is used in the mime_type attribute of Image, Video etc to enable better Dataset Viewer support.

    class MIMEType(Enum):
        PNG = "image/png"
        JPEG = "image/jpeg"
        WEBP = "image/webp"
        SVG = "image/svg+xml"
        GIF = "image/gif"
        APNG = "image/apng"
        AVIF = "image/avif"
        PPM = "image/x-portable-pixmap"
        MP4 = "video/mp4"
        AVI = "video/avi"
        WEBM = "video/webm"
        WAV = "audio/wav"
        MP3 = "audio/mp3"
        PLAIN = "text/plain"
        CSV = "text/csv"
        HTML = "text/html"
        GRAYSCALE = "x/grayscale"
        UNDEFINED = "x/undefined"

    Line

    from starwhale import dataset, Point, Line

    with dataset("collections") as ds:
        line_points = [
            Point(x=0.0, y=1.0),
            Point(x=0.0, y=100.0),
        ]
        ds.append({"line": line_points})
        ds.commit()

    Point

    from starwhale import dataset, Point

    with dataset("collections") as ds:
        ds.append(Point(x=0.0, y=100.0))
        ds.commit()

    Polygon

    from starwhale import dataset, Point, Polygon

    with dataset("collections") as ds:
        polygon_points = [
            Point(x=0.0, y=1.0),
            Point(x=0.0, y=100.0),
            Point(x=2.0, y=1.0),
            Point(x=2.0, y=100.0),
        ]
        ds.append({"polygon": polygon_points})
        ds.commit()
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/dataset/index.html b/0.6.4/reference/swcli/dataset/index.html index 64847f245..e15cbcf8c 100644 --- a/0.6.4/reference/swcli/dataset/index.html +++ b/0.6.4/reference/swcli/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.4

    swcli dataset

    Overview

    swcli [GLOBAL OPTIONS] dataset [OPTIONS] <SUBCOMMAND> [ARGS]...

    The dataset command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • head
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • summary
    • tag

    swcli dataset build

    swcli [GLOBAL OPTIONS] dataset build [OPTIONS]

Build a Starwhale Dataset. This command only supports building datasets on the Standalone instance.

    Options

    • Data sources options:
Option | Required | Type | Defaults | Description
-if or --image or --image-folder | N | String | | Build dataset from an image folder; the folder should contain the image files.
-af or --audio or --audio-folder | N | String | | Build dataset from an audio folder; the folder should contain the audio files.
-vf or --video or --video-folder | N | String | | Build dataset from a video folder; the folder should contain the video files.
-h or --handler or --python-handler | N | String | | Build dataset from a python executor handler; the handler format is [module path]:[class or func name].
-f or --yaml or --dataset-yaml | N | | dataset.yaml in cwd | Build dataset from a dataset.yaml file. Default uses dataset.yaml in the work directory (cwd).
-jf or --json | N | String | | Build dataset from a json or jsonl file; the option value is a json/jsonl file path or an HTTP URL. The json content structure should be a list[dict] or tuple[dict].
-hf or --huggingface | N | String | | Build dataset from a huggingface dataset; the option value is a huggingface repo name.
-c or --csv | N | String | | Build dataset from csv files. The option value is a csv file path, a directory path or an HTTP URL. The option can be used multiple times.

Data source options are mutually exclusive; only one option is accepted. If not set, the swcli dataset build command uses the dataset.yaml mode to build the dataset from the dataset.yaml in the cwd.

    • Other options:
Option | Required | Scope | Type | Defaults | Description
-pt or --patch | one of --patch and --overwrite | Global | Boolean | True | Patch mode; only update the changed rows and columns of the existing dataset.
-ow or --overwrite | one of --patch and --overwrite | Global | Boolean | False | Overwrite mode; update records and delete extraneous rows from the existing dataset.
-n or --name | N | Global | String | | Dataset name.
-p or --project | N | Global | String | Default project | Project URI; the default is the currently selected project. The dataset will be stored in the specified project.
-d or --desc | N | Global | String | | Dataset description.
-as or --alignment-size | N | Global | String | 128B | swds-bin format dataset: alignment size.
-vs or --volume-size | N | Global | String | 64MB | swds-bin format dataset: volume size.
-r or --runtime | N | Global | String | | Runtime URI.
-w or --workdir | N | Python Handler Mode | String | cwd | Work dir to search handlers.
--auto-label/--no-auto-label | N | Image/Video/Audio Folder Mode | Boolean | True | Whether to auto-label by the sub-folder name.
--field-selector | N | JSON File Mode | String | | The field from which to extract dataset array items. The field path is split by the dot(.) symbol.
--subset | N | Huggingface Mode | String | | Huggingface dataset subset name. If the subset name is not specified, all subsets will be built.
--split | N | Huggingface Mode | String | | Huggingface dataset split name. If the split name is not specified, all splits will be built.
--revision | N | Huggingface Mode | String | main | Version of the dataset script to load. Defaults to 'main'. The option value accepts a tag name, branch name, or commit hash.
--add-hf-info/--no-add-hf-info | N | Huggingface Mode | Boolean | True | Whether to add huggingface dataset info to the dataset rows; currently the subset and split are added. Subset uses the _hf_subset field name, split uses the _hf_split field name.
--cache/--no-cache | N | Huggingface Mode | Boolean | True | Whether to use the huggingface dataset cache (download + local hf dataset).
-t or --tag | N | Global | String | | Dataset tags; the option can be used multiple times.
--encoding | N | CSV/JSON/JSONL Mode | String | | File encoding.
--dialect | N | CSV Mode | String | excel | The csv file dialect; the default is excel. Currently supports excel, excel-tab and unix formats.
--delimiter | N | CSV Mode | String | , | A one-character string used to separate fields in the csv file.
--quotechar | N | CSV Mode | String | " | A one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or which contain new-line characters.
--skipinitialspace/--no-skipinitialspace | N | CSV Mode | Bool | False | Whether to skip spaces after the delimiter in the csv file.
--strict/--no-strict | N | CSV Mode | Bool | False | When True, raise an exception if the csv is not well formed.

    Examples for dataset building

    #- from dataset.yaml
swcli dataset build # build dataset from dataset.yaml in the current work directory (cwd)
swcli dataset build --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml; all the involved files are resolved relative to the dataset.yaml file.
swcli dataset build --overwrite --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, and overwrite the existing dataset.
    swcli dataset build --tag tag1 --tag tag2

    #- from handler
    swcli dataset build --handler mnist.dataset:iter_mnist_item # build dataset from mnist.dataset:iter_mnist_item handler, the workdir is the current work directory(pwd).
    # build dataset from mnist.dataset:LinkRawDatasetProcessExecutor handler, the workdir is example/mnist
    swcli dataset build --handler mnist.dataset:LinkRawDatasetProcessExecutor --workdir example/mnist

    #- from image folder
    swcli dataset build --image-folder /path/to/image/folder # build dataset from /path/to/image/folder, search all image type files.

    #- from audio folder
    swcli dataset build --audio-folder /path/to/audio/folder # build dataset from /path/to/audio/folder, search all audio type files.

    #- from video folder
    swcli dataset build --video-folder /path/to/video/folder # build dataset from /path/to/video/folder, search all video type files.

    #- from json/jsonl file
    swcli dataset build --json /path/to/example.json
    swcli dataset build --json http://example.com/example.json
    swcli dataset build --json /path/to/example.json --field-selector a.b.c # extract the json_content["a"]["b"]["c"] field from the json file.
    swcli dataset build --name qald9 --json https://raw.githubusercontent.com/ag-sc/QALD/master/9/data/qald-9-test-multilingual.json --field-selector questions
    swcli dataset build --json /path/to/test01.jsonl --json /path/to/test02.jsonl
    swcli dataset build --json https://modelscope.cn/api/v1/datasets/damo/100PoisonMpts/repo\?Revision\=master\&FilePath\=train.jsonl

    #- from huggingface dataset
    swcli dataset build --huggingface mnist
    swcli dataset build -hf mnist --no-cache
    swcli dataset build -hf cais/mmlu --subset anatomy --split auxiliary_train --revision 7456cfb

    #- from csv files
    swcli dataset build --csv /path/to/example.csv
swcli dataset build --csv /path/to/example.csv --csv /path/to/example2.csv
    swcli dataset build --csv /path/to/csv-dir
    swcli dataset build --csv http://example.com/example.csv
    swcli dataset build --name product-desc-modelscope --csv https://modelscope.cn/api/v1/datasets/lcl193798/product_description_generation/repo\?Revision\=master\&FilePath\=test.csv --encoding=utf-8-sig
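
For the --handler mode above, the handler is an ordinary Python generator that yields one dataset row at a time. A rough, hypothetical sketch is below; the module, function, and data file names are illustrative, and the (data, annotations) yield style follows the Text example earlier in this document:

# my_dataset.py -- used as: swcli dataset build --handler my_dataset:iter_item
from pathlib import Path

from starwhale import Text

def iter_item():
    # each yielded (data, annotations) pair becomes one dataset row
    for line in Path("data.tsv").read_text(encoding="utf-8").splitlines():  # placeholder data file
        text, label = line.split("\t", 1)
        yield Text(text, encoding="utf-8"), {"label": label}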

    swcli dataset copy

    swcli [GLOBAL OPTIONS] dataset copy [OPTIONS] <SRC> <DEST>

    dataset copy copies from SRC to DEST.

    SRC and DEST are both dataset URIs.

When copying a Starwhale Dataset, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if tags carried during the copy are already used by other versions, this option forces the tags to be updated to this version.
-p or --patch | one of --patch and --overwrite | Boolean | True | Patch mode; only update the changed rows and columns of the remote dataset.
-o or --overwrite | one of --patch and --overwrite | Boolean | False | Overwrite mode; update records and delete extraneous rows from the remote dataset.
-i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for dataset copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a new dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp --patch cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with a dataset name 'mnist-local'
    swcli dataset cp --overwrite cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with a new dataset name 'mnist-cloud'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli dataset cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp local/project/myproject/dataset/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli dataset cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1 --force

    swcli dataset diff

    swcli [GLOBAL OPTIONS] dataset diff [OPTIONS] <DATASET VERSION> <DATASET VERSION>

    dataset diff compares the difference between two versions of the same dataset.

    DATASET VERSION is a dataset URI.

Option | Required | Type | Defaults | Description
--show-details | N | Boolean | False | If true, outputs the detail information.
swcli dataset head

swcli [GLOBAL OPTIONS] dataset head [OPTIONS] <DATASET VERSION>

    Print the first n rows of the dataset. DATASET VERSION is a dataset URI.

Option | Required | Type | Defaults | Description
-n or --rows | N | Int | 5 | Print the first NUM rows of the dataset.
-srd or --show-raw-data | N | Boolean | False | Fetch raw data content from the object store.
-st or --show-types | N | Boolean | False | Show data types.

    Examples for dataset head

    #- print the first 5 rows of the mnist dataset
    swcli dataset head -n 5 mnist

    #- print the first 10 rows of the mnist(v0 version) dataset and show raw data
    swcli dataset head -n 10 mnist/v0 --show-raw-data

    #- print the data types of the mnist dataset
    swcli dataset head mnist --show-types

    #- print the remote cloud dataset's first 5 rows
    swcli dataset head cloud://cloud-cn/project/test/dataset/mnist -n 5

    #- print the first 5 rows in the json format
    swcli -o json dataset head -n 5 mnist

    swcli dataset history

    swcli [GLOBAL OPTIONS] dataset history [OPTIONS] <DATASET>

    dataset history outputs all history versions of the specified Starwhale Dataset.

    DATASET is a dataset URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli dataset info

    swcli [GLOBAL OPTIONS] dataset info [OPTIONS] <DATASET>

    dataset info outputs detailed information about the specified Starwhale Dataset version.

    DATASET is a dataset URI.

    swcli dataset list

    swcli [GLOBAL OPTIONS] dataset list [OPTIONS]

    dataset list shows all Starwhale Datasets.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed or -sr | N | Boolean | False | If true, include datasets that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Datasets that match specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of datasets | --filter name=mnist
owner | Key-Value | The dataset owner name | --filter owner=starwhale
latest | Flag | If specified, it shows only the latest version. | --filter latest

    swcli dataset recover

    swcli [GLOBAL OPTIONS] dataset recover [OPTIONS] <DATASET>

    dataset recover recovers previously removed Starwhale Datasets or versions.

    DATASET is a dataset URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Datasets or versions cannot be recovered, nor can versions removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Dataset or version with the same name or version id.

    swcli dataset remove

    swcli [GLOBAL OPTIONS] dataset remove [OPTIONS] <DATASET>

    dataset remove removes the specified Starwhale Dataset or version.

    DATASET is a dataset URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Datasets or versions can be recovered by swcli dataset recover before garbage collection. Use the --force option to persistently remove a Starwhale Dataset or version.

    Removed Starwhale Datasets or versions can be listed by swcli dataset list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Dataset or version. It can not be recovered.

    swcli dataset summary

    swcli [GLOBAL OPTIONS]  dataset summary <DATASET>

    Show dataset summary. DATASET is a dataset URI.

    swcli dataset tag

    swcli [GLOBAL OPTIONS] dataset tag [OPTIONS] <DATASET> [TAGS]...

dataset tag attaches a tag to a specified Starwhale Dataset version. The tag command also supports listing and removing tags. A tag can be used in a dataset URI in place of the version id.

    DATASET is a dataset URI.

    Each dataset version can have any number of tags, but duplicated tag names are not allowed in the same dataset.

    dataset tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is reported if the tag is already used by another dataset version. In this case, you can force the update with the --force-add option.

    Examples for dataset tag

    #- list tags of the mnist dataset
    swcli dataset tag mnist

    #- add tags for the mnist dataset
    swcli dataset tag mnist t1 t2
swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist/version/latest t1 --force-add
    swcli dataset tag mnist t1 --quiet

    #- remove tags for the mnist dataset
    swcli dataset tag mnist -r t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/index.html b/0.6.4/reference/swcli/index.html index 81f2ea0ab..d0b4df80e 100644 --- a/0.6.4/reference/swcli/index.html +++ b/0.6.4/reference/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    Overview

    Usage

    swcli [OPTIONS] <COMMAND> [ARGS]...
    note

    sw and starwhale are aliases for swcli.

    Global Options

Option | Description
--version | Show the Starwhale Client version.
-v or --verbose | Show verbose logs; the -v flag can be repeated for more verbosity.
--help | Show the help message.
    caution

    Global options must be put immediately after swcli, and before any command.

    Commands

    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/instance/index.html b/0.6.4/reference/swcli/instance/index.html index 586c9aa47..762f5079a 100644 --- a/0.6.4/reference/swcli/instance/index.html +++ b/0.6.4/reference/swcli/instance/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    swcli instance

    Overview

    swcli [GLOBAL OPTIONS] instance [OPTIONS] <SUBCOMMAND> [ARGS]

    The instance command includes the following subcommands:

    • info
    • list (ls)
    • login
    • logout
    • use (select)

    swcli instance info

    swcli [GLOBAL OPTIONS] instance info [OPTIONS] <INSTANCE>

    instance info outputs detailed information about the specified Starwhale Instance.

    INSTANCE is an instance URI.

    swcli instance list

    swcli [GLOBAL OPTIONS] instance list [OPTIONS]

    instance list shows all Starwhale Instances.

    swcli instance login

    swcli [GLOBAL OPTIONS] instance login [OPTIONS] <INSTANCE>

    instance login connects to a Server/Cloud instance and makes the specified instance default.

    INSTANCE is an instance URI.

Option | Required | Type | Defaults | Description
--username | N | String | | The login username.
--password | N | String | | The login password.
--token | N | String | | The login token.
--alias | Y | String | | The alias of the instance. You can use it anywhere that requires an instance URI.

    --username and --password can not be used together with --token.

    swcli instance logout

    swcli [GLOBAL OPTIONS] instance logout [INSTANCE]

    instance logout disconnects from the Server/Cloud instance, and clears information stored in the local storage.

INSTANCE is an instance URI. If it is omitted, the default instance is used instead.

    swcli instance use

    swcli [GLOBAL OPTIONS] instance use <INSTANCE>

instance use makes the specified instance the default.

    INSTANCE is an instance URI.

    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/job/index.html b/0.6.4/reference/swcli/job/index.html index 49b3ce573..bd135e59f 100644 --- a/0.6.4/reference/swcli/job/index.html +++ b/0.6.4/reference/swcli/job/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    swcli job

    Overview

    swcli [GLOBAL OPTIONS] job [OPTIONS] <SUBCOMMAND> [ARGS]...

    The job command includes the following subcommands:

    • cancel
    • info
    • list(ls)
    • pause
    • recover
    • remove(rm)
    • resume

    swcli job cancel

    swcli [GLOBAL OPTIONS] job cancel [OPTIONS] <JOB>

    job cancel stops the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, kill the Starwhale Job by force.

    swcli job info

    swcli [GLOBAL OPTIONS] job info [OPTIONS] <JOB>

    job info outputs detailed information about the specified Starwhale Job.

    JOB is a job URI.

    swcli job list

    swcli [GLOBAL OPTIONS] job list [OPTIONS]

    job list shows all Starwhale Jobs.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--show-removed or -sr | N | Boolean | False | If true, include jobs that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli job pause

    swcli [GLOBAL OPTIONS] job pause [OPTIONS] <JOB>

    job pause pauses the specified job. Paused jobs can be resumed by job resume. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

From Starwhale's perspective, pause is almost the same as cancel, except that the job reuses the old job id when resumed. It is the job developer's responsibility to save all data periodically and load it when resumed. The job id is usually used as a key of the checkpoint.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, kill the Starwhale Job by force.

    swcli job resume

    swcli [GLOBAL OPTIONS] job resume [OPTIONS] <JOB>

    job resume resumes the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/model/index.html b/0.6.4/reference/swcli/model/index.html index 8b081ade5..75fe9799c 100644 --- a/0.6.4/reference/swcli/model/index.html +++ b/0.6.4/reference/swcli/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    swcli model

    Overview

    swcli [GLOBAL OPTIONS] model [OPTIONS] <SUBCOMMAND> [ARGS]...

    The model command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • run
    • serve
    • tag

    swcli model build

    swcli [GLOBAL OPTIONS] model build [OPTIONS] <WORKDIR>

    model build will put the whole WORKDIR into the model, except files that match patterns defined in .swignore.

    model build will import modules specified by --module to generate the required configurations to run the model. If your module depends on third-party libraries, we strongly recommend you use the --runtime option; otherwise, you need to ensure that the python environment used by swcli has these libraries installed.

Option | Required | Type | Defaults | Description
--project or -p | N | String | the default project | The project URI.
--model-yaml or -f | N | String | ${workdir}/model.yaml | Model yaml path; defaults to ${workdir}/model.yaml. model.yaml is optional for model build.
--module or -m | N | String | | Python modules to be imported during the build process. Starwhale will export model handlers from these modules to the model package. This option can be set multiple times.
--runtime or -r | N | String | | The URI of the Starwhale Runtime to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
--name or -n | N | String | | Model package name.
--desc or -d | N | String | | Model package description.
--package-runtime/--no-package-runtime | N | Boolean | True | When using the --runtime option, the corresponding Starwhale Runtime becomes the built-in runtime of the Starwhale Model by default. This feature can be disabled with the --no-package-runtime option.
--add-all | N | Boolean | False | Add all files in the working directory to the model package (when disabled, python cache files and virtual environment files are excluded). The .swignore file still takes effect.
-t or --tag | N | String | | Model tags; the option can be used multiple times.

    Examples for model build

    # build by the model.yaml in current directory and model package will package all the files from the current directory.
    swcli model build .
    # search model run decorators from mnist.evaluate, mnist.train and mnist.predict modules, then package all the files from the current directory to model package.
    swcli model build . --module mnist.evaluate --module mnist.train --module mnist.predict
    # build model package in the Starwhale Runtime environment.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1
# do not package the Starwhale Runtime into the model.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1 --no-package-runtime
    # build model package with tags.
    swcli model build . --tag tag1 --tag tag2
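
For reference, the modules passed via --module are plain Python files containing Starwhale handlers. A rough, hypothetical sketch of such a module is below; the inference and aggregation logic are placeholders, and decorator arguments may vary between SDK versions:

# mnist/evaluate.py -- imported via: swcli model build . --module mnist.evaluate
from starwhale import evaluation

@evaluation.predict
def predict(data: dict):
    # placeholder inference: a real handler would load the model once and score `data` here
    return {"pred": 0}

@evaluation.evaluate(needs=[predict])
def evaluate(predict_results):
    # placeholder aggregation over the per-sample predict results
    total = sum(1 for _ in predict_results)
    print(f"evaluated {total} samples")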

    swcli model copy

    swcli [GLOBAL OPTIONS] model copy [OPTIONS] <SRC> <DEST>

    model copy copies from SRC to DEST for Starwhale Model sharing.

    SRC and DEST are both model URIs.

When copying a Starwhale Model, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if tags carried during the copy are already used by other versions, this option forces the tags to be updated to this version.
-i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for model copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a new model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with a new model name 'mnist-cloud'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli model cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp local/project/myproject/model/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli model cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli model diff

    swcli [GLOBAL OPTIONS] model diff [OPTIONS] <MODEL VERSION> <MODEL VERSION>

    model diff compares the difference between two versions of the same model.

    MODEL VERSION is a model URI.

Option | Required | Type | Defaults | Description
--show-details | N | Boolean | False | If true, outputs the detail information.

    swcli model extract

    swcli [GLOBAL OPTIONS] model extract [OPTIONS] <MODEL> <TARGET_DIR>

    The model extract command can extract a Starwhale model to a specified directory for further customization.

    MODEL is a model URI.

Option | Required | Type | Default | Description
--force or -f | N | Boolean | False | If this option is used, it will forcibly overwrite existing extracted model files in the target directory.

    Examples for model extract

    #- extract mnist model package to current directory
    swcli model extract mnist/version/xxxx .

    #- extract mnist model package to current directory and force to overwrite the files
    swcli model extract mnist/version/xxxx . -f

    swcli model history

    swcli [GLOBAL OPTIONS] model history [OPTIONS] <MODEL>

    model history outputs all history versions of the specified Starwhale Model.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli model info

    swcli [GLOBAL OPTIONS] model info [OPTIONS] <MODEL>

    model info outputs detailed information about the specified Starwhale Model version.

    MODEL is a model URI.

Option | Required | Type | Defaults | Description
--output-filter or -of | N | Choice of [basic/model_yaml/manifest/files/handlers/all] | basic | Filter the output content. Only standalone instance supports this option.

    Examples for model info

    swcli model info mnist # show basic info from the latest version of model
    swcli model info mnist/version/v0 # show basic info from the v0 version of model
    swcli model info mnist/version/latest --output-filter=all # show all info
    swcli model info mnist -of basic # show basic info
    swcli model info mnist -of model_yaml # show model.yaml
    swcli model info mnist -of handlers # show model runnable handlers info
    swcli model info mnist -of files # show model package files tree
    swcli -o json model info mnist -of all # show all info in json format

    swcli model list

    swcli [GLOBAL OPTIONS] model list [OPTIONS]

    model list shows all Starwhale Models.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed | N | Boolean | False | If true, include packages that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Models that match specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of models | --filter name=mnist
owner | Key-Value | The model owner name | --filter owner=starwhale
latest | Flag | If specified, it shows only the latest version. | --filter latest

    swcli model recover

    swcli [GLOBAL OPTIONS] model recover [OPTIONS] <MODEL>

    model recover recovers previously removed Starwhale Models or versions.

    MODEL is a model URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Models or versions cannot be recovered, nor can versions removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Model or version with the same name or version id.

    swcli model remove

    swcli [GLOBAL OPTIONS] model remove [OPTIONS] <MODEL>

    model remove removes the specified Starwhale Model or version.

    MODEL is a model URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Models or versions can be recovered by swcli model recover before garbage collection. Use the --force option to persistently remove a Starwhale Model or version.

    Removed Starwhale Models or versions can be listed by swcli model list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Model or version. It can not be recovered.

    swcli model run

    swcli [GLOBAL OPTIONS] model run [OPTIONS]

model run executes a model handler. model run supports two modes: model URI and local development. Model URI mode needs a pre-built Starwhale Model package. Local development mode only needs the model source directory.

Option | Required | Type | Defaults | Description
--workdir or -w | N | String | | For local development mode, the path of the model source directory.
--uri or -u | N | String | | For model URI mode, the model URI string.
--handler or -h | N | String | | Runnable handler index or name. The default is None, which uses the first handler.
--module or -m | N | String | | The name of the Python module to import. This option can be set multiple times.
--runtime or -r | N | String | | The Starwhale Runtime URI to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
--model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to model.yaml. model.yaml is optional for model run.
--run-project or -p | N | String | Default project | Project URI; the model run results will be stored in the specified project.
--dataset or -d | N | String | | Dataset URI, the Starwhale dataset required for the model run. This option can be set multiple times.
--dataset-head or -dh | N | Integer | 0 | [ONLY STANDALONE] For debugging purposes, every prediction task will consume at most the first n rows from every dataset. When the value is less than or equal to 0, all samples are used.
--in-container | N | Boolean | False | Use a docker container to run the model. This option is only available for standalone instances. For server and cloud instances, a docker image is always used. If the runtime is a docker image, this option is always implied.
--forbid-snapshot or -fs | N | Boolean | False | In model URI mode, each model run uses a new snapshot directory. Setting this option makes the run use the model's workdir directly as the run directory. In local development mode, this option has no effect; each run uses the directory specified by --workdir.
-- --user-arbitrary-args | N | String | | Specify the args you defined in your handlers.

    Examples for model run

    # --> run by model uri
    # run the first handler from model uri
    swcli model run -u mnist/version/latest
    # run index id(1) handler from model uri
    swcli model run --uri mnist/version/latest --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from model uri
    swcli model run --uri mnist/version/latest --handler mnist.evaluator:MNISTInference.cmp

    # --> run by the working directory, which does not build model package yet. Make local debug happy.
    # run the first handler from the working directory, use the model.yaml in the working directory
    swcli model run -w .
    # run index id(1) handler from the working directory, search mnist.evaluator module and model.yaml handlers(if existed) to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from the working directory, search mnist.evaluator module to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler mnist.evaluator:MNISTInference.cmp
    # run the f handler in th.py from the working directory with the args defined in th:f
    # @handler()
    # def f(
    # x=ListInput(IntInput()),
    # y=2,
    # mi=MyInput(),
    # ds=DatasetInput(required=True),
    # ctx=ContextInput(),
    # )
    swcli model run -w . -m th --handler th:f -- -x 2 -x=1 --mi=blab-la --ds mnist

    # --> run with dataset of head 10
    swcli model run --uri mnist --dataset-head 10 --dataset mnist

    swcli model serve


    swcli [GLOBAL OPTIONS] model serve [OPTIONS]

    The model serve command can run the model as a web server, and provide a simple web interaction interface.

Option | Required | Type | Defaults | Description
--workdir or -w | N | String | | In local dev mode, specify the directory of the model code.
--uri or -u | N | String | | In model URI mode, specify the model URI.
--runtime or -r | N | String | | The URI of the Starwhale runtime to use when running this command. If specified, the command will run in the isolated Python environment defined in the Starwhale runtime. Otherwise it will run directly in the current Python environment of swcli.
--model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to the model.yaml. model.yaml is optional for model serve.
--module or -m | N | String | | Name of the Python module to import. This parameter can be set multiple times.
--host | N | String | 127.0.0.1 | The address for the service to listen on.
--port | N | Integer | 8080 | The port for the service to listen on.

    Examples for model serve

    swcli model serve -u mnist
    swcli model serve --uri mnist/version/latest --runtime pytorch/version/latest

    swcli model serve --workdir . --runtime pytorch/version/v0
    swcli model serve --workdir . --runtime pytorch/version/v1 --host 0.0.0.0 --port 8080
    swcli model serve --workdir . --runtime pytorch --module mnist.evaluator

    swcli model tag

    swcli [GLOBAL OPTIONS] model tag [OPTIONS] <MODEL> [TAGS]...

model tag attaches a tag to a specified Starwhale Model version. The tag command also supports listing and removing tags. A tag can be used in a model URI in place of the version id.

    MODEL is a model URI.

    Each model version can have any number of tags, but duplicated tag names are not allowed in the same model.

    model tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is reported if the tag is already used by another model version. In this case, you can force the update with the --force-add option.

    Examples for model tag

    #- list tags of the mnist model
    swcli model tag mnist

    #- add tags for the mnist model
    swcli model tag mnist t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist/version/latest t1 --force-add
    swcli model tag mnist t1 --quiet

    #- remove tags for the mnist model
    swcli model tag mnist -r t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/project/index.html b/0.6.4/reference/swcli/project/index.html index 37d9f34a5..91d1a5190 100644 --- a/0.6.4/reference/swcli/project/index.html +++ b/0.6.4/reference/swcli/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    swcli project

    Overview

    swcli [GLOBAL OPTIONS] project [OPTIONS] <SUBCOMMAND> [ARGS]...

    The project command includes the following subcommands:

    • create(add, new)
    • info
    • list(ls)
    • recover
• remove(rm)
    • use(select)

    swcli project create

    swcli [GLOBAL OPTIONS] project create <PROJECT>

    project create creates a new project.

    PROJECT is a project URI.

    swcli project info

    swcli [GLOBAL OPTIONS] project info [OPTIONS] <PROJECT>

    project info outputs detailed information about the specified Starwhale Project.

    PROJECT is a project URI.

    swcli project list

    swcli [GLOBAL OPTIONS] project list [OPTIONS]

    project list shows all Starwhale Projects.

Option | Required | Type | Defaults | Description
--instance | N | String | | The URI of the instance to list. If this option is omitted, use the default instance.
--show-removed | N | Boolean | False | If true, include projects that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli project recover

    swcli [GLOBAL OPTIONS] project recover [OPTIONS] <PROJECT>

    project recover recovers previously removed Starwhale Projects.

    PROJECT is a project URI.

Garbage-collected Starwhale Projects cannot be recovered, nor can projects removed with the --force option.

    swcli project remove

    swcli [GLOBAL OPTIONS] project remove [OPTIONS] <PROJECT>

    project remove removes the specified Starwhale Project.

    PROJECT is a project URI.

    Removed Starwhale Projects can be recovered by swcli project recover before garbage collection. Use the --force option to persistently remove a Starwhale Project.

Removed Starwhale Projects can be listed by swcli project list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Project. It can not be recovered.

    swcli project use

    swcli [GLOBAL OPTIONS] project use <PROJECT>

project use makes the specified project the default. You must log in first to use a project on a Server/Cloud instance.

    - - + + \ No newline at end of file diff --git a/0.6.4/reference/swcli/runtime/index.html b/0.6.4/reference/swcli/runtime/index.html index c4a0098f4..7fcc62fd4 100644 --- a/0.6.4/reference/swcli/runtime/index.html +++ b/0.6.4/reference/swcli/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.4

    swcli runtime

    Overview

    swcli [GLOBAL OPTIONS] runtime [OPTIONS] <SUBCOMMAND> [ARGS]...

    The runtime command includes the following subcommands:

    • activate(actv)
    • build
    • copy(cp)
    • dockerize
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • tag

    swcli runtime activate

    swcli [GLOBAL OPTIONS] runtime activate [OPTIONS] <RUNTIME>

Like source venv/bin/activate or conda activate xxx, runtime activate sets up a new python environment according to the settings of the specified runtime. When the current shell is closed or switched to another one, you need to reactivate the runtime.

RUNTIME is a Runtime URI.

If you want to quit the activated runtime environment, run deactivate in the venv environment or conda deactivate in the conda environment.

The runtime activate command builds an isolated Python environment and downloads the relevant Python packages according to the definition of the Starwhale Runtime when activating the environment for the first time. This process may take a long time.

    swcli runtime build

    swcli [GLOBAL OPTIONS] runtime build [OPTIONS]

    The runtime build command can build a shareable and reproducible runtime environment suitable for ML/DL from various environments or runtime.yaml file.

    Parameters

    • Parameters related to runtime building methods:
Option | Required | Type | Defaults | Description
-c or --conda | N | String | | Find the corresponding conda environment by conda env name, export Python dependencies to generate Starwhale runtime.
-cp or --conda-prefix | N | String | | Find the corresponding conda environment by conda env prefix path, export Python dependencies to generate Starwhale runtime.
-v or --venv | N | String | | Find the corresponding venv environment by venv directory address, export Python dependencies to generate Starwhale runtime.
-s or --shell | N | String | | Export Python dependencies according to current shell environment to generate Starwhale runtime.
-y or --yaml | N | | runtime.yaml in cwd directory | Build Starwhale runtime according to user-defined runtime.yaml.
-d or --docker | N | String | | Use the docker image as Starwhale runtime.

The parameters for runtime building methods are mutually exclusive; only one method can be specified. If none is specified, the --yaml method is used to read runtime.yaml in the cwd directory and build the Starwhale runtime.

    • Other parameters:
Option | Required | Scope | Type | Defaults | Description
--project or -p | N | Global | String | Default project | Project URI
-del or --disable-env-lock | N | runtime.yaml mode | Boolean | False | Whether to install dependencies in runtime.yaml and lock the version information of related dependencies. The dependencies will be locked by default.
-nc or --no-cache | N | runtime.yaml mode | Boolean | False | Whether to delete the isolated environment and install related dependencies from scratch. By default dependencies will be installed in the existing isolated environment.
--cuda | N | conda/venv/shell mode | Choice[11.3/11.4/11.5/11.6/11.7/] | | CUDA version, CUDA will not be used by default.
--cudnn | N | conda/venv/shell mode | Choice[8/] | | cuDNN version, cuDNN will not be used by default.
--arch | N | conda/venv/shell mode | Choice[amd64/arm64/noarch] | noarch | Architecture
-dpo or --dump-pip-options | N | Global | Boolean | False | Dump pip config options from the ~/.pip/pip.conf file.
-dcc or --dump-condarc | N | Global | Boolean | False | Dump conda config from the ~/.condarc file.
-t or --tag | N | Global | String | | Runtime tags, the option can be used multiple times.

    Examples for Starwhale Runtime building

    #- from runtime.yaml:
    swcli runtime build # use the current directory as the workdir and use the default runtime.yaml file
    swcli runtime build -y example/pytorch/runtime.yaml # use example/pytorch/runtime.yaml as the runtime.yaml file
    swcli runtime build --yaml runtime.yaml # use runtime.yaml at the current directory as the runtime.yaml file
    swcli runtime build --tag tag1 --tag tag2

    #- from conda name:
    swcli runtime build -c pytorch # lock pytorch conda environment and use `pytorch` as the runtime name
    swcli runtime build --conda pytorch --name pytorch-runtime # use `pytorch-runtime` as the runtime name
    swcli runtime build --conda pytorch --cuda 11.4 # specify the cuda version
    swcli runtime build --conda pytorch --arch noarch # specify the system architecture

    #- from conda prefix path:
    swcli runtime build --conda-prefix /home/starwhale/anaconda3/envs/pytorch # get conda prefix path by `conda info --envs` command

    #- from venv prefix path:
    swcli runtime build -v /home/starwhale/.virtualenvs/pytorch
    swcli runtime build --venv /home/starwhale/.local/share/virtualenvs/pytorch --arch amd64

    #- from docker image:
    swcli runtime build --docker pytorch/pytorch:1.9.0-cuda11.1-cudnn8-runtime # use the docker image as the runtime directly

    #- from shell:
    swcli runtime build -s --cuda 11.4 --cudnn 8 # specify the cuda and cudnn version
    swcli runtime build --shell --name pytorch-runtime # lock the current shell environment and use `pytorch-runtime` as the runtime name

    swcli runtime copy

    swcli [GLOBAL OPTIONS] runtime copy [OPTIONS] <SRC> <DEST>

    runtime copy copies from SRC to DEST. SRC and DEST are both Runtime URIs.

When copying a Starwhale Runtime, all custom user-defined tags are copied by default. You can use the --ignore-tag option to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if tags carried during the copy are already used by other versions, this option forces the tags to be updated to this version.
-i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for Starwhale Runtime copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a new runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with a new runtime name 'mnist-cloud'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli runtime cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp local/project/myproject/runtime/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli runtime cp pytorch cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli runtime dockerize

    swcli [GLOBAL OPTIONS] runtime dockerize [OPTIONS] <RUNTIME>

    runtime dockerize generates a docker image based on the specified runtime. Starwhale uses docker buildx to create the image. Docker 19.03 or later is required to run this command.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--tag or -t | N | String | | The tag of the docker image. This option can be repeated multiple times.
--push | N | Boolean | False | If true, push the image to the docker registry.
--platform | N | String | amd64 | The target platform, can be either amd64 or arm64. This option can be repeated multiple times to create a multi-platform image.


    swcli runtime extract

swcli [GLOBAL OPTIONS] runtime extract [OPTIONS] <RUNTIME>

Starwhale runtimes are distributed as compressed packages. The runtime extract command extracts the runtime package for further customization and modification.

Option | Required | Type | Default | Description
--force or -f | N | Boolean | False | Whether to delete and re-extract if there is already an extracted Starwhale runtime in the target directory.
--target-dir | N | String | | Custom extraction directory. If not specified, it will be extracted to the default Starwhale runtime workdir. The command log will show the directory location.

    swcli runtime history

    swcli [GLOBAL OPTIONS] runtime history [OPTIONS] <RUNTIME>

    runtime history outputs all history versions of the specified Starwhale Runtime.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli runtime info

    swcli [GLOBAL OPTIONS] runtime info [OPTIONS] <RUNTIME>

    runtime info outputs detailed information about a specified Starwhale Runtime version.

    RUNTIME is a Runtime URI.

Option | Required | Type | Defaults | Description
--output-filter or -of | N | Choice of [basic/runtime_yaml/manifest/lock/all] | basic | Filter the output content. Only standalone instance supports this option.

    Examples for Starwhale Runtime info

    swcli runtime info pytorch # show basic info from the latest version of runtime
    swcli runtime info pytorch/version/v0 # show basic info
    swcli runtime info pytorch/version/v0 --output-filter basic # show basic info
    swcli runtime info pytorch/version/v1 -of runtime_yaml # show runtime.yaml content
    swcli runtime info pytorch/version/v1 -of lock # show auto lock file content
    swcli runtime info pytorch/version/v1 -of manifest # show _manifest.yaml content
    swcli runtime info pytorch/version/v1 -of all # show all info of the runtime

    swcli runtime list

    swcli [GLOBAL OPTIONS] runtime list [OPTIONS]

    runtime list shows all Starwhale Runtimes.

Option | Required | Type | Defaults | Description
--project | N | String | | The URI of the project to list. Use the default project if not specified.
--fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
--show-removed or -sr | N | Boolean | False | If true, include runtimes that are removed but not garbage collected.
--page | N | Integer | 1 | The starting page number. Server and cloud instances only.
--size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
--filter or -fl | N | String | | Show only Starwhale Runtimes that match specified filters. This option can be used multiple times in one command.

Filter | Type | Description | Example
name | Key-Value | The name prefix of runtimes | --filter name=pytorch
owner | Key-Value | The runtime owner name | --filter owner=starwhale
latest | Flag | If specified, it shows only the latest version. | --filter latest

    swcli runtime recover

    swcli [GLOBAL OPTIONS] runtime recover [OPTIONS] <RUNTIME>

    runtime recover can recover previously removed Starwhale Runtimes or versions.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all removed versions are recovered.

Garbage-collected Starwhale Runtimes or versions cannot be recovered, nor can versions removed with the --force option.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, overwrite the Starwhale Runtime or version with the same name or version id.

    swcli runtime remove

    swcli [GLOBAL OPTIONS] runtime remove [OPTIONS] <RUNTIME>

    runtime remove removes the specified Starwhale Runtime or version.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all versions are removed.

Removed Starwhale Runtimes or versions can be recovered by swcli runtime recover before garbage collection. Use the --force option to persistently remove a Starwhale Runtime or version.

    Removed Starwhale Runtimes or versions can be listed by swcli runtime list --show-removed.

Option | Required | Type | Defaults | Description
--force or -f | N | Boolean | False | If true, persistently delete the Starwhale Runtime or version. It can not be recovered.

    swcli runtime tag

    swcli [GLOBAL OPTIONS] runtime tag [OPTIONS] <RUNTIME> [TAGS]...

runtime tag attaches a tag to a specified Starwhale Runtime version. The tag command also supports listing and removing tags. A tag can be used in a runtime URI in place of the version id.

    RUNTIME is a Runtime URI.

    Each runtime version can have any number of tags, but duplicated tag names are not allowed in the same runtime.

    runtime tag only works for the Standalone Instance.

Option | Required | Type | Defaults | Description
--remove or -r | N | Boolean | False | Remove the tag if true.
--quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
--force-add or -f | N | Boolean | False | When adding tags on server/cloud instances, an error is reported if the tag is already used by another runtime version. In this case, you can force the update with the --force-add option.

    Examples for runtime tag

    #- list tags of the pytorch runtime
    swcli runtime tag pytorch

    #- add tags for the mnist and pytorch runtimes
    swcli runtime tag mnist t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch/version/latest t1 --force-add
    swcli runtime tag mnist t1 --quiet

    #- remove tags for the mnist and pytorch runtimes
    swcli runtime tag mnist -r t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch --remove t1
    Version: 0.6.4

    Utility Commands

    swcli gc

    swcli [GLOBAL OPTIONS] gc [OPTIONS]

    gc clears removed projects, models, datasets, and runtimes according to the internal garbage collection policy.

    Option | Required | Type | Defaults | Description
    --dry-run | N | Boolean | False | If true, outputs objects to be removed instead of clearing them.
    --yes | N | Boolean | False | Bypass confirmation prompts.
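
    An illustrative sketch of a typical gc workflow using the options above:

    #- preview what would be removed
    swcli gc --dry-run

    #- actually run garbage collection without confirmation prompts
    swcli gc --yes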

    swcli check

    swcli [GLOBAL OPTIONS] check

    Check whether the external dependencies of the swcli command meet the requirements. Currently, it mainly checks Docker and Conda.

    swcli completion install

    swcli [GLOBAL OPTIONS] completion install <SHELL_NAME>

    Install autocompletion for swcli commands. Currently supports bash, zsh and fish. If SHELL_NAME is not specified, it will try to automatically detect the current shell type.
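
    For example (the shell name is just an illustration; bash and fish work the same way):

    swcli completion install zsh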

    swcli config edit

    swcli [GLOBAL OPTIONS] config edit

    Edit the Starwhale configuration file at ~/.config/starwhale/config.yaml.

    swcli ui

    swcli [GLOBAL OPTIONS] ui <INSTANCE>

    Open the web page for the corresponding instance.
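
    For example, assuming an instance alias named swcloud was configured during swcli instance login:

    swcli ui swcloud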

    Version: 0.6.4

    Starwhale Runtime

    Overview

    Starwhale Runtime aims to provide a reproducible and sharable running environment for python programs. You can easily share your working environment with your teammates or outsiders, and vice versa. Furthermore, you can run your programs on Starwhale Server or Starwhale Cloud without bothering with the dependencies.

    Starwhale works well with virtualenv, conda, and docker. If you are using one of them, it is straightforward to create a Starwhale Runtime based on your current environment.

    Multiple Starwhale Runtimes on your local machine can be switched freely with a single command, so you can work on different projects without messing up the environment.

    Starwhale Runtime consists of two parts: the base image and the dependencies.

    The base image

    The base image is a docker image with Python, CUDA, and cuDNN installed. Starwhale provides various base images for you to choose from; see the following list:

    • Computer system architecture:
      • X86 (amd64)
      • Arm (aarch64)
    • Operating system:
      • Ubuntu 20.04 LTS (ubuntu:20.04)
    • Python:
      • 3.7
      • 3.8
      • 3.9
      • 3.10
      • 3.11
    • CUDA:
      • CUDA 11.3 + cuDNN 8.4
      • CUDA 11.4 + cuDNN 8.4
      • CUDA 11.5 + cuDNN 8.4
      • CUDA 11.6 + cuDNN 8.4
      • CUDA 11.7
    Version: 0.6.4

    The runtime.yaml Specification

    runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. runtime.yaml is required for the yaml mode of the swcli runtime build command.

    Examples

    The simplest example

    dependencies:
      - pip:
          - numpy
    name: simple-test

    This example defines a Starwhale Runtime that uses venv as the Python virtual environment for package isolation and installs the numpy dependency.
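
    A runtime package can then be built from this file with the yaml mode of swcli runtime build; a minimal sketch, assuming the file is saved as runtime.yaml in the current directory:

    swcli runtime build --yaml runtime.yaml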

    The llama2 example

    name: llama2
    mode: venv
    environment:
      arch: noarch
      os: ubuntu:20.04
      cuda: 11.7
      python: "3.10"
    dependencies:
      - pip:
          - torch
          - fairscale
          - fire
          - sentencepiece
          - gradio >= 3.37.0
          # external starwhale dependencies
          - starwhale[serve] >= 0.5.5

    The full definition example

    # [required] The name of Starwhale Runtime
    name: demo
    # [optional] The mode of Starwhale Runtime: venv or conda. Default is venv.
    mode: venv
    # [optional] The configurations of pip and conda.
    configs:
      # If you do not use conda, ignore this field.
      conda:
        condarc: # custom condarc config file
          channels:
            - defaults
          show_channel_urls: true
          default_channels:
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
          custom_channels:
            conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            nvidia: https://mirrors.aliyun.com/anaconda/cloud
          ssl_verify: false
          default_threads: 10
      pip:
        # pip config set global.index-url
        index_url: https://example.org/
        # pip config set global.extra-index-url
        extra_index_url: https://another.net/
        # pip config set install.trusted-host
        trusted_host:
          - example.org
          - another.net
    # [optional] The definition of the environment.
    environment:
      # Now it must be ubuntu:20.04
      os: ubuntu:20.04
      # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
      cuda: 11.4
      # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
      python: 3.8
      # Define your custom base image
      docker:
        image: mycustom.com/docker/image:tag
    # [required] The dependencies of the Starwhale Runtime.
    dependencies:
      # If this item is present, conda env create -f conda.yml will be executed
      - conda.yaml
      # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
      - requirements.txt
      # Packages to be installed with conda. venv mode will ignore the conda field.
      - conda:
          - numpy
          - requests
      # Packages to be installed with pip. The format is the same as requirements.txt
      - pip:
          - pillow
          - numpy
          - deepspeed==0.9.0
          - safetensors==0.3.0
          - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
          - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
          - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
      # Additional wheels packages to be installed when restoring the runtime
      - wheels:
          - dummy-0.0.0-py3-none-any.whl
      # Additional files to be included in the runtime
      - files:
          - dest: bin/prepare.sh
            name: prepare
            src: scripts/prepare.sh
      # Run some custom commands
      - commands:
          - apt-get install -y libgl1
          - touch /tmp/runtime-command-run.flag
    Version: 0.6.4

    Controller Admin Settings

    Superuser Password Reset

    In case you forget the superuser's password, you can use the SQL below to reset the password to abcd1234:

    update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1
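
    A hedged sketch of applying it with the mysql client; the host, user, and database name are placeholders for your own deployment, not values from this guide:

    mysql -h <mysql-host> -P 3306 -u <mysql-user> -p <starwhale-database> \
      -e "update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1;"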

    After that, you can log in to the console and change the password to whatever you want.

    System Settings

    You can customize the system to make it easier to use by leveraging the System settings. Here is an example:

    dockerSetting:
      registryForPull: "docker-registry.starwhale.cn/star-whale"
      registryForPush: ""
      userName: ""
      password: ""
      insecure: true
    pypiSetting:
      indexUrl: ""
      extraIndexUrl: ""
      trustedHost: ""
      retries: 10
      timeout: 90
    imageBuild:
      resourcePool: ""
      image: ""
      clientVersion: ""
      pythonVersion: ""
    datasetBuild:
      resourcePool: ""
      image: ""
      clientVersion: ""
      pythonVersion: ""
    resourcePoolSetting:
      - name: "default"
        nodeSelector: null
        resources:
          - name: "cpu"
            max: null
            min: null
            defaults: 5.0
          - name: "memory"
            max: null
            min: null
            defaults: 3145728.0
          - name: "nvidia.com/gpu"
            max: null
            min: null
            defaults: null
        tolerations: null
        metadata: null
        isPrivate: null
        visibleUserIds: null
    storageSetting:
      - type: "minio"
        tokens:
          bucket: "users"
          ak: "starwhale"
          sk: "starwhale"
          endpoint: "http://10.131.0.1:9000"
          region: "local"
          hugeFileThreshold: "10485760"
          hugeFilePartSize: "5242880"
      - type: "s3"
        tokens:
          bucket: "users"
          ak: "starwhale"
          sk: "starwhale"
          endpoint: "http://10.131.0.1:9000"
          region: "local"
          hugeFileThreshold: "10485760"
          hugeFilePartSize: "5242880"

    Image Registry

    Tasks dispatched by the server are based on docker images. Pulling these images could be slow if your internet connection is not good. Starwhale Server supports custom image registries, including dockerSetting.registryForPull and dockerSetting.registryForPush.

    Resource Pool

    The resourcePoolSetting allows you to manage your cluster in groups. It is currently implemented with the Kubernetes nodeSelector: you can label the machines in your Kubernetes cluster and make them a resourcePool in Starwhale.
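
    For example, a hedged sketch of labeling nodes with kubectl; the node names and the label key/value are assumptions you would adapt and then reference from resourcePoolSetting.nodeSelector:

    # label the nodes that should belong to the pool
    kubectl label node <node-name-1> starwhale-pool=gpu
    kubectl label node <node-name-2> starwhale-pool=gpu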

    Remote Storage

    The storageSetting allows you to manage the storages the server could access.

    storageSetting:
      - type: s3
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://s3.region.amazonaws.com # optional
            region: region of the service # required when endpoint is empty
            hugeFileThreshold: 10485760 # files bigger than 10MB will use multipart upload
            hugeFilePartSize: 5242880 # part size in bytes for multipart upload
      - type: minio
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://10.131.0.1:9000 # required
            region: local # optional
            hugeFileThreshold: 10485760 # files bigger than 10MB will use multipart upload
            hugeFilePartSize: 5242880 # part size in bytes for multipart upload
      - type: aliyun
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://10.131.0.2:9000 # required
            region: local # optional
            hugeFileThreshold: 10485760 # files bigger than 10MB will use multipart upload
            hugeFilePartSize: 5242880 # part size in bytes for multipart upload

    Every storageSetting item has a corresponding implementation of the StorageAccessService interface. Starwhale has four built-in implementations:

    • StorageAccessServiceAliyun matches type in (aliyun,oss)
    • StorageAccessServiceMinio matches type in (minio)
    • StorageAccessServiceS3 matches type in (s3)
    • StorageAccessServiceFile matches type in (fs, file)

    Each implementation has different requirements for tokens: endpoint is required when type is aliyun or minio; region is required when type is s3 and endpoint is empty; the fs/file type requires tokens named rootDir and serviceProvider. Please refer to the code for more details.

    Version: 0.6.4

    Install Starwhale Server with Docker Compose

    Prerequisites

    Usage

    Start up the server

    wget https://raw.githubusercontent.com/star-whale/starwhale/main/docker/compose/compose.yaml
    GLOBAL_IP=${your_accessible_ip_for_server} ; docker compose up

    GLOBAL_IP is the IP address of the Controller; it must be reachable by all swcli clients, both those inside docker containers and those on other user machines.

    compose.yaml contains the Starwhale Controller, MySQL, and MinIO services. You can create a compose.override.yaml which, as its name implies, contains configuration overrides for compose.yaml. The available configurations are specified here.

    Version: 0.6.4

    Install Starwhale Server with Docker

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage to save datasets, models, and others.

    Please make sure pods on the Kubernetes cluster can access the port exposed by the Starwhale Server installation.

    Prepare an env file for Docker

    Starwhale Server can be configured by environment variables.

    An env file template for Docker is here. You may create your own env file by modifying the template.

    Prepare a kubeconfig file [Optional][SW_SCHEDULER=k8s]

    The kubeconfig file is used for accessing the Kubernetes cluster. For more information about kubeconfig files, see the Official Kubernetes Documentation.

    If you have a local kubectl command-line tool installed, you can run kubectl config view to see your current configuration.
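
    One way (among others) to prepare a self-contained kubeconfig file for the next step is to export your current context with credentials inlined; the output path is just an illustration:

    kubectl config view --minify --raw > ./starwhale-kubeconfig.yaml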

    Run the Docker image

    docker run -it -d --name starwhale-server -p 8082:8082 \
    --restart unless-stopped \
    --mount type=bind,source=<path to your kubeconfig file>,destination=/root/.kube/config,readonly \
    --env-file <path to your env file> \
    ghcr.io/star-whale/server:0.5.6

    For users in mainland China, use the docker image docker-registry.starwhale.cn/star-whale/server instead.

    Version: 0.6.4

    Install Starwhale Server with Helm

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage system to save datasets, models, and others.
    • Helm 3.2.0+.

    The Starwhale Helm Charts include MySQL and MinIO as dependencies. If you do not have your own MySQL instance or any S3-compatible object storage available, you can install them through the Helm Charts. Please check Installation Options to learn how to install Starwhale Server with MySQL and MinIO.

    Create a service account on Kubernetes for Starwhale Server

    If Kubernetes RBAC is enabled (in Kubernetes 1.6+, RBAC is enabled by default), Starwhale Server can not work properly unless it is started by a service account with at least the following permissions:

    Resource | API Group | Get | List | Watch | Create | Delete
    jobs | batch | Y | Y | Y | Y | Y
    pods | core | Y | Y | Y | |
    nodes | core | Y | Y | Y | |
    events | "" | Y | | | |

    Example:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: starwhale-role
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "batch"
        resources:
          - jobs
        verbs:
          - create
          - get
          - list
          - watch
          - delete
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - watch
          - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: starwhale-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: starwhale-role
    subjects:
      - kind: ServiceAccount
        name: starwhale
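
    For example, an illustrative sketch (not part of the original guide) of applying the manifest above, assuming it is saved as starwhale-rbac.yaml and the service account lives in the starwhale namespace:

    # create the namespace and service account if they do not exist yet
    kubectl create namespace starwhale
    kubectl -n starwhale create serviceaccount starwhale
    kubectl apply -f starwhale-rbac.yaml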

    Downloading Starwhale Helm Charts

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update

    Installing Starwhale Server

    helm install starwhale-server starwhale/starwhale-server -n starwhale --create-namespace

    If you have a local kubectl command-line tool installed, you can run kubectl get pods -n starwhale to check if all pods are running.

    Updating Starwhale Server

    helm repo update
    helm upgrade starwhale-server starwhale/starwhale-server

    Uninstalling Starwhale Server

    helm delete starwhale-server
    Version: 0.6.4

    Install Starwhale Server with Minikube

    Prerequisites

    Starting Minikube

    minikube start --addons ingress

    For users in the mainland of China, please run the following commands:

    minikube start --kubernetes-version=1.25.3 --image-repository=docker-registry.starwhale.cn/minikube --base-image=docker-registry.starwhale.cn/minikube/k8s-minikube/kicbase:v0.0.42

    minikube addons enable ingress --images="KubeWebhookCertgenPatch=ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0,KubeWebhookCertgenCreate=ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0,IngressController=ingress-nginx/controller:v1.9.4"

    The docker registry docker-registry.starwhale.cn/minikube currently only caches the images for Kubernetes 1.25.3. Alternatively, you can use the Aliyun mirror:

    minikube start --image-mirror-country=cn

    minikube addons enable ingress --images="KubeWebhookCertgenPatch=kube-webhook-certgen:v20231011-8b53cabe0,KubeWebhookCertgenCreate=kube-webhook-certgen:v20231011-8b53cabe0,IngressController=nginx-ingress-controller:v1.9.4" --registries="KubeWebhookCertgenPatch=registry.cn-hangzhou.aliyuncs.com/google_containers,KubeWebhookCertgenCreate=registry.cn-hangzhou.aliyuncs.com/google_containers,IngressController=registry.cn-hangzhou.aliyuncs.com/google_containers"

    If there is no kubectl binary on your machine, you can use minikube kubectl or add the alias kubectl="minikube kubectl --".

    Installing Starwhale Server

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update
    helm pull starwhale/starwhale --untar --untardir ./charts

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.global.yaml

    For users in mainland China, use values.minikube.cn.yaml:

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.cn.yaml

    After the installation is successful, the following prompt message appears:

    Release "starwhale" has been upgraded. Happy Helming!
    NAME: starwhale
    LAST DEPLOYED: Tue Feb 14 16:25:03 2023
    NAMESPACE: starwhale
    STATUS: deployed
    REVISION: 14
    NOTES:
    ******************************************
    Chart Name: starwhale
    Chart Version: 0.5.6
    App Version: latest
    Starwhale Image:
    - server: ghcr.io/star-whale/server:latest

    ******************************************
    Controller:
    - visit: http://controller.starwhale.svc
    Minio:
    - web visit: http://minio.starwhale.svc
    - admin visit: http://minio-admin.starwhale.svc
    MySQL:
    - port-forward:
    - run: kubectl port-forward --namespace starwhale svc/mysql 3306:3306
    - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale
    Please run the following command for the domains searching:
    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc " | sudo tee -a /etc/hosts
    ******************************************
    Login Info:
    - starwhale: u:starwhale, p:abcd1234
    - minio admin: u:minioadmin, p:minioadmin

    *_* Enjoy to use Starwhale Platform. *_*

    Checking Starwhale Server status

    Keep checking the minikube service status until all deployments are running (this may take 3~5 minutes):

    kubectl get deployments -n starwhale
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    controller   1/1     1            1           5m
    minio        1/1     1            1           5m
    mysql        1/1     1            1           5m

    Visiting for local

    Make the Starwhale controller accessible locally with the following command:

    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc  minio-admin.starwhale.svc " | sudo tee -a /etc/hosts

    Then you can visit http://controller.starwhale.svc in your local web browser.

    Visiting for others

    • Step 1: in the Starwhale Server machine

      for temporary use with socat command:

      # install socat at first, ref: https://howtoinstall.co/en/socat
      sudo socat TCP4-LISTEN:80,fork,reuseaddr,bind=0.0.0.0 TCP4:`minikube ip`:80

      When you kill the socat process, the shared access will be blocked. iptables may be a better choice for long-term use.

    • Step 2: in the other machines

      # for macOSX or Linux environment, run the command in the shell.
      echo "${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc" | sudo tee -a /etc/hosts

      # for Windows environment, run the command in the PowerShell with administrator permission.
      Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "`n${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc"
    Version: 0.6.4

    Starwhale Server Environment Example

    ################################################################################
    # *** Required ***
    # The external Starwhale server URL. For example: https://cloud.starwhale.ai
    SW_INSTANCE_URI=

    # The listening port of Starwhale Server
    SW_CONTROLLER_PORT=8082

    # The maximum upload file size. This setting affects dataset and model uploads when copied from outside.
    SW_UPLOAD_MAX_FILE_SIZE=20480MB
    ################################################################################
    # The base URL of the Python Package Index to use when creating a runtime environment.
    SW_PYPI_INDEX_URL=http://10.131.0.1/repository/pypi-hosted/simple/

    # Extra URLs of package indexes to use in addition to the base url.
    SW_PYPI_EXTRA_INDEX_URL=

    # Space separated hostnames. When any host specified in the base URL or extra URLs does not have a valid SSL
    # certification, use this option to trust it anyway.
    SW_PYPI_TRUSTED_HOST=
    ################################################################################
    # The JWT token expiration time. When the token expires, the server will request the user to login again.
    SW_JWT_TOKEN_EXPIRE_MINUTES=43200

    # *** Required ***
    # The JWT secret key. All strings are valid, but we strongly recommend you to use a random string with at least 16 characters.
    SW_JWT_SECRET=
    ################################################################################
    # The scheduler controller to use. Valid values are:
    # docker: Controller schedule jobs by leveraging docker
    # k8s: Controller schedule jobs by leveraging Kubernetes
    SW_SCHEDULER=k8s

    # The Kubernetes namespace to use when running a task when SW_SCHEDULER is k8s
    SW_K8S_NAME_SPACE=default

    # The path on the Kubernetes host node's filesystem to cache Python packages. Use the setting only if you have
    # the permission to use host node's filesystem. The runtime environment setup process may be accelerated when the host
    # path cache is used. Leave it blank if you do not want to use it.
    SW_K8S_HOST_PATH_FOR_CACHE=

    # The ip for the containers created by Controller when SW_SCHEDULER is docker
    SW_DOCKER_CONTAINER_NODE_IP=127.0.0.1
    ###############################################################################
    # *** Required ***
    # The object storage system type. Valid values are:
    # s3: [AWS S3](https://aws.amazon.com/s3) or other s3-compatible object storage systems
    # aliyun: [Aliyun OSS](https://www.alibabacloud.com/product/object-storage-service)
    # minio: [MinIO](https://min.io)
    # file: Local filesystem
    SW_STORAGE_TYPE=

    # The path prefix for all data saved on the storage system.
    SW_STORAGE_PREFIX=
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is file.

    # The root directory to save data.
    # This setting is only used when SW_STORAGE_TYPE is file.
    SW_STORAGE_FS_ROOT_DIR=/usr/local/starwhale
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is not file.

    # *** Required ***
    # The name of the bucket to save data.
    SW_STORAGE_BUCKET=

    # *** Required ***
    # The endpoint URL of the object storage service.
    # This setting is only used when SW_STORAGE_TYPE is s3 or aliyun.
    SW_STORAGE_ENDPOINT=

    # *** Required ***
    # The access key used to access the object storage system.
    SW_STORAGE_ACCESSKEY=

    # *** Required ***
    # The secret access key used to access the object storage system.
    SW_STORAGE_SECRETKEY=

    # *** Optional ***
    # The region of the object storage system.
    SW_STORAGE_REGION=

    # Starwhale Server will use multipart upload when uploading a large file. This setting specifies the part size.
    SW_STORAGE_PART_SIZE=5MB
    ################################################################################
    # MySQL settings

    # *** Required ***
    # The hostname/IP of the MySQL server.
    SW_METADATA_STORAGE_IP=

    # The port of the MySQL server.
    SW_METADATA_STORAGE_PORT=3306

    # *** Required ***
    # The database used by Starwhale Server
    SW_METADATA_STORAGE_DB=starwhale

    # *** Required ***
    # The username of the MySQL server.
    SW_METADATA_STORAGE_USER=

    # *** Required ***
    # The password of the MySQL server.
    SW_METADATA_STORAGE_PASSWORD=
    ################################################################################

    # The cache directory for the WAL files. Point it to a mounted volume or host path with enough space.
    # If not set, the WAL files will be saved in the docker runtime layer, and will be lost when the container is restarted.
    SW_DATASTORE_WAL_LOCAL_CACHE_DIR=
    Version: 0.6.4

    How to Organize and Manage Resources with Starwhale Projects

    Project is the basic unit for organizing and managing resources (such as models, datasets, runtime environments, etc.). You can create and manage projects based on your needs. For example, you can create projects by business team, product line, or models. One user can create and participate in one or more projects.

    Project type

    There are two types of projects:

    • Private project: The project (and related resources in the project) is only visible to project members with permission. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    • Public project: The project (and related resources in the project) is visible to all Starwhale users. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    Create a project

    1. Click the Create button in the upper right corner of the project list page;
    2. Enter a name for the project. Pay attention to avoiding duplicate names. For more information, please see Names in Starwhale
    3. Select the Project Type, which is defaulted to private project and can be selected as public according to needs;
    4. Fill in the description content;
    5. To finish, click the Submit button.

    Edit a project

    The name, privacy and description of a project can be edited.

    1. Go to the project list page and find the project that needs to be edited by searching for the project name, then click the Edit Project button;
    2. Edit the items that need to be edited;
    3. Click Submit to save the edited content;
    4. If you're editing multiple projects, repeat steps 1 through 3.

    View a project

    My projects

    On the project list page, only my projects are displayed by default. My projects are the projects in which the current user participates as a project member or owner.

    Project sorting

    On the project list page, projects can be sorted by "Recently visited", "Project creation time from new to old", or "Project creation time from old to new", according to your needs.

    Delete a project

    Once a project is deleted, all related resources (such as datasets, models, runtimes, evaluations, etc.) will be deleted and cannot be restored.

    1. Enter the project list page and search for the project name to find the project that needs to be deleted. Hover your mouse over the project you want to delete, then click the Delete button;
    2. Follow the prompts, enter the relevant information, click Confirm to delete the project, or click Cancel to cancel the deletion;
    3. If you are deleting multiple projects, repeat the above steps.

    Manage project member

    Only users with the admin role can assign people to the project. The project owner has the admin role by default.

    Add a member

    1. Click Manage Members to go to the project member list page;
    2. Click the Add Member button in the upper right corner.
    3. Enter the Username you want to add, select a project role for the user in the project.
    4. Click Submit to complete.
    5. If you're adding multiple members, repeat steps 1 through 4.

    Remove a member

    1. On the project list page or project overview tab, click Manage Members to go to the project member list page.
    2. Search for the username you want to delete, then click the Delete button.
    3. Click Yes to delete the user from this project, click No to cancel the deletion.
    4. If you're removing multiple members, repeat steps 1 through 3.

    Edit a member's role

    1. Hover your mouse over the project you want to edit, then click Manage Members to go to the project member list page.
    2. Find the username you want to adjust through searching, click the Project Role drop-down menu, and select a new project role. For more information on roles, please take a look at Roles and permissions in Starwhale.
    Version: 0.6.4

    Configuration

    Standalone Instance is installed on the user's laptop or development server, providing isolation at the level of Linux/macOS users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.

    The ~/.config/starwhale/config.yaml file has permissions set to 0o600 to ensure security, as it contains sensitive information such as encryption keys. Users are advised not to change the file permissions. You can customize your swcli configuration with swcli config edit:

    swcli config edit

    config.yaml example

    The typical config.yaml file is as follows:

    • The default instance is local.
    • cloud-cn/cloud-k8s/pre-k8s are the server/cloud instances, local is the standalone instance.
    • The local storage root directory for the Standalone Instance is set to /home/liutianwei/.starwhale.

    current_instance: local
    instances:
      cloud-cn:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-28 18:41:05 CST
        uri: https://cloud.starwhale.cn
        user_name: starwhale
        user_role: normal
      cloud-k8s:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-19 16:10:01 CST
        uri: http://cloud.pre.intra.starwhale.ai
        user_name: starwhale
        user_role: normal
      local:
        current_project: self
        type: standalone
        updated_at: 2022-06-09 16:14:02 CST
        uri: local
        user_name: liutianwei
      pre-k8s:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-19 18:06:50 CST
        uri: http://console.pre.intra.starwhale.ai
        user_name: starwhale
        user_role: normal
    link_auths:
      - ak: starwhale
        bucket: users
        connect_timeout: 10.0
        endpoint: http://10.131.0.1:9000
        read_timeout: 100.0
        sk: starwhale
        type: s3
    storage:
      root: /home/liutianwei/.starwhale
    version: '2.0'

    config.yaml definition

    Parameter | Description | Type | Default Value | Required
    current_instance | The name of the default instance to use. It is usually set using the swcli instance select command. | String | self | Yes
    instances | Managed instances, including Standalone, Server and Cloud Instances. There must be at least one Standalone Instance named "local" and one or more Server/Cloud Instances. You can log in to a new instance with swcli instance login and log out from an instance with swcli instance logout. | Dict | Standalone Instance named "local" | Yes
    instances.{instance-alias-name}.sw_token | Login token for Server/Cloud Instances. It is only effective for Server/Cloud Instances. Subsequent swcli operations on Server/Cloud Instances will use this token. Note that tokens have an expiration time, typically set to one month, which can be configured within the Server/Cloud Instance. | String | | Cloud - Yes, Standalone - No
    instances.{instance-alias-name}.type | Type of the instance, currently can only be "cloud" or "standalone". | Choice[string] | | Yes
    instances.{instance-alias-name}.uri | For Server/Cloud Instances, the URI is an http/https address. For Standalone Instances, the URI is set to "local". | String | | Yes
    instances.{instance-alias-name}.user_name | User's name. | String | | Yes
    instances.{instance-alias-name}.current_project | Default Project under the current instance. It will be used to fill the "project" field in the URI representation by default. You can set it using the swcli project select command. | String | | Yes
    instances.{instance-alias-name}.user_role | User's role. | String | normal | Yes
    instances.{instance-alias-name}.updated_at | The last updated time for this instance configuration. | Time format string | | Yes
    storage | Settings related to local storage. | Dict | | Yes
    storage.root | The root directory for Standalone Instance's local storage. Typically, if there is insufficient space in the home directory and you manually move data files to another location, you can modify this field. | String | ~/.starwhale | Yes
    version | The version of config.yaml, currently only supports 2.0. | String | 2.0 | Yes

    You can use starwhale.Link to point to your assets. The URI in the Link can be whatever you need (only S3-like and HTTP schemes are implemented), such as s3://10.131.0.1:9000/users/path. However, Links may need to be authenticated; you can configure the auth info in link_auths.

    link_auths:
      - type: s3
        ak: starwhale
        bucket: users
        region: local
        connect_timeout: 10.0
        endpoint: http://10.131.0.1:9000
        read_timeout: 100.0
        sk: starwhale

    Items in link_auths will match the URIs in Links automatically. An s3-typed link_auth matches Links by looking up the bucket and endpoint.

    Version: 0.6.4

    Starwhale Client (swcli) User Guide

    The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure Python 3 (requires Python 3.7 ~ 3.11) so that it can be easily installed with the pip command. Currently, swcli only supports Linux and macOS; Windows support is coming soon.

    Version: 0.6.4

    Installation Guide

    We can use swcli to complete all tasks for Starwhale Instances. swcli is written in pure Python 3 and can be installed easily with the pip command. Here are some installation tips that can help you get a cleaner swcli Python environment without ambiguity or dependency conflicts.

    Installing Advice

    DO NOT install Starwhale in your system's global Python environment. It may cause Python dependency conflicts.

    Prerequisites

    • Python 3.7 ~ 3.11
    • Linux or macOS
    • Conda (optional)

    In the Ubuntu system, you can run the following commands:

    sudo apt-get install python3 python3-venv python3-pip

    #If you want to install multi python versions
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.7 python3.8 python3.9 python3-pip python3-venv python3.8-venv python3.7-venv python3.9-venv

    swcli works on macOS. If you run into issues with the default system Python3 on macOS, try installing Python3 through Homebrew:

    brew install python3

    Install swcli

    Install with venv

    python3 -m venv ~/.cache/venv/starwhale
    source ~/.cache/venv/starwhale/bin/activate
    python3 -m pip install starwhale

    swcli --version

    sudo ln -sf "$(which swcli)" /usr/local/bin/

    Install with conda

    conda create --name starwhale --yes  python=3.9
    conda activate starwhale
    python3 -m pip install starwhale

    swcli --version

    sudo ln -sf "$(which swcli)" /usr/local/bin/

    👏 Now, you can use swcli in the global environment.

    Install for the special scenarios

    # for Audio processing
    python -m pip install starwhale[audio]

    # for Image processing
    python -m pip install starwhale[pillow]

    # for swcli model server command
    python -m pip install starwhale[server]

    # for built-in online serving
    python -m pip install starwhale[online-serve]

    # install all dependencies
    python -m pip install starwhale[all]

    Update swcli

    #for venv
    python3 -m pip install --upgrade starwhale

    #for conda
    conda run -n starwhale python3 -m pip install --upgrade starwhale

    Uninstall swcli

    python3 -m pip uninstall starwhale

    rm -rf ~/.config/starwhale
    rm -rf ~/.starwhale
    Version: 0.6.4

    About the .swignore file

    The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore file is mainly used in the Starwhale Model building process. By default, the swcli model build command or the starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.

    PATTERN FORMAT

    • Each line in a swignore file specifies a pattern, which matches files and directories.
    • A blank line matches no files, so it can serve as a separator for readability.
    • An asterisk * matches anything except a slash.
    • A line starting with # serves as a comment.
    • Wildcard expressions are supported, for example: *.jpg, *.png.

    Auto Ignored files or dirs

    If you want to include the automatically ignored files or dirs, you can add the --add-all option to the swcli model build command, as shown in the example after the following list.

    • __pycache__/
    • *.py[cod]
    • *$py.class
    • venv installation dir
    • conda installation dir
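
    An illustrative sketch (the workdir "." is an assumption) of building a model package while keeping the automatically ignored files:

    swcli model build . --add-all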

    Example

    Here is the .swignore file used in the MNIST example:

    venv/*
    .git/*
    .history*
    .vscode/*
    .venv/*
    data/*
    .idea/*
    *.py[cod]
    Version: 0.6.4

    Starwhale Resources URI

    tip

    Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.

    (Figure: concepts-org.jpg — how Starwhale resources are organized)

    Instance URI

    Instance URI can be either:

    • local: standalone instance.
    • [http(s)://]<hostname or ip>[:<port>]: cloud instance with HTTP address.
    • [cloud://]<cloud alias>: cloud or server instance with an alias name, which can be configured in the instance login phase.
    caution

    "local" is different from "localhost". The former means the local standalone instance without a controller, while the latter implies a controller listening at the default port 8082 on the localhost.

    Example:

    # log in Starwhale Cloud; the alias is swcloud
    swcli instance login --username <your account name> --password <your password> https://cloud.starwhale.ai --alias swcloud

    # copy a model from the local instance to the cloud instance
    swcli model copy mnist/version/latest swcloud/project/<your account name>:demo

    # copy a runtime to a Starwhale Server instance: http://localhost:8081
    swcli runtime copy pytorch/version/v1 http://localhost:8081/project/<your account name>:demo

    Project URI

    Project URI is in the format [<Instance URI>/project/]<project name>. If the instance URI is not specified, use the current instance instead.

    Example:

    swcli project select self   # select the self project in the current instance
    swcli project info local/project/self # inspect self project info in the local instance

    Model/Dataset/Runtime URI

    • Model URI: [<Project URI>/model/]<model name>[/version/<version id|tag>].
    • Dataset URI: [<Project URI>/dataset/]<dataset name>[/version/<version id|tag>].
    • Runtime URI: [<Project URI>/runtime/]<runtime name>[/version/<version id|tag>].
    tip
    • swcli supports human-friendly short version id. You can type the first few characters of the version id, provided it is at least four characters long and unambiguous. However, the recover command must use the complete version id.
    • If the project URI is not specified, the default project will be used.
    • You can always use the version tag instead of the version id.

    Example:

    swcli model info mnist/version/hbtdenjxgm4ggnrtmftdgyjzm43tioi  # inspect model info, model name: mnist, version:hbtdenjxgm4ggnrtmftdgyjzm43tioi
    swcli model remove mnist/version/hbtdenj # short version
    swcli model info mnist # inspect mnist model info
    swcli model run mnist --runtime pytorch-mnist --dataset mnist # use the default latest tag

    Job URI

    • format: [<Project URI>/job/]<job id>.
    • If the project URI is not specified, the default project will be used.

    Example:

    swcli job info mezdayjzge3w   # Inspect mezdayjzge3w version in default instance and default project
    swcli job info local/project/self/job/mezday # Inspect the local instance, self project, with short job id:mezday

    The default instance

    When the instance part of a project URI is omitted, the default instance is used instead. The default instance is the one selected by the swcli instance login or swcli instance use command.

    The default project

    When the project parts of Model/Dataset/Runtime/Evaluation URIs are omitted, the default project is used instead. The default project is the one selected by the swcli project use command.

    Version: 0.6.5

    Starwhale Cloud User Guide

    Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is https://cloud.starwhale.cn.

    Version: 0.6.5

    Contribute to Starwhale

    Getting Involved/Contributing

    We welcome and encourage all contributions to Starwhale, including and not limited to:

    • Describe the problems encountered during use.
    • Submit feature request.
    • Discuss in Slack and Github Issues.
    • Code Review.
    • Improve docs, tutorials and examples.
    • Fix Bug.
    • Add Test Case.
    • Code readability and code comments to improve readability.
    • Develop new features.
    • Write enhancement proposal.

    You can get involved, get updates and contact Starwhale developers in the following ways:

    Starwhale Resources

    Code Structure

    • client: swcli and Python SDK in pure Python3, which includes all Standalone Instance features.
      • api: Python SDK.
      • cli: Command Line Interface entrypoint.
      • base: Python base abstractions.
      • core: Starwhale core concepts, including Dataset, Model, Runtime, Project, Job, Evaluation, etc.
      • utils: Python utilities lib.
    • console: frontend with React + TypeScript.
    • server: Starwhale Controller written in Java, which includes all Starwhale Cloud Instance backend APIs.
    • docker: Helm Charts and dockerfiles.
    • docs: Starwhale official documentation.
    • example: Example code.
    • scripts: Bash and Python scripts for E2E testing, software releases, etc.

    Fork and clone the repository

    You will need to fork the code of Starwhale repository and clone it to your local machine.

    • Fork the Starwhale repository: Fork Starwhale GitHub Repo. For more usage details, please refer to: Fork a repo

    • Install Git-LFS: Git-LFS

       git lfs install
    • Clone code to local machine

      git clone https://github.com/${your username}/starwhale.git

    Development environment for Standalone Instance

    Standalone Instance is written in Python3. When you want to modify swcli and the SDK, you need to set up the development environment.

    Standalone development environment prerequisites

    • OS: Linux or macOS
    • Python: 3.7~3.11
    • Docker: >=19.03 (optional)
    • Python isolated env tools: Python venv, virtualenv or conda, etc.

    Building from source code

    Based on the previous step, clone to the local directory: starwhale, and enter the client subdirectory:

    cd starwhale/client

    Create an isolated python environment with conda:

    conda create -n starwhale-dev python=3.8 -y
    conda activate starwhale-dev

    Install client package and python dependencies into the starwhale-dev environment:

    make install-sw
    make install-dev-req

    Validate with the swcli --version command. In the development environment, the version is 0.0.0.dev0:

    ❯ swcli --version
    swcli, version 0.0.0.dev0

    ❯ which swcli
    /home/username/anaconda3/envs/starwhale-dev/bin/swcli

    Modifying the code

    When you modify the code, you do not need to install the Python package (run the make install-sw command) again. The .editorconfig file is recognized by most IDEs and code editors, which helps maintain consistent coding styles across developers.

    Lint and Test

    Run unit tests, E2E tests, mypy lint, flake lint and isort checks in the starwhale directory.

    make client-all-check

    Development environment for Cloud Instance

    Cloud Instance is written in Java(backend) and React+TypeScript(frontend).

    Development environment for Console

    Development environment for Server

    • Language: Java
    • Build tool: Maven
    • Development framework: Spring Boot + Mybatis
    • Unit test framework: Junit5
      • Mockito used for mocking
      • Hamcrest used for assertion
      • Testcontainers used for providing lightweight, throwaway instances of common databases and Selenium web browsers that can run in a Docker container.
    • Check style tool: maven-checkstyle-plugin

    Server development environment prerequisites

    • OS: Linux, macOS or Windows
    • Docker: >=19.03
    • JDK: >=11
    • Maven: >=3.8.1
    • Mysql: >=8.0.29
    • Minio
    • Kubernetes cluster/Minikube(If you don't have a k8s cluster, you can use Minikube as an alternative for development and debugging)

    Modify the code and add unit tests

    Now you can enter the corresponding module to modify and adjust the code on the server side. The main business code directory is src/main/java, and the unit test directory is src/test/java.

    Execute code check and run unit tests

    cd starwhale/server
    mvn clean test

    Deploy the server at local machine

    • Dependent services that need to be deployed

      • Minikube (optional): Minikube can be used when there is no Kubernetes cluster. Here is the installation doc: Minikube

        minikube start
        minikube addons enable ingress
        minikube addons enable ingress-dns
      • Mysql

        docker run --name sw-mysql -d \
        -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=starwhale \
        -e MYSQL_USER=starwhale \
        -e MYSQL_PASSWORD=starwhale \
        -e MYSQL_DATABASE=starwhale \
        mysql:latest
      • Minio

        docker run --name minio -d \
        -p 9000:9000 --publish 9001:9001 \
        -e MINIO_DEFAULT_BUCKETS='starwhale' \
        -e MINIO_ROOT_USER="minioadmin" \
        -e MINIO_ROOT_PASSWORD="minioadmin" \
        bitnami/minio:latest
    • Package server program

      If you need to deploy the front-end at the same time as the server, you can execute the front-end build command first and then run 'mvn clean package'; the compiled front-end files will be packaged automatically.

      Use the following command to package the program

      cd starwhale/server
      mvn clean package
    • Specify the environment required for server startup

      # Minio env
      export SW_STORAGE_ENDPOINT=http://${Minio IP,default is:127.0.0.1}:9000
      export SW_STORAGE_BUCKET=${Minio bucket,default is:starwhale}
      export SW_STORAGE_ACCESSKEY=${Minio accessKey,default is:starwhale}
      export SW_STORAGE_SECRETKEY=${Minio secretKey,default is:starwhale}
      export SW_STORAGE_REGION=${Minio region,default is:local}
      # kubernetes env
      export KUBECONFIG=${the '.kube' file path}\.kube\config

      export SW_INSTANCE_URI=http://${Server IP}:8082
      export SW_METADATA_STORAGE_IP=${Mysql IP,default: 127.0.0.1}
      export SW_METADATA_STORAGE_PORT=${Mysql port,default: 3306}
      export SW_METADATA_STORAGE_DB=${Mysql dbname,default: starwhale}
      export SW_METADATA_STORAGE_USER=${Mysql user,default: starwhale}
      export SW_METADATA_STORAGE_PASSWORD=${user password,default: starwhale}
    • Deploy server service

      You can use the IDE or the command to deploy.

      java -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar
    • Debug

      There are two ways to debug the modified function:

      • Use swagger-ui for interface debugging, visit /swagger-ui/index.html to find the corresponding api
      • Debug the corresponding function directly in the ui (provided that the front-end code has been built in advance according to the instructions when packaging)
    Version: 0.6.5

    Names in Starwhale

    Names mean project names, model names, dataset names, runtime names, and tag names.

    Names Limitation

    • Names are case-insensitive.
    • A name MUST only consist of letters A-Z a-z, digits 0-9, the hyphen character -, the dot character ., and the underscore character _.
    • A name should always start with a letter or the _ character.
    • The maximum length of a name is 80 characters.

    Names uniqueness requirement

    • The resource name should be a unique string within its owner. For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project.
    • The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.
    • Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.
    • Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".
    • Garbage-collected resources' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".
    Version: 0.6.5

    Project in Starwhale

    "Project" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.

    Starwhale Server/Cloud projects are grouped by accounts. Starwhale Standalone does not have accounts, so you will not see any account name prefix in Starwhale Standalone projects. Starwhale Server/Cloud projects can be either "public" or "private". A public project means all users on the same instance are assigned a "guest" role to the project by default. For more information about roles, see Roles and permissions in Starwhale.

    A self project is created automatically and configured as the default project in Starwhale Standalone.

    Version: 0.6.5

    Roles and permissions in Starwhale

    Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions; Starwhale Standalone does not. The Administrator role is automatically created and assigned to the user "admin". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.

    Projects have three roles:

    • Admin - Project administrators can read and write project data and assign project roles to users.
    • Maintainer - Project maintainers can read and write project data.
    • Guest - Project guests can only read project data.
    Action | Admin | Maintainer | Guest
    Manage project members | Yes | |
    Edit project | Yes | Yes |
    View project | Yes | Yes | Yes
    Create evaluations | Yes | Yes |
    Remove evaluations | Yes | Yes |
    View evaluations | Yes | Yes | Yes
    Create datasets | Yes | Yes |
    Update datasets | Yes | Yes |
    Remove datasets | Yes | Yes |
    View datasets | Yes | Yes | Yes
    Create models | Yes | Yes |
    Update models | Yes | Yes |
    Remove models | Yes | Yes |
    View models | Yes | Yes | Yes
    Create runtimes | Yes | Yes |
    Update runtimes | Yes | Yes |
    Remove runtimes | Yes | Yes |
    View runtimes | Yes | Yes | Yes

    The user who creates a project becomes the first project administrator. They can assign roles to other users later.

    Version: 0.6.5

    Resource versioning in Starwhale

    • Starwhale manages the history of all models, datasets, and runtimes. Every update to a specific resource appends a new version of the history.
    • Versions are identified by a version id which is a random string generated automatically by Starwhale and are ordered by their creation time.
    • Versions can have tags. Starwhale uses version tags to provide a human-friendly representation of versions. By default, Starwhale attaches a default tag to each version. The default tag is the letter "v", followed by a number. For each versioned resource, the first version is tagged with "v0", the second with "v1", and so on. There is also a special tag "latest" that always points to the last version. When a version is removed, its default tag will not be reused. For example, if a model has tags "v0, v1, v2" and "v2" is removed, the tags become "v0, v1", and the next tag will be "v3" instead of "v2" again (see the sketch after this list). You can attach your own tags to any version and remove them at any time.
    • Starwhale uses a linear history model. There is neither branch nor cycle in history.
    • History can not be rolled back. When a version is to be reverted, Starwhale clones the version and appends it as a new version to the end of the history. Versions in history can be manually removed and recovered.
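
    A minimal sketch (illustrative only, not Starwhale code) of the default tag rule above: the counter only moves forward, so a removed version's default tag is never reused.

    class DefaultTagAllocator:
        """Toy model of the "v0, v1, v2, ..." default tag numbering."""

        def __init__(self) -> None:
            self._next = 0

        def allocate(self) -> str:
            tag = f"v{self._next}"
            self._next += 1
            return tag

    tags = DefaultTagAllocator()
    assert [tags.allocate() for _ in range(3)] == ["v0", "v1", "v2"]
    # even if the version tagged "v2" is removed, the next version still gets "v3"
    assert tags.allocate() == "v3"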
    Version: 0.6.5

    Starwhale Dataset User Guide

    overview

    Design Overview

    Starwhale Dataset Positioning

    The Starwhale Dataset contains three core stages: data construction, data loading, and data visualization. It is a data management tool for the ML/DL field. Starwhale Dataset can directly use the environment built by Starwhale Runtime, and can be seamlessly integrated with Starwhale Model and Starwhale Evaluation. It is an important part of the Starwhale MLOps toolchain.

    According to the classification of MLOps Roles in Machine Learning Operations (MLOps): Overview, Definition, and Architecture, the three stages of Starwhale Dataset target the following user groups:

    • Data construction: Data Engineer, Data Scientist
    • Data loading: Data Scientist, ML Developer
    • Data visualization: Data Engineer, Data Scientist, ML Developer

    mlops-users

    Core Functions

    • Efficient loading: The original dataset files are stored in external storage such as OSS or NAS, and are loaded on demand without having to save to disk.
    • Simple construction: Supports one-click dataset construction from Image/Video/Audio directories, json files and Huggingface datasets, and also supports writing Python code to build completely custom datasets (see the construction sketch after this list).
    • Versioning: Can perform version tracking, data append and other operations, and avoid duplicate data storage through the internally abstracted ObjectStore.
    • Sharing: Implement bidirectional dataset sharing between Standalone instances and Cloud/Server instances through the swcli dataset copy command.
    • Visualization: The web interface of Cloud/Server instances can present multi-dimensional, multi-type data visualization of datasets.
    • Artifact storage: The Standalone instance can store locally built or distributed swds series files, while the Cloud/Server instance uses object storage to provide centralized swds artifact storage.
    • Seamless Starwhale integration: Starwhale Dataset can use the runtime environment built by Starwhale Runtime to build datasets. Starwhale Evaluation and Starwhale Model can directly specify the dataset through the --dataset parameter to complete automatic data loading, which facilitates scenarios such as model evaluation and inference.
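
    A rough sketch of the one-click construction mentioned above (folder paths and dataset names are placeholders; see the Python SDK reference later in this documentation for the full parameters):

    from starwhale import Dataset

    # build datasets from an image folder, a json string and a Huggingface repo
    img_ds = Dataset.from_folder(folder="/path/to/images", kind="image", name="my-images")
    json_ds = Dataset.from_json(name="translation", json_text='[{"en": "hello", "zh-cn": "你好"}]')
    hf_ds = Dataset.from_huggingface("mnist", "mnist")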

    Key Elements

    • swds virtual package file: swds is different from swmp and swrt. It is not a single packaged file, but a virtual concept that specifically refers to a directory that contains dataset-related files for a version of the Starwhale dataset, including _manifest.yaml, dataset.yaml, dataset build Python scripts, and data file links, etc. You can use the swcli dataset info command to view where the swds is located. swds is the abbreviation of Starwhale Dataset.

    swds-tree.png

    • swcli dataset command line: A set of dataset-related commands, including construction, distribution and management functions. See CLI Reference for details.
    • dataset.yaml configuration file: Describes the dataset construction process. It can be completely omitted and specified through swcli dataset build parameters. dataset.yaml can be considered as a configuration file representation of the swcli dataset build command line parameters. swcli dataset build parameters take precedence over dataset.yaml.
    • Dataset Python SDK: Includes data construction, data loading, and several predefined data types. See Python SDK for details.
    • Python scripts for dataset construction: A series of scripts written using the Starwhale Python SDK to build datasets.

    Best Practices

    The construction of Starwhale Dataset is performed independently. If third-party libraries need to be introduced when writing construction scripts, using Starwhale Runtime can simplify Python dependency management and ensure reproducible dataset construction. The Starwhale platform will provide built-in open source datasets wherever possible, so users can copy them for immediate use.

    Command Line Grouping

    The Starwhale Dataset command line can be divided into the following stages from the perspective of usage phases:

    • Construction phase
      • swcli dataset build
    • Visualization phase
      • swcli dataset diff
      • swcli dataset head
    • Distribution phase
      • swcli dataset copy
    • Basic management
      • swcli dataset tag
      • swcli dataset info
      • swcli dataset history
      • swcli dataset list
      • swcli dataset summary
      • swcli dataset remove
      • swcli dataset recover

    Starwhale Dataset Viewer

    Currently, the Web UI in the Cloud/Server instance can visually display the dataset. Only DataTypes from the Python SDK can be correctly interpreted by the frontend, with mappings as follows:

    • Image: Display thumbnails, enlarged images, MASK type images, support image/png, image/jpeg, image/webp, image/svg+xml, image/gif, image/apng, image/avif formats.
    • Audio: Displayed as an audio wave graph, playable, supports audio/mp3 and audio/wav formats.
    • Video: Displayed as a video, playable, supports video/mp4, video/avi and video/webm formats.
    • GrayscaleImage: Display grayscale images, support x/grayscale format.
    • Text: Display text, support text/plain format, set encoding format, default is utf-8.
    • Binary and Bytes: Not supported for display currently.
    • Link: The above multimedia types all support specifying links as storage paths.

    Starwhale Dataset Data Format

    The dataset consists of multiple rows, each row being a sample, each sample containing several features. The features have a dict-like structure with some simple restrictions [L]:

    • The dict keys must be str type.
    • The dict values must be Python basic types like int/float/bool/str/bytes/dict/list/tuple, or Starwhale built-in data types.
    • For the same key across different samples, the value types do not need to stay the same.
    • If the value is a list or tuple, the element data types must be consistent.
    • For dict values, the restrictions are the same as [L].

    Example:

    {
        "img": GrayscaleImage(
            link=Link(
                "123",
                offset=32,
                size=784,
                _swds_bin_offset=0,
                _swds_bin_size=8160,
            )
        ),
        "label": 0,
    }

    File Data Handling

    Starwhale Dataset handles file type data in a special way. You can ignore this section if you don't care about Starwhale's implementation.

    According to actual usage scenarios, Starwhale Dataset has two ways of handling file-type data, both based on the starwhale.BaseArtifact base class:

    • swds-bin: Starwhale merges the data into several large files in its own binary format (swds-bin), which can efficiently perform indexing, slicing and loading.
    • remote-link: If the user's original data is stored in some external storage such as OSS or NAS, with a lot of original data that is inconvenient to move or has already been encapsulated by some internal dataset implementation, then you only need to use links in the data to establish indexes.

    In the same Starwhale dataset, two types of data can be included simultaneously.
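
    A minimal sketch of the two storage styles (the bucket path and file names are placeholders, and the exact constructor arguments may vary between SDK versions):

    from starwhale import dataset, Image, Link

    with dataset("mixed-demo") as ds:
        # swds-bin style: raw bytes are packed into Starwhale's own binary files
        ds.append({"img": Image(open("local/0.png", "rb").read()), "label": 0})
        # remote-link style: only a reference to external storage is recorded
        ds.append({"img": Image(link=Link("s3://bucket/path/1.png")), "label": 1})
        ds.commit()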

    Version: 0.6.5

    The dataset.yaml Specification

    tip

    dataset.yaml is optional for the swcli dataset build command.

    Building Starwhale Dataset uses dataset.yaml. Omitting dataset.yaml allows describing related configurations in swcli dataset build command line parameters. dataset.yaml can be considered as a file-based representation of the build command line configuration.

    YAML Field Descriptions

    Field | Description | Required | Type | Default
    name | Name of the Starwhale Dataset | Yes | String |
    handler | Importable address of a class that inherits starwhale.SWDSBinBuildExecutor, starwhale.UserRawBuildExecutor or starwhale.BuildExecutor, or a function that returns a Generator or iterable object. Format is {module path}:{class name or function name} | Yes | String |
    desc | Dataset description | No | String | ""
    version | dataset.yaml format version, currently only "1.0" is supported | No | String | 1.0
    attr | Dataset build parameters | No | Dict |
    attr.volume_size | Size of each data file in the swds-bin dataset. Can be a number in bytes, or a number plus unit like 64M, 1GB etc. | No | Int or Str | 64MB
    attr.alignment_size | Data alignment size of each data block in the swds-bin dataset. If set to 4k, and a data block is 7.9K, 0.1K padding will be added to make the block size a multiple of alignment_size, improving page size and read efficiency. | No | Integer or String | 128
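
    A small arithmetic sketch of the alignment_size behavior described above (illustrative only):

    import math

    def padded_size(block_size: int, alignment_size: int) -> int:
        # round the block size up to the next multiple of alignment_size
        return math.ceil(block_size / alignment_size) * alignment_size

    # a 7.9K block with 4k alignment gets about 0.1K of padding, ending up at 8K
    assert padded_size(int(7.9 * 1024), 4 * 1024) == 8 * 1024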

    Examples

    Simplest Example

    name: helloworld
    handler: dataset:ExampleProcessExecutor

    The helloworld dataset uses the ExampleProcessExecutor class in dataset.py, located in the same directory as dataset.yaml, to build data.

    MNIST Dataset Build Example

    name: mnist
    handler: mnist.dataset:DatasetProcessExecutor
    desc: MNIST data and label test dataset
    attr:
      alignment_size: 128
      volume_size: 4M

    Example with handler as a generator function

    dataset.yaml contents:

    name: helloworld
    handler: dataset:iter_item

    dataset.py contents:

    def iter_item():
        for i in range(10):
            yield {"img": f"image-{i}".encode(), "label": i}
    Refer to the link.

    Take v0.13.0-rc.1 as an example:

    kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0-rc.1/nvidia-device-plugin.yml

    Note: This operation will run the NVIDIA device plugin on all Kubernetes nodes. If it was configured before, it will be updated. Please carefully evaluate the image version to use.

  • Confirm the GPU can be discovered and used in the cluster. Refer to the command below and check that nvidia.com/gpu appears in the Capacity section of the Jetson node; the GPU is then recognized normally by the Kubernetes cluster.

    # kubectl describe node orin | grep -A15 Capacity
    Capacity:
    cpu: 12
    ephemeral-storage: 59549612Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 31357608Ki
    nvidia.com/gpu: 1
    pods: 110
  • Build and Use Custom Images

    The l4t-jetpack image mentioned earlier covers general use. If we need a more streamlined image or one with more features, we can build it based on l4t-base. For reference Dockerfiles, see the image Starwhale built for mnist.

    Version: 0.6.5

    Virtual Kubelet as Kubernetes nodes

    Introduction

    Virtual Kubelet is an open source framework that can simulate a K8s node by mimicking the communication between kubelet and the K8s cluster.

    This solution is widely used by major cloud vendors for serverless container cluster solutions, such as Alibaba Cloud's ASK, Amazon's AWS Fargate, etc.

    Principles

    The virtual kubelet framework implements the kubelet interfaces for a Node; with simple configuration, it can simulate a node.

    We only need to implement the PodLifecycleHandler interface to support:

    • Create, update, delete Pod
    • Get Pod status
    • Get Container logs

    Adding Devices to the Cluster

    If our device cannot serve as a K8s node due to resource constraints or other situations, we can manage these devices by using virtual kubelet to simulate a proxy node.

    The control flow between Starwhale Controller and the device is as follows:


    ┌──────────────────────┐ ┌────────────────┐ ┌─────────────────┐ ┌────────────┐
    │ Starwhale Controller ├─────►│ K8s API Server ├────►│ virtual kubelet ├────►│ Our device │
    └──────────────────────┘ └────────────────┘ └─────────────────┘ └────────────┘

    Virtual kubelet converts the Pod orchestration information sent by Starwhale Controller into control behaviors for the device, such as executing a command via ssh on the device, or sending a message via USB or serial port.

    Below is an example of using virtual kubelet to control an SSH-enabled device that has not joined the cluster:

    1. Prepare certificates
    • Create a file csr.conf with the following content:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name

    [req_distinguished_name]

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names

    [alt_names]
    IP = 1.2.3.4
    • Generate the private key and the certificate signing request (CSR):
    openssl genrsa -out vklet-key.pem 2048
    openssl req -new -key vklet-key.pem -out vklet.csr -subj '/CN=system:node:1.2.3.4;/C=US/O=system:nodes' -config ./csr.conf
    • Submit the certificate:
    cat vklet.csr | base64 | tr -d "\n"  # use the output as the content of spec.request in csr.yaml

    csr.yaml:

    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
      name: vklet
    spec:
      request: ******************
      signerName: kubernetes.io/kube-apiserver-client
      expirationSeconds: 1086400
      usages:
        - client auth

    kubectl apply -f csr.yaml
    kubectl certificate approve vklet
    kubectl get csr vklet -o jsonpath='{.status.certificate}' | base64 -d > vklet-cert.pem

    Now we have vklet-cert.pem.

    • Compile virtual kubelet:
    git clone https://github.com/virtual-kubelet/virtual-kubelet
    cd virtual-kubelet && make build

    Create the node configuration file mock.json:

    {
        "virtual-kubelet": {
            "cpu": "100",
            "memory": "100Gi",
            "pods": "100"
        }
    }

    Start virtual kubelet:

    export APISERVER_CERT_LOCATION=/path/to/vklet-cert.pem
    export APISERVER_KEY_LOCATION=/path/to/vklet-key.pem
    export KUBECONFIG=/path/to/kubeconfig
    virtual-kubelet --provider mock --provider-config /path/to/mock.json

    Now we have simulated a node with 100 cores + 100GB memory using virtual kubelet.

    • Add a PodLifecycleHandler implementation to convert the important information in the Pod orchestration into SSH command execution on the device, and collect logs for the Starwhale Controller.

    See ssh executor for a concrete implementation.

    Version: 0.6.5

    Starwhale Model Evaluation

    Design Overview

    Starwhale Evaluation Positioning

    The goal of Starwhale Evaluation is to provide end-to-end management for model evaluation, including creating Jobs, distributing Tasks, viewing model evaluation reports and basic management. Starwhale Evaluation is a specific application of Starwhale Model, Starwhale Dataset, and Starwhale Runtime in the model evaluation scenario. Starwhale Evaluation is part of the MLOps toolchain built by Starwhale. More applications like Starwhale Model Serving, Starwhale Training will be included in the future.

    Core Features

    • Visualization: Both swcli and the Web UI provide visualization of model evaluation results, supporting comparison of multiple results. Users can also customize logging of intermediate processes.

    • Multi-scenario Adaptation: Whether it's a notebook, desktop or distributed cluster environment, the same commands, Python scripts, artifacts and operations can be used for model evaluation. This satisfies different computational power and data volume requirements.

    • Seamless Starwhale Integration: Leverage Starwhale Runtime for the runtime environment, Starwhale Dataset as data input, and run models from Starwhale Model. Configuration is simple whether using swcli, Python SDK or Cloud/Server instance Web UI.

    Key Elements

    • swcli model run: Command line for bulk offline model evaluation.
    • swcli model serve: Command line for online model evaluation.

    Best Practices

    Command Line Grouping

    From the perspective of completing an end-to-end Starwhale Evaluation workflow, commands can be grouped as:

    • Preparation Stage
      • swcli dataset build or Starwhale Dataset Python SDK
      • swcli model build or Starwhale Model Python SDK
      • swcli runtime build
    • Evaluation Stage
      • swcli model run
      • swcli model serve
    • Results Stage
      • swcli job info
    • Basic Management
      • swcli job list
      • swcli job remove
      • swcli job recover

    Abstraction job-step-task

    • job: A model evaluation task is a job, which contains one or more steps.

    • step: A step corresponds to a stage in the evaluation process. With the default PipelineHandler, steps are predict and evaluate. For custom evaluation processes using @handler, @evaluation.predict, @evaluation.evaluate decorators, steps are the decorated functions. Steps can have dependencies, forming a DAG. A step contains one or more tasks. Tasks in the same step have the same logic but different inputs. A common approach is to split the dataset into multiple parts, with each part passed to a task. Tasks can run in parallel.

    • task: A task is the final running entity. In Cloud/Server instances, a task is a container in a Pod. In Standalone instances, a task is a Python Thread.

    The job-step-task abstraction is the basis for implementing distributed runs in Starwhale Evaluation.
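
    For example, a custom evaluation with a predict step and an evaluate step might look roughly like the following sketch (the needs argument and the handler signatures are assumptions made for illustration; check the Python SDK reference for the exact API):

    from starwhale import evaluation

    @evaluation.predict
    def predict_step(data):
        # each task runs this function over its own slice of the dataset rows
        ...

    @evaluation.evaluate(needs=[predict_step])
    def evaluate_step(predict_results):
        # aggregates the outputs of all predict tasks into an evaluation report
        ...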

    Version: 0.6.5

    FAQs

    Error "413 Client Error: Request Entity Too Large" when Copying Starwhale Models to Server

    • Cause: The proxy-body-size set in the Ingress (Nginx default is 1MB) is smaller than the actual uploaded file size.
    • Solution: Check the Ingress configuration of the Starwhale Server and add nginx.ingress.kubernetes.io/proxy-body-size: 30g to the annotations field.

    RBAC Authorization Error when Starwhale Server Submits Jobs to Kubernetes Cluster

    The Kubernetes cluster has RBAC enabled, and the service account for the Starwhale Server does not have sufficient permissions. It requires at least the following permissions:

    Resource | API Group | Get | List | Watch | Create | Delete
    jobs | batch | Y | Y | Y | Y | Y
    pods | core | Y | Y | Y | |
    nodes | core | Y | Y | Y | |
    events | "" | Y | Y | Y | |

    Example YAML:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: starwhale-role
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "batch"
        resources:
          - jobs
        verbs:
          - create
          - get
          - list
          - watch
          - delete
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - watch
          - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: starwhale-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: starwhale-role
    subjects:
      - kind: ServiceAccount
        name: starwhale
    Version: 0.6.5

    Getting started with Starwhale Cloud

    Starwhale Cloud is hosted on Aliyun with the domain name https://cloud.starwhale.cn. In the future, we will launch the service on AWS with the domain name https://cloud.starwhale.ai. It's important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.

    You need to install the Starwhale Client (swcli) at first.

    Sign Up for Starwhale Cloud and create your first project

    You can either directly log in with your GitHub or Weixin account or sign up for an account. You will be asked for an account name if you log in with your GitHub or Weixin account.

    Then you can create a new project. In this tutorial, we will use the name demo for the project name.

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named helloworld
    • a Starwhale dataset named mnist64
    • a Starwhale runtime named helloworld

    Login to the cloud instance

    swcli instance login --username <your account name> --password <your password> --alias swcloud https://cloud.starwhale.cn

    Copy the dataset, model, and runtime to the cloud instance

    swcli model copy helloworld swcloud/project/<your account name>:demo
    swcli dataset copy mnist64 swcloud/project/<your account name>:demo
    swcli runtime copy helloworld swcloud/project/<your account name>:demo

    Run an evaluation with the web UI

    console-create-job.gif

    Congratulations! You have completed the Starwhale Cloud Getting Started Guide.

    Version: 0.6.5

    Getting started

    First, you need to install the Starwhale Client (swcli), which can be done by running the following command:

    python3 -m pip install starwhale

    For more information, see the swcli installation guide.

    Depending on your instance type, there are three getting-started guides available for you:

    • Getting started with Starwhale Standalone - This guide helps you run an MNIST evaluation on your desktop PC/laptop. It is the fastest and simplest way to get started with Starwhale.
    • Getting started with Starwhale Server - This guide helps you install Starwhale Server in your private data center and run an MNIST evaluation. At the end of the tutorial, you will have a Starwhale Server instance where you can run model evaluations on and manage your datasets and models.
    • Getting started with Starwhale Cloud - This guide helps you create an account on Starwhale Cloud and run an MNIST evaluation. It is the easiest way to experience all Starwhale features.
    Version: 0.6.5

    Getting Started with Starwhale Runtime

    This article demonstrates how to build a Starwhale Runtime of the Pytorch environment and how to use it. This runtime can meet the dependency requirements of the six examples in Starwhale: mnist, speech commands, nmt, cifar10, ag_news, and PennFudan. Links to relevant code: example/runtime/pytorch.

    You can learn the following things from this tutorial:

    • How to build a Starwhale Runtime.
    • How to use a Starwhale Runtime in different scenarios.
    • How to release a Starwhale Runtime.

    Prerequisites

    Run the following command to clone the example code:

    git clone https://github.com/star-whale/starwhale.git
    cd starwhale/example/runtime/pytorch # for users in the mainland of China, use pytorch-cn-mirror instead.

    Build Starwhale Runtime

    ❯ swcli -vvv runtime build --yaml runtime.yaml

    Use Starwhale Runtime in the standalone instance

    Use Starwhale Runtime in the shell

    # Activate the runtime
    swcli runtime activate pytorch

    swcli runtime activate will download all python dependencies of the runtime, which may take a long time.

    All dependencies are ready in your python environment when the runtime is activated. It is similar to source venv/bin/activate of virtualenv or the conda activate command of conda. If you close the shell or switch to another shell, you need to reactivate the runtime.

    Use Starwhale Runtime in swcli

    # Use the runtime when building a Starwhale Model
    swcli model build . --runtime pytorch
    # Use the runtime when building a Starwhale Dataset
    swcli dataset build --yaml /path/to/dataset.yaml --runtime pytorch
    # Run a model evaluation with the runtime
    swcli model run --uri mnist/version/v0 --dataset mnist --runtime pytorch

    Copy Starwhale Runtime to another instance

    You can copy the runtime to a server/cloud instance, which can then be used in the server/cloud instance or downloaded by other users.

    # Copy the runtime to a server instance named 'pre-k8s'
    ❯ swcli runtime copy pytorch cloud://pre-k8s/project/starwhale
    Version: 0.6.5

    Getting started with Starwhale Server

    Install Starwhale Server

    To install Starwhale Server, see the installation guide.

    Create your first project

    Login to the server

    Open your browser and enter your server's URL in the address bar. Log in with your username (starwhale) and password (abcd1234).

    console-artifacts.gif

    Create a new project

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named helloworld
    • a Starwhale dataset named mnist64
    • a Starwhale runtime named helloworld

    Copy the dataset, the model, and the runtime to the server

    swcli instance login --username <your username> --password <your password> --alias server <Your Server URL>

    swcli model copy helloworld server/project/demo
    swcli dataset copy mnist64 server/project/demo
    swcli runtime copy helloworld server/project/demo

    Use the Web UI to run an evaluation

    Navigate to the "demo" project in your browser and create a new evaluation.

    console-create-job.gif

    Congratulations! You have completed the Starwhale Server Getting Started Guide.

    Version: 0.6.5

    Getting started with Starwhale Standalone

    When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.

    We also provide a Jupyter Notebook example; you can try it in Google Colab or in your local vscode/jupyterlab.

    Downloading Examples

    Download Starwhale examples by cloning the Starwhale project via:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/star-whale/starwhale.git --depth 1
    cd starwhale

    To save time when downloading the examples, we skip git-lfs and fetch only the latest commit. We will use the ML/DL HelloWorld code MNIST to start your Starwhale journey. The following steps are all performed in the starwhale directory.

    Core Workflow

    Building Starwhale Runtime

    Runtime example codes are in the example/helloworld directory.

    • Build the Starwhale runtime bundle:

      swcli -vvv runtime build --yaml example/helloworld/runtime.yaml
      tip

      When you first build a runtime, creating an isolated python environment and downloading python dependencies will take a lot of time. The command execution time is related to the network environment of the machine and the number of packages in the runtime.yaml. Using a suitable pypi mirror and cache config in the ~/.pip/pip.conf file is a recommended practice.

      For users in the mainland of China, the following conf file is an option:

      [global]
      cache-dir = ~/.cache/pip
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple
      extra-index-url = https://mirrors.aliyun.com/pypi/simple/
    • Check your local Starwhale Runtime:

      swcli runtime list
      swcli runtime info helloworld

    Building a Model

    Model example codes are in the example/helloworld directory.

    • Build a Starwhale model:

      swcli -vvv model build example/helloworld --name helloworld -m evaluation --runtime helloworld
    • Check your local Starwhale models:

      swcli model list
      swcli model info helloworld

    Building a Dataset

    Dataset example codes are in the example/helloworld directory.

    • Build a Starwhale dataset:

      swcli runtime activate helloworld
      python3 example/helloworld/dataset.py
      deactivate
    • Check your local Starwhale dataset:

      swcli dataset list
      swcli dataset info mnist64
      swcli dataset head mnist64

    Running an Evaluation Job

    • Create an evaluation job:

      swcli -vvv model run --uri helloworld --dataset mnist64 --runtime helloworld
    • Check the evaluation result

      swcli job list
      swcli job info $(swcli job list | grep mnist | grep success | awk '{print $1}' | head -n 1)

    Congratulations! You have completed the Starwhale Standalone Getting Started Guide.

    Version: 0.6.5

    What is Starwhale

    Overview

    Starwhale is an MLOps/LLMOps platform that makes your model creation, evaluation and publication much easier. It aims to create a handy tool for data scientists and machine learning engineers.

    Starwhale helps you:

    • Keep track of your training/testing dataset history including data items and their labels, so that you can easily access them.
    • Manage your model packages that you can share across your team.
    • Run your models in different environments, either on an Nvidia GPU server or on an embedded device like Cherry Pi.
    • Create an online service with an interactive Web UI for your models.

    Starwhale is designed to be an open platform. You can create your own plugins to meet your requirements.

    Deployment options

    Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).

    You can start using Starwhale with one of the following instance types:

    • Starwhale Standalone - Rather than a running service, Starwhale Standalone is actually a repository that resides in your local file system. It is created and managed by the Starwhale Client (swcli). You only need to install swcli to use it. Currently, each user on a single machine can have only ONE Starwhale Standalone instance. We recommend you use the Starwhale Standalone to build and test your datasets, runtime, and models before pushing them to Starwhale Server/Cloud instances.
    • Starwhale Server - Starwhale Server is a service deployed on your local server. Besides text-only results from the Starwhale Client (swcli), Starwhale Server provides Web UI for you to manage your datasets and models, evaluate your models in your local Kubernetes cluster, and review the evaluation results.
    • Starwhale Cloud - Starwhale Cloud is a managed service hosted on public clouds. By registering an account on https://cloud.starwhale.cn, you are ready to use Starwhale without needing to install, operate, and maintain your own instances. Starwhale Cloud also provides public resources for you to download, like datasets, runtimes, and models. Check the "starwhale/public" project on Starwhale Cloud for more details.

    When choosing which instance type to use, consider the following:

    Instance Type | Deployment location | Maintained by | User Interface | Scalability
    Starwhale Standalone | Your laptop or any server in your data center | Not required | Command line | Not scalable
    Starwhale Server | Your data center | Yourself | Web UI and command line | Scalable, depends on your Kubernetes cluster
    Starwhale Cloud | Public cloud, like AWS or Aliyun | the Starwhale Team | Web UI and command line | Scalable, but currently limited by the freely available resources on the cloud
    Version: 0.6.5

    Starwhale Model

    overview

    A Starwhale Model is a standard format for packaging machine learning models that can be used for various purposes, like model fine-tuning, model evaluation, and online serving. A Starwhale Model contains the model file, inference codes, configuration files, and any other files required to run the model.

    Create a Starwhale Model

    There are two ways to create a Starwhale Model: by swcli or by Python SDK.

    Create a Starwhale Model by swcli

    To create a Starwhale Model by swcli, you need to define a model.yaml, which describes some required information about the model package, and run the following command:

    swcli model build . --model-yaml /path/to/model.yaml

    For more information about the command and model.yaml, see the swcli reference. model.yaml is optional for model building.

    Create a Starwhale Model by Python SDK

    from starwhale import model, predict

    @predict
    def predict_img(data):
        ...

    model.build(name="mnist", modules=[predict_img])

    Model Management

    Model Management by swcli

    Command | Description
    swcli model list | List all Starwhale Models in a project
    swcli model info | Show detail information about a Starwhale Model
    swcli model copy | Copy a Starwhale Model to another location
    swcli model remove | Remove a Starwhale Model
    swcli model recover | Recover a previously removed Starwhale Model

    Model Management by WebUI

    Model History

    Starwhale Models are versioned. The general rules about versions are described in Resource versioning in Starwhale.

    Model History Management by swcli

    Command | Description
    swcli model history | List all versions of a Starwhale Model
    swcli model info | Show detail information about a Starwhale Model version
    swcli model diff | Compare two versions of a Starwhale model
    swcli model copy | Copy a Starwhale Model version to a new one
    swcli model remove | Remove a Starwhale Model version
    swcli model recover | Recover a previously removed Starwhale Model version

    Model Evaluation

    Model Evaluation by swcli

    Command | Description
    swcli model run | Create an evaluation with a Starwhale Model

    The Storage Format

    The Starwhale Model is a tarball file that contains the source directory.

    Version: 0.6.5

    The model.yaml Specification

    tip

    model.yaml is optional for swcli model build.

    When building a Starwhale Model using the swcli model build command, you can specify a yaml file that follows a specific format via the --model-yaml parameter to simplify specifying build parameters.

    Even without specifying the --model-yaml parameter, swcli model build will automatically look for a model.yaml file under the ${workdir} directory and extract parameters from it. Parameters specified on the swcli model build command line take precedence over equivalent configurations in model.yaml, so you can think of model.yaml as a file-based representation of the build command line.

    When building a Starwhale Model using the Python SDK, the model.yaml file does not take effect.

    YAML Field Descriptions

    Field | Description | Required | Type | Default
    name | Name of the Starwhale Model, equivalent to the --name parameter. | No | String |
    run.modules | Python Modules searched during model build, can specify multiple entry points for model execution. Format is a Python Importable path. Equivalent to the --module parameter. | Yes | List[String] |
    run.handler | Deprecated alias of run.modules, can only specify one entry point. | No | String |
    version | model.yaml format version, currently only supports "1.0" | No | String | 1.0
    desc | Model description, equivalent to the --desc parameter. | No | String |

    Example


    name: helloworld

    run:
      modules:
        - src.evaluator

    desc: "example yaml"

    A Starwhale model named helloworld, searches for functions decorated with @evaluation.predict, @evaluation.evaluate or @handler, or classes inheriting from PipelineHandler in src/evaluator.py under ${WORKDIR} of the swcli model build command. These functions or classes will be added to the list of runnable entry points for the Starwhale model. When running the model via swcli model run or Web UI, select the corresponding entry point (handler) to run.

    model.yaml is optional; parameters defined in the yaml can also be specified via swcli command line parameters.


    swcli model build . --model-yaml model.yaml

    Is equivalent to:


    swcli model build . --name helloworld --module src.evaluator --desc "example yaml"

    Version: 0.6.5

    Starwhale Dataset SDK

    dataset

    Get a starwhale.Dataset object by creating a new dataset or loading an existing one.

    @classmethod
    def dataset(
        cls,
        uri: t.Union[str, Resource],
        create: str = _DatasetCreateMode.auto,
        readonly: bool = False,
    ) -> Dataset:

    Parameters

    • uri: (str or Resource, required)
      • The dataset uri or Resource object.
    • create: (str, optional)
      • The mode of dataset creating. The options are auto, empty and forbid.
        • auto mode: If the dataset already exists, creation is ignored. If it does not exist, the dataset is created automatically.
        • empty mode: If the dataset already exists, an Exception is raised; If it does not exist, an empty dataset is created. This mode ensures the creation of a new, empty dataset.
        • forbid mode: If the dataset already exists, nothing is done. If it does not exist, an Exception is raised. This mode ensures the existence of the dataset.
      • The default is auto.
    • readonly: (bool, optional)
      • For an existing dataset, you can specify the readonly=True argument to ensure the dataset is in readonly mode.
      • Default is False.

    Examples

    from starwhale import dataset, Image

    # create a new dataset named mnist, and add a row into the dataset
    # dataset("mnist") is equal to dataset("mnist", create="auto")
    ds = dataset("mnist")
    ds.exists() # returns False, the "mnist" dataset does not exist yet
    ds.append({"img": Image(), "label": 1})
    ds.commit()
    ds.close()

    # load a cloud instance dataset in readonly mode
    ds = dataset("cloud://remote-instance/project/starwhale/dataset/mnist", readonly=True)
    labels = [row.features.label for row in ds]
    ds.close()

    # load a read/write dataset with a specified version
    ds = dataset("mnist/version/mrrdczdbmzsw")
    ds[0].features.label = 1
    ds.commit()
    ds.close()

    # create an empty dataset
    ds = dataset("mnist-empty", create="empty")

    # ensure the dataset existence
    ds = dataset("mnist-existed", create="forbid")

    class starwhale.Dataset

    starwhale.Dataset implements the abstraction of a Starwhale dataset, and can operate on datasets in Standalone/Server/Cloud instances.

    from_huggingface

    from_huggingface is a classmethod that can convert a Huggingface dataset into a Starwhale dataset.

    def from_huggingface(
        cls,
        name: str,
        repo: str,
        subset: str | None = None,
        split: str | None = None,
        revision: str = "main",
        alignment_size: int | str = D_ALIGNMENT_SIZE,
        volume_size: int | str = D_FILE_VOLUME_SIZE,
        mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
        cache: bool = True,
        tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • dataset name.
    • repo: (str, required)
    • subset: (str, optional)
      • The subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
    • split: (str, optional)
      • The split name. If the split name is not specified, all splits of the dataset will be built.
    • revision: (str, optional)
      • The huggingface datasets revision. The default value is main.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • cache: (bool, optional)
      • Whether to use huggingface dataset cache(download + local hf dataset).
      • The default value is True.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_huggingface("mnist", "mnist")
    print(myds[0])
    from starwhale import Dataset
    myds = Dataset.from_huggingface("mmlu", "cais/mmlu", subset="anatomy", split="auxiliary_train", revision="7456cfb")

    from_json

    from_json is a classmethod that can convert a json text into a Starwhale dataset.

    @classmethod
    def from_json(
        cls,
        name: str,
        json_text: str,
        field_selector: str = "",
        alignment_size: int | str = D_ALIGNMENT_SIZE,
        volume_size: int | str = D_FILE_VOLUME_SIZE,
        mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
        tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • name: (str, required)
      • Dataset name.
    • json_text: (str, required)
      • A json string. The from_json function deserializes this string into Python objects to start building the Starwhale dataset.
    • field_selector: (str, optional)
      • The field from which you would like to extract dataset array items.
      • The default value is "", which indicates that the json object is an array containing all the items.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset

    myds = Dataset.from_json(
        name="translation",
        json_text='[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    print(myds[0].features.en)

    from starwhale import Dataset

    myds = Dataset.from_json(
        name="translation",
        json_text='{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}',
        field_selector="content.child_content"
    )
    print(myds[0].features["zh-cn"])

    from_folder

    from_folder is a classmethod that can read Image/Video/Audio data from a specified directory and automatically convert them into a Starwhale dataset. This function supports the following features:

    • It can recursively search the target directory and its subdirectories
    • Supports extracting three types of files:
      • image: Supports png/jpg/jpeg/webp/svg/apng image types. Image files will be converted to Starwhale.Image type.
      • video: Supports mp4/webm/avi video types. Video files will be converted to Starwhale.Video type.
      • audio: Supports mp3/wav audio types. Audio files will be converted to Starwhale.Audio type.
    • Each file corresponds to one record in the dataset, with the file stored in the file field.
    • If auto_label=True, the parent directory name will be used as the label for that record, stored in the label field. Files in the root directory will not be labeled.
    • If a txt file with the same name as an image/video/audio file exists, its content will be stored as the caption field in the dataset.
    • If metadata.csv or metadata.jsonl exists in the root directory, their content will be read automatically and associated to records by file path as meta information in the dataset.
      • metadata.csv and metadata.jsonl are mutually exclusive. An exception will be thrown if both exist.
      • Each record in metadata.csv and metadata.jsonl must contain a file_name field pointing to the file path.
      • metadata.csv and metadata.jsonl are optional for dataset building.

    @classmethod
    def from_folder(
        cls,
        folder: str | Path,
        kind: str | DatasetFolderSourceType,
        name: str | Resource = "",
        auto_label: bool = True,
        alignment_size: int | str = D_ALIGNMENT_SIZE,
        volume_size: int | str = D_FILE_VOLUME_SIZE,
        mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
        tags: t.List[str] | None = None,
    ) -> Dataset:

    Parameters

    • folder: (str|Path, required)
      • The folder path from which you would like to create this dataset.
    • kind: (str|DatasetFolderSourceType, required)
      • The dataset source type you would like to use, the choices are: image, video and audio.
      • Recursively searching for files of the specified kind in folder. Other file types will be ignored.
    • name: (str|Resource, optional)
      • The dataset name you would like to use.
      • If not specified, the name is the folder name.
    • auto_label: (bool, optional)
      • Whether to auto label by the sub-folder name.
      • The default value is True.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    • Example for the normal function calling

      from starwhale import Dataset

      # create a my-image-dataset dataset from /path/to/image folder.
      ds = Dataset.from_folder(
          folder="/path/to/image",
          kind="image",
          name="my-image-dataset"
      )
    • Example for caption

      folder/dog/1.png
      folder/dog/1.txt

      1.txt content will be used as the caption of 1.png.

    • Example for metadata

      metadata.csv:

      file_name, caption
      1.png, dog
      2.png, cat

      metadata.jsonl:

      {"file_name": "1.png", "caption": "dog"}
      {"file_name": "2.png", "caption": "cat"}
    • Example for auto-labeling

      The following structure will create a dataset with 2 labels: "cat" and "dog", 4 images in total.

      folder/dog/1.png
      folder/cat/2.png
      folder/dog/3.png
      folder/cat/4.png

    __iter__

    __iter__ is a method that iterates over the dataset rows.

    from starwhale import dataset

    ds = dataset("mnist")

    for item in ds:
        print(item.index)
        print(item.features.label)  # label and img are the features of mnist.
        print(item.features.img)

    batch_iter

    batch_iter is a method that iterates over the dataset rows in batches.

    def batch_iter(
        self, batch_size: int = 1, drop_not_full: bool = False
    ) -> t.Iterator[t.List[DataRow]]:

    Parameters

    • batch_size: (int, optional)
      • batch size. The default value is 1.
    • drop_not_full: (bool, optional)
      • Whether to drop the last batch when its size is smaller than batch_size.
      • The default value is False.

    Examples

    from starwhale import dataset

    ds = dataset("mnist")
    for batch_rows in ds.batch_iter(batch_size=2):
        assert len(batch_rows) == 2
        print(batch_rows[0].features)

    __getitem__

    __getitem__ is a method that allows retrieving certain rows of data from the dataset, with usage similar to Python dict and list types.

    from starwhale import dataset

    ds = dataset("mock-int-index")

    # if the index type is string
    ds["str_key"] # get the DataRow by the "str_key" string key
    ds["start":"end"] # get a slice of the dataset by the range ("start", "end")

    ds = dataset("mock-str-index")
    # if the index type is int
    ds[1] # get the DataRow by the 1 int key
    ds[1:10:2] # get a slice of the dataset by the range (1, 10), step is 2

    __setitem__

    __setitem__ is a method that allows updating rows of data in the dataset, with usage similar to Python dicts. __setitem__ supports multi-threaded parallel data insertion.

    def __setitem__(
        self, key: t.Union[str, int], value: t.Union[DataRow, t.Tuple, t.Dict]
    ) -> None:

    Parameters

    • key: (int|str, required)
      • key is the index for each row in the dataset. The type is int or str, but a dataset only accepts one type.
    • value: (DataRow|tuple|dict, required)
      • value is the features for each row in the dataset, using a Python dict is generally recommended.

    Examples

    • Normal insertion

    Insert two rows into the test dataset, with indexes test and test2 respectively:

    from starwhale import dataset

    with dataset("test") as ds:
        ds["test"] = {"txt": "abc", "int": 1}
        ds["test2"] = {"txt": "bcd", "int": 2}
        ds.commit()
    • Parallel insertion
    from starwhale import dataset, Binary
    from concurrent.futures import as_completed, ThreadPoolExecutor

    ds = dataset("test")

    def _do_append(_start: int) -> None:
        for i in range(_start, 100):
            ds.append((i, {"data": Binary(), "label": i}))

    pool = ThreadPoolExecutor(max_workers=10)
    tasks = [pool.submit(_do_append, i * 10) for i in range(0, 9)]

    # wait for all append tasks to finish before committing
    for task in as_completed(tasks):
        task.result()

    ds.commit()
    ds.close()

    __delitem__

    __delitem__ is a method to delete certain rows of data from the dataset.

    def __delitem__(self, key: _ItemType) -> None:

    from starwhale import dataset

    ds = dataset("existed-ds")
    del ds[6:9]
    del ds[0]
    ds.commit()
    ds.close()

    append

    append is a method to append data to a dataset, similar to the append method for Python lists.

    • When appending only a features dict, each row is automatically indexed with an int starting from 0 and incrementing.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
      for i in range(0, 100):
      ds.append({"label": i, "image": Image(f"folder/{i}.png")})
      ds.commit()
    • When appending a tuple of (index, features dict), the index of each data row will not be handled automatically.

      from starwhale import dataset, Image

      with dataset("new-ds") as ds:
          for i in range(0, 100):
              ds.append((f"index-{i}", {"label": i, "image": Image(f"folder/{i}.png")}))

          ds.commit()

    extend

    extend is a method to bulk append data to a dataset, similar to the extend method for Python lists.

    from starwhale import dataset, Text

    ds = dataset("new-ds")
    ds.extend([
        (f"label-{i}", {"text": Text(), "label": i}) for i in range(0, 10)
    ])
    ds.commit()
    ds.close()

    commit

    commit is a method that flushes the current cached data to storage when called, and generates a dataset version. This version can then be used to load the corresponding dataset content afterwards.

    For a dataset, if some data is added without calling commit, but close is called or the process exits directly instead, the data will still be written to the dataset, just without generating a new version.

    @_check_readonly
    def commit(
        self,
        tags: t.Optional[t.List[str]] = None,
        message: str = "",
        force_add_tags: bool = False,
        ignore_add_tags_errors: bool = False,
    ) -> str:

    Parameters

    • tags: (list(str), optional)
      • Tags to attach to the committed version, as a list.
    • message: (str, optional)
      • commit message. The default value is empty.
    • force_add_tags: (bool, optional)
      • For Server/Cloud instances, if a tag has already been applied to another dataset version, you can use force_add_tags=True to forcibly add the tag to this version; otherwise an exception will be thrown.
      • The default is False.
    • ignore_add_tags_errors: (bool, optional)
      • Ignore any exceptions thrown when adding tags.
      • The default is False.

    Examples

    from starwhale import dataset
    with dataset("mnist") as ds:
    ds.append({"label": 1})
    ds.commit(message="init commit")

    readonly

    readonly is a property attribute indicating whether the dataset is read-only; it returns a bool value.

    from starwhale import dataset
    ds = dataset("mnist", readonly=True)
    assert ds.readonly

    loading_version

    loading_version is a property attribute, string type.

    • When loading an existing dataset, the loading_version is the related dataset version.
    • When creating a dataset that does not yet exist, the loading_version is equal to the pending_commit_version.

    pending_commit_version

    pending_commit_version is a property attribute, string type. When you call the commit function, the pending_commit_version will be recorded in the Standalone, Server or Cloud instance.

    committed_version

    committed_version is a property attribute, string type. After the commit function is called, committed_version becomes available and is equal to pending_commit_version. Accessing this attribute without calling commit first will raise an exception.
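
    A minimal sketch showing how the three version properties relate; the dataset name is an assumption:

    from starwhale import dataset

    ds = dataset("version-props-demo")
    ds.append({"label": 1})
    version = ds.commit()

    # after commit, the committed version equals the pending commit version
    assert ds.committed_version == ds.pending_commit_version == version
    ds.close()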

    remove

    remove is a method equivalent to the swcli dataset remove command, it can delete a dataset.

    def remove(self, force: bool = False) -> None:

    recover

    recover is a method equivalent to the swcli dataset recover command, it can recover a soft-deleted dataset that has not been run garbage collection.

    def recover(self, force: bool = False) -> None:

    summary

    summary is a method equivalent to the swcli dataset summary command, it returns summary information of the dataset.

    def summary(self) -> t.Optional[DatasetSummary]:

    history

    history is a method equivalent to the swcli dataset history command, it returns the history records of the dataset.

    def history(self) -> t.List[t.Dict]:
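
    Examples

    A minimal sketch that combines the management methods above (summary, history, remove and recover) on an existing dataset; the dataset name is an assumption:

    from starwhale import dataset

    ds = dataset("mnist")
    print(ds.summary())   # DatasetSummary object or None
    print(ds.history())   # list of history records

    ds.remove()   # soft-delete the dataset
    ds.recover()  # recover it before garbage collection runs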

    flush

    flush is a method that flushes temporarily cached data from memory to persistent storage. The commit and close methods will automatically call flush.
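
    A minimal usage sketch; calling flush explicitly is optional because commit and close call it automatically. The dataset name is an assumption:

    from starwhale import dataset

    ds = dataset("flush-demo")
    ds.append({"label": 1})
    ds.flush()   # persist cached rows without creating a new version
    ds.commit()
    ds.close()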

    close

    close is a method that closes opened connections related to the dataset. Dataset also implements contextmanager, so datasets can be automatically closed using with syntax without needing to explicitly call close.

    from starwhale import dataset

    ds = dataset("mnist")
    ds.close()

    with dataset("mnist") as ds:
    print(ds[0])

    head

    head is a method to show the first n rows of a dataset, equivalent to the swcli dataset head command.

    def head(self, n: int = 5, skip_fetch_data: bool = False) -> List[DataRow]:

    fetch_one

    fetch_one is a method to get the first record in a dataset, similar to head(n=1)[0].
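
    Examples

    A minimal sketch for head and fetch_one, assuming an existing mnist dataset:

    from starwhale import dataset

    ds = dataset("mnist")
    rows = ds.head(n=3)
    first = ds.fetch_one()
    assert first.index == rows[0].index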

    list

    list is a class method to list Starwhale datasets under a project URI, equivalent to the swcli dataset list command.

    @classmethod
    def list(
        cls,
        project_uri: Union[str, Project] = "",
        fullname: bool = False,
        show_removed: bool = False,
        page_index: int = DEFAULT_PAGE_IDX,
        page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[DatasetListType, Dict[str, Any]]:
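
    Examples

    A minimal usage sketch; the remote project URI is an assumption:

    from starwhale import Dataset

    # list datasets in the currently selected project
    datasets, pagination = Dataset.list()

    # list datasets in a remote project
    datasets, pagination = Dataset.list("https://cloud.starwhale.cn/project/starwhale:public")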

    copy

    copy is a method to copy a dataset to another instance, equivalent to the swcli dataset copy command.

    def copy(
        self,
        dest_uri: str,
        dest_local_project_uri: str = "",
        force: bool = False,
        mode: str = DatasetChangeMode.PATCH.value,
        ignore_tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • dest_uri: (str, required)
      • Dataset URI
    • dest_local_project_uri: (str, optional)
      • When copying a remote dataset to the local instance, this parameter sets the destination Project URI.
    • force: (bool, optional)
      • Whether to forcibly overwrite the dataset if there is already one with the same version on the target instance.
      • The default value is False.
      • When the tags are already used by another dataset version on the destination instance, use the force option or adjust the tags.
    • mode: (str, optional)
      • Dataset copy mode, default is 'patch'. Mode choices are: 'patch', 'overwrite'.
      • patch: Patch mode, only update the changed rows and columns for the remote dataset.
      • overwrite: Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • ignore_tags (List[str], optional)
      • Ignore tags when copying.
      • By default, the dataset is copied with all user custom tags.
      • latest and ^v\d+$ are the system built-in tags; they are ignored automatically.

    Examples

    from starwhale import dataset
    ds = dataset("mnist")
    ds.copy("cloud://remote-instance/project/starwhale")

    to_pytorch

    to_pytorch is a method that can convert a Starwhale dataset to a PyTorch torch.utils.data.Dataset, which can then be passed to torch.utils.data.DataLoader for use.

    It should be noted that the to_pytorch function returns a PyTorch IterableDataset.

    def to_pytorch(
        self,
        transform: t.Optional[t.Callable] = None,
        drop_index: bool = True,
        skip_default_transform: bool = False,
    ) -> torch.utils.data.Dataset:

    Parameters

    • transform: (callable, optional)
      • A transform function for input data.
    • drop_index: (bool, optional)
      • Whether to drop the index column.
    • skip_default_transform: (bool, optional)
      • If transform is not set, by default the built-in Starwhale transform function will be used to transform the data. This can be disabled with the skip_default_transform parameter.

    Examples

    import torch.utils.data as tdata
    from starwhale import dataset

    ds = dataset("mnist")

    torch_ds = ds.to_pytorch()
    torch_loader = tdata.DataLoader(torch_ds, batch_size=2)

    import typing as t

    import torch
    import torch.utils.data as tdata
    from starwhale import dataset, Text

    with dataset("mnist") as ds:
        for i in range(0, 10):
            ds.append({"txt": Text(f"data-{i}"), "label": i})

        ds.commit()

    def _custom_transform(data: t.Any) -> t.Any:
        data = data.copy()
        txt = data["txt"].to_str()
        data["txt"] = f"custom-{txt}"
        return data

    torch_loader = tdata.DataLoader(
        dataset(ds.uri).to_pytorch(transform=_custom_transform), batch_size=1
    )
    item = next(iter(torch_loader))
    assert isinstance(item["label"], torch.Tensor)
    assert item["txt"][0] in ("custom-data-0", "custom-data-1")

    to_tensorflow

    to_tensorflow is a method that can convert a Starwhale dataset to a TensorFlow tensorflow.data.Dataset.

    def to_tensorflow(self, drop_index: bool = True) -> tensorflow.data.Dataset:

    Parameters

    • drop_index: (bool, optional)
      • Whether to drop the index column.

    Examples

    from starwhale import dataset
    import tensorflow as tf

    ds = dataset("mnist")
    tf_ds = ds.to_tensorflow(drop_index=True)
    assert isinstance(tf_ds, tf.data.Dataset)

    with_builder_blob_config

    with_builder_blob_config is a method to set blob-related attributes in a Starwhale dataset. It needs to be called before making data changes.

    def with_builder_blob_config(
        self,
        volume_size: int | str | None = D_FILE_VOLUME_SIZE,
        alignment_size: int | str | None = D_ALIGNMENT_SIZE,
    ) -> Dataset:

    Parameters

    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.

    Examples

    from starwhale import dataset, Binary

    ds = dataset("mnist").with_builder_blob_config(volume_size="32M", alignment_size=128)
    ds.append({"data": Binary(b"123")})
    ds.commit()
    ds.close()

    with_loader_config

    with_loader_config is a method to set parameters for the Starwhale dataset loader process.

    def with_loader_config(
        self,
        num_workers: t.Optional[int] = None,
        cache_size: t.Optional[int] = None,
        field_transformer: t.Optional[t.Dict] = None,
    ) -> Dataset:

    Parameters

    • num_workers: (int, optional)
      • The number of workers used to load the dataset.
      • The default value is 2.
    • cache_size: (int, optional)
      • The number of prefetched data rows.
      • The default value is 20.
    • field_transformer: (dict, optional)
      • A dict that maps original feature names to new feature names.

    Examples

    from starwhale import Dataset, dataset

    Dataset.from_json(
        "translation",
        '[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
    )
    myds = dataset("translation").with_loader_config(field_transformer={"en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["en"]

    from starwhale import Dataset, dataset

    Dataset.from_json(
        "translation2",
        '[{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}]'
    )
    myds = dataset("translation2").with_loader_config(field_transformer={"content.child_content[0].en": "en-us"})
    assert myds[0].features["en-us"] == myds[0].features["content"]["child_content"][0]["en"]
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/sdk/evaluation/index.html b/0.6.5/reference/sdk/evaluation/index.html index c206a82fc..8d7e42b4a 100644 --- a/0.6.5/reference/sdk/evaluation/index.html +++ b/0.6.5/reference/sdk/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Starwhale Model Evaluation SDK

    @evaluation.predict

    The @evaluation.predict decorator defines the inference process in the Starwhale Model Evaluation, similar to the map phase in MapReduce. It contains the following core features:

    • On Server instances, request the resources needed to run.
    • Automatically read the local or remote datasets, and pass the data in the datasets one by one or in batches to the function decorated by evaluation.predict.
    • By the replicas setting, implement distributed dataset consumption to horizontally scale and shorten the time required for the model evaluation tasks.
    • Automatically store the return values of the function and the input features of the dataset into the results table, for display in the Web UI and further use in the evaluate phase.
    • The decorated function is called once for each single piece of data or each batch, to complete the inference process.

    Parameters

    • resources: (dict, optional)
      • Defines the resources required by each predict task when running on the Server instance, including memory, cpu, and nvidia.com/gpu.
      • memory: The unit is Bytes, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"memory": {"request": 100 * 1024, "limit": 200 * 1024}}.
        • If only a single number is set, the Python SDK will automatically set request and limit to the same value, e.g. resources={"memory": 100 * 1024} is equivalent to resources={"memory": {"request": 100 * 1024, "limit": 100 * 1024}}.
      • cpu: The unit is the number of CPU cores, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"cpu": {"request": 1, "limit": 2}}.
        • If only a single number is set, the SDK will automatically set request and limit to the same value, e.g. resources={"cpu": 1.5} is equivalent to resources={"cpu": {"request": 1.5, "limit": 1.5}}.
      • nvidia.com/gpu: The unit is the number of GPUs, int type is supported.
        • nvidia.com/gpu does not support setting request and limit, only a single number is supported.
      • Note: The resources parameter currently only takes effect on the Server instances. For the Cloud instances, the same can be achieved by selecting the corresponding resource pool when submitting the evaluation task. Standalone instances do not support this feature at all.
    • replicas: (int, optional)
      • The number of replicas to run predict.
      • predict defines a Step, in which there are multiple equivalent Tasks. Each Task runs on a Pod in Cloud/Server instances, and a Thread in Standalone instances.
      • When multiple replicas are specified, they are equivalent and will jointly consume the selected dataset to achieve distributed dataset consumption. It can be understood that a row in the dataset will only be read by one predict replica.
      • The default is 1.
    • batch_size: (int, optional)
      • Batch size for passing data from the dataset into the function.
      • The default is 1.
    • fail_on_error: (bool, optional)
      • Whether to interrupt the entire model evaluation when the decorated function throws an exception. If you expect some "exceptional" data to cause evaluation failures but don't want to interrupt the overall evaluation, you can set fail_on_error=False.
      • The default is True.
    • auto_log: (bool, optional)
      • Whether to automatically log the return values of the function and the input features of the dataset to the results table.
      • The default is True.
    • log_mode: (str, optional)
      • When auto_log=True, you can set log_mode to define logging the return values in plain or pickle format.
      • The default is pickle.
    • log_dataset_features: (List[str], optional)
      • When auto_log=True, you can selectively log certain features from the dataset via this parameter.
      • By default, all features will be logged.
    • needs: (List[Callable], optional)
      • Defines the prerequisites for this task to run, can use the needs syntax to implement DAG.
      • needs accepts functions decorated by @evaluation.predict, @evaluation.evaluate, and @handler.
      • The default is empty, i.e. does not depend on any other tasks.

    Input

    The decorated function needs to define input parameters to accept the dataset data. The following patterns are supported:

    • data:

      • data is a dict type that can read the features of the dataset.
      • When batch_size=1 or batch_size is not set, the label feature can be read through data['label'] or data.label.
      • When batch_size is set to > 1, data is a list.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data):
          print(data['label'])
          print(data.label)
    • data + external:

      • data is a dict type that can read the features of the dataset.
      • external is also a dict, including the index, index_with_dataset, dataset_info, context and dataset_uri keys. These attributes can be used for further fine-grained processing.
        • index: The index of the dataset row.
        • index_with_dataset: The index with the dataset info.
        • dataset_info: starwhale.core.dataset.tabular.TabularDatasetInfo Class.
        • context: starwhale.Context Class.
        • dataset_uri: starwhale.base.uri.resource.Resource Class.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, external):
          print(data['label'])
          print(data.label)
          print(external["context"])
          print(external["dataset_uri"])
    • data + **kw:

      • data is a dict type that can read the features of the dataset.
      • kw is a dict that contains external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(data, **kw):
          print(kw["external"]["context"])
          print(kw["external"]["dataset_uri"])
    • *args + **kwargs:

      • The first argument of args list is data.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args, **kw):
          print(args[0].label)
          print(args[0]["label"])
          print(kw["external"]["context"])
    • **kwargs:

      from starwhale import evaluation

      @evaluation.predict
      def predict(**kw):
          print(kw["data"].label)
          print(kw["data"]["label"])
          print(kw["external"]["context"])
    • *args:

      • *args does not contain external.
      from starwhale import evaluation

      @evaluation.predict
      def predict(*args):
          print(args[0].label)
          print(args[0]["label"])

    Examples

    from starwhale import evaluation

    @evaluation.predict
    def predict_image(data):
        ...

    @evaluation.predict(
        dataset="mnist/version/latest",
        batch_size=32,
        replicas=4,
        needs=[predict_image],
    )
    def predict_batch_images(batch_data):
        ...

    @evaluation.predict(
        resources={
            "nvidia.com/gpu": 1,
            "cpu": {"request": 1, "limit": 2},
            "memory": 200 * 1024 * 1024,  # 200MB
        },
        log_mode="plain",
    )
    def predict_with_resources(data):
        ...

    @evaluation.predict(
        replicas=1,
        log_mode="plain",
        log_dataset_features=["txt", "img", "label"],
    )
    def predict_with_selected_features(data):
        ...

    @evaluation.evaluate

    @evaluation.evaluate is a decorator that defines the evaluation process in the Starwhale Model evaluation, similar to the reduce phase in MapReduce. It contains the following core features:

    • On Server instances, request the required resources.
    • Read the data recorded in the results table automatically during the predict phase, and pass it into the function as an iterator.
    • The evaluate phase will only run one replica, and cannot define the replicas parameter like the predict phase.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
      • In the common case, it will depend on a function decorated by @evaluation.predict.
    • use_predict_auto_log: (bool, optional)
      • Defaults to True, passes an iterator that can traverse the predict results to the function.

    Input

    • When use_predict_auto_log=True (default), pass an iterator that can traverse the predict results into the function.
      • The iterated object is a dictionary containing two keys: output and input.
        • output is the element returned by the predict stage function.
        • input is the features of the corresponding dataset during the inference process, which is a dictionary type.
    • When use_predict_auto_log=False, do not pass any parameters into the function.

    Examples

    from starwhale import evaluation

    @evaluation.evaluate(needs=[predict_image])
    def evaluate_results(predict_result_iter):
        ...

    @evaluation.evaluate(
        use_predict_auto_log=False,
        needs=[predict_image],
    )
    def evaluate_results():
        ...

    class Evaluation

    starwhale.Evaluation implements the abstraction for Starwhale Model Evaluation, and can perform operations like logging and scanning for Model Evaluation on Standalone/Server/Cloud instances, to record and retrieve metrics.

    __init__

    __init__ function initializes Evaluation object.

    class Evaluation:
        def __init__(self, id: str, project: Project | str) -> None:

    Parameters

    • id: (str, required)
      • The UUID of Model Evaluation that is generated by Starwhale automatically.
    • project: (Project|str, required)
      • Project object or Project URI str.

    Example

    from starwhale import Evaluation

    standalone_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="self")
    server_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="cloud://server/project/starwhale:starwhale")
    cloud_e = Evaluation("2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/project/starwhale:llm-leaderboard")

    from_context

    from_context is a classmethod that obtains the Evaluation object under the current Context. from_context can only take effect under the task runtime environment. Calling this method in a non-task runtime environment will raise a RuntimeError exception, indicating that the Starwhale Context has not been properly set.

    @classmethod
    def from_context(cls) -> Evaluation:

    Example

    from starwhale import Evaluation

    with Evaluation.from_context() as e:
        e.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})

    log

    log is a method that logs evaluation metrics to a specific table, which can then be viewed on the Server/Cloud instance's web page or through the scan method.

    def log(
        self, category: str, id: t.Union[str, int], metrics: t.Dict[str, t.Any]
    ) -> None:

    Parameters

    • category: (str, required)
      • The category of the logged metrics, which will be used as the suffix of the Starwhale Datastore table name.
      • Each category corresponds to a Starwhale Datastore table. These tables will be isolated by the evaluation task ID and will not affect each other.
    • id: (str|int, required)
      • The ID of the logged record, unique within the table.
      • For the same table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • A dict to log metrics in key-value format.
      • Keys are of str type.
      • Values can be constant types like int, float, str, bytes, bool, or compound types like tuple, list, dict. It also supports logging Artifacts types like Starwhale.Image, Starwhale.Video, Starwhale.Audio, Starwhale.Text, Starwhale.Binary.
        • When the value contains dict type, the Starwhale SDK will automatically flatten the dict for better visualization and metric comparison.
        • For example, if metrics is {"test": {"loss": 0.99, "prob": [0.98,0.99]}, "image": [Image, Image]}, it will be stored as {"test/loss": 0.99, "test/prob": [0.98, 0.99], "image/0": Image, "image/1": Image} after flattening.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation.from_context()

    evaluation_store.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log("ppl", "1", {"a": "test", "b": 1})

    scan

    scan is a method that returns an iterator for reading data from certain model evaluation tables.

    def scan(
        self,
        category: str,
        start: t.Any = None,
        end: t.Any = None,
        keep_none: bool = False,
        end_inclusive: bool = False,
    ) -> t.Iterator:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    results = [data for data in evaluation_store.scan("label/0")]

    flush

    flush is a method that can immediately flush the metrics logged by the log method to the datastore and oss storage. If the flush method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush(self, category: str, artifacts_flush: bool = True) -> None:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
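
    Example

    A minimal usage sketch that flushes the category logged above; it assumes the code runs inside an evaluation task so that from_context works:

    from starwhale import Evaluation

    evaluation_store = Evaluation.from_context()
    evaluation_store.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.flush("label/1")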

    log_result

    log_result is a method that logs evaluation metrics to the results table, equivalent to calling the log method with category set to results. The results table is generally used to store inference results. By default, @evaluation.predict stores the return value of the decorated function in the results table; you can also store results manually using log_result.

    def log_result(self, id: t.Union[str, int], metrics: t.Dict[str, t.Any]) -> None:

    Parameters

    • id: (str|int, required)
      • The ID of the record, unique within the results table.
      • For the results table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • Same definition as the metrics parameter in the log method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})

    scan_results

    scan_results is a method that returns an iterator for reading data from the results table.

    def scan_results(
        self,
        start: t.Any = None,
        end: t.Any = None,
        keep_none: bool = False,
        end_inclusive: bool = False,
    ) -> t.Iterator:

    Parameters

    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
      • Same definition as the start parameter in the scan method.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
      • Same definition as the end parameter in the scan method.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
      • Same definition as the keep_none parameter in the scan method.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.
      • Same definition as the end_inclusive parameter in the scan method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")

    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})
    results = [data for data in evaluation_store.scan_results()]

    flush_results

    flush_results is a method that can immediately flush the metrics logged by the log_result method to the datastore and oss storage. If the flush_results method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_results(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    log_summary

    log_summary is a method that logs certain metrics to the summary table. The evaluation page on Server/Cloud instances displays data from the summary table.

    Each time it is called, Starwhale will automatically update with the unique ID of this evaluation as the row ID of the table. This function can be called multiple times during one evaluation to update different columns.

    Each project has one summary table. All evaluation tasks under that project will write summary information to this table for easy comparison between evaluations of different models.

    def log_summary(self, *args: t.Any, **kw: t.Any) -> None:

    Like the log method, log_summary will automatically flatten the dict.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")

    evaluation_store.log_summary(loss=0.99)
    evaluation_store.log_summary(loss=0.99, accuracy=0.99)
    evaluation_store.log_summary({"loss": 0.99, "accuracy": 0.99})

    get_summary

    get_summary is a method that returns the information logged by log_summary.

    def get_summary(self) -> t.Dict:
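
    Example

    A minimal usage sketch; the evaluation id is reused from the examples above:

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
    evaluation_store.log_summary(loss=0.99, accuracy=0.98)
    print(evaluation_store.get_summary())  # the summary row logged for this evaluation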

    flush_summary

    flush_summary is a method that can immediately flush the metrics logged by the log_summary method to the datastore and oss storage. If the flush_summary method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_summary(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    flush_all

    flush_all is a method that can immediately flush the metrics logged by the log, log_result and log_summary methods to the datastore and oss storage. If the flush_all method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_all(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    get_tables

    get_tables is a method that returns the names of all tables generated during model evaluation. Note that this function does not return the summary table name.

    def get_tables(self) -> t.List[str]:
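
    Example

    A minimal usage sketch; the exact table names depend on the categories that have been logged:

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
    evaluation_store.log("label/1", 1, {"loss": 0.99})
    print(evaluation_store.get_tables())  # table names for the logged categories, without the summary table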

    close

    close is a method to close the Evaluation object. close will automatically flush data to storage when called. Evaluation also implements __enter__ and __exit__ methods, which can simplify manual close calls using with syntax.

    def close(self) -> None:

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    evaluation_store.log_summary(loss=0.99)
    evaluation_store.close()

    # auto close when the with-context exits.
    with Evaluation.from_context() as e:
    e.log_summary(loss=0.99)

    @handler

    @handler is a decorator that provides the following functionalities:

    • On a Server instance, it requests the required resources to run.
    • It can control the number of replicas.
    • Multiple handlers can form a DAG through dependency relationships to control the execution workflow.
    • It can expose ports externally to run like a web handler.

    @fine_tune, @evaluation.predict and @evaluation.evaluate can be considered applications of @handler in certain specific areas. @handler is the underlying implementation of these decorators and is more fundamental and flexible.

    @classmethod
    def handler(
        cls,
        resources: t.Optional[t.Dict[str, t.Any]] = None,
        replicas: int = 1,
        needs: t.Optional[t.List[t.Callable]] = None,
        name: str = "",
        expose: int = 0,
        require_dataset: bool = False,
    ) -> t.Callable:

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
    • replicas: (int, optional)
      • Consistent with the replicas parameter definition in @evaluation.predict.
    • name: (str, optional)
      • The name displayed for the handler.
      • If not specified, use the decorated function's name.
    • expose: (int, optional)
      • The port exposed externally. When running a web handler, the exposed port needs to be declared.
      • The default is 0, meaning no port is exposed.
      • Currently only one port can be exposed.
    • require_dataset: (bool, optional)
      • Defines whether this handler requires a dataset when running.
      • If require_dataset=True, the user is required to input a dataset when creating an evaluation task on the Server/Cloud instance web page. If require_dataset=False, the user does not need to specify a dataset on the web page.
      • The default is False.

    Examples

    from starwhale import handler
    import gradio

    @handler(resources={"cpu": 1, "nvidia.com/gpu": 1}, replicas=3)
    def my_handler():
        ...

    @handler(needs=[my_handler])
    def my_another_handler():
        ...

    @handler(expose=7860)
    def chatbot():
        with gradio.Blocks() as server:
            ...
        server.launch(server_name="0.0.0.0", server_port=7860)

    @fine_tune

    fine_tune is a decorator that defines the fine-tuning process for model training.

    Some restrictions and usage suggestions:

    • fine_tune has only one replica.
    • fine_tune requires dataset input.
    • Generally, the dataset is obtained through Context.get_runtime_context() at the start of fine_tune.
    • Generally, at the end of fine_tune, the fine-tuned Starwhale model package is generated through starwhale.model.build, which will be automatically copied to the corresponding evaluation project.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.

    Examples

    from starwhale import dataset
    from starwhale import model as starwhale_model
    from starwhale import fine_tune, Context

    @fine_tune(resources={"nvidia.com/gpu": 1})
    def llama_fine_tuning():
        ctx = Context.get_runtime_context()

        if len(ctx.dataset_uris) == 2:
            # TODO: use more graceful way to get train and eval dataset
            train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
            eval_dataset = dataset(ctx.dataset_uris[1], readonly=True, create="forbid")
        elif len(ctx.dataset_uris) == 1:
            train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
            eval_dataset = None
        else:
            raise ValueError("Only support 1 or 2 datasets(train and eval dataset) for now")

        # user training code
        train_llama(
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
        )

        model_name = get_model_name()
        starwhale_model.build(name=f"llama-{model_name}-qlora-ft")

    @multi_classification

    The @multi_classification decorator uses the sklearn lib to analyze results for multi-classification problems, outputting the confusion matrix, ROC, AUC etc., and writing them to related tables in the Starwhale Datastore.

    When using it, certain requirements are placed on the return value of the decorated function, which should be (label, result) or (label, result, probability_matrix).

    def multi_classification(
        confusion_matrix_normalize: str = "all",
        show_hamming_loss: bool = True,
        show_cohen_kappa_score: bool = True,
        show_roc_auc: bool = True,
        all_labels: t.Optional[t.List[t.Any]] = None,
    ) -> t.Any:

    Parameters

    • confusion_matrix_normalize: (str, optional)
      • How to normalize the confusion matrix. Accepts three values:
        • true: normalize over the rows (true labels)
        • pred: normalize over the columns (predicted labels)
        • all: normalize over the whole matrix (rows + columns)
    • show_hamming_loss: (bool, optional)
      • Whether to calculate the Hamming loss.
      • The default is True.
    • show_cohen_kappa_score: (bool, optional)
      • Whether to calculate the Cohen kappa score.
      • The default is True.
    • show_roc_auc: (bool, optional)
      • Whether to calculate ROC/AUC. To calculate, the function needs to return a (label, result, probability_matrix) tuple, otherwise a (label, result) tuple is sufficient.
      • The default is True.
    • all_labels: (List, optional)
      • Defines all the labels.

    Examples


    import typing as t

    from starwhale import multi_classification

    @multi_classification(
        confusion_matrix_normalize="all",
        show_hamming_loss=True,
        show_cohen_kappa_score=True,
        show_roc_auc=True,
        all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
        label, result, probability_matrix = [], [], []
        return label, result, probability_matrix

    @multi_classification(
        confusion_matrix_normalize="all",
        show_hamming_loss=True,
        show_cohen_kappa_score=True,
        show_roc_auc=False,
        all_labels=[i for i in range(0, 10)],
    )
    def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int]]:
        label, result = [], []
        return label, result

    PipelineHandler

    The PipelineHandler class provides a default model evaluation workflow definition that requires users to implement the predict and evaluate functions.

    The PipelineHandler is equivalent to using the @evaluation.predict and @evaluation.evaluate decorators together - the usage looks different but the underlying model evaluation process is the same.

    Note that PipelineHandler currently does not support defining resources parameters.

    Users need to implement the following functions:

    • predict: Defines the inference process, equivalent to a function decorated with @evaluation.predict.

    • evaluate: Defines the evaluation process, equivalent to a function decorated with @evaluation.evaluate.

    import typing as t
    from typing import Any, Iterator
    from abc import ABCMeta, abstractmethod

    class PipelineHandler(metaclass=ABCMeta):
        def __init__(
            self,
            predict_batch_size: int = 1,
            ignore_error: bool = False,
            predict_auto_log: bool = True,
            predict_log_mode: str = PredictLogMode.PICKLE.value,
            predict_log_dataset_features: t.Optional[t.List[str]] = None,
            **kwargs: t.Any,
        ) -> None:
            self.context = Context.get_runtime_context()
            ...

        def predict(self, data: Any, **kw: Any) -> Any:
            raise NotImplementedError

        def evaluate(self, ppl_result: Iterator) -> Any:
            raise NotImplementedError

    Parameters

    • predict_batch_size: (int, optional)
      • Equivalent to the batch_size parameter in @evaluation.predict.
      • Default is 1.
    • ignore_error: (bool, optional)
      • Equivalent to the fail_on_error parameter in @evaluation.predict.
      • Default is False.
    • predict_auto_log: (bool, optional)
      • Equivalent to the auto_log parameter in @evaluation.predict.
      • Default is True.
    • predict_log_mode: (str, optional)
      • Equivalent to the log_mode parameter in @evaluation.predict.
      • Default is pickle.
    • predict_log_dataset_features: (List[str], optional)
      • Equivalent to the log_dataset_features parameter in @evaluation.predict.
      • Default is None, which records all features.

    PipelineHandler.run Decorator

    The PipelineHandler.run decorator can be used to describe resources for the predict and evaluate methods, supporting definitions of replicas and resources:

    • The PipelineHandler.run decorator can only decorate predict and evaluate methods in subclasses inheriting from PipelineHandler.
    • The predict method can set the replicas parameter. The replicas value for the evaluate method is always 1.
    • The resources parameter is defined and used in the same way as the resources parameter in @evaluation.predict or @evaluation.evaluate.
    • The PipelineHandler.run decorator is optional.
    • The PipelineHandler.run decorator only takes effect on Server and Cloud instances; Standalone instances do not support resource definitions.

    @classmethod
    def run(
        cls, resources: t.Optional[t.Dict[str, t.Any]] = None, replicas: int = 1
    ) -> t.Callable:

    Examples

    import typing as t

    import torch
    from starwhale import Image, PipelineHandler

    class Example(PipelineHandler):
        def __init__(self) -> None:
            super().__init__()
            self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
            self.model = self._load_model(self.device)

        @PipelineHandler.run(replicas=4, resources={"memory": 1 * 1024 * 1024 * 1024, "nvidia.com/gpu": 1})  # 1G Memory, 1 GPU
        def predict(self, data: t.Dict):
            data_tensor = self._pre(data.img)
            output = self.model(data_tensor)
            return self._post(output)

        @PipelineHandler.run(resources={"memory": 1 * 1024 * 1024 * 1024})  # 1G Memory
        def evaluate(self, ppl_result):
            result, label, pr = [], [], []
            for _data in ppl_result:
                label.append(_data["input"]["label"])
                result.extend(_data["output"][0])
                pr.extend(_data["output"][1])
            return label, result, pr

        def _pre(self, input: Image) -> torch.Tensor:
            ...

        def _post(self, input):
            ...

        def _load_model(self, device):
            ...

    Context

    The context information passed during model evaluation, including Project, Task ID, etc. The Context content is automatically injected and can be used in the following ways:

    • Inherit the PipelineHandler class and use the self.context object.
    • Get it through Context.get_runtime_context().

    Note that Context can only be used during model evaluation, otherwise the program will throw an exception.

    Currently Context can get the following values:

    • project: str
      • Project name.
    • version: str
      • Unique ID of model evaluation.
    • step: str
      • Step name.
    • total: int
      • Total number of Tasks under the Step.
    • index: int
      • Task index number, starting from 0.
    • dataset_uris: List[str]
      • List of Starwhale dataset URIs.

    Examples


    import typing as t

    from starwhale import Context, PipelineHandler

    def func():
        ctx = Context.get_runtime_context()
        print(ctx.project)
        print(ctx.version)
        print(ctx.step)
        ...

    class Example(PipelineHandler):
        def predict(self, data: t.Dict):
            print(self.context.project)
            print(self.context.version)
            print(self.context.step)

    @starwhale.api.service.api

    @starwhale.api.service.api is a decorator that provides a simple, Gradio-based Web Handler input definition. When a Web Service is launched with the swcli model serve command, the decorated function accepts external requests and returns inference results to the user, enabling online evaluation.

    Examples

    import typing as t

    import gradio
    from starwhale import Image
    from starwhale.api.service import api

    def predict_image(img):
        ...

    @api(gradio.File(), gradio.Label())
    def predict_view(file: t.Any) -> t.Any:
        with open(file.name, "rb") as f:
            data = Image(f.read(), shape=(28, 28, 1))
        _, prob = predict_image({"img": data})
        return {i: p for i, p in enumerate(prob)}

    starwhale.api.service.Service

    If you want to customize the web service implementation, you can subclass Service and override the serve method.

    import typing as t

    from starwhale.api.service import Service

    class CustomService(Service):
        def serve(self, addr: str, port: int, handler_list: t.Optional[t.List[str]] = None) -> None:
            ...

    svc = CustomService()

    @svc.api(...)
    def handler(data):
        ...

    Notes:

    • Handlers added with PipelineHandler.add_api, the api decorator, or Service.api can work together.
    • If using a custom Service, you need to instantiate the custom Service class in the model code.

    Custom Request and Response

    Request and Response are handler preprocessing and postprocessing classes for receiving user requests and returning results. They can be simply understood as pre and post logic for the handler.

    Starwhale provides built-in Request implementations for Dataset types and Json Response. Users can also customize the logic as follows:

    import typing as t

    from starwhale.api.service import (
        Request,
        Service,
        Response,
    )

    class CustomInput(Request):
        def load(self, req: t.Any) -> t.Any:
            return req

    class CustomOutput(Response):
        def __init__(self, prefix: str) -> None:
            self.prefix = prefix

        def dump(self, req: str) -> bytes:
            return f"{self.prefix} {req}".encode("utf-8")

    svc = Service()

    @svc.api(request=CustomInput(), response=CustomOutput("hello"))
    def foo(data: t.Any) -> t.Any:
        ...
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/sdk/job/index.html b/0.6.5/reference/sdk/job/index.html index eea4dca1e..bd0579b91 100644 --- a/0.6.5/reference/sdk/job/index.html +++ b/0.6.5/reference/sdk/job/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Starwhale Job SDK

    job

    Get a starwhale.Job object through the Job URI parameter, which represents a Job on Standalone/Server/Cloud instances.

    @classmethod
    def job(
        cls,
        uri: str,
    ) -> Job:

    Parameters

    • uri: (str, required)
      • Job URI format.

    Usage Example

    from starwhale import job

    # get job object of uri=https://server/job/1
    j1 = job("https://server/job/1")

    # get job from standalone instance
    j2 = job("local/project/self/job/xm5wnup")
    j3 = job("xm5wnup")

    class starwhale.Job

    starwhale.Job abstracts Starwhale Job and enables some information retrieval operations on the job.

    list

    list is a classmethod that can list the jobs under a project.

    @classmethod
    def list(
        cls,
        project: str = "",
        page_index: int = DEFAULT_PAGE_IDX,
        page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Job], Dict]:

    Parameters

    • project: (str, optional)
      • Project URI, can be projects on Standalone/Server/Cloud instances.
      • If project is not specified, the project selected by swcli project select will be used.
    • page_index: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the page number.
        • Default is 1.
        • Page numbers start from 1.
      • Standalone instances do not support paging. This parameter has no effect.
    • page_size: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the number of jobs returned per page.
      • Standalone instances do not support paging. This parameter has no effect.

    Usage Example

    from starwhale import Job

    # list jobs of current selected project
    jobs, pagination_info = Job.list()

    # list jobs of starwhale/public project in the cloud.starwhale.cn instance
    jobs, pagination_info = Job.list("https://cloud.starwhale.cn/project/starwhale:public")

    # list jobs of id=1 project in the server instance, page index is 2, page size is 10
    jobs, pagination_info = Job.list("https://server/project/1", page_index=2, page_size=10)

    get

    get is a classmethod that gets information about a specific job and returns a Starwhale.Job object. It has the same functionality and parameter definitions as the starwhale.job function.

    Usage Example

    from starwhale import Job

    # get job object of uri=https://server/job/1
    j1 = Job.get("https://server/job/1")

    # get job from standalone instance
    j2 = Job.get("local/project/self/job/xm5wnup")
    j3 = Job.get("xm5wnup")

    summary

    summary is a property that returns the data written to the summary table during the job execution, in dict type.

    @property
    def summary(self) -> Dict[str, Any]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.summary)

    tables

    tables is a property that returns the names of tables created during the job execution (not including the summary table, which is created automatically at the project level), in list type.

    @property
    def tables(self) -> List[str]:

    Usage Example

    from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.tables)

    get_table_rows

    get_table_rows is a method that returns records from a data table according to the table name and other parameters, in iterator type.

    def get_table_rows(
    self,
    name: str,
    start: Any = None,
    end: Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
    ) -> Iterator[Dict[str, Any]]:

    Parameters

    • name: (str, required)
      • Datastore table name. Any of the table names obtained through the tables property can be used.
    • start: (Any, optional)
      • The starting ID value of the returned records.
      • Default is None, meaning start from the beginning of the table.
    • end: (Any, optional)
      • The ending ID value of the returned records.
      • Default is None, meaning until the end of the table.
      • If both start and end are None, all records in the table will be returned as an iterator.
    • keep_none: (bool, optional)
      • Whether to return records with None values.
      • Default is False.
    • end_inclusive: (bool, optional)
      • When end is set, whether the iteration includes the end record.
      • Default is False.

    Usage Example

    from starwhale import job

    j = job("local/project/self/job/xm5wnup")

    table_name = j.tables[0]

    for row in j.get_table_rows(table_name):
        print(row)

    rows = list(j.get_table_rows(table_name, start=0, end=100))

    # return the first record from the results table
    result = list(j.get_table_rows('results', start=0, end=1))[0]

    status

    status is a property that returns the current real-time state of the Job as a string. The possible states are CREATED, READY, PAUSED, RUNNING, CANCELLING, CANCELED, SUCCESS, FAIL, and UNKNOWN.

    @property
    def status(self) -> str:
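
    Usage Example

    A minimal usage sketch; the job URI is an assumption:

    from starwhale import job

    j = job("https://server/job/1")
    if j.status == "SUCCESS":
        print("job finished successfully")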

    create

    create is a classmethod that can create tasks on a Standalone instance or Server/Cloud instance, including tasks for Model Evaluation, Fine-tuning, Online Serving, and Developing. The function returns a Job object.

    • create determines which instance the generated task runs on through the project parameter, including Standalone and Server/Cloud instances.
    • On a Standalone instance, create creates a synchronously executed task.
    • On a Server/Cloud instance, create creates an asynchronously executed task.

    @classmethod
    def create(
        cls,
        project: Project | str,
        model: Resource | str,
        run_handler: str,
        datasets: t.List[str | Resource] | None = None,
        runtime: Resource | str | None = None,
        resource_pool: str = DEFAULT_RESOURCE_POOL,
        ttl: int = 0,
        dev_mode: bool = False,
        dev_mode_password: str = "",
        dataset_head: int = 0,
        overwrite_specs: t.Dict[str, t.Any] | None = None,
    ) -> Job:

    Parameters

    Parameters apply to all instances:

    • project: (Project|str, required)
      • A Project object or Project URI string.
    • model: (Resource|str, required)
      • Model URI string or Resource object of Model type, representing the Starwhale model package to run.
    • run_handler: (str, required)
      • The name of the runnable handler in the Starwhale model package, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
    • datasets: (List[str | Resource], optional)
      • Datasets used by the Starwhale model package at runtime; optional.

    Parameters only effective for Standalone instances:

    • dataset_head: (int, optional)
      • Generally used for debugging scenarios, only uses the first N data in the dataset for the Starwhale model to consume.

    Parameters only effective for Server/Cloud instances:

    • runtime: (Resource | str, optional)
      • Runtime URI string or Resource object of Runtime type, representing the Starwhale runtime required to run the task.
      • When not specified, it will try to use the built-in runtime of the Starwhale model package.
      • When creating tasks under a Standalone instance, the Python interpreter environment used by the Python script is used as its own runtime. Specifying a runtime via the runtime parameter is not supported. If you need to specify a runtime, you can use the swcli model run command.
    • resource_pool: (str, optional)
      • Specify which resource pool the task runs in, default to the default resource pool.
    • ttl: (int, optional)
      • Maximum lifetime of the task, will be killed after timeout.
      • The unit is seconds.
      • By default, ttl is 0, meaning no timeout limit, and the task will run as expected.
      • When ttl is less than 0, it also means no timeout limit.
    • dev_mode: (bool, optional)
      • Whether to set debug mode. After turning on this mode, you can enter the related environment through VSCode Web.
      • Debug mode is off by default.
    • dev_mode_password: (str, optional)
      • Login password for VSCode Web in debug mode.
      • Default is empty, in which case the task's UUID will be used as the password, which can be obtained via job.info().job.uuid.
    • overwrite_specs: (Dict[str, Any], optional)
      • Support setting the replicas and resources fields of the handler.
      • If empty, use the values set in the corresponding handler of the model package.
      • The key of overwrite_specs is the name of the handler, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
      • The value of overwrite_specs is the set value, in dictionary format, supporting settings for replicas and resources, e.g. {"replicas": 1, "resources": {"memory": "1GiB"}}.

    Examples

    • create a Cloud Instance job
    from starwhale import Job

    project = "https://cloud.starwhale.cn/project/starwhale:public"
    job = Job.create(
        project=project,
        model=f"{project}/model/mnist/version/v0",
        run_handler="mnist.evaluator:MNISTInference.evaluate",
        datasets=[f"{project}/dataset/mnist/version/v0"],
        runtime=f"{project}/runtime/pytorch",
        overwrite_specs={
            "mnist.evaluator:MNISTInference.evaluate": {"resources": "4GiB"},
            "mnist.evaluator:MNISTInference.predict": {"resources": "8GiB", "replicas": 10},
        },
    )
    print(job.status)
    • create a Standalone Instance job

    from starwhale import Job

    job = Job.create(
        project="self",
        model="mnist",
        run_handler="mnist.evaluator:MNISTInference.evaluate",
        datasets=["mnist"],
    )
    print(job.status)
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/sdk/model/index.html b/0.6.5/reference/sdk/model/index.html index 9e78a3306..adc175b30 100644 --- a/0.6.5/reference/sdk/model/index.html +++ b/0.6.5/reference/sdk/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Starwhale Model SDK

    model.build

    model.build is a function that can build the Starwhale model, equivalent to the swcli model build command.

    def build(
        modules: t.Optional[t.List[t.Any]] = None,
        workdir: t.Optional[_path_T] = None,
        name: t.Optional[str] = None,
        project_uri: str = "",
        desc: str = "",
        remote_project_uri: t.Optional[str] = None,
        add_all: bool = False,
        tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • modules: (List[str|object], optional)
      • The search modules support objects (function, class or module) or strings (e.g. "to.path.module", "to.path.module:object").
      • If the argument is not specified, the search modules are the imported modules.
    • name: (str, optional)
      • Starwhale Model name.
      • The default is the current work dir (cwd) name.
    • workdir: (str, Pathlib.Path, optional)
      • The path of the rootdir. The default workdir is the current working dir.
      • All files in the workdir will be packaged. If you want to ignore some files, add a .swignore file in the workdir.
    • project_uri: (str, optional)
      • The project uri of the Starwhale Model.
      • If the argument is not specified, the project_uri defaults to the project selected by the swcli project select command.
    • desc: (str, optional)
      • The description of the Starwhale Model.
    • remote_project_uri: (str, optional)
      • Project URI of a remote instance. After the Starwhale model is built, it will be automatically copied to that remote instance.
    • add_all: (bool, optional)
      • Add all files in the working directory to the model package. When disabled, Python cache files and virtual environment files are excluded. The .swignore file still takes effect.
      • The default value is False.
    • tags: (List[str], optional)
      • The tags for the model version.
      • latest and ^v\d+$ tags are reserved tags.

    Examples

    from starwhale import model

    # class search handlers
    from .user.code.evaluator import ExamplePipelineHandler
    model.build([ExamplePipelineHandler])

    # function search handlers
    from .user.code.evaluator import predict_image
    model.build([predict_image])

    # module handlers, @handler decorates function in this module
    from .user.code import evaluator
    model.build([evaluator])

    # str search handlers
    model.build(["user.code.evaluator:ExamplePipelineHandler"])
    model.build(["user.code1", "user.code2"])

    # no search handlers, use imported modules
    model.build()

    # add user custom tags
    model.build(tags=["t1", "t2"])
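
    # a sketch combining several of the optional parameters above
    # (the workdir path and description are illustrative)
    model.build(
        ["user.code.evaluator:ExamplePipelineHandler"],
        workdir="/path/to/model/src",      # illustrative path
        name="mnist",
        project_uri="self",
        desc="an example model package",   # illustrative description
        add_all=False,
        tags=["t1"],
    )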
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/sdk/other/index.html b/0.6.5/reference/sdk/other/index.html index e5941cd40..1eb7cfa4a 100644 --- a/0.6.5/reference/sdk/other/index.html +++ b/0.6.5/reference/sdk/other/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Other SDK

    __version__

    Version of Starwhale Python SDK and swcli, string constant.

    >>> from starwhale import __version__
    >>> print(__version__)
    0.5.7

    init_logger

    Initialize the Starwhale logger and traceback depth. The default verbose level is 0.

    • 0: show only errors, traceback only shows 1 frame.
    • 1: show errors + warnings, traceback shows 5 frames.
    • 2: show errors + warnings + info, traceback shows 10 frames.
    • 3: show errors + warnings + info + debug, traceback shows 100 frames.
    • >=4: show errors + warnings + info + debug + trace, traceback shows 1000 frames.
    def init_logger(verbose: int = 0) -> None:
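
    For example, to show errors, warnings and info logs with a 10-frame traceback:

    from starwhale import init_logger

    init_logger(2)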

    login

    Log in to a server/cloud instance. It is equivalent to running the swcli instance login command. Logging in to a Standalone instance is meaningless.

    def login(
        instance: str,
        alias: str = "",
        username: str = "",
        password: str = "",
        token: str = "",
    ) -> None:

    Parameters

    • instance: (str, required)
      • The http url of the server/cloud instance.
    • alias: (str, optional)
      • An alias for the instance to simplify the instance part of the Starwhale URI.
      • If not specified, the hostname part of the instance http url will be used.
    • username: (str, optional)
    • password: (str, optional)
    • token: (str, optional)
      • Use either username + password or token to log in to the instance, not both.

    Examples

    from starwhale import login

    # login to Starwhale Cloud instance by token
    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")

    # login to Starwhale Server instance by username and password
    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")

    logout

    Log out of a server/cloud instance. It is equivalent to running the swcli instance logout command. Logging out of a Standalone instance is meaningless.

    def logout(instance: str) -> None:

    Examples

    from starwhale import login, logout

    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")
    # logout by the alias
    logout("cloud-cn")

    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")
    # logout by the instance http url
    logout("http://controller.starwhale.svc")
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/sdk/overview/index.html b/0.6.5/reference/sdk/overview/index.html index 65c686d2a..d635db3da 100644 --- a/0.6.5/reference/sdk/overview/index.html +++ b/0.6.5/reference/sdk/overview/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Python SDK Overview

    Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.

    Classes

    • PipelineHandler: Provides default model evaluation process definition, requires implementation of predict and evaluate methods.
    • Context: Passes context information during model evaluation, including Project, Task ID etc.
    • class Dataset: Starwhale Dataset class.
    • class starwhale.api.service.Service: The base class of online evaluation.
    • class Job: Starwhale Job class.
    • class Evaluation: Starwhale Evaluation class.

    Functions

    • @multi_classification: Decorator for multi-class problems to simplify evaluate result calculation and storage for better evaluation presentation.
    • @handler: Decorator to define a running entity with resource attributes (mem/cpu/gpu). You can control replica count. Handlers can form DAGs through dependencies to control execution flow.
    • @evaluation.predict: Decorator to define inference process in model evaluation, similar to map phase in MapReduce.
    • @evaluation.evaluate: Decorator to define evaluation process in model evaluation, similar to reduce phase in MapReduce.
    • model.build: Build Starwhale model.
    • @fine_tune: Decorator to define model fine-tuning process.
    • init_logger: Set the log level; five verbosity levels are supported.
    • dataset: Get starwhale.Dataset object, by creating new datasets or loading existing datasets.
    • @starwhale.api.service.api: Decorator to provide a simple Web Handler input definition based on Gradio.
    • login: Log in to the server/cloud instance.
    • logout: Log out of the server/cloud instance.
    • job: Get starwhale.Job object by the Job URI.
    • @PipelineHandler.run: Decorator to define the resources for the predict and evaluate methods in PipelineHandler subclasses.

    Data Types

    • COCOObjectAnnotation: Provides COCO format definitions.
    • BoundingBox: Bounding box type, currently in LTWH format - left_x, top_y, width and height.
    • ClassLabel: Describes the number and types of labels.
    • Image: Image type.
    • GrayscaleImage: Grayscale image type, e.g. MNIST digit images, a special case of Image type.
    • Audio: Audio type.
    • Video: Video type.
    • Text: Text type, default utf-8 encoding, for storing large texts.
    • Binary: Binary type, stored in bytes, for storing large binary content.
    • Line: Line type.
    • Point: Point type.
    • Polygon: Polygon type.
    • Link: Link type, for creating remote-link data.
    • MIMEType: Describes multimedia types supported by Starwhale, used in mime_type attribute of Image, Video etc for better Dataset Viewer.

    Other

    • __version__: Version of Starwhale Python SDK and swcli, string constant.

    Further reading

    - - + + \ No newline at end of file diff --git a/0.6.5/reference/sdk/type/index.html b/0.6.5/reference/sdk/type/index.html index 0a51041c4..176faff8e 100644 --- a/0.6.5/reference/sdk/type/index.html +++ b/0.6.5/reference/sdk/type/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Starwhale Data Types

    COCOObjectAnnotation

    It provides definitions following the COCO format.

    COCOObjectAnnotation(
        id: int,
        image_id: int,
        category_id: int,
        segmentation: Union[t.List, t.Dict],
        area: Union[float, int],
        bbox: Union[BoundingBox, t.List[float]],
        iscrowd: int,
    )

    Parameters

    • id: Object id, usually a globally incrementing id
    • image_id: Image id, usually the id of the image
    • category_id: Category id, usually the id of the class in object detection
    • segmentation: Object contour representation, Polygon (polygon vertices) or RLE format
    • area: Object area
    • bbox: Represents the bounding box, can be BoundingBox type or a list of floats
    • iscrowd: 0 indicates a single object, 1 indicates two unseparated objects

    Examples

    def _make_coco_annotations(
        self, mask_fpath: Path, image_id: int
    ) -> t.List[COCOObjectAnnotation]:
        mask_img = PILImage.open(str(mask_fpath))

        mask = np.array(mask_img)
        object_ids = np.unique(mask)[1:]
        binary_mask = mask == object_ids[:, None, None]
        # TODO: tune permute without pytorch
        binary_mask_tensor = torch.as_tensor(binary_mask, dtype=torch.uint8)
        binary_mask_tensor = (
            binary_mask_tensor.permute(0, 2, 1).contiguous().permute(0, 2, 1)
        )

        coco_annotations = []
        for i in range(0, len(object_ids)):
            _pos = np.where(binary_mask[i])
            _xmin, _ymin = float(np.min(_pos[1])), float(np.min(_pos[0]))
            _xmax, _ymax = float(np.max(_pos[1])), float(np.max(_pos[0]))
            _bbox = BoundingBox(
                x=_xmin, y=_ymin, width=_xmax - _xmin, height=_ymax - _ymin
            )

            rle: t.Dict = coco_mask.encode(binary_mask_tensor[i].numpy())  # type: ignore
            rle["counts"] = rle["counts"].decode("utf-8")

            coco_annotations.append(
                COCOObjectAnnotation(
                    id=self.object_id,
                    image_id=image_id,
                    category_id=1,  # the PennFudan dataset only has one class: PASPersonStanding
                    segmentation=rle,
                    area=_bbox.width * _bbox.height,
                    bbox=_bbox,
                    iscrowd=0,  # suppose all instances are not crowd
                )
            )
            self.object_id += 1

        return coco_annotations

    GrayscaleImage

    GrayscaleImage provides a grayscale image type. It is a special case of the Image type, for example the digit images in MNIST.

    GrayscaleImage(
        fp: _TArtifactFP = "",
        display_name: str = "",
        shape: Optional[_TShape] = None,
        as_mask: bool = False,
        mask_uri: str = "",
    )

    Parameters

    • fp: Image path, IO object, or file content bytes
    • display_name: Display name shown in Dataset Viewer
    • shape: Image width and height, default channel is 1
    • as_mask: Whether used as a mask image
    • mask_uri: URI of the original image for the mask

    Examples

    for i in range(0, min(data_number, label_number)):
        _data = data_file.read(image_size)
        _label = struct.unpack(">B", label_file.read(1))[0]
        yield GrayscaleImage(
            _data,
            display_name=f"{i}",
            shape=(height, width, 1),
        ), {"label": _label}

    GrayscaleImage Functions

    GrayscaleImage.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    GrayscaleImage.carry_raw_data

    carry_raw_data() -> GrayscaleImage

    GrayscaleImage.astype

    astype() -> Dict[str, t.Any]

    BoundingBox

    BoundingBox provides a bounding box type, currently in LTWH format:

    • left_x: x-coordinate of left edge
    • top_y: y-coordinate of top edge
    • width: width of bounding box
    • height: height of bounding box

    That is, the bounding box is described by the coordinates of its left and top edges together with its width and height. This is a common format for specifying bounding boxes in computer vision tasks.

    BoundingBox(
        x: float,
        y: float,
        width: float,
        height: float
    )

    Parameters

    • x: x-coordinate of left edge (left_x)
    • y: y-coordinate of top edge (top_y)
    • width: Width of bounding box
    • height: Height of bounding box
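
    Examples

    A minimal sketch (the coordinate values are illustrative): a box whose top-left corner is at (10, 20), 100 units wide and 50 units high.

    from starwhale import BoundingBox

    bbox = BoundingBox(x=10.0, y=20.0, width=100.0, height=50.0)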

    ClassLabel

    Describes the number and types of labels.

    ClassLabel(
        names: List[Union[int, float, str]]
    )
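
    Examples

    A minimal sketch (the label names are illustrative):

    from starwhale import ClassLabel

    labels = ClassLabel(names=["cat", "dog", "bird"])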

    Image

    Image Type.

    Image(
        fp: _TArtifactFP = "",
        display_name: str = "",
        shape: Optional[_TShape] = None,
        mime_type: Optional[MIMEType] = None,
        as_mask: bool = False,
        mask_uri: str = "",
    )

    Parameters

    • fp: Image path, IO object, or file content bytes
    • display_name: Display name shown in Dataset Viewer
    • shape: Image width, height and channels
    • mime_type: MIMEType supported types
    • as_mask: Whether used as a mask image
    • mask_uri: URI of the original image for the mask

    The main difference from GrayscaleImage is that Image supports multi-channel RGB images by specifying shape as (W, H, C).

    Examples

    import io
    import typing as t
    import pickle
    from pathlib import Path

    from PIL import Image as PILImage
    from starwhale import Image, MIMEType

    def _iter_item(paths: t.List[Path]) -> t.Generator[t.Tuple[t.Any, t.Dict], None, None]:
        for path in paths:
            with path.open("rb") as f:
                content = pickle.load(f, encoding="bytes")
                for data, label, filename in zip(
                    content[b"data"], content[b"labels"], content[b"filenames"]
                ):
                    # dataset_meta comes from the surrounding example and is elided here
                    annotations = {
                        "label": label,
                        "label_display_name": dataset_meta["label_names"][label],
                    }

                    image_array = data.reshape(3, 32, 32).transpose(1, 2, 0)
                    image_bytes = io.BytesIO()
                    PILImage.fromarray(image_array).save(image_bytes, format="PNG")

                    yield Image(
                        fp=image_bytes.getvalue(),
                        display_name=filename.decode(),
                        shape=image_array.shape,
                        mime_type=MIMEType.PNG,
                    ), annotations

    Image Functions

    Image.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Image.carry_raw_data

    carry_raw_data() -> Image

    Image.astype

    astype() -> Dict[str, t.Any]

    Video

    Video type.

    Video(
        fp: _TArtifactFP = "",
        display_name: str = "",
        mime_type: Optional[MIMEType] = None,
    )

    Parameters

    • fp: Video path, IO object, or file content bytes
    • display_name: Display name shown in Dataset Viewer
    • mime_type: MIMEType supported types

    Examples

    import typing as t
    from pathlib import Path

    from starwhale import Video, MIMEType

    root_dir = Path(__file__).parent.parent
    dataset_dir = root_dir / "data" / "UCF-101"
    test_ds_path = [root_dir / "data" / "test_list.txt"]

    def iter_ucf_item() -> t.Generator:
        for path in test_ds_path:
            with path.open() as f:
                for line in f.readlines():
                    _, label, video_sub_path = line.split()

                    data_path = dataset_dir / video_sub_path
                    data = Video(
                        data_path,
                        display_name=video_sub_path,
                        shape=(1,),
                        mime_type=MIMEType.WEBM,
                    )

                    yield f"{label}_{video_sub_path}", {
                        "video": data,
                        "label": label,
                    }

    Audio

    Audio type.

    Audio(
        fp: _TArtifactFP = "",
        display_name: str = "",
        mime_type: Optional[MIMEType] = None,
    )

    Parameters

    • fp: Audio path, IO object, or file content bytes
    • display_name: Display name shown in Dataset Viewer
    • mime_type: MIMEType supported types

    Examples

    import typing as t
    from starwhale import Audio, MIMEType

    def iter_item() -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        for path in validation_ds_paths:
            with path.open() as f:
                for item in f.readlines():
                    item = item.strip()
                    if not item:
                        continue

                    data_path = dataset_dir / item
                    data = Audio(
                        data_path, display_name=item, shape=(1,), mime_type=MIMEType.WAV
                    )

                    speaker_id, utterance_num = data_path.stem.split("_nohash_")
                    annotations = {
                        "label": data_path.parent.name,
                        "speaker_id": speaker_id,
                        "utterance_num": int(utterance_num),
                    }
                    yield data, annotations

    Audio Functions

    Audio.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Audio.carry_raw_data

    carry_raw_data() -> Audio

    Audio.astype

    astype() -> Dict[str, t.Any]

    Text

    Text type; the default encoding is utf-8.

    Text(
        content: str,
        encoding: str = "utf-8",
    )

    Parameters

    • content: The text content
    • encoding: Encoding format of the text

    Examples

    import typing as t
    from pathlib import Path
    from starwhale import Text

    def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
        root_dir = Path(__file__).parent.parent / "data"

        with (root_dir / "fra-test.txt").open("r") as f:
            for line in f.readlines():
                line = line.strip()
                if not line or line.startswith("CC-BY"):
                    continue

                _data, _label, *_ = line.split("\t")
                data = Text(_data, encoding="utf-8")
                annotations = {"label": _label}
                yield data, annotations

    Text Functions

    Text.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Text.carry_raw_data

    carry_raw_data() -> Text

    Text.astype

    astype() -> Dict[str, t.Any]

    Text.to_str

    to_str() -> str

    Binary

    Binary provides a binary data type, stored as bytes.

    Binary(
        fp: _TArtifactFP = "",
        mime_type: MIMEType = MIMEType.UNDEFINED,
    )

    Parameters

    • fp: Path, IO object, or file content bytes
    • mime_type: MIMEType supported types
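
    Examples

    A minimal sketch that wraps raw bytes (the byte content is illustrative):

    from starwhale import Binary, MIMEType

    blob = Binary(b"\x00\x01\x02\x03", mime_type=MIMEType.UNDEFINED)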

    Binary Functions

    Binary.to_bytes

    to_bytes(encoding: str = "utf-8") -> bytes

    Binary.carry_raw_data

    carry_raw_data() -> Binary

    Binary.astype

    astype() -> Dict[str, t.Any]

    Link

    Link provides a link type to create remote-link datasets in Starwhale.

    Link(
        uri: str,
        auth: Optional[LinkAuth] = DefaultS3LinkAuth,
        offset: int = 0,
        size: int = -1,
        data_type: Optional[BaseArtifact] = None,
    )

    Parameters

    • uri: URI of the original data, currently supports localFS and S3 protocols
    • auth: Link auth information
    • offset: Data offset relative to the file pointed to by uri
    • size: Data size
    • data_type: Actual data type pointed to by the link, currently supports Binary, Image, Text, Audio and Video
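
    Examples

    A sketch that links to a remote object (the S3 bucket and key are hypothetical):

    from starwhale import Link, Image

    link = Link(
        uri="s3://bucket/path/to/image.png",  # hypothetical remote object
        offset=0,
        size=-1,
        data_type=Image(),
    )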

    Link.astype

    astype() -> Dict[str, t.Any]

    MIMEType

    MIMEType describes the multimedia types supported by Starwhale, implemented using Python Enum. It is used in the mime_type attribute of Image, Video etc to enable better Dataset Viewer support.

    class MIMEType(Enum):
        PNG = "image/png"
        JPEG = "image/jpeg"
        WEBP = "image/webp"
        SVG = "image/svg+xml"
        GIF = "image/gif"
        APNG = "image/apng"
        AVIF = "image/avif"
        PPM = "image/x-portable-pixmap"
        MP4 = "video/mp4"
        AVI = "video/avi"
        WEBM = "video/webm"
        WAV = "audio/wav"
        MP3 = "audio/mp3"
        PLAIN = "text/plain"
        CSV = "text/csv"
        HTML = "text/html"
        GRAYSCALE = "x/grayscale"
        UNDEFINED = "x/undefined"

    Line

    from starwhale import dataset, Point, Line

    with dataset("collections") as ds:
        line_points = [
            Point(x=0.0, y=1.0),
            Point(x=0.0, y=100.0),
        ]
        ds.append({"line": line_points})
        ds.commit()

    Point

    from starwhale import dataset, Point

    with dataset("collections") as ds:
        ds.append(Point(x=0.0, y=100.0))
        ds.commit()

    Polygon

    from starwhale import dataset, Point, Polygon

    with dataset("collections") as ds:
        polygon_points = [
            Point(x=0.0, y=1.0),
            Point(x=0.0, y=100.0),
            Point(x=2.0, y=1.0),
            Point(x=2.0, y=100.0),
        ]
        ds.append({"polygon": polygon_points})
        ds.commit()
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/dataset/index.html b/0.6.5/reference/swcli/dataset/index.html index 5afadbfa1..48239bba7 100644 --- a/0.6.5/reference/swcli/dataset/index.html +++ b/0.6.5/reference/swcli/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    swcli dataset

    Overview

    swcli [GLOBAL OPTIONS] dataset [OPTIONS] <SUBCOMMAND> [ARGS]...

    The dataset command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • head
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • summary
    • tag

    swcli dataset build

    swcli [GLOBAL OPTIONS] dataset build [OPTIONS]

    Build a Starwhale Dataset. This command only supports building datasets on the Standalone instance.

    Options

    • Data source options:

      • -if or --image or --image-folder: (String, optional) Build dataset from an image folder; the folder should contain the image files.
      • -af or --audio or --audio-folder: (String, optional) Build dataset from an audio folder; the folder should contain the audio files.
      • -vf or --video or --video-folder: (String, optional) Build dataset from a video folder; the folder should contain the video files.
      • -h or --handler or --python-handler: (String, optional) Build dataset from a Python executor handler; the handler format is [module path]:[class or func name].
      • -f or --yaml or --dataset-yaml: (String, optional, default: dataset.yaml in cwd) Build dataset from a dataset.yaml file. Defaults to the dataset.yaml in the work directory (cwd).
      • -jf or --json: (String, optional) Build dataset from a json or jsonl file; the option value is a json file path or an http download url. The json content structure should be a list[dict] or tuple[dict].
      • -hf or --huggingface: (String, optional) Build dataset from a huggingface dataset; the option value is a huggingface repo name.
      • -c or --csv: (String, optional) Build dataset from csv files; the option value is a csv file path, dir path or an http download url. The option can be used multiple times.

    Data source options are mutually exclusive; only one is accepted. If none is set, the swcli dataset build command uses dataset.yaml mode and builds the dataset from the dataset.yaml in the cwd.

    • Other options:
      • -pt or --patch: (Boolean, Global scope, default True; one of --patch and --overwrite) Patch mode, only update the changed rows and columns of the existing dataset.
      • -ow or --overwrite: (Boolean, Global scope, default False; one of --patch and --overwrite) Overwrite mode, update records and delete extraneous rows from the existing dataset.
      • -n or --name: (String, Global scope, optional) Dataset name.
      • -p or --project: (String, Global scope, optional, default: the default project) Project URI; the default is the currently selected project. The dataset will be stored in the specified project.
      • -d or --desc: (String, Global scope, optional) Dataset description.
      • -as or --alignment-size: (String, Global scope, optional, default 128B) swds-bin format dataset: alignment size.
      • -vs or --volume-size: (String, Global scope, optional, default 64MB) swds-bin format dataset: volume size.
      • -r or --runtime: (String, Global scope, optional) Runtime URI.
      • -w or --workdir: (String, Python Handler Mode, optional, default cwd) Work dir to search handlers.
      • --auto-label/--no-auto-label: (Boolean, Image/Video/Audio Folder Mode, default True) Whether to auto-label by the sub-folder name.
      • --field-selector: (String, JSON File Mode, optional) The field from which to extract dataset array items. Nested fields are split by the dot(.) symbol.
      • --subset: (String, Huggingface Mode, optional) Huggingface dataset subset name. If not specified, all subsets will be built.
      • --split: (String, Huggingface Mode, optional) Huggingface dataset split name. If not specified, all splits will be built.
      • --revision: (String, Huggingface Mode, optional, default main) Version of the dataset script to load. The option value accepts a tag name, branch name, or commit hash.
      • --add-hf-info/--no-add-hf-info: (Boolean, Huggingface Mode, default True) Whether to add huggingface dataset info to the dataset rows; currently subset and split are added. Subset uses the _hf_subset field name, split uses the _hf_split field name.
      • --cache/--no-cache: (Boolean, Huggingface Mode, default True) Whether to use the huggingface dataset cache (download + local hf dataset).
      • -t or --tag: (String, Global scope, optional) Dataset tags; the option can be used multiple times.
      • --encoding: (String, CSV/JSON/JSONL Mode, optional) File encoding.
      • --dialect: (String, CSV Mode, optional, default excel) The csv file dialect. Currently supports the excel, excel-tab and unix formats.
      • --delimiter: (String, CSV Mode, optional, default ,) A one-character string used to separate fields in the csv file.
      • --quotechar: (String, CSV Mode, optional, default ") A one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or which contain new-line characters.
      • --skipinitialspace/--no-skipinitialspace: (Bool, CSV Mode, default False) Whether to skip spaces after the delimiter in the csv file.
      • --strict/--no-strict: (Bool, CSV Mode, default False) When True, raise an Error exception if the csv is not well formed.

    Examples for dataset building

    #- from dataset.yaml
    swcli dataset build # build dataset from dataset.yaml in the current work directory(pwd)
    swcli dataset build --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, all the involved files are relative to the dataset.yaml file.
    swcli dataset build --overwrite --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, and overwrite the existing dataset.
    swcli dataset build --tag tag1 --tag tag2

    #- from handler
    swcli dataset build --handler mnist.dataset:iter_mnist_item # build dataset from mnist.dataset:iter_mnist_item handler, the workdir is the current work directory(pwd).
    # build dataset from mnist.dataset:LinkRawDatasetProcessExecutor handler, the workdir is example/mnist
    swcli dataset build --handler mnist.dataset:LinkRawDatasetProcessExecutor --workdir example/mnist

    #- from image folder
    swcli dataset build --image-folder /path/to/image/folder # build dataset from /path/to/image/folder, search all image type files.

    #- from audio folder
    swcli dataset build --audio-folder /path/to/audio/folder # build dataset from /path/to/audio/folder, search all audio type files.

    #- from video folder
    swcli dataset build --video-folder /path/to/video/folder # build dataset from /path/to/video/folder, search all video type files.

    #- from json/jsonl file
    swcli dataset build --json /path/to/example.json
    swcli dataset build --json http://example.com/example.json
    swcli dataset build --json /path/to/example.json --field-selector a.b.c # extract the json_content["a"]["b"]["c"] field from the json file.
    swcli dataset build --name qald9 --json https://raw.githubusercontent.com/ag-sc/QALD/master/9/data/qald-9-test-multilingual.json --field-selector questions
    swcli dataset build --json /path/to/test01.jsonl --json /path/to/test02.jsonl
    swcli dataset build --json https://modelscope.cn/api/v1/datasets/damo/100PoisonMpts/repo\?Revision\=master\&FilePath\=train.jsonl

    #- from huggingface dataset
    swcli dataset build --huggingface mnist
    swcli dataset build -hf mnist --no-cache
    swcli dataset build -hf cais/mmlu --subset anatomy --split auxiliary_train --revision 7456cfb

    #- from csv files
    swcli dataset build --csv /path/to/example.csv
    swcli dataset build --csv /path/to/example.csv --csv /path/to/example2.csv
    swcli dataset build --csv /path/to/csv-dir
    swcli dataset build --csv http://example.com/example.csv
    swcli dataset build --name product-desc-modelscope --csv https://modelscope.cn/api/v1/datasets/lcl193798/product_description_generation/repo\?Revision\=master\&FilePath\=test.csv --encoding=utf-8-sig

    swcli dataset copy

    swcli [GLOBAL OPTIONS] dataset copy [OPTIONS] <SRC> <DEST>

    dataset copy copies from SRC to DEST.

    SRC and DEST are both dataset URIs.

    When copying a Starwhale Dataset, all custom user-defined tags will be copied by default. You can use the --ignore-tag parameter to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

    • --force or -f: (Boolean, optional, default False) If true, DEST will be overwritten if it exists. In addition, if the tags carried during the copy have already been used by other versions, this parameter can be used to forcibly update the tags to this version.
    • -p or --patch: (Boolean, default True; one of --patch and --overwrite) Patch mode, only update the changed rows and columns of the remote dataset.
    • -o or --overwrite: (Boolean, default False; one of --patch and --overwrite) Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • -i or --ignore-tag: (String, optional) Tags to ignore when copying. The option can be used multiple times.

    Examples for dataset copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a new dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp --patch cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with a dataset name 'mnist-local'
    swcli dataset cp --overwrite cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with a new dataset name 'mnist-cloud'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli dataset cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp local/project/myproject/dataset/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli dataset cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1 --force

    swcli dataset diff

    swcli [GLOBAL OPTIONS] dataset diff [OPTIONS] <DATASET VERSION> <DATASET VERSION>

    dataset diff compares the difference between two versions of the same dataset.

    DATASET VERSION is a dataset URI.

    • --show-details: (Boolean, optional, default False) If true, outputs the detail information.
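
    Examples for dataset diff

    A typical invocation might look like this (the dataset name and versions are illustrative):

    swcli dataset diff mnist/version/v0 mnist/version/v1 --show-details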
    swcli dataset head

    swcli [GLOBAL OPTIONS] dataset head [OPTIONS] <DATASET VERSION>

    Print the first n rows of the dataset. DATASET VERSION is a dataset URI.

    • -n or --rows: (Int, optional, default 5) Print the first NUM rows of the dataset.
    • -srd or --show-raw-data: (Boolean, optional, default False) Fetch raw data content from the objectstore.
    • -st or --show-types: (Boolean, optional, default False) Show data types.

    Examples for dataset head

    #- print the first 5 rows of the mnist dataset
    swcli dataset head -n 5 mnist

    #- print the first 10 rows of the mnist(v0 version) dataset and show raw data
    swcli dataset head -n 10 mnist/v0 --show-raw-data

    #- print the data types of the mnist dataset
    swcli dataset head mnist --show-types

    #- print the remote cloud dataset's first 5 rows
    swcli dataset head cloud://cloud-cn/project/test/dataset/mnist -n 5

    #- print the first 5 rows in the json format
    swcli -o json dataset head -n 5 mnist

    swcli dataset history

    swcli [GLOBAL OPTIONS] dataset history [OPTIONS] <DATASET>

    dataset history outputs all history versions of the specified Starwhale Dataset.

    DATASET is a dataset URI.

    • --fullname: (Boolean, optional, default False) Show the full version name. Only the first 12 characters are shown if this option is false.
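
    Examples for dataset history

    For example (the dataset name mnist is taken from the examples above):

    swcli dataset history mnist --fullname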

    swcli dataset info

    swcli [GLOBAL OPTIONS] dataset info [OPTIONS] <DATASET>

    dataset info outputs detailed information about the specified Starwhale Dataset version.

    DATASET is a dataset URI.
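
    Examples for dataset info

    For example (the dataset name and version are illustrative):

    swcli dataset info mnist/version/latest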

    swcli dataset list

    swcli [GLOBAL OPTIONS] dataset list [OPTIONS]

    dataset list shows all Starwhale Datasets.

    • --project: (String, optional) The URI of the project to list. Use the default project if not specified.
    • --fullname: (Boolean, optional, default False) Show the full version name. Only the first 12 characters are shown if this option is false.
    • --show-removed or -sr: (Boolean, optional, default False) If true, include datasets that are removed but not garbage collected.
    • --page: (Integer, optional, default 1) The starting page number. Server and cloud instances only.
    • --size: (Integer, optional, default 20) The number of items in one page. Server and cloud instances only.
    • --filter or -fl: (String, optional) Show only Starwhale Datasets that match the specified filters. This option can be used multiple times in one command.

    Supported filters:

    • name (Key-Value): The name prefix of datasets, e.g. --filter name=mnist
    • owner (Key-Value): The dataset owner name, e.g. --filter owner=starwhale
    • latest (Flag): If specified, only the latest version is shown, e.g. --filter latest
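
    Examples for dataset list

    For example, listing only the latest versions of datasets whose names start with mnist (the filter values are illustrative):

    swcli dataset list --filter name=mnist --filter latest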

    swcli dataset recover

    swcli [GLOBAL OPTIONS] dataset recover [OPTIONS] <DATASET>

    dataset recover recovers previously removed Starwhale Datasets or versions.

    DATASET is a dataset URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Datasets or versions cannot be recovered, nor can those removed with the --force option.

    • --force or -f: (Boolean, optional, default False) If true, overwrite the Starwhale Dataset or version with the same name or version id.

    swcli dataset remove

    swcli [GLOBAL OPTIONS] dataset remove [OPTIONS] <DATASET>

    dataset remove removes the specified Starwhale Dataset or version.

    DATASET is a dataset URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Datasets or versions can be recovered by swcli dataset recover before garbage collection. Use the --force option to persistently remove a Starwhale Dataset or version.

    Removed Starwhale Datasets or versions can be listed by swcli dataset list --show-removed.

    • --force or -f: (Boolean, optional, default False) If true, persistently delete the Starwhale Dataset or version. It can not be recovered.

    swcli dataset summary

    swcli [GLOBAL OPTIONS]  dataset summary <DATASET>

    Show dataset summary. DATASET is a dataset URI.
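
    Examples for dataset summary

    For example (the dataset name is illustrative):

    swcli dataset summary mnist
    swcli dataset summary mnist/version/latest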

    swcli dataset tag

    swcli [GLOBAL OPTIONS] dataset tag [OPTIONS] <DATASET> [TAGS]...

    dataset tag attaches a tag to a specified Starwhale Dataset version. The tag command also supports listing and removing tags. The tag can be used in a dataset URI instead of the version id.

    DATASET is a dataset URI.

    Each dataset version can have any number of tags, but duplicated tag names are not allowed in the same dataset.

    dataset tag only works for the Standalone Instance.

    • --remove or -r: (Boolean, optional, default False) Remove the tag if true.
    • --quiet or -q: (Boolean, optional, default False) Ignore errors, for example, removing tags that do not exist.
    • --force-add or -f: (Boolean, optional, default False) When adding tags on server/cloud instances, an error is raised if the tag is already used by another dataset version. In this case, you can force the update with the --force-add parameter.

    Examples for dataset tag

    #- list tags of the mnist dataset
    swcli dataset tag mnist

    #- add tags for the mnist dataset
    swcli dataset tag mnist t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist/version/latest t1 --force-add
    swcli dataset tag mnist t1 --quiet

    #- remove tags for the mnist dataset
    swcli dataset tag mnist -r t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/index.html b/0.6.5/reference/swcli/index.html index 6ad219b8c..a323e4ce4 100644 --- a/0.6.5/reference/swcli/index.html +++ b/0.6.5/reference/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Overview

    Usage

    swcli [OPTIONS] <COMMAND> [ARGS]...
    note

    sw and starwhale are aliases for swcli.

    Global Options

    • --version: Show the Starwhale Client version.
    • -v or --verbose: Show verbose logs; -v can be repeated, and more -v args produce more logs.
    • --help: Show the help message.
    caution

    Global options must be put immediately after swcli, and before any command.

    Commands

    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/instance/index.html b/0.6.5/reference/swcli/instance/index.html index a359d69e5..fb8e95cdb 100644 --- a/0.6.5/reference/swcli/instance/index.html +++ b/0.6.5/reference/swcli/instance/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    swcli instance

    Overview

    swcli [GLOBAL OPTIONS] instance [OPTIONS] <SUBCOMMAND> [ARGS]

    The instance command includes the following subcommands:

    • info
    • list (ls)
    • login
    • logout
    • use (select)

    swcli instance info

    swcli [GLOBAL OPTIONS] instance info [OPTIONS] <INSTANCE>

    instance info outputs detailed information about the specified Starwhale Instance.

    INSTANCE is an instance URI.

    swcli instance list

    swcli [GLOBAL OPTIONS] instance list [OPTIONS]

    instance list shows all Starwhale Instances.

    swcli instance login

    swcli [GLOBAL OPTIONS] instance login [OPTIONS] <INSTANCE>

    instance login connects to a Server/Cloud instance and makes the specified instance default.

    INSTANCE is an instance URI.

    • --username: (String, optional) The login username.
    • --password: (String, optional) The login password.
    • --token: (String, optional) The login token.
    • --alias: (String, required) The alias of the instance. You can use it anywhere that requires an instance URI.

    --username and --password can not be used together with --token.
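
    Examples for instance login

    For example (the aliases, token and credentials mirror the SDK login examples above and are illustrative):

    swcli instance login --alias cloud-cn --token xxx https://cloud.starwhale.cn
    swcli instance login --alias dev --username starwhale --password abcd1234 http://controller.starwhale.svc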

    swcli instance logout

    swcli [GLOBAL OPTIONS] instance logout [INSTANCE]

    instance logout disconnects from the Server/Cloud instance, and clears information stored in the local storage.

    INSTANCE is an instance URI. If it is omitted, the default instance is used instead.

    swcli instance use

    swcli [GLOBAL OPTIONS] instance use <INSTANCE>

    instance use makes the specified instance the default.

    INSTANCE is an instance URI.
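
    Examples for instance use

    For example, switching to the alias registered in the login example above (the alias is illustrative):

    swcli instance use cloud-cn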

    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/job/index.html b/0.6.5/reference/swcli/job/index.html index a23e4a1ef..31671790c 100644 --- a/0.6.5/reference/swcli/job/index.html +++ b/0.6.5/reference/swcli/job/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    swcli job

    Overview

    swcli [GLOBAL OPTIONS] job [OPTIONS] <SUBCOMMAND> [ARGS]...

    The job command includes the following subcommands:

    • cancel
    • info
    • list(ls)
    • pause
    • recover
    • remove(rm)
    • resume

    swcli job cancel

    swcli [GLOBAL OPTIONS] job cancel [OPTIONS] <JOB>

    job cancel stops the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    • --force or -f: (Boolean, optional, default False) If true, kill the Starwhale Job by force.

    swcli job info

    swcli [GLOBAL OPTIONS] job info [OPTIONS] <JOB>

    job info outputs detailed information about the specified Starwhale Job.

    JOB is a job URI.

    swcli job list

    swcli [GLOBAL OPTIONS] job list [OPTIONS]

    job list shows all Starwhale Jobs.

    • --project: (String, optional) The URI of the project to list. Use the default project if not specified.
    • --show-removed or -sr: (Boolean, optional, default False) If true, include jobs that are removed but not garbage collected.
    • --page: (Integer, optional, default 1) The starting page number. Server and cloud instances only.
    • --size: (Integer, optional, default 20) The number of items in one page. Server and cloud instances only.
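
    Examples for job list

    For example (the project URI is illustrative):

    swcli job list
    swcli job list --project https://cloud.starwhale.cn/project/starwhale:public --page 1 --size 10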

    swcli job pause

    swcli [GLOBAL OPTIONS] job pause [OPTIONS] <JOB>

    job pause pauses the specified job. Paused jobs can be resumed by job resume. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    From Starwhale's perspective, pause is almost the same as cancel, except that the job reuses the old job id when resumed. It is the job developer's responsibility to save all data periodically and load it when resumed. The job id is usually used as a key of the checkpoint.

    • --force or -f: (Boolean, optional, default False) If true, kill the Starwhale Job by force.

    swcli job resume

    swcli [GLOBAL OPTIONS] job resume [OPTIONS] <JOB>

    job resume resumes the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/model/index.html b/0.6.5/reference/swcli/model/index.html index f03978d96..211e372f0 100644 --- a/0.6.5/reference/swcli/model/index.html +++ b/0.6.5/reference/swcli/model/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    swcli model

    Overview

    swcli [GLOBAL OPTIONS] model [OPTIONS] <SUBCOMMAND> [ARGS]...

    The model command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • run
    • serve
    • tag

    swcli model build

    swcli [GLOBAL OPTIONS] model build [OPTIONS] <WORKDIR>

    model build will put the whole WORKDIR into the model, except files that match patterns defined in .swignore.

    model build will import modules specified by --module to generate the required configurations to run the model. If your module depends on third-party libraries, we strongly recommend you use the --runtime option; otherwise, you need to ensure that the python environment used by swcli has these libraries installed.

    • --project or -p: (String, optional, default: the default project) The project URI.
    • --model-yaml or -f: (String, optional, default ${workdir}/model.yaml) The model yaml path. model.yaml is optional for model build.
    • --module or -m: (String, optional) Python modules to be imported during the build process. Starwhale will export model handlers from these modules to the model package. This option can be set multiple times.
    • --runtime or -r: (String, optional) The URI of the Starwhale Runtime to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
    • --name or -n: (String, optional) Model package name.
    • --desc or -d: (String, optional) Model package description.
    • --package-runtime/--no-package-runtime: (Boolean, optional, default True) When the --runtime parameter is used, the corresponding Starwhale runtime becomes the built-in runtime of the Starwhale model by default. This behavior can be disabled with the --no-package-runtime parameter.
    • --add-all: (Boolean, optional, default False) Add all files in the working directory to the model package (Python cache files and virtual environment files are excluded when disabled). The .swignore file still takes effect.
    • -t or --tag: (String, optional) Model tags; the option can be used multiple times.

    Examples for model build

    # build by the model.yaml in current directory and model package will package all the files from the current directory.
    swcli model build .
    # search model run decorators from mnist.evaluate, mnist.train and mnist.predict modules, then package all the files from the current directory to model package.
    swcli model build . --module mnist.evaluate --module mnist.train --module mnist.predict
    # build model package in the Starwhale Runtime environment.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1
    # forbid to package Starwhale Runtime into the model.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1 --no-package-runtime
    # build model package with tags.
    swcli model build . --tag tag1 --tag tag2

    swcli model copy

    swcli [GLOBAL OPTIONS] model copy [OPTIONS] <SRC> <DEST>

    model copy copies from SRC to DEST for Starwhale Model sharing.

    SRC and DEST are both model URIs.

    When copying a Starwhale Model, all custom user-defined tags will be copied by default. You can use the --ignore-tag parameter to skip certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

    • --force or -f: (Boolean, optional, default False) If true, DEST will be overwritten if it exists. In addition, if the tags carried during the copy have already been used by other versions, this parameter can be used to forcibly update the tags to this version.
    • -i or --ignore-tag: (String, optional) Tags to ignore when copying. The option can be used multiple times.

    Examples for model copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a new model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with a new model name 'mnist-cloud'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli model cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp local/project/myproject/model/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli model cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli model diff

    swcli [GLOBAL OPTIONS] model diff [OPTIONS] <MODEL VERSION> <MODEL VERSION>

    model diff compares the difference between two versions of the same model.

    MODEL VERSION is a model URI.

    • --show-details: (Boolean, optional, default False) If true, outputs the detail information.

    swcli model extract

    swcli [GLOBAL OPTIONS] model extract [OPTIONS] <MODEL> <TARGET_DIR>

    The model extract command can extract a Starwhale model to a specified directory for further customization.

    MODEL is a model URI.

    • --force or -f: (Boolean, optional, default False) If this option is used, it will forcibly overwrite existing extracted model files in the target directory.

    Examples for model extract

    #- extract mnist model package to current directory
    swcli model extract mnist/version/xxxx .

    #- extract mnist model package to current directory and force to overwrite the files
    swcli model extract mnist/version/xxxx . -f

    swcli model history

    swcli [GLOBAL OPTIONS] model history [OPTIONS] <MODEL>

    model history outputs all history versions of the specified Starwhale Model.

    MODEL is a model URI.

    • --fullname: (Boolean, optional, default False) Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli model info

    swcli [GLOBAL OPTIONS] model info [OPTIONS] <MODEL>

    model info outputs detailed information about the specified Starwhale Model version.

    MODEL is a model URI.

    • --output-filter or -of: (Choice of [basic/model_yaml/manifest/files/handlers/all], optional, default basic) Filter the output content. Only the standalone instance supports this option.

    Examples for model info

    swcli model info mnist # show basic info from the latest version of model
    swcli model info mnist/version/v0 # show basic info from the v0 version of model
    swcli model info mnist/version/latest --output-filter=all # show all info
    swcli model info mnist -of basic # show basic info
    swcli model info mnist -of model_yaml # show model.yaml
    swcli model info mnist -of handlers # show model runnable handlers info
    swcli model info mnist -of files # show model package files tree
    swcli -o json model info mnist -of all # show all info in json format

    swcli model list

    swcli [GLOBAL OPTIONS] model list [OPTIONS]

    model list shows all Starwhale Models.

    • --project: (String, optional) The URI of the project to list. Use the default project if not specified.
    • --fullname: (Boolean, optional, default False) Show the full version name. Only the first 12 characters are shown if this option is false.
    • --show-removed: (Boolean, optional, default False) If true, include models that are removed but not garbage collected.
    • --page: (Integer, optional, default 1) The starting page number. Server and cloud instances only.
    • --size: (Integer, optional, default 20) The number of items in one page. Server and cloud instances only.
    • --filter or -fl: (String, optional) Show only Starwhale Models that match the specified filters. This option can be used multiple times in one command.

    Supported filters:

    • name (Key-Value): The name prefix of models, e.g. --filter name=mnist
    • owner (Key-Value): The model owner name, e.g. --filter owner=starwhale
    • latest (Flag): If specified, only the latest version is shown, e.g. --filter latest

    swcli model recover

    swcli [GLOBAL OPTIONS] model recover [OPTIONS] <MODEL>

    model recover recovers previously removed Starwhale Models or versions.

    MODEL is a model URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Models or versions cannot be recovered, nor can those removed with the --force option.

    • --force or -f: (Boolean, optional, default False) If true, overwrite the Starwhale Model or version with the same name or version id.

    swcli model remove

    swcli [GLOBAL OPTIONS] model remove [OPTIONS] <MODEL>

    model remove removes the specified Starwhale Model or version.

    MODEL is a model URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Models or versions can be recovered by swcli model recover before garbage collection. Use the --force option to persistently remove a Starwhale Model or version.

    Removed Starwhale Models or versions can be listed by swcli model list --show-removed.

    • --force or -f: (Boolean, optional, default False) If true, persistently delete the Starwhale Model or version. It can not be recovered.

    swcli model run

    swcli [GLOBAL OPTIONS] model run [OPTIONS]

    model run executes a model handler. model run supports two modes: model URI and local development. Model URI mode needs a pre-built Starwhale model package; local development mode only needs the model src dir.

    • --workdir or -w: (String, optional) For local development mode, the path of the model src dir.
    • --uri or -u: (String, optional) For model URI mode, the model uri string.
    • --handler or -h: (String, optional) Runnable handler index or name; the default is None, which uses the first handler.
    • --module or -m: (String, optional) The name of the Python module to import. This parameter can be set multiple times.
    • --runtime or -r: (String, optional) The Starwhale Runtime URI to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in swcli's current python environment.
    • --model-yaml or -f: (String, optional, default ${MODEL_DIR}/model.yaml) The path to the model.yaml. model.yaml is optional for model run.
    • --run-project or -p: (String, optional, default: the default project) Project URI, indicating the project where the model run results will be stored.
    • --dataset or -d: (String, optional) Dataset URI, the Starwhale dataset required for the model run. This parameter can be set multiple times.
    • --dataset-head or -dh: (Integer, optional, default 0) [ONLY STANDALONE] For debugging purposes, every prediction task will, at most, consume the first n rows of every dataset. When the value is less than or equal to 0, all samples are used.
    • --in-container: (Boolean, optional, default False) Use a docker container to run the model. This option is only available for standalone instances. For server and cloud instances, a docker image is always used. If the runtime is a docker image, this option is always implied.
    • --forbid-snapshot or -fs: (Boolean, optional, default False) In model URI mode, each model run uses a new snapshot directory. Setting this parameter will directly use the model's workdir as the run directory. In local dev mode, this parameter does not take effect; each run is in the --workdir specified directory.
    • -- --user-arbitrary-args: (String, optional) Specify the args you defined in your handlers.

    Examples for model run

    # --> run by model uri
    # run the first handler from model uri
    swcli model run -u mnist/version/latest
    # run index id(1) handler from model uri
    swcli model run --uri mnist/version/latest --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from model uri
    swcli model run --uri mnist/version/latest --handler mnist.evaluator:MNISTInference.cmp

    # --> run by the working directory, which does not build model package yet. Make local debug happy.
    # run the first handler from the working directory, use the model.yaml in the working directory
    swcli model run -w .
    # run index id(1) handler from the working directory, search mnist.evaluator module and model.yaml handlers(if existed) to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from the working directory, search mnist.evaluator module to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler mnist.evaluator:MNISTInference.cmp
    # run the f handler in th.py from the working directory with the args defined in th:f
    # @handler()
    # def f(
    # x=ListInput(IntInput()),
    # y=2,
    # mi=MyInput(),
    # ds=DatasetInput(required=True),
    # ctx=ContextInput(),
    # )
    swcli model run -w . -m th --handler th:f -- -x 2 -x=1 --mi=blab-la --ds mnist

    # --> run with dataset of head 10
    swcli model run --uri mnist --dataset-head 10 --dataset mnist

    swcli model serve

    swcli [GLOBAL OPTIONS] model serve [OPTIONS]

    The model serve command runs the model as a web server and provides a simple web interface for interaction.

    Option | Required | Type | Defaults | Description
    --workdir or -w | N | String | | In local dev mode, specify the directory of the model code.
    --uri or -u | N | String | | In model URI mode, specify the model URI.
    --runtime or -r | N | String | | The URI of the Starwhale runtime to use when running this command. If specified, the command will run in the isolated Python environment defined in the Starwhale runtime. Otherwise it will run directly in the current Python environment of swcli.
    --model-yaml or -f | N | String | ${MODEL_DIR}/model.yaml | The path to the model.yaml. model.yaml is optional for model serve.
    --module or -m | N | String | | Name of the Python module to import. This parameter can be set multiple times.
    --host | N | String | 127.0.0.1 | The address for the service to listen on.
    --port | N | Integer | 8080 | The port for the service to listen on.

    Examples for model serve

    swcli model serve -u mnist
    swcli model serve --uri mnist/version/latest --runtime pytorch/version/latest

    swcli model serve --workdir . --runtime pytorch/version/v0
    swcli model serve --workdir . --runtime pytorch/version/v1 --host 0.0.0.0 --port 8080
    swcli model serve --workdir . --runtime pytorch --module mnist.evaluator

    swcli model tag

    swcli [GLOBAL OPTIONS] model tag [OPTIONS] <MODEL> [TAGS]...

    model tag attaches a tag to a specified Starwhale Model version. The tag command also supports listing and removing tags. A tag can be used in a model URI in place of the version id.

    MODEL is a model URI.

    Each model version can have any number of tags, but duplicated tag names are not allowed in the same model.

    model tag only works for the Standalone Instance.

    Option | Required | Type | Defaults | Description
    --remove or -r | N | Boolean | False | Remove the tag if true.
    --quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
    --force-add or -f | N | Boolean | False | When adding a tag on a server/cloud instance, an error is reported if the tag is already used by another model version. In this case, you can force the update with the --force-add parameter.

    Examples for model tag

    #- list tags of the mnist model
    swcli model tag mnist

    #- add tags for the mnist model
    swcli model tag mnist t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist/version/latest t1 --force-add
    swcli model tag mnist t1 --quiet

    #- remove tags for the mnist model
    swcli model tag mnist -r t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/project/index.html b/0.6.5/reference/swcli/project/index.html index 59f6a1290..b82774994 100644 --- a/0.6.5/reference/swcli/project/index.html +++ b/0.6.5/reference/swcli/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    swcli project

    Overview

    swcli [GLOBAL OPTIONS] project [OPTIONS] <SUBCOMMAND> [ARGS]...

    The project command includes the following subcommands:

    • create(add, new)
    • info
    • list(ls)
    • recover
    • remove(rm)
    • use(select)

    swcli project create

    swcli [GLOBAL OPTIONS] project create <PROJECT>

    project create creates a new project.

    PROJECT is a project URI.
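
    A minimal usage sketch (the project name myproject and the instance alias pre-k8s are placeholders):

    #- create a project on the standalone instance
    swcli project create myproject
    #- create a project on a logged-in server/cloud instance
    swcli project create cloud://pre-k8s/project/myproject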

    swcli project info

    swcli [GLOBAL OPTIONS] project info [OPTIONS] <PROJECT>

    project info outputs detailed information about the specified Starwhale Project.

    PROJECT is a project URI.

    swcli project list

    swcli [GLOBAL OPTIONS] project list [OPTIONS]

    project list shows all Starwhale Projects.

    Option | Required | Type | Defaults | Description
    --instance | N | String | | The URI of the instance to list. If this option is omitted, use the default instance.
    --show-removed | N | Boolean | False | If true, include projects that are removed but not garbage collected.
    --page | N | Integer | 1 | The starting page number. Server and cloud instances only.
    --size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.

    swcli project recover

    swcli [GLOBAL OPTIONS] project recover [OPTIONS] <PROJECT>

    project recover recovers previously removed Starwhale Projects.

    PROJECT is a project URI.

    Garbage-collected Starwhale Projects cannot be recovered, nor can those removed with the --force option.

    swcli project remove

    swcli [GLOBAL OPTIONS] project remove [OPTIONS] <PROJECT>

    project remove removes the specified Starwhale Project.

    PROJECT is a project URI.

    Removed Starwhale Projects can be recovered by swcli project recover before garbage collection. Use the --force option to persistently remove a Starwhale Project.

    Removed Starwhale Projects can be listed by swcli project list --show-removed.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, persistently delete the Starwhale Project. It can not be recovered.

    swcli project use

    swcli [GLOBAL OPTIONS] project use <PROJECT>

    project use makes the specified project the default project. You must log in first to use a project on a Server/Cloud instance.
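
    A minimal usage sketch (the instance alias and project name are placeholders):

    #- use the built-in "self" project of the standalone instance as the default
    swcli project use self
    #- use a project on a logged-in server/cloud instance as the default
    swcli project use cloud://pre-k8s/project/mnist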

    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/runtime/index.html b/0.6.5/reference/swcli/runtime/index.html index 4ac3273b4..fd022e262 100644 --- a/0.6.5/reference/swcli/runtime/index.html +++ b/0.6.5/reference/swcli/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    swcli runtime

    Overview

    swcli [GLOBAL OPTIONS] runtime [OPTIONS] <SUBCOMMAND> [ARGS]...

    The runtime command includes the following subcommands:

    • activate(actv)
    • build
    • copy(cp)
    • dockerize
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • tag

    swcli runtime activate

    swcli [GLOBAL OPTIONS] runtime activate [OPTIONS] <RUNTIME>

    Like source venv/bin/activate or conda activate xxx, runtime activate sets up a new Python environment according to the settings of the specified runtime. When the current shell is closed or switched to another one, you need to reactivate the runtime. RUNTIME is a Runtime URI.

    If you want to quit the activated runtime environment, run deactivate in the venv environment or conda deactivate in the conda environment.

    The runtime activate command builds an isolated Python environment and downloads the relevant Python packages according to the definition of the Starwhale runtime when activating the environment for the first time. This process may take a long time.
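
    A minimal usage sketch, assuming a runtime named pytorch already exists in the default project:

    #- activate the latest version of the pytorch runtime
    swcli runtime activate pytorch
    #- activate a specific version
    swcli runtime activate pytorch/version/v1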

    swcli runtime build

    swcli [GLOBAL OPTIONS] runtime build [OPTIONS]

    The runtime build command can build a shareable and reproducible runtime environment suitable for ML/DL from various existing environments or from a runtime.yaml file.

    Parameters

    • Parameters related to runtime building methods:
    Option | Required | Type | Defaults | Description
    -c or --conda | N | String | | Find the corresponding conda environment by conda env name, export Python dependencies to generate Starwhale runtime.
    -cp or --conda-prefix | N | String | | Find the corresponding conda environment by conda env prefix path, export Python dependencies to generate Starwhale runtime.
    -v or --venv | N | String | | Find the corresponding venv environment by venv directory path, export Python dependencies to generate Starwhale runtime.
    -s or --shell | N | String | | Export Python dependencies according to the current shell environment to generate Starwhale runtime.
    -y or --yaml | N | | runtime.yaml in the cwd directory | Build Starwhale runtime according to a user-defined runtime.yaml.
    -d or --docker | N | String | | Use the docker image as Starwhale runtime.

    The parameters for runtime building methods are mutually exclusive; only one method can be specified. If none is specified, the --yaml method is used to read runtime.yaml in the cwd directory and build the Starwhale runtime.

    • Other parameters:
    Option | Required | Scope | Type | Defaults | Description
    --project or -p | N | Global | String | Default project | Project URI
    -del or --disable-env-lock | N | runtime.yaml mode | Boolean | False | If set, do not install the dependencies in runtime.yaml or lock the version information of related dependencies. The dependencies are locked by default.
    -nc or --no-cache | N | runtime.yaml mode | Boolean | False | If set, delete the isolated environment and install related dependencies from scratch. By default, dependencies are installed in the existing isolated environment.
    --cuda | N | conda/venv/shell mode | Choice[11.3/11.4/11.5/11.6/11.7/] | | CUDA version. CUDA is not used by default.
    --cudnn | N | conda/venv/shell mode | Choice[8/] | | cuDNN version. cuDNN is not used by default.
    --arch | N | conda/venv/shell mode | Choice[amd64/arm64/noarch] | noarch | Architecture
    -dpo or --dump-pip-options | N | Global | Boolean | False | Dump pip config options from the ~/.pip/pip.conf file.
    -dcc or --dump-condarc | N | Global | Boolean | False | Dump conda config from the ~/.condarc file.
    -t or --tag | N | Global | String | | Runtime tags; the option can be used multiple times.

    Examples for Starwhale Runtime building

    #- from runtime.yaml:
    swcli runtime build # use the current directory as the workdir and use the default runtime.yaml file
    swcli runtime build -y example/pytorch/runtime.yaml # use example/pytorch/runtime.yaml as the runtime.yaml file
    swcli runtime build --yaml runtime.yaml # use runtime.yaml at the current directory as the runtime.yaml file
    swcli runtime build --tag tag1 --tag tag2

    #- from conda name:
    swcli runtime build -c pytorch # lock pytorch conda environment and use `pytorch` as the runtime name
    swcli runtime build --conda pytorch --name pytorch-runtime # use `pytorch-runtime` as the runtime name
    swcli runtime build --conda pytorch --cuda 11.4 # specify the cuda version
    swcli runtime build --conda pytorch --arch noarch # specify the system architecture

    #- from conda prefix path:
    swcli runtime build --conda-prefix /home/starwhale/anaconda3/envs/pytorch # get conda prefix path by `conda info --envs` command

    #- from venv prefix path:
    swcli runtime build -v /home/starwhale/.virtualenvs/pytorch
    swcli runtime build --venv /home/starwhale/.local/share/virtualenvs/pytorch --arch amd64

    #- from docker image:
    swcli runtime build --docker pytorch/pytorch:1.9.0-cuda11.1-cudnn8-runtime # use the docker image as the runtime directly

    #- from shell:
    swcli runtime build -s --cuda 11.4 --cudnn 8 # specify the cuda and cudnn version
    swcli runtime build --shell --name pytorch-runtime # lock the current shell environment and use `pytorch-runtime` as the runtime name

    swcli runtime copy

    swcli [GLOBAL OPTIONS] runtime copy [OPTIONS] <SRC> <DEST>

    runtime copy copies from SRC to DEST. SRC and DEST are both Runtime URIs.

    When copying a Starwhale Runtime, all custom user-defined tags are copied by default. You can use the --ignore-tag parameter to ignore certain tags. In addition, the latest and ^v\d+$ tags are built-in Starwhale system tags that are only used within the instance itself and will not be copied to other instances.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying are already used by other versions, this parameter can be used to forcibly update the tags to this version.
    -i or --ignore-tag | N | String | | Tags to ignore when copying. The option can be used multiple times.

    Examples for Starwhale Runtime copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a new runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with a new runtime name 'mnist-cloud'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli runtime cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp local/project/myproject/runtime/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli runtime cp pytorch cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli runtime dockerize

    swcli [GLOBAL OPTIONS] runtime dockerize [OPTIONS] <RUNTIME>

    runtime dockerize generates a docker image based on the specified runtime. Starwhale uses docker buildx to create the image. Docker 19.03 or later is required to run this command.

    RUNTIME is a Runtime URI.

    Option | Required | Type | Defaults | Description
    --tag or -t | N | String | | The tag of the docker image. This option can be repeated multiple times.
    --push | N | Boolean | False | If true, push the image to the docker registry.
    --platform | N | String | amd64 | The target platform, can be either amd64 or arm64. This option can be repeated multiple times to create a multi-platform image.
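
    A hedged usage sketch (the image tag myrepo/pytorch-runtime:latest is a placeholder):

    #- build a local docker image for the latest version of the pytorch runtime
    swcli runtime dockerize pytorch
    #- build a multi-platform image, tag it, and push it to the registry
    swcli runtime dockerize pytorch/version/latest --tag myrepo/pytorch-runtime:latest --platform amd64 --platform arm64 --push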

    swcli runtime extract

    swcli [GLOBAL OPTIONS] runtime extract [OPTIONS] <RUNTIME>

    Starwhale runtimes are distributed as compressed packages. The runtime extract command extracts the runtime package for further customization and modification.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | Whether to delete and re-extract if there is already an extracted Starwhale runtime in the target directory.
    --target-dir | N | String | | Custom extraction directory. If not specified, it will be extracted to the default Starwhale runtime workdir. The command log will show the directory location.
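
    A minimal usage sketch (the target directory is a placeholder):

    #- extract the latest version of the pytorch runtime to the default workdir
    swcli runtime extract pytorch
    #- force re-extraction into a custom directory
    swcli runtime extract pytorch/version/latest --force --target-dir /tmp/pytorch-runtime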

    swcli runtime history

    swcli [GLOBAL OPTIONS] runtime history [OPTIONS] <RUNTIME>

    runtime history outputs all history versions of the specified Starwhale Runtime.

    RUNTIME is a Runtime URI.

    Option | Required | Type | Defaults | Description
    --fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.

    swcli runtime info

    swcli [GLOBAL OPTIONS] runtime info [OPTIONS] <RUNTIME>

    runtime info outputs detailed information about a specified Starwhale Runtime version.

    RUNTIME is a Runtime URI.

    Option | Required | Type | Defaults | Description
    --output-filter or -of | N | Choice of [basic/runtime_yaml/manifest/lock/all] | basic | Filter the output content. Only the standalone instance supports this option.

    Examples for Starwhale Runtime info

    swcli runtime info pytorch # show basic info from the latest version of runtime
    swcli runtime info pytorch/version/v0 # show basic info
    swcli runtime info pytorch/version/v0 --output-filter basic # show basic info
    swcli runtime info pytorch/version/v1 -of runtime_yaml # show runtime.yaml content
    swcli runtime info pytorch/version/v1 -of lock # show auto lock file content
    swcli runtime info pytorch/version/v1 -of manifest # show _manifest.yaml content
    swcli runtime info pytorch/version/v1 -of all # show all info of the runtime

    swcli runtime list

    swcli [GLOBAL OPTIONS] runtime list [OPTIONS]

    runtime list shows all Starwhale Runtimes.

    Option | Required | Type | Defaults | Description
    --project | N | String | | The URI of the project to list. Use the default project if not specified.
    --fullname | N | Boolean | False | Show the full version name. Only the first 12 characters are shown if this option is false.
    --show-removed or -sr | N | Boolean | False | If true, include runtimes that are removed but not garbage collected.
    --page | N | Integer | 1 | The starting page number. Server and cloud instances only.
    --size | N | Integer | 20 | The number of items in one page. Server and cloud instances only.
    --filter or -fl | N | String | | Show only Starwhale Runtimes that match specified filters. This option can be used multiple times in one command.

    Filter | Type | Description | Example
    name | Key-Value | The name prefix of runtimes | --filter name=pytorch
    owner | Key-Value | The runtime owner name | --filter owner=starwhale
    latest | Flag | If specified, it shows only the latest version. | --filter latest
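
    A minimal usage sketch combining the filters above:

    #- list all runtimes in the default project
    swcli runtime list
    #- list only the latest versions of runtimes whose name starts with "pytorch" and that are owned by starwhale
    swcli runtime list --filter name=pytorch --filter owner=starwhale --filter latest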

    swcli runtime recover

    swcli [GLOBAL OPTIONS] runtime recover [OPTIONS] <RUNTIME>

    runtime recover can recover previously removed Starwhale Runtimes or versions.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Runtimes or versions cannot be recovered, nor can those removed with the --force option.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, overwrite the Starwhale Runtime or version with the same name or version id.

    swcli runtime remove

    swcli [GLOBAL OPTIONS] runtime remove [OPTIONS] <RUNTIME>

    runtime remove removes the specified Starwhale Runtime or version.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Runtimes or versions can be recovered by swcli runtime recover before garbage collection. Use the --force option to persistently remove a Starwhale Runtime or version.

    Removed Starwhale Runtimes or versions can be listed by swcli runtime list --show-removed.

    Option | Required | Type | Defaults | Description
    --force or -f | N | Boolean | False | If true, persistently delete the Starwhale Runtime or version. It can not be recovered.
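
    A minimal usage sketch for remove and recover (the runtime name is a placeholder):

    #- remove one version; it can still be recovered before garbage collection
    swcli runtime remove pytorch/version/v1
    swcli runtime recover pytorch/version/v1
    #- permanently remove all versions of the runtime
    swcli runtime remove pytorch --force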

    swcli runtime tag

    swcli [GLOBAL OPTIONS] runtime tag [OPTIONS] <RUNTIME> [TAGS]...

    runtime tag attaches a tag to a specified Starwhale Runtime version. The tag command also supports listing and removing tags. A tag can be used in a runtime URI in place of the version id.

    RUNTIME is a Runtime URI.

    Each runtime version can have any number of tags, but duplicated tag names are not allowed in the same runtime.

    runtime tag only works for the Standalone Instance.

    Option | Required | Type | Defaults | Description
    --remove or -r | N | Boolean | False | Remove the tag if true.
    --quiet or -q | N | Boolean | False | Ignore errors, for example, removing tags that do not exist.
    --force-add or -f | N | Boolean | False | When adding a tag on a server/cloud instance, an error is reported if the tag is already used by another runtime version. In this case, you can force the update with the --force-add parameter.

    Examples for runtime tag

    #- list tags of the pytorch runtime
    swcli runtime tag pytorch

    #- add tags for the pytorch runtime
    swcli runtime tag mnist t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch/version/latest t1 --force-add
    swcli runtime tag mnist t1 --quiet

    #- remove tags for the pytorch runtime
    swcli runtime tag mnist -r t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.5/reference/swcli/utilities/index.html b/0.6.5/reference/swcli/utilities/index.html index ce576aa28..912341c21 100644 --- a/0.6.5/reference/swcli/utilities/index.html +++ b/0.6.5/reference/swcli/utilities/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Utility Commands

    swcli gc

    swcli [GLOBAL OPTIONS] gc [OPTIONS]

    gc clears removed projects, models, datasets, and runtimes according to the internal garbage collection policy.

    Option | Required | Type | Defaults | Description
    --dry-run | N | Boolean | False | If true, outputs objects to be removed instead of clearing them.
    --yes | N | Boolean | False | Bypass confirmation prompts.
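
    A minimal usage sketch:

    #- preview which removed objects would be cleared
    swcli gc --dry-run
    #- clear them without confirmation prompts
    swcli gc --yes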

    swcli check

    swcli [GLOBAL OPTIONS] check

    Check if the external dependencies of the swcli command meet the requirements. Currently mainly checks Docker and Conda.

    swcli completion install

    swcli [GLOBAL OPTIONS] completion install <SHELL_NAME>

    Install autocompletion for swcli commands. Currently supports bash, zsh and fish. If SHELL_NAME is not specified, it will try to automatically detect the current shell type.
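
    A minimal usage sketch:

    #- auto-detect the current shell and install completion
    swcli completion install
    #- install completion for a specific shell
    swcli completion install zsh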

    swcli config edit

    swcli [GLOBAL OPTIONS] config edit

    Edit the Starwhale configuration file at ~/.config/starwhale/config.yaml.

    swcli ui

    swcli [GLOBAL OPTIONS] ui <INSTANCE>

    Open the web page for the corresponding instance.
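
    A minimal usage sketch (the instance alias pre-k8s is a placeholder for a logged-in server/cloud instance):

    swcli ui pre-k8s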

    - - + + \ No newline at end of file diff --git a/0.6.5/runtime/index.html b/0.6.5/runtime/index.html index 7ad3887d6..843ac486f 100644 --- a/0.6.5/runtime/index.html +++ b/0.6.5/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Starwhale Runtime

    Overview

    Starwhale Runtime aims to provide a reproducible and sharable running environment for python programs. You can easily share your working environment with your teammates or outsiders, and vice versa. Furthermore, you can run your programs on Starwhale Server or Starwhale Cloud without bothering with the dependencies.

    Starwhale works well with virtualenv, conda, and docker. If you are using one of them, it is straightforward to create a Starwhale Runtime based on your current environment.

    Multiple Starwhale Runtimes on your local machine can be switched freely with one command, so you can work on different projects without messing up the environment. A Starwhale Runtime consists of two parts: the base image and the dependencies.

    The base image

    The base image is a docker image with Python, CUDA, and cuDNN installed. Starwhale provides various base images for you to choose from; see the following list:

    • Computer system architecture:
      • X86 (amd64)
      • Arm (aarch64)
    • Operating system:
      • Ubuntu 20.04 LTS (ubuntu:20.04)
    • Python:
      • 3.7
      • 3.8
      • 3.9
      • 3.10
      • 3.11
    • CUDA:
      • CUDA 11.3 + cuDNN 8.4
      • CUDA 11.4 + cuDNN 8.4
      • CUDA 11.5 + cuDNN 8.4
      • CUDA 11.6 + cuDNN 8.4
      • CUDA 11.7
    - - + + \ No newline at end of file diff --git a/0.6.5/runtime/yaml/index.html b/0.6.5/runtime/yaml/index.html index f5348f83f..b8b1c35c9 100644 --- a/0.6.5/runtime/yaml/index.html +++ b/0.6.5/runtime/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    The runtime.yaml Specification

    runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. runtime.yaml is required for the yaml mode of the swcli runtime build command.

    Examples

    The simplest example

    dependencies:
    - pip:
    - numpy
    name: simple-test

    Define a Starwhale Runtime that uses venv as the Python virtual environment for package isolation, and installs the numpy dependency.

    The llama2 example

    name: llama2
    mode: venv
    environment:
    arch: noarch
    os: ubuntu:20.04
    cuda: 11.7
    python: "3.10"
    dependencies:
    - pip:
    - torch
    - fairscale
    - fire
    - sentencepiece
    - gradio >= 3.37.0
    # external starwhale dependencies
    - starwhale[serve] >= 0.5.5

    The full definition example

    # [required]The name of Starwhale Runtime
    name: demo
    # [optional]The mode of Starwhale Runtime: venv or conda. Default is venv.
    mode: venv
    # [optional]The configurations of pip and conda.
    configs:
    # If you do not use conda, ignore this field.
    conda:
    condarc: # custom condarc config file
    channels:
    - defaults
    show_channel_urls: true
    default_channels:
    - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
    - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
    - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
    custom_channels:
    conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
    pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
    pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
    nvidia: https://mirrors.aliyun.com/anaconda/cloud
    ssl_verify: false
    default_threads: 10
    pip:
    # pip config set global.index-url
    index_url: https://example.org/
    # pip config set global.extra-index-url
    extra_index_url: https://another.net/
    # pip config set install.trusted-host
    trusted_host:
    - example.org
    - another.net
    # [optional] The definition of the environment.
    environment:
    # Now it must be ubuntu:20.04
    os: ubuntu:20.04
    # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
    cuda: 11.4
    # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
    python: 3.8
    # Define your custom base image
    docker:
    image: mycustom.com/docker/image:tag
    # [required] The dependencies of the Starwhale Runtime.
    dependencies:
    # If this item is present, conda env create -f conda.yaml will be executed
    - conda.yaml
    # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
    - requirements.txt
    # Packages to be installed with conda. venv mode ignores the conda field.
    - conda:
    - numpy
    - requests
    # Packages to be installed with pip. The format is the same as requirements.txt
    - pip:
    - pillow
    - numpy
    - deepspeed==0.9.0
    - safetensors==0.3.0
    - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
    - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
    - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
    # Additional wheels packages to be installed when restoring the runtime
    - wheels:
    - dummy-0.0.0-py3-none-any.whl
    # Additional files to be included in the runtime
    - files:
    - dest: bin/prepare.sh
    name: prepare
    src: scripts/prepare.sh
    # Run some custom commands
    - commands:
    - apt-get install -y libgl1
    - touch /tmp/runtime-command-run.flag
    - - + + \ No newline at end of file diff --git a/0.6.5/server/guides/server_admin/index.html b/0.6.5/server/guides/server_admin/index.html index aaad508d4..f7f531bf7 100644 --- a/0.6.5/server/guides/server_admin/index.html +++ b/0.6.5/server/guides/server_admin/index.html @@ -10,14 +10,14 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Controller Admin Settings

    Superuser Password Reset

    In case you forget the superuser's password, you can use the SQL below to reset the password to abcd1234:

    update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1

    After that, you can log in to the console and change the password to whatever you want.

    System Settings

    You can customize the system to make it easier to use by leveraging the system settings. Here is an example:

    dockerSetting:
    registryForPull: "docker-registry.starwhale.cn/star-whale"
    registryForPush: ""
    userName: ""
    password: ""
    insecure: true
    pypiSetting:
    indexUrl: ""
    extraIndexUrl: ""
    trustedHost: ""
    retries: 10
    timeout: 90
    imageBuild:
    resourcePool: ""
    image: ""
    clientVersion: ""
    pythonVersion: ""
    datasetBuild:
    resourcePool: ""
    image: ""
    clientVersion: ""
    pythonVersion: ""
    resourcePoolSetting:
    - name: "default"
    nodeSelector: null
    resources:
    - name: "cpu"
    max: null
    min: null
    defaults: 5.0
    - name: "memory"
    max: null
    min: null
    defaults: 3145728.0
    - name: "nvidia.com/gpu"
    max: null
    min: null
    defaults: null
    tolerations: null
    metadata: null
    isPrivate: null
    visibleUserIds: null
    storageSetting:
    - type: "minio"
    tokens:
    bucket: "users"
    ak: "starwhale"
    sk: "starwhale"
    endpoint: "http://10.131.0.1:9000"
    region: "local"
    hugeFileThreshold: "10485760"
    hugeFilePartSize: "5242880"
    - type: "s3"
    tokens:
    bucket: "users"
    ak: "starwhale"b
    sk: "starwhale"
    endpoint: "http://10.131.0.1:9000"
    region: "local"
    hugeFileThreshold: "10485760"
    hugeFilePartSize: "5242880"

    Image Registry

    Tasks dispatched by the server are based on docker images. Pulling these images can be slow if your internet connection is poor. Starwhale Server supports custom image registries, including dockerSetting.registryForPull and dockerSetting.registryForPush.

    Resource Pool

    The resourcePoolSetting allows you to manage your cluster in a group manner. It is currently implemented with the Kubernetes nodeSelector: you can label the machines in your Kubernetes cluster and group them into a resourcePool in Starwhale, as shown in the sketch below.
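
    For illustration, a hedged sketch of a resourcePoolSetting entry that groups GPU nodes by a Kubernetes label (the pool name and the label key/value are assumptions, not Starwhale defaults):

    resourcePoolSetting:
    - name: "gpu-a100"                # pool name shown when submitting jobs
      nodeSelector:                   # standard Kubernetes nodeSelector
        starwhale.ai/pool: "a100"     # label the nodes first: kubectl label node <node-name> starwhale.ai/pool=a100
      resources:
      - name: "nvidia.com/gpu"
        max: 8
        min: 1
        defaults: 1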

    Remote Storage

    The storageSetting allows you to manage the storage systems the server can access.

    storageSetting:
    - type: s3
    tokens:
    - bucket: starwhale # required
    ak: access_key # required
    sk: secret_key # required
    endpoint: http://s3.region.amazonaws.com # optional
    region: region of the service # required when endpoint is empty
    hugeFileThreshold: 10485760 # files bigger than 10MB use multipart upload
    hugeFilePartSize: 5242880 # part size (5MB) for multipart upload
    - type: minio
    tokens:
    - bucket: starwhale # required
    ak: access_key # required
    sk: secret_key # required
    endpoint: http://10.131.0.1:9000 # required
    region: local # optional
    hugeFileThreshold: 10485760 # files bigger than 10MB use multipart upload
    hugeFilePartSize: 5242880 # part size (5MB) for multipart upload
    - type: aliyun
    tokens:
    - bucket: starwhale # required
    ak: access_key # required
    sk: secret_key # required
    endpoint: http://10.131.0.2:9000 # required
    region: local # optional
    hugeFileThreshold: 10485760 # files bigger than 10MB use multipart upload
    hugeFilePartSize: 5242880 # part size (5MB) for multipart upload

    Every storageSetting item has a corresponding implementation of the StorageAccessService interface. Starwhale has four built-in implementations:

    • StorageAccessServiceAliyun matches type in (aliyun,oss)
    • StorageAccessServiceMinio matches type in (minio)
    • StorageAccessServiceS3 matches type in (s3)
    • StorageAccessServiceFile matches type in (fs, file)

    Each implementation has different requirements for tokens. endpoint is required when type is aliyun or minio; region is required when type is s3 and endpoint is empty. The fs/file type requires tokens named rootDir and serviceProvider, as sketched below. Please refer to the code for more details.
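
    Based on the description above, a hedged sketch of an fs/file entry (the rootDir path and the serviceProvider value are placeholders; check the StorageAccessServiceFile code for the exact expectations):

    storageSetting:
    - type: "fs"
      tokens:
      - rootDir: "/mnt/starwhale-data"   # required for fs/file type
        serviceProvider: "fs"            # required for fs/file type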

    - - + + \ No newline at end of file diff --git a/0.6.5/server/index.html b/0.6.5/server/index.html index f10afcc16..b25517606 100644 --- a/0.6.5/server/index.html +++ b/0.6.5/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.5/server/installation/docker-compose/index.html b/0.6.5/server/installation/docker-compose/index.html index b7a509d2a..4760311bb 100644 --- a/0.6.5/server/installation/docker-compose/index.html +++ b/0.6.5/server/installation/docker-compose/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Install Starwhale Server with Docker Compose

    Prerequisites

    Usage

    Start up the server

    wget https://raw.githubusercontent.com/star-whale/starwhale/main/docker/compose/compose.yaml
    GLOBAL_IP=${your_accessible_ip_for_server} ; docker compose up

    GLOBAL_IP is the IP of the Controller, which must be accessible to all swcli clients, both inside docker containers and on other user machines.

    compose.yaml contains the Starwhale Controller/MySQL/MinIO services. You can create a compose.override.yaml which, as its name implies, contains configuration overrides for compose.yaml. The available configurations are specified here.
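
    For illustration only, a hedged compose.override.yaml sketch that overrides an environment variable of the controller service (the service name comes from the upstream compose.yaml and may differ; any Starwhale Server variable from the environment example later in this guide could be set here):

    services:
      controller:                                # check compose.yaml for the actual service name
        environment:
          - SW_JWT_TOKEN_EXPIRE_MINUTES=43200    # example: extend the login token lifetime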

    - - + + \ No newline at end of file diff --git a/0.6.5/server/installation/docker/index.html b/0.6.5/server/installation/docker/index.html index 7f0ccaaae..3cd54f0a3 100644 --- a/0.6.5/server/installation/docker/index.html +++ b/0.6.5/server/installation/docker/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Install Starwhale Server with Docker

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage service to save datasets, models, and other artifacts.

    Please make sure pods on the Kubernetes cluster can access the port exposed by the Starwhale Server installation.

    Prepare an env file for Docker

    Starwhale Server can be configured by environment variables.

    An env file template for Docker is here. You may create your own env file by modifying the template.

    Prepare a kubeconfig file [Optional][SW_SCHEDULER=k8s]

    The kubeconfig file is used for accessing the Kubernetes cluster. For more information about kubeconfig files, see the Official Kubernetes Documentation.

    If you have a local kubectl command-line tool installed, you can run kubectl config view to see your current configuration.

    Run the Docker image

    docker run -it -d --name starwhale-server -p 8082:8082 \
    --restart unless-stopped \
    --mount type=bind,source=<path to your kubeconfig file>,destination=/root/.kube/config,readonly \
    --env-file <path to your env file> \
    ghcr.io/star-whale/server:0.5.6

    For users in the mainland of China, use docker image: docker-registry.starwhale.cn/star-whale/server.

    - - + + \ No newline at end of file diff --git a/0.6.5/server/installation/index.html b/0.6.5/server/installation/index.html index 3700e7713..ccb5bb18d 100644 --- a/0.6.5/server/installation/index.html +++ b/0.6.5/server/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Starwhale Server Installation Guide

    Starwhale Server is delivered as a Docker image, which can be run with Docker directly or deployed to a Kubernetes cluster or Minikube.

    - - + + \ No newline at end of file diff --git a/0.6.5/server/installation/k8s-cluster/index.html b/0.6.5/server/installation/k8s-cluster/index.html index f4224c81e..8a1739528 100644 --- a/0.6.5/server/installation/k8s-cluster/index.html +++ b/0.6.5/server/installation/k8s-cluster/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Install Starwhale Server to Kubernetes Cluster

    In a private deployment scenario, Starwhale Server can be deployed to a Kubernetes cluster using Helm. Starwhale Server relies on two fundamental infrastructure dependencies: MySQL database and object storage.

    • For production environments, it is recommended to provide an external, highly available MySQL database and object storage service.
    • For trial or testing environments, the standalone versions of MySQL and MinIO, included in the Starwhale Charts, can be utilized.

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • Kubernetes Ingress provides HTTP(S) routing.
    • Helm 3.2.0+.
    • [Production Required] A running MySQL 8.0+ instance to store metadata.
    • [Production Required] A S3-compatible object storage system to save datasets, models, and others. Currently tested compatible object storage services:

    Helm Charts

    Downloading Helm Charts

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update

    Editing values.yaml (production required)

    In a production environment, it is recommended to configure parameters like the MySQL database, object storage, domain names, and memory allocation by editing values.yaml based on actual deployment needs. Below is a sample values.yaml for reference:

    # Set image registry for China mainland, recommend "docker-registry.starwhale.cn". Other network environments can ignore this setting, will use ghcr.io: https://github.com/orgs/star-whale/packages.
    image:
    registry: docker-registry.starwhale.cn
    org: star-whale

    # External MySQL service depended in production, MySQL version needs to be greater than 8.0
    externalMySQL:
    host: 10.0.1.100 # Database IP address or domain that is accessible within the Kubernetes cluster
    port: 3306
    username: "your-username"
    password: "your-password"
    database: starwhale # Needs to pre-create the database, name can be specified freely, default charset is fine. The database user specified above needs read/write permissions to this database

    # External S3 protocol compatible object storage service relied on in production
    externalOSS:
    host: ks3-cn-beijing.ksyuncs.com # Object storage IP address or domain that is accessible from both the Kubernetes cluster and Standalone instances
    port: 80
    accessKey: "your-ak"
    secretKey: "your-sk"
    defaultBuckets: test-gp # Needs to pre-create the Bucket, name can be specified freely. The ak/sk specified above needs read/write permissions to this Bucket
    region: BEIJING # Object storage corresponding region, defaults to local

    # If external object storage is specified in production, built-in single instance MinIO is not needed
    minio:
    enabled: false

    # If external MySQL is specified in production, built-in single instance MySQL is not needed
    mysql:
    enabled: false

    controller:
    containerPort: 8082
    storageType: "ksyun" # Type of object storage service minio/s3/ksyun/baidu/tencent/aliyun

    ingress:
    enabled: true
    ingressClassName: nginx # Corresponds to the Ingress Controller in the Kubernetes cluster
    host: server-domain-name # External accessible domain name for the Server
    path: /

    # Recommend at least 32GB memory and 8 CPU cores for Starwhale Server in production
    resources:
    controller:
    limits:
    memory: 32G
    cpu: 8
    requests:
    memory: 32G
    cpu: 8

    # Downloading Python Packages defined in Starwhale Runtime requires setting PyPI mirror corresponding to actual network environment. Can also modify later in Server System Settings page.
    mirror:
    pypi:
    enabled: true
    indexUrl: "https://mirrors.aliyun.com/pypi/simple/"
    extraIndexUrl: "https://pypi.tuna.tsinghua.edu.cn/simple/"
    trustedHost: "mirrors.aliyun.com pypi.tuna.tsinghua.edu.cn"

    Deploying/Upgrading Starwhale Server

    The following command can be used for both initial deployment and upgrades. It will automatically create a Kubernetes namespace called "starwhale". values.custom.yaml is the values.yaml file written according to the actual needs of the cluster.

    helm upgrade --devel --install starwhale starwhale/starwhale --namespace starwhale --create-namespace -f values.custom.yaml

    If you have a local kubectl command-line tool installed, you can run kubectl get pods -n starwhale to check if all pods are running.

    Uninstalling Starwhale Server

    helm delete starwhale --namespace starwhale
    - - + + \ No newline at end of file diff --git a/0.6.5/server/installation/minikube/index.html b/0.6.5/server/installation/minikube/index.html index e6cdfd57e..dd27a03bb 100644 --- a/0.6.5/server/installation/minikube/index.html +++ b/0.6.5/server/installation/minikube/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Install Starwhale Server with Minikube

    Prerequisites

    Starting Minikube

    minikube start --addons ingress

    For users in the mainland of China, please run the following commands:

    minikube start --kubernetes-version=1.25.3 --image-repository=docker-registry.starwhale.cn/minikube --base-image=docker-registry.starwhale.cn/minikube/k8s-minikube/kicbase:v0.0.42

    minikube addons enable ingress --images="KubeWebhookCertgenPatch=ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0,KubeWebhookCertgenCreate=ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0,IngressController=ingress-nginx/controller:v1.9.4"

    The docker registry docker-registry.starwhale.cn/minikube currently only caches the images for Kubernetes 1.25.3. Alternatively, you can use the Aliyun mirror:

    minikube start --image-mirror-country=cn

    minikube addons enable ingress --images="KubeWebhookCertgenPatch=kube-webhook-certgen:v20231011-8b53cabe0,KubeWebhookCertgenCreate=kube-webhook-certgen:v20231011-8b53cabe0,IngressController=nginx-ingress-controller:v1.9.4" --registries="KubeWebhookCertgenPatch=registry.cn-hangzhou.aliyuncs.com/google_containers,KubeWebhookCertgenCreate=registry.cn-hangzhou.aliyuncs.com/google_containers,IngressController=registry.cn-hangzhou.aliyuncs.com/google_containers"

    If there is no kubectl binary on your machine, you can use minikube kubectl or set the alias kubectl="minikube kubectl --".

    Installing Starwhale Server

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update
    helm pull starwhale/starwhale --untar --untardir ./charts

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.global.yaml

    For users in the mainland of China, use values.minikube.cn.yaml:

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.cn.yaml

    After the installation is successful, the following prompt message appears:

        Release "starwhale" has been upgraded. Happy Helming!
    NAME: starwhale
    LAST DEPLOYED: Tue Feb 14 16:25:03 2023
    NAMESPACE: starwhale
    STATUS: deployed
    REVISION: 14
    NOTES:
    ******************************************
    Chart Name: starwhale
    Chart Version: 0.5.6
    App Version: latest
    Starwhale Image:
    - server: ghcr.io/star-whale/server:latest

    ******************************************
    Controller:
    - visit: http://controller.starwhale.svc
    Minio:
    - web visit: http://minio.starwhale.svc
    - admin visit: http://minio-admin.starwhale.svc
    MySQL:
    - port-forward:
    - run: kubectl port-forward --namespace starwhale svc/mysql 3306:3306
    - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale
    Please run the following command for the domains searching:
    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc " | sudo tee -a /etc/hosts
    ******************************************
    Login Info:
    - starwhale: u:starwhale, p:abcd1234
    - minio admin: u:minioadmin, p:minioadmin

    *_* Enjoy to use Starwhale Platform. *_*

    Checking Starwhale Server status

    Keep checking the minikube service status until all deployments are running (this takes about 3 to 5 minutes):

    kubectl get deployments -n starwhale
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    controller   1/1     1            1           5m
    minio        1/1     1            1           5m
    mysql        1/1     1            1           5m

    Visiting for local

    Make the Starwhale controller accessible locally with the following command:

    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc  minio-admin.starwhale.svc " | sudo tee -a /etc/hosts

    Then you can visit http://controller.starwhale.svc in your local web browser.

    Visiting for others

    • Step 1: in the Starwhale Server machine

      for temporary use with socat command:

      # install socat at first, ref: https://howtoinstall.co/en/socat
      sudo socat TCP4-LISTEN:80,fork,reuseaddr,bind=0.0.0.0 TCP4:`minikube ip`:80

    When you kill the socat process, the shared access will be blocked. iptables may be a better choice for long-term use.

    • Step 2: in the other machines

      # for macOS or Linux environments, run the command in the shell.
      echo "${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc" | sudo tee -a /etc/hosts

      # for Windows environment, run the command in the PowerShell with administrator permission.
      Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "`n${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc"
    - - + + \ No newline at end of file diff --git a/0.6.5/server/installation/starwhale_env/index.html b/0.6.5/server/installation/starwhale_env/index.html index a2017b02a..f1709052b 100644 --- a/0.6.5/server/installation/starwhale_env/index.html +++ b/0.6.5/server/installation/starwhale_env/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Starwhale Server Environment Example

    ################################################################################
    # *** Required ***
    # The external Starwhale server URL. For example: https://cloud.starwhale.ai
    SW_INSTANCE_URI=

    # The listening port of Starwhale Server
    SW_CONTROLLER_PORT=8082

    # The maximum upload file size. This setting affects dataset and model uploads when copying from outside instances.
    SW_UPLOAD_MAX_FILE_SIZE=20480MB
    ################################################################################
    # The base URL of the Python Package Index to use when creating a runtime environment.
    SW_PYPI_INDEX_URL=http://10.131.0.1/repository/pypi-hosted/simple/

    # Extra URLs of package indexes to use in addition to the base url.
    SW_PYPI_EXTRA_INDEX_URL=

    # Space separated hostnames. When any host specified in the base URL or extra URLs does not have a valid SSL
    # certification, use this option to trust it anyway.
    SW_PYPI_TRUSTED_HOST=
    ################################################################################
    # The JWT token expiration time. When the token expires, the server will request the user to login again.
    SW_JWT_TOKEN_EXPIRE_MINUTES=43200

    # *** Required ***
    # The JWT secret key. All strings are valid, but we strongly recommend you to use a random string with at least 16 characters.
    SW_JWT_SECRET=
    ################################################################################
    # The scheduler controller to use. Valid values are:
    # docker: Controller schedule jobs by leveraging docker
    # k8s: Controller schedule jobs by leveraging Kubernetes
    SW_SCHEDULER=k8s

    # The Kubernetes namespace to use when running a task when SW_SCHEDULER is k8s
    SW_K8S_NAME_SPACE=default

    # The path on the Kubernetes host node's filesystem to cache Python packages. Use the setting only if you have
    # the permission to use host node's filesystem. The runtime environment setup process may be accelerated when the host
    # path cache is used. Leave it blank if you do not want to use it.
    SW_K8S_HOST_PATH_FOR_CACHE=

    # The ip for the containers created by Controller when SW_SCHEDULER is docker
    SW_DOCKER_CONTAINER_NODE_IP=127.0.0.1
    ###############################################################################
    # *** Required ***
    # The object storage system type. Valid values are:
    # s3: [AWS S3](https://aws.amazon.com/s3) or other s3-compatible object storage systems
    # aliyun: [Aliyun OSS](https://www.alibabacloud.com/product/object-storage-service)
    # minio: [MinIO](https://min.io)
    # file: Local filesystem
    SW_STORAGE_TYPE=

    # The path prefix for all data saved on the storage system.
    SW_STORAGE_PREFIX=
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is file.

    # The root directory to save data.
    # This setting is only used when SW_STORAGE_TYPE is file.
    SW_STORAGE_FS_ROOT_DIR=/usr/local/starwhale
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is not file.

    # *** Required ***
    # The name of the bucket to save data.
    SW_STORAGE_BUCKET=

    # *** Required ***
    # The endpoint URL of the object storage service.
    # This setting is only used when SW_STORAGE_TYPE is s3 or aliyun.
    SW_STORAGE_ENDPOINT=

    # *** Required ***
    # The access key used to access the object storage system.
    SW_STORAGE_ACCESSKEY=

    # *** Required ***
    # The secret access key used to access the object storage system.
    SW_STORAGE_SECRETKEY=

    # *** Optional ***
    # The region of the object storage system.
    SW_STORAGE_REGION=

    # Starwhale Server will use multipart upload when uploading a large file. This setting specifies the part size.
    SW_STORAGE_PART_SIZE=5MB
    ################################################################################
    # MySQL settings

    # *** Required ***
    # The hostname/IP of the MySQL server.
    SW_METADATA_STORAGE_IP=

    # The port of the MySQL server.
    SW_METADATA_STORAGE_PORT=3306

    # *** Required ***
    # The database used by Starwhale Server
    SW_METADATA_STORAGE_DB=starwhale

    # *** Required ***
    # The username of the MySQL server.
    SW_METADATA_STORAGE_USER=

    # *** Required ***
    # The password of the MySQL server.
    SW_METADATA_STORAGE_PASSWORD=
    ################################################################################

    # The cache directory for the WAL files. Point it to a mounted volume or host path with enough space.
    # If not set, the WAL files will be saved in the docker runtime layer, and will be lost when the container is restarted.
    SW_DATASTORE_WAL_LOCAL_CACHE_DIR=
    - - + + \ No newline at end of file diff --git a/0.6.5/server/project/index.html b/0.6.5/server/project/index.html index a31a4401a..f6a7cebe8 100644 --- a/0.6.5/server/project/index.html +++ b/0.6.5/server/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    How to Organize and Manage Resources with Starwhale Projects

    Project is the basic unit for organizing and managing resources (such as models, datasets, runtime environments, etc.). You can create and manage projects based on your needs. For example, you can create projects by business team, product line, or models. One user can create and participate in one or more projects.

    Project type

    There are two types of projects:

    • Private project: The project (and related resources in the project) is only visible to project members with permission. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    • Public project: The project (and related resources in the project) is visible to all Starwhale users. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    Create a project

    1. Click the Create button in the upper right corner of the project list page;
    2. Enter a name for the project. Pay attention to avoiding duplicate names. For more information, please see Names in Starwhale
    3. Select the Project Type, which defaults to Private and can be set to Public as needed;
    4. Fill in the description;
    5. To finish, click the Submit button.

    Edit a project

    The name, privacy and description of a project can be edited.

    1. Go to the project list page and find the project that needs to be edited by searching for the project name, then click the Edit Project button;
    2. Edit the items that need to be edited;
    3. Click Submit to save the edited content;
    4. If you're editing multiple projects, repeat steps 1 through 3.

    View a project

    My projects

    On the project list page, only my projects are displayed by default. My projects are the projects in which the current user participates as a project member or owner.

    Project sorting

    On the project list page, projects can be sorted by "Recently visited", "Project creation time from new to old", or "Project creation time from old to new", as needed.

    Delete a project

    Once a project is deleted, all related resources (such as datasets, models, runtimes, evaluations, etc.) will be deleted and cannot be restored.

    1. Enter the project list page and search for the project name to find the project that needs to be deleted. Hover your mouse over the project you want to delete, then click the Delete button;
    2. Follow the prompts, enter the relevant information, click Confirm to delete the project, or click Cancel to cancel the deletion;
    3. If you are deleting multiple projects, repeat the above steps.

    Manage project member

    Only users with the admin role can assign people to the project. The project creator has the project owner role by default.

    Add a member

    1. Click Manage Members to go to the project member list page;
    2. Click the Add Member button in the upper right corner.
    3. Enter the username you want to add and select a project role for the user in the project.
    4. Click Submit to finish.
    5. If you're adding multiple members, repeat steps 1 through 4.

    Remove a member

    1. On the project list page or project overview tab, click Manage Members to go to the project member list page.
    2. Search for the username you want to delete, then click the Delete button.
    3. Click Yes to delete the user from this project, click No to cancel the deletion.
    4. If you're removing multiple members, repeat steps 1 through 3.

    Edit a member's role

    1. Hover your mouse over the project you want to edit, then click Manage Members to go to the project member list page.
    2. Find the username you want to adjust through searching, click the Project Role drop-down menu, and select a new project role. For more information on roles, please take a look at Roles and permissions in Starwhale.
    - - + + \ No newline at end of file diff --git a/0.6.5/swcli/config/index.html b/0.6.5/swcli/config/index.html index 2b14d7f90..63c5c4e86 100644 --- a/0.6.5/swcli/config/index.html +++ b/0.6.5/swcli/config/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.5

    Configuration

    Standalone Instance is installed on the user's laptop or development server, providing isolation at the level of Linux/macOS users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.

    The ~/.config/starwhale/config.yaml file has permissions set to 0o600 to ensure security, as it contains sensitive information such as encryption keys. Users are advised not to change the file permissions. You can customize your swcli configuration with swcli config edit:

    swcli config edit

    config.yaml example

    The typical config.yaml file is as follows:

    • The default instance is local.
    • cloud-cn/cloud-k8s/pre-k8s are the server/cloud instances, local is the standalone instance.
    • The local storage root directory for the Standalone Instance is set to /home/liutianwei/.starwhale.
    current_instance: local
    instances:
        cloud-cn:
            sw_token: ${TOKEN}
            type: cloud
            updated_at: 2022-09-28 18:41:05 CST
            uri: https://cloud.starwhale.cn
            user_name: starwhale
            user_role: normal
        cloud-k8s:
            sw_token: ${TOKEN}
            type: cloud
            updated_at: 2022-09-19 16:10:01 CST
            uri: http://cloud.pre.intra.starwhale.ai
            user_name: starwhale
            user_role: normal
        local:
            current_project: self
            type: standalone
            updated_at: 2022-06-09 16:14:02 CST
            uri: local
            user_name: liutianwei
        pre-k8s:
            sw_token: ${TOKEN}
            type: cloud
            updated_at: 2022-09-19 18:06:50 CST
            uri: http://console.pre.intra.starwhale.ai
            user_name: starwhale
            user_role: normal
    link_auths:
        - ak: starwhale
          bucket: users
          connect_timeout: 10.0
          endpoint: http://10.131.0.1:9000
          read_timeout: 100.0
          sk: starwhale
          type: s3
    storage:
        root: /home/liutianwei/.starwhale
    version: '2.0'

    config.yaml definition

    | Parameter | Description | Type | Default Value | Required |
    | --- | --- | --- | --- | --- |
    | current_instance | The name of the default instance to use. It is usually set using the swcli instance select command. | String | self | Yes |
    | instances | Managed instances, including Standalone, Server and Cloud Instances. There must be at least one Standalone Instance named "local" and one or more Server/Cloud Instances. You can log in to a new instance with swcli instance login and log out from an instance with swcli instance logout. | Dict | Standalone Instance named "local" | Yes |
    | instances.{instance-alias-name}.sw_token | Login token for Server/Cloud Instances. It is only effective for Server/Cloud Instances. Subsequent swcli operations on Server/Cloud Instances will use this token. Note that tokens have an expiration time, typically set to one month, which can be configured within the Server/Cloud Instance. | String | | Cloud - Yes, Standalone - No |
    | instances.{instance-alias-name}.type | Type of the instance, currently can only be "cloud" or "standalone". | Choice[string] | | Yes |
    | instances.{instance-alias-name}.uri | For Server/Cloud Instances, the URI is an http/https address. For Standalone Instances, the URI is set to "local". | String | | Yes |
    | instances.{instance-alias-name}.user_name | User's name. | String | | Yes |
    | instances.{instance-alias-name}.current_project | Default Project under the current instance. It will be used to fill the "project" field in the URI representation by default. You can set it using the swcli project select command. | String | | Yes |
    | instances.{instance-alias-name}.user_role | User's role. | String | normal | Yes |
    | instances.{instance-alias-name}.updated_at | The last updated time for this instance configuration. | Time format string | | Yes |
    | storage | Settings related to local storage. | Dict | | Yes |
    | storage.root | The root directory for Standalone Instance's local storage. Typically, if there is insufficient space in the home directory and you manually move data files to another location, you can modify this field. | String | ~/.starwhale | Yes |
    | version | The version of config.yaml, currently only supports 2.0. | String | 2.0 | Yes |

    You can use starwhale.Link to point to your assets; the URI in the Link can be anything you need (currently only S3-like and HTTP schemes are implemented), such as s3://10.131.0.1:9000/users/path. However, Links may require authentication, which you can configure in link_auths.

    link_auths:
        - type: s3
          ak: starwhale
          bucket: users
          region: local
          connect_timeout: 10.0
          endpoint: http://10.131.0.1:9000
          read_timeout: 100.0
          sk: starwhale

    Items in link_auths are matched against the URIs in Links automatically. An s3-typed link_auth matches Links by looking up the bucket and endpoint.
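
    For example, a minimal Python sketch that attaches such a Link to a dataset sample (the dataset name and object path below are placeholders, not real assets):

    from starwhale import Link, dataset

    # Only the index (the Link) is stored by Starwhale; the file itself stays in the
    # external storage configured in link_auths above.
    ds = dataset("remote-assets-demo", create="empty")
    ds.append({"img": Link("s3://10.131.0.1:9000/users/path/0.png"), "label": 0})
    ds.commit()
    ds.close()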

    - - + + \ No newline at end of file diff --git a/0.6.5/swcli/index.html b/0.6.5/swcli/index.html index 361807666..b5ff4c6de 100644 --- a/0.6.5/swcli/index.html +++ b/0.6.5/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Starwhale Client (swcli) User Guide

    The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure Python 3 (requires Python 3.7 ~ 3.11) so that it can be easily installed with the pip command. Currently, swcli only supports Linux and macOS; Windows support is coming soon.

    - - + + \ No newline at end of file diff --git a/0.6.5/swcli/installation/index.html b/0.6.5/swcli/installation/index.html index 4a1ad3a15..cf78fe775 100644 --- a/0.6.5/swcli/installation/index.html +++ b/0.6.5/swcli/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Installation Guide

    We can use swcli to complete all tasks for Starwhale Instances. swcli is written in pure Python 3 and can be installed easily with the pip command. Here are some installation tips to help you get a clean swcli Python environment without dependency conflicts.

    Installing Advice

    DO NOT install Starwhale in your system's global Python environment. It may cause Python dependency conflicts.

    Prerequisites

    • Python 3.7 ~ 3.11
    • Linux or macOS
    • Conda (optional)

    In the Ubuntu system, you can run the following commands:

    sudo apt-get install python3 python3-venv python3-pip

    #If you want to install multi python versions
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.7 python3.8 python3.9 python3-pip python3-venv python3.8-venv python3.7-venv python3.9-venv

    swcli works on macOS. If you run into issues with the default system Python 3 on macOS, try installing Python 3 through Homebrew:

    brew install python3

    Install swcli

    Install with venv

    python3 -m venv ~/.cache/venv/starwhale
    source ~/.cache/venv/starwhale/bin/activate
    python3 -m pip install starwhale

    swcli --version

    sudo ln -sf "$(which swcli)" /usr/local/bin/

    Install with conda

    conda create --name starwhale --yes  python=3.9
    conda activate starwhale
    python3 -m pip install starwhale

    swcli --version

    sudo ln -sf "$(which swcli)" /usr/local/bin/

    👏 Now, you can use swcli in the global environment.

    Install for the special scenarios

    # for Audio processing
    python -m pip install starwhale[audio]

    # for Image processing
    python -m pip install starwhale[pillow]

    # for swcli model server command
    python -m pip install starwhale[server]

    # for built-in online serving
    python -m pip install starwhale[online-serve]

    # install all dependencies
    python -m pip install starwhale[all]

    Update swcli

    #for venv
    python3 -m pip install --upgrade starwhale

    #for conda
    conda run -n starwhale python3 -m pip install --upgrade starwhale

    Uninstall swcli

    python3 -m pip uninstall starwhale

    rm -rf ~/.config/starwhale
    rm -rf ~/.starwhale
    - - + + \ No newline at end of file diff --git a/0.6.5/swcli/swignore/index.html b/0.6.5/swcli/swignore/index.html index b0df2b25c..884b3108d 100644 --- a/0.6.5/swcli/swignore/index.html +++ b/0.6.5/swcli/swignore/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    About the .swignore file

    The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore file is mainly used in the Starwhale Model building process. By default, the swcli model build command or the starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.

    PATTERN FORMAT

    • Each line in a swignore file specifies a pattern, which matches files and directories.
    • A blank line matches no files, so it can serve as a separator for readability.
    • An asterisk * matches anything except a slash.
    • A line starting with # serves as a comment.
    • Wildcard expressions are supported, for example: *.jpg, *.png.

    Auto-ignored files or dirs

    If you want to include the auto-ignored files or dirs, you can add the --add-all option to the swcli model build command.

    • __pycache__/
    • *.py[cod]
    • *$py.class
    • venv installation dir
    • conda installation dir

    Example

    Here is the .swignore file used in the MNIST example:

    venv/*
    .git/*
    .history*
    .vscode/*
    .venv/*
    data/*
    .idea/*
    *.py[cod]
    - - + + \ No newline at end of file diff --git a/0.6.5/swcli/uri/index.html b/0.6.5/swcli/uri/index.html index bd4606dab..14634c338 100644 --- a/0.6.5/swcli/uri/index.html +++ b/0.6.5/swcli/uri/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.5

    Starwhale Resources URI

    tip

    Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.

    concepts-org.jpg

    Instance URI

    Instance URI can be either:

    • local: standalone instance.
    • [http(s)://]<hostname or ip>[:<port>]: cloud instance with HTTP address.
    • [cloud://]<cloud alias>: cloud or server instance with an alias name, which can be configured in the instance login phase.
    caution

    "local" is different from "localhost". The former means the local standalone instance without a controller, while the latter implies a controller listening at the default port 8082 on the localhost.

    Example:

    # log in Starwhale Cloud; the alias is swcloud
    swcli instance login --username <your account name> --password <your password> https://cloud.starwhale.ai --alias swcloud

    # copy a model from the local instance to the cloud instance
    swcli model copy mnist/version/latest swcloud/project/<your account name>:demo

    # copy a runtime to a Starwhale Server instance: http://localhost:8081
    swcli runtime copy pytorch/version/v1 http://localhost:8081/project/<your account name>:demo

    Project URI

    Project URI is in the format [<Instance URI>/project/]<project name>. If the instance URI is not specified, use the current instance instead.

    Example:

    swcli project select self   # select the self project in the current instance
    swcli project info local/project/self # inspect self project info in the local instance

    Model/Dataset/Runtime URI

    • Model URI: [<Project URI>/model/]<model name>[/version/<version id|tag>].
    • Dataset URI: [<Project URI>/dataset/]<dataset name>[/version/<version id|tag>].
    • Runtime URI: [<Project URI>/runtime/]<runtime name>[/version/<version id|tag>].
    tip
    • swcli supports human-friendly short version id. You can type the first few characters of the version id, provided it is at least four characters long and unambiguous. However, the recover command must use the complete version id.
    • If the project URI is not specified, the default project will be used.
    • You can always use the version tag instead of the version id.

    Example:

    swcli model info mnist/version/hbtdenjxgm4ggnrtmftdgyjzm43tioi  # inspect model info, model name: mnist, version:hbtdenjxgm4ggnrtmftdgyjzm43tioi
    swcli model remove mnist/version/hbtdenj # short version
    swcli model info mnist # inspect mnist model info
    swcli model run mnist --runtime pytorch-mnist --dataset mnist # use the default latest tag
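
    The same URI grammar is accepted by the Python SDK. A small illustrative sketch (assuming the mnist dataset above exists in the default project):

    from starwhale import dataset

    # "mnist/version/latest" resolves against the default instance and default project,
    # just like the swcli commands above.
    ds = dataset("mnist/version/latest", readonly=True)
    for i, row in enumerate(ds):
        print(row.index, list(row.features))
        if i >= 1:
            break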

    Job URI

    • format: [<Project URI>/job/]<job id>.
    • If the project URI is not specified, the default project will be used.

    Example:

    swcli job info mezdayjzge3w   # Inspect mezdayjzge3w version in default instance and default project
    swcli job info local/project/self/job/mezday # Inspect the local instance, self project, with short job id:mezday

    The default instance

    When the instance part of a project URI is omitted, the default instance is used instead. The default instance is the one selected by the swcli instance login or swcli instance use command.

    The default project

    When the project parts of Model/Dataset/Runtime/Evaluation URIs are omitted, the default project is used instead. The default project is the one selected by the swcli project use command.

    - - + + \ No newline at end of file diff --git a/0.6.6/cloud/billing/bills/index.html b/0.6.6/cloud/billing/bills/index.html index 1c6eed274..ea9b256ce 100644 --- a/0.6.6/cloud/billing/bills/index.html +++ b/0.6.6/cloud/billing/bills/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.6.6/cloud/billing/index.html b/0.6.6/cloud/billing/index.html index 212c0c83f..da60e9519 100644 --- a/0.6.6/cloud/billing/index.html +++ b/0.6.6/cloud/billing/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.6.6/cloud/billing/recharge/index.html b/0.6.6/cloud/billing/recharge/index.html index 51f5483e0..8c8771e36 100644 --- a/0.6.6/cloud/billing/recharge/index.html +++ b/0.6.6/cloud/billing/recharge/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.6.6/cloud/billing/refund/index.html b/0.6.6/cloud/billing/refund/index.html index 6e24773e7..44a68370a 100644 --- a/0.6.6/cloud/billing/refund/index.html +++ b/0.6.6/cloud/billing/refund/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.6.6/cloud/billing/voucher/index.html b/0.6.6/cloud/billing/voucher/index.html index 7adcc31df..83eb1f438 100644 --- a/0.6.6/cloud/billing/voucher/index.html +++ b/0.6.6/cloud/billing/voucher/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.6.6/cloud/index.html b/0.6.6/cloud/index.html index 228793d05..611719170 100644 --- a/0.6.6/cloud/index.html +++ b/0.6.6/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Starwhale Cloud User Guide

    Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is https://cloud.starwhale.cn.

    - - + + \ No newline at end of file diff --git a/0.6.6/community/contribute/index.html b/0.6.6/community/contribute/index.html index 17fcbb7a8..958975baa 100644 --- a/0.6.6/community/contribute/index.html +++ b/0.6.6/community/contribute/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Contribute to Starwhale

    Getting Involved/Contributing

    We welcome and encourage all contributions to Starwhale, including and not limited to:

    • Describe the problems encountered during use.
    • Submit feature request.
    • Discuss in Slack and Github Issues.
    • Code Review.
    • Improve docs, tutorials and examples.
    • Fix Bug.
    • Add Test Case.
    • Improve code readability and code comments.
    • Develop new features.
    • Write enhancement proposal.

    You can get involved, get updates and contact Starwhale developers in the following ways:

    Starwhale Resources

    Code Structure

    • client: swcli and Python SDK with Pure Python3, which includes all Standalone Instance features.
      • api: Python SDK.
      • cli: Command Line Interface entrypoint.
      • base: Python base abstract.
      • core: Starwhale core concepts, including Dataset, Model, Runtime, Project, Job, Evaluation, etc.
      • utils: Python utilities lib.
    • console: frontend with React + TypeScript.
    • server: Starwhale Controller written in Java, which includes all Starwhale Cloud Instance backend APIs.
    • docker: Helm charts and Dockerfiles.
    • docs: Starwhale official documentation.
    • example: Example code.
    • scripts: Bash and Python scripts for E2E testing, software releases, etc.

    Fork and clone the repository

    You will need to fork the code of Starwhale repository and clone it to your local machine.

    • Fork the Starwhale repository: Fork the Starwhale GitHub repo. For more usage details, please refer to: Fork a repo

    • Install Git-LFS: Git-LFS

       git lfs install
    • Clone code to local machine

      git clone https://github.com/${your username}/starwhale.git

    Development environment for Standalone Instance

    Standalone Instance is written in Python3. When you want to modify swcli and sdk, you need to build the development environment.

    Standalone development environment prerequisites

    • OS: Linux or macOS
    • Python: 3.7~3.11
    • Docker: >=19.03(optional)
    • Python isolated env tools: venv, virtualenv, conda, etc.

    Building from source code

    Enter the starwhale directory cloned in the previous step, then the client subdirectory:

    cd starwhale/client

    Create an isolated python environment with conda:

    conda create -n starwhale-dev python=3.8 -y
    conda activate starwhale-dev

    Install client package and python dependencies into the starwhale-dev environment:

    make install-sw
    make install-dev-req

    Validate with the swcli --version command. In the development environment, the version is 0.0.0.dev0:

    ❯ swcli --version
    swcli, version 0.0.0.dev0

    ❯ which swcli
    /home/username/anaconda3/envs/starwhale-dev/bin/swcli

    Modifying the code

    When you modify the code, you do not need to reinstall the Python package (i.e., rerun the make install-sw command). The .editorconfig file is recognized by most IDEs and code editors, which helps maintain consistent coding styles across developers.

    Lint and Test

    Run unit test, E2E test, mypy lint, flake lint and isort check in the starwhale directory.

    make client-all-check

    Development environment for Cloud Instance

    Cloud Instance is written in Java(backend) and React+TypeScript(frontend).

    Development environment for Console

    Development environment for Server

    • Language: Java
    • Build tool: Maven
    • Development framework: Spring Boot+Mybatis
    • Unit test framework: JUnit 5
      • Mockito used for mocking
      • Hamcrest used for assertion
      • Testcontainers used for providing lightweight, throwaway instances of common databases, Selenium web browsers that can run in a Docker container.
    • Code style check tool: maven-checkstyle-plugin

    Server development environment prerequisites

    • OS: Linux, macOS or Windows
    • Docker: >=19.03
    • JDK: >=11
    • Maven: >=3.8.1
    • Mysql: >=8.0.29
    • Minio
    • Kubernetes cluster/Minikube(If you don't have a k8s cluster, you can use Minikube as an alternative for development and debugging)

    Modify the code and add unit tests

    Now you can enter the corresponding module to modify and adjust the code on the server side. The main business code directory is src/main/java, and the unit test directory is src/test/java.

    Execute code check and run unit tests

    cd starwhale/server
    mvn clean test

    Deploy the server at local machine

    • Dependent services that need to be deployed

      • Minikube (optional; Minikube can be used when there is no k8s cluster. Installation doc: Minikube)

        minikube start
        minikube addons enable ingress
        minikube addons enable ingress-dns
      • Mysql

        docker run --name sw-mysql -d \
        -p 3306:3306 \
        -e MYSQL_ROOT_PASSWORD=starwhale \
        -e MYSQL_USER=starwhale \
        -e MYSQL_PASSWORD=starwhale \
        -e MYSQL_DATABASE=starwhale \
        mysql:latest
      • Minio

        docker run --name minio -d \
        -p 9000:9000 --publish 9001:9001 \
        -e MINIO_DEFAULT_BUCKETS='starwhale' \
        -e MINIO_ROOT_USER="minioadmin" \
        -e MINIO_ROOT_PASSWORD="minioadmin" \
        bitnami/minio:latest
    • Package server program

      If you need to deploy the front-end together with the server, build the front-end first and then run 'mvn clean package'; the compiled front-end files will be packaged automatically.

      Use the following command to package the program

        cd starwhale/server
      mvn clean package
    • Specify the environment required for server startup

      # Minio env
      export SW_STORAGE_ENDPOINT=http://${Minio IP,default is:127.0.0.1}:9000
      export SW_STORAGE_BUCKET=${Minio bucket,default is:starwhale}
      export SW_STORAGE_ACCESSKEY=${Minio accessKey,default is:starwhale}
      export SW_STORAGE_SECRETKEY=${Minio secretKey,default is:starwhale}
      export SW_STORAGE_REGION=${Minio region,default is:local}
      # kubernetes env
      export KUBECONFIG=${the '.kube' file path}/.kube/config

      export SW_INSTANCE_URI=http://${Server IP}:8082
      export SW_METADATA_STORAGE_IP=${Mysql IP,default: 127.0.0.1}
      export SW_METADATA_STORAGE_PORT=${Mysql port,default: 3306}
      export SW_METADATA_STORAGE_DB=${Mysql dbname,default: starwhale}
      export SW_METADATA_STORAGE_USER=${Mysql user,default: starwhale}
      export SW_METADATA_STORAGE_PASSWORD=${user password,default: starwhale}
    • Deploy server service

      You can use the IDE or the command to deploy.

      java -jar controller/target/starwhale-controller-0.1.0-SNAPSHOT.jar
    • Debug

      There are two ways to debug the modified function:

      • Use swagger-ui for interface debugging, visit /swagger-ui/index.html to find the corresponding api
      • Debug the corresponding function directly in the ui (provided that the front-end code has been built in advance according to the instructions when packaging)
    - - + + \ No newline at end of file diff --git a/0.6.6/concepts/index.html b/0.6.6/concepts/index.html index 1fcdb7609..6f3a26135 100644 --- a/0.6.6/concepts/index.html +++ b/0.6.6/concepts/index.html @@ -10,13 +10,13 @@ - - + +
    - - + + \ No newline at end of file diff --git a/0.6.6/concepts/names/index.html b/0.6.6/concepts/names/index.html index 9e1a0a546..69036c59e 100644 --- a/0.6.6/concepts/names/index.html +++ b/0.6.6/concepts/names/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Names in Starwhale

    Names mean project names, model names, dataset names, runtime names, and tag names.

    Names Limitation

    • Names are case-insensitive.
    • A name MUST only consist of letters A-Z a-z, digits 0-9, the hyphen character -, the dot character ., and the underscore character _.
    • A name should always start with a letter or the _ character.
    • The maximum length of a name is 80 characters (see the illustrative check below).
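
    A minimal Python sketch of these constraints, for illustration only (this is not the validator Starwhale itself uses):

    import re

    # letters, digits, '-', '.', '_'; must start with a letter or '_'; at most 80 characters
    NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9._-]{0,79}$")

    def is_valid_name(name: str) -> bool:
        return bool(NAME_RE.match(name))

    assert is_valid_name("mnist-64_v2")
    assert not is_valid_name("9lives")    # must not start with a digit
    assert not is_valid_name("a" * 81)    # longer than 80 characters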

    Names uniqueness requirement

    • The resource name should be a unique string within its owner. For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project.
    • The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.
    • Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.
    • Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".
    • Garbage-collected resources' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".
    - - + + \ No newline at end of file diff --git a/0.6.6/concepts/project/index.html b/0.6.6/concepts/project/index.html index cca953047..022fa2021 100644 --- a/0.6.6/concepts/project/index.html +++ b/0.6.6/concepts/project/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Project in Starwhale

    "Project" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.

    Starwhale Server/Cloud projects are grouped by accounts. Starwhale Standalone does not have accounts, so you will not see any account name prefix in Starwhale Standalone projects. Starwhale Server/Cloud projects can be either "public" or "private". A public project means that all users on the same instance are assigned the "guest" role for the project by default. For more information about roles, see Roles and permissions in Starwhale.

    A self project is created automatically and configured as the default project in Starwhale Standalone.

    - - + + \ No newline at end of file diff --git a/0.6.6/concepts/roles-permissions/index.html b/0.6.6/concepts/roles-permissions/index.html index ee35eb5a8..97f2299c8 100644 --- a/0.6.6/concepts/roles-permissions/index.html +++ b/0.6.6/concepts/roles-permissions/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Roles and permissions in Starwhale

    Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions; Starwhale Standalone does not. The Administrator role is automatically created and assigned to the user "admin". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.

    Projects have three roles:

    • Admin - Project administrators can read and write project data and assign project roles to users.
    • Maintainer - Project maintainers can read and write project data.
    • Guest - Project guests can only read project data.
    | Action | Admin | Maintainer | Guest |
    | --- | --- | --- | --- |
    | Manage project members | Yes | | |
    | Edit project | Yes | Yes | |
    | View project | Yes | Yes | Yes |
    | Create evaluations | Yes | Yes | |
    | Remove evaluations | Yes | Yes | |
    | View evaluations | Yes | Yes | Yes |
    | Create datasets | Yes | Yes | |
    | Update datasets | Yes | Yes | |
    | Remove datasets | Yes | Yes | |
    | View datasets | Yes | Yes | Yes |
    | Create models | Yes | Yes | |
    | Update models | Yes | Yes | |
    | Remove models | Yes | Yes | |
    | View models | Yes | Yes | Yes |
    | Create runtimes | Yes | Yes | |
    | Update runtimes | Yes | Yes | |
    | Remove runtimes | Yes | Yes | |
    | View runtimes | Yes | Yes | Yes |

    The user who creates a project becomes the first project administrator. They can assign roles to other users later.

    - - + + \ No newline at end of file diff --git a/0.6.6/concepts/versioning/index.html b/0.6.6/concepts/versioning/index.html index fdd8dd423..642d71da6 100644 --- a/0.6.6/concepts/versioning/index.html +++ b/0.6.6/concepts/versioning/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Resource versioning in Starwhale

    • Starwhale manages the history of all models, datasets, and runtimes. Every update to a specific resource appends a new version of the history.
    • Versions are identified by a version id which is a random string generated automatically by Starwhale and are ordered by their creation time.
    • Versions can have tags. Starwhale uses version tags to provide a human-friendly representation of versions. By default, Starwhale attaches a default tag to each version. The default tag is the letter "v", followed by a number. For each versioned resource, the first version tag is always tagged with "v0", the second version is tagged with "v1", and so on. And there is a special tag "latest" that always points to the last version. When a version is removed, its default tag will not be reused. For example, there is a model with tags "v0, v1, v2". When "v2" is removed, tags will be "v0, v1". And the following tag will be "v3" instead of "v2" again. You can attach your own tags to any version and remove them at any time.
    • Starwhale uses a linear history model. There is neither branch nor cycle in history.
    • History cannot be rolled back. When a version is to be reverted, Starwhale clones the version and appends it as a new version to the end of the history. Versions in history can be manually removed and recovered.
    - - + + \ No newline at end of file diff --git a/0.6.6/dataset/index.html b/0.6.6/dataset/index.html index de6b11062..6bdb50272 100644 --- a/0.6.6/dataset/index.html +++ b/0.6.6/dataset/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Starwhale Dataset User Guide

    overview

    Design Overview

    Starwhale Dataset Positioning

    The Starwhale Dataset contains three core stages: data construction, data loading, and data visualization. It is a data management tool for the ML/DL field. Starwhale Dataset can directly use the environment built by Starwhale Runtime, and can be seamlessly integrated with Starwhale Model and Starwhale Evaluation. It is an important part of the Starwhale MLOps toolchain.

    According to the classification of MLOps Roles in Machine Learning Operations (MLOps): Overview, Definition, and Architecture, the three stages of Starwhale Dataset target the following user groups:

    • Data construction: Data Engineer, Data Scientist
    • Data loading: Data Scientist, ML Developer
    • Data visualization: Data Engineer, Data Scientist, ML Developer

    mlops-users

    Core Functions

    • Efficient loading: The original dataset files are stored in external storage such as OSS or NAS, and are loaded on demand without having to save to disk.
    • Simple construction: Supports one-click dataset construction from Image/Video/Audio directories, json files and Huggingface datasets, and also supports writing Python code to build completely custom datasets.
    • Versioning: Can perform version tracking, data append and other operations, and avoid duplicate data storage through the internally abstracted ObjectStore.
    • Sharing: Implement bidirectional dataset sharing between Standalone instances and Cloud/Server instances through the swcli dataset copy command.
    • Visualization: The web interface of Cloud/Server instances can present multi-dimensional, multi-type data visualization of datasets.
    • Artifact storage: The Standalone instance can store locally built or distributed swds series files, while the Cloud/Server instance uses object storage to provide centralized swds artifact storage.
    • Seamless Starwhale integration: Starwhale Dataset can use the runtime environment built by Starwhale Runtime to build datasets. Starwhale Evaluation and Starwhale Model can directly specify the dataset through the --dataset parameter to complete automatic data loading, which facilitates inference, model evaluation and other environments.

    Key Elements

    • swds virtual package file: swds is different from swmp and swrt. It is not a single packaged file, but a virtual concept that specifically refers to a directory that contains dataset-related files for a version of the Starwhale dataset, including _manifest.yaml, dataset.yaml, dataset build Python scripts, and data file links, etc. You can use the swcli dataset info command to view where the swds is located. swds is the abbreviation of Starwhale Dataset.

    swds-tree.png

    • swcli dataset command line: A set of dataset-related commands, including construction, distribution and management functions. See CLI Reference for details.
    • dataset.yaml configuration file: Describes the dataset construction process. It can be completely omitted and specified through swcli dataset build parameters. dataset.yaml can be considered as a configuration file representation of the swcli dataset build command line parameters. swcli dataset build parameters take precedence over dataset.yaml.
    • Dataset Python SDK: Includes data construction, data loading, and several predefined data types. See Python SDK for details.
    • Python scripts for dataset construction: A series of scripts written using the Starwhale Python SDK to build datasets.

    Best Practices

    The construction of Starwhale Dataset is performed independently. If third-party libraries need to be introduced when writing construction scripts, using Starwhale Runtime can simplify Python dependency management and ensure reproducible dataset construction. The Starwhale platform will build in as many open source datasets as possible for users to copy datasets for immediate use.

    Command Line Grouping

    The Starwhale Dataset command line can be divided into the following stages from the perspective of usage phases:

    • Construction phase
      • swcli dataset build
    • Visualization phase
      • swcli dataset diff
      • swcli dataset head
    • Distribution phase
      • swcli dataset copy
    • Basic management
      • swcli dataset tag
      • swcli dataset info
      • swcli dataset history
      • swcli dataset list
      • swcli dataset summary
      • swcli dataset remove
      • swcli dataset recover

    Starwhale Dataset Viewer

    Currently, the Web UI in the Cloud/Server instance can visually display the dataset. Only DataTypes from the Python SDK can be correctly interpreted by the frontend, with mappings as follows:

    • Image: Display thumbnails, enlarged images, MASK type images, support image/png, image/jpeg, image/webp, image/svg+xml, image/gif, image/apng, image/avif formats.
    • Audio: Displayed as an audio wave graph, playable, supports audio/mp3 and audio/wav formats.
    • Video: Displayed as a video, playable, supports video/mp4, video/avi and video/webm formats.
    • GrayscaleImage: Display grayscale images, support x/grayscale format.
    • Text: Display text, support text/plain format, set encoding format, default is utf-8.
    • Binary and Bytes: Not supported for display currently.
    • Link: The above multimedia types all support specifying links as storage paths.

    Starwhale Dataset Data Format

    The dataset consists of multiple rows, each row being a sample, each sample containing several features. The features have a dict-like structure with some simple restrictions [L]:

    • The dict keys must be str type.
    • The dict values must be Python basic types like int/float/bool/str/bytes/dict/list/tuple, or Starwhale built-in data types.
    • For the same key across different samples, the value types do not need to stay the same.
    • If the value is a list or tuple, the element data types must be consistent.
    • For dict values, the restrictions are the same as [L].

    Example:

    {
        "img": GrayscaleImage(
            link=Link(
                "123",
                offset=32,
                size=784,
                _swds_bin_offset=0,
                _swds_bin_size=8160,
            )
        ),
        "label": 0,
    }

    File Data Handling

    Starwhale Dataset handles file type data in a special way. You can ignore this section if you don't care about Starwhale's implementation.

    Depending on the actual usage scenario, Starwhale Dataset handles file-like data in two ways, both based on the base class starwhale.BaseArtifact:

    • swds-bin: Starwhale merges the data into several large files in its own binary format (swds-bin), which can efficiently perform indexing, slicing and loading.
    • remote-link: If the user's original data is stored in some external storage such as OSS or NAS, with a lot of original data that is inconvenient to move or has already been encapsulated by some internal dataset implementation, then you only need to use links in the data to establish indexes.

    In the same Starwhale dataset, two types of data can be included simultaneously.
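
    A minimal Python sketch of mixing both kinds in one dataset (the local file and the bucket path below are placeholders):

    from starwhale import dataset, Image, Link

    ds = dataset("mixed-demo", create="empty")
    # swds-bin: the raw bytes are stored and indexed by Starwhale itself
    with open("local/cat.png", "rb") as f:
        ds.append({"img": Image(f.read()), "label": "cat"})
    # remote-link: only an index to the externally stored file is kept
    ds.append({"img": Link("s3://bucket/dogs/1.png"), "label": "dog"})
    ds.commit()
    ds.close()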

    - - + + \ No newline at end of file diff --git a/0.6.6/dataset/yaml/index.html b/0.6.6/dataset/yaml/index.html index a0675bddb..c37981e8e 100644 --- a/0.6.6/dataset/yaml/index.html +++ b/0.6.6/dataset/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    The dataset.yaml Specification

    tip

    dataset.yaml is optional for the swcli dataset build command.

    Building Starwhale Dataset uses dataset.yaml. Omitting dataset.yaml allows describing related configurations in swcli dataset build command line parameters. dataset.yaml can be considered as a file-based representation of the build command line configuration.

    YAML Field Descriptions

    | Field | Description | Required | Type | Default |
    | --- | --- | --- | --- | --- |
    | name | Name of the Starwhale Dataset | Yes | String | |
    | handler | Importable address of a class that inherits starwhale.SWDSBinBuildExecutor, starwhale.UserRawBuildExecutor or starwhale.BuildExecutor, or a function that returns a Generator or iterable object. Format is {module path}:{class name or function name} | Yes | String | |
    | desc | Dataset description | No | String | "" |
    | version | dataset.yaml format version, currently only "1.0" is supported | No | String | 1.0 |
    | attr | Dataset build parameters | No | Dict | |
    | attr.volume_size | Size of each data file in the swds-bin dataset. Can be a number in bytes, or a number plus unit like 64M, 1GB etc. | No | Int or Str | 64MB |
    | attr.alignment_size | Data alignment size of each data block in the swds-bin dataset. If set to 4k, and a data block is 7.9K, 0.1K padding will be added to make the block size a multiple of alignment_size, improving page size and read efficiency. | No | Integer or String | 128 |

    Examples

    Simplest Example

    name: helloworld
    handler: dataset:ExampleProcessExecutor

    The helloworld dataset uses the ExampleProcessExecutor class in dataset.py, located in the same directory as dataset.yaml, to build data.

    MNIST Dataset Build Example

    name: mnist
    handler: mnist.dataset:DatasetProcessExecutor
    desc: MNIST data and label test dataset
    attr:
        alignment_size: 128
        volume_size: 4M

    Example with handler as a generator function

    dataset.yaml contents:

    name: helloworld
    handler: dataset:iter_item

    dataset.py contents:

    def iter_item():
        for i in range(10):
            yield {"img": f"image-{i}".encode(), "label": i}
    - - + + \ No newline at end of file diff --git a/0.6.6/evaluation/heterogeneous/node-able/index.html b/0.6.6/evaluation/heterogeneous/node-able/index.html index 1d320f553..2e796889b 100644 --- a/0.6.6/evaluation/heterogeneous/node-able/index.html +++ b/0.6.6/evaluation/heterogeneous/node-able/index.html @@ -10,8 +10,8 @@ - - + +
    @@ -23,7 +23,7 @@ Refer to the link.

    Take v0.13.0-rc.1 as an example:

    kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.13.0-rc.1/nvidia-device-plugin.yml

    Note: This operation will run the NVIDIA device plugin on all Kubernetes nodes. If it was configured before, it will be updated. Please carefully evaluate the image version to use.

  • Confirm GPU can be discovered and used in the cluster. Refer to the command below. Check that nvidia.com/gpu is in the Capacity of the Jetson node. The GPU is then recognized normally by the Kubernetes cluster.

    # kubectl describe node orin | grep -A15 Capacity
    Capacity:
    cpu: 12
    ephemeral-storage: 59549612Ki
    hugepages-1Gi: 0
    hugepages-2Mi: 0
    hugepages-32Mi: 0
    hugepages-64Ki: 0
    memory: 31357608Ki
    nvidia.com/gpu: 1
    pods: 110
  • Build and Use Custom Images

    The l4t-jetpack image mentioned earlier can meet our general needs. If we need to customize a more streamlined image or one with more features, we can build it based on l4t-base. For reference Dockerfiles, see the image Starwhale built for mnist.

    - - + + \ No newline at end of file diff --git a/0.6.6/evaluation/heterogeneous/virtual-node/index.html b/0.6.6/evaluation/heterogeneous/virtual-node/index.html index 70001de7b..5c3839d27 100644 --- a/0.6.6/evaluation/heterogeneous/virtual-node/index.html +++ b/0.6.6/evaluation/heterogeneous/virtual-node/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Virtual Kubelet as Kubernetes nodes

    Introduction

    Virtual Kubelet is an open source framework that can simulate a K8s node by mimicking the communication between kubelet and the K8s cluster.

    This solution is widely used by major cloud vendors for serverless container cluster solutions, such as Alibaba Cloud's ASK, Amazon's AWS Fargate, etc.

    Principles

    The virtual kubelet framework implements the related interfaces of kubelet for Node. With simple configuration, it can simulate a node.

    We only need to implement the PodLifecycleHandler interface to support:

    • Create, update, delete Pod
    • Get Pod status
    • Get Container logs

    Adding Devices to the Cluster

    If our device cannot serve as a K8s node due to resource constraints or other situations, we can manage these devices by using virtual kubelet to simulate a proxy node.

    The control flow between Starwhale Controller and the device is as follows:


    ┌──────────────────────┐ ┌────────────────┐ ┌─────────────────┐ ┌────────────┐
    │ Starwhale Controller ├─────►│ K8s API Server ├────►│ virtual kubelet ├────►│ Our device │
    └──────────────────────┘ └────────────────┘ └─────────────────┘ └────────────┘

    Virtual kubelet converts the Pod orchestration information sent by Starwhale Controller into control behaviors for the device, such as executing a command via ssh on the device, or sending a message via USB or serial port.

    Below is an example of using virtual kubelet to control an SSH-enabled device that has not joined the cluster:

    1. Prepare certificates
    • Create the file csr.conf with the following content:
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name

    [req_distinguished_name]

    [v3_req]
    basicConstraints = CA:FALSE
    keyUsage = digitalSignature, keyEncipherment
    extendedKeyUsage = serverAuth
    subjectAltName = @alt_names

    [alt_names]
    IP = 1.2.3.4
    • Generate the certificate:
    openssl genrsa -out vklet-key.pem 2048
    openssl req -new -key vklet-key.pem -out vklet.csr -subj '/CN=system:node:1.2.3.4;/C=US/O=system:nodes' -config ./csr.conf
    • Submit the certificate:
    cat vklet.csr| base64 | tr -d "\n" # output as content of spec.request in csr.yaml

    csr.yaml:

    apiVersion: certificates.k8s.io/v1
    kind: CertificateSigningRequest
    metadata:
        name: vklet
    spec:
        request: ******************
        signerName: kubernetes.io/kube-apiserver-client
        expirationSeconds: 1086400
        usages:
            - client auth
    kubectl apply -f csr.yaml
    kubectl certificate approve vklet
    kubectl get csr vklet -o jsonpath='{.status.certificate}'| base64 -d > vklet-cert.pem

    Now we have vklet-cert.pem.

    • Compile virtual kubelet:
    git clone https://github.com/virtual-kubelet/virtual-kubelet
    cd virtual-kubelet && make build

    Create the node configuration file mock.json:

    {
        "virtual-kubelet": {
            "cpu": "100",
            "memory": "100Gi",
            "pods": "100"
        }
    }

    Start virtual kubelet:

    export APISERVER_CERT_LOCATION=/path/to/vklet-cert.pem
    export APISERVER_KEY_LOCATION=/path/to/vklet-key.pem
    export KUBECONFIG=/path/to/kubeconfig
    virtual-kubelet --provider mock --provider-config /path/to/mock.json

    Now we have simulated a node with 100 cores + 100GB memory using virtual kubelet.

    • Add a PodLifecycleHandler implementation that converts the relevant information in the Pod orchestration into ssh command execution, and gathers logs for the Starwhale Controller.

    See ssh executor for a concrete implementation.

    - - + + \ No newline at end of file diff --git a/0.6.6/evaluation/index.html b/0.6.6/evaluation/index.html index f987b2f89..2bef107c2 100644 --- a/0.6.6/evaluation/index.html +++ b/0.6.6/evaluation/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Starwhale Model Evaluation

    Design Overview

    Starwhale Evaluation Positioning

    The goal of Starwhale Evaluation is to provide end-to-end management for model evaluation, including creating Jobs, distributing Tasks, viewing model evaluation reports and basic management. Starwhale Evaluation is a specific application of Starwhale Model, Starwhale Dataset, and Starwhale Runtime in the model evaluation scenario. Starwhale Evaluation is part of the MLOps toolchain built by Starwhale. More applications like Starwhale Model Serving, Starwhale Training will be included in the future.

    Core Features

    • Visualization: Both swcli and the Web UI provide visualization of model evaluation results, supporting comparison of multiple results. Users can also customize logging of intermediate processes.

    • Multi-scenario Adaptation: Whether it's a notebook, desktop or distributed cluster environment, the same commands, Python scripts, artifacts and operations can be used for model evaluation. This satisfies different computational power and data volume requirements.

    • Seamless Starwhale Integration: Leverage Starwhale Runtime for the runtime environment, Starwhale Dataset as data input, and run models from Starwhale Model. Configuration is simple whether using swcli, Python SDK or Cloud/Server instance Web UI.

    Key Elements

    • swcli model run: Command line for bulk offline model evaluation.
    • swcli model serve: Command line for online model evaluation.

    Best Practices

    Command Line Grouping

    From the perspective of completing an end-to-end Starwhale Evaluation workflow, commands can be grouped as:

    • Preparation Stage
      • swcli dataset build or Starwhale Dataset Python SDK
      • swcli model build or Starwhale Model Python SDK
      • swcli runtime build
    • Evaluation Stage
      • swcli model run
      • swcli model serve
    • Results Stage
      • swcli job info
    • Basic Management
      • swcli job list
      • swcli job remove
      • swcli job recover

    Abstraction job-step-task

    • job: A model evaluation task is a job, which contains one or more steps.

    • step: A step corresponds to a stage in the evaluation process. With the default PipelineHandler, steps are predict and evaluate. For custom evaluation processes using @handler, @evaluation.predict, @evaluation.evaluate decorators, steps are the decorated functions. Steps can have dependencies, forming a DAG. A step contains one or more tasks. Tasks in the same step have the same logic but different inputs. A common approach is to split the dataset into multiple parts, with each part passed to a task. Tasks can run in parallel.

    • task: A task is the final running entity. In Cloud/Server instances, a task is a container in a Pod. In Standalone instances, a task is a Python Thread.

    The job-step-task abstraction is the basis for implementing distributed runs in Starwhale Evaluation.
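
    A hedged Python sketch of this layout using the decorators mentioned above (the model call, result field names, and metric are illustrative placeholders, not the exact SDK result schema):

    from starwhale import evaluation

    def do_inference(img):
        return 0  # placeholder for a real model forward pass

    @evaluation.predict(replicas=2)        # the "predict" step runs as two parallel tasks
    def predict(data):
        return do_inference(data["img"])

    @evaluation.evaluate(needs=[predict])  # the "evaluate" step runs after all predict tasks finish
    def evaluate(predict_results):
        correct = total = 0
        for result in predict_results:     # assumed shape: {"input": ..., "output": ...}
            total += 1
            correct += int(result["output"] == result["input"]["label"])
        evaluation.log_summary({"accuracy": correct / total})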

    - - + + \ No newline at end of file diff --git a/0.6.6/faq/index.html b/0.6.6/faq/index.html index 0e30cb50a..112f0b30f 100644 --- a/0.6.6/faq/index.html +++ b/0.6.6/faq/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    FAQs

    Error "413 Client Error: Request Entity Too Large" when Copying Starwhale Models to Server

    • Cause: The proxy-body-size set in the Ingress (Nginx default is 1MB) is smaller than the actual uploaded file size.
    • Solution: Check the Ingress configuration of the Starwhale Server and add nginx.ingress.kubernetes.io/proxy-body-size: 30g to the annotations field.

    RBAC Authorization Error when Starwhale Server Submits Jobs to Kubernetes Cluster

    The Kubernetes cluster has RBAC enabled, and the service account for the Starwhale Server does not have sufficient permissions. It requires at least the following permissions:

    | Resource | API Group | Get | List | Watch | Create | Delete |
    | --- | --- | --- | --- | --- | --- | --- |
    | jobs | batch | Y | Y | Y | Y | Y |
    | pods | core | Y | Y | Y | | |
    | nodes | core | Y | Y | Y | | |
    | events | "" | Y | | | | |

    Example YAML:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: starwhale-role
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - "batch"
        resources:
          - jobs
        verbs:
          - create
          - get
          - list
          - watch
          - delete
      - apiGroups:
          - ""
        resources:
          - events
        verbs:
          - get
          - watch
          - list
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: starwhale-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: starwhale-role
    subjects:
      - kind: ServiceAccount
        name: starwhale
    - - + + \ No newline at end of file diff --git a/0.6.6/getting-started/cloud/index.html b/0.6.6/getting-started/cloud/index.html index 3b2e6aa3c..0f636f2c6 100644 --- a/0.6.6/getting-started/cloud/index.html +++ b/0.6.6/getting-started/cloud/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Getting started with Starwhale Cloud

    Starwhale Cloud is hosted on Aliyun with the domain name https://cloud.starwhale.cn. In the future, we will launch the service on AWS with the domain name https://cloud.starwhale.ai. It's important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.

    You need to install the Starwhale Client (swcli) at first.

    Sign Up for Starwhale Cloud and create your first project

    You can either directly log in with your GitHub or Weixin account or sign up for an account. You will be asked for an account name if you log in with your GitHub or Weixin account.

    Then you can create a new project. In this tutorial, we will use the name demo for the project name.

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named helloworld
    • a Starwhale dataset named mnist64
    • a Starwhale runtime named helloworld

    Login to the cloud instance

    swcli instance login --username <your account name> --password <your password> --alias swcloud https://cloud.starwhale.cn

    Copy the dataset, model, and runtime to the cloud instance

    swcli model copy helloworld swcloud/project/<your account name>:demo
    swcli dataset copy mnist64 swcloud/project/<your account name>:demo
    swcli runtime copy helloworld swcloud/project/<your account name>:demo

    Run an evaluation with the web UI

    console-create-job.gif

    Congratulations! You have completed the Starwhale Cloud Getting Started Guide.

    - - + + \ No newline at end of file diff --git a/0.6.6/getting-started/index.html b/0.6.6/getting-started/index.html index 4b3501c74..9f06ee65f 100644 --- a/0.6.6/getting-started/index.html +++ b/0.6.6/getting-started/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Getting started

    Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).

    You can start using Starwhale with one of the following instance types:

    • Starwhale Standalone - Rather than a running service, Starwhale Standalone is actually a repository that resides in your local file system. It is created and managed by the Starwhale Client (swcli). You only need to install swcli to use it. Currently, each user on a single machine can have only ONE Starwhale Standalone instance. We recommend you use the Starwhale Standalone to build and test your datasets, runtime, and models before pushing them to Starwhale Server/Cloud instances.
    • Starwhale Server - Starwhale Server is a service deployed on your local server. Besides text-only results from the Starwhale Client (swcli), Starwhale Server provides Web UI for you to manage your datasets and models, evaluate your models in your local Kubernetes cluster, and review the evaluation results.
    • Starwhale Cloud - Starwhale Cloud is a managed service hosted on public clouds. By registering an account on https://cloud.starwhale.cn, you are ready to use Starwhale without needing to install, operate, and maintain your own instances. Starwhale Cloud also provides public resources for you to download, like datasets, runtimes, and models. Check the "starwhale/public" project on Starwhale Cloud for more details.

    When choosing which instance type to use, consider the following:

    | Instance Type | Deployment location | Maintained by | User Interface | Scalability |
    | --- | --- | --- | --- | --- |
    | Starwhale Standalone | Your laptop or any server in your data center | Not required | Command line | Not scalable |
    | Starwhale Server | Your data center | Yourself | Web UI and command line | Scalable, depends on your Kubernetes cluster |
    | Starwhale Cloud | Public cloud, like AWS or Aliyun | the Starwhale Team | Web UI and command line | Scalable, but currently limited by the freely available resource on the cloud |

    Depending on your instance type, there are three getting-started guides available for you:

    • Getting started with Starwhale Standalone - This guide helps you run an MNIST evaluation on your desktop PC/laptop. It is the fastest and simplest way to get started with Starwhale.
    • Getting started with Starwhale Server - This guide helps you install Starwhale Server in your private data center and run an MNIST evaluation. At the end of the tutorial, you will have a Starwhale Server instance where you can run model evaluations on and manage your datasets and models.
    • Getting started with Starwhale Cloud - This guide helps you create an account on Starwhale Cloud and run an MNIST evaluation. It is the easiest way to experience all Starwhale features.
    - - + + \ No newline at end of file diff --git a/0.6.6/getting-started/runtime/index.html b/0.6.6/getting-started/runtime/index.html index 947b50081..a96ffe7d2 100644 --- a/0.6.6/getting-started/runtime/index.html +++ b/0.6.6/getting-started/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Getting Started with Starwhale Runtime

    This article demonstrates how to build a Starwhale Runtime for the PyTorch environment and how to use it. This runtime can meet the dependency requirements of the six examples in Starwhale: mnist, speech commands, nmt, cifar10, ag_news, and PennFudan. Links to relevant code: example/runtime/pytorch.

    You can learn the following things from this tutorial:

    • How to build a Starwhale Runtime.
    • How to use a Starwhale Runtime in different scenarios.
    • How to release a Starwhale Runtime.

    Prerequisites

    Run the following command to clone the example code:

    git clone https://github.com/star-whale/starwhale.git
    cd starwhale/example/runtime/pytorch # for users in the mainland of China, use pytorch-cn-mirror instead.

    Build Starwhale Runtime

    ❯ swcli -vvv runtime build --yaml runtime.yaml

    Use Starwhale Runtime in the standalone instance

    Use Starwhale Runtime in the shell

    # Activate the runtime
    swcli runtime activate pytorch

    swcli runtime activate will download all python dependencies of the runtime, which may take a long time.

    All dependencies are ready in your python environment when the runtime is activated. It is similar to source venv/bin/activate of virtualenv or the conda activate command of conda. If you close the shell or switch to another shell, you need to reactivate the runtime.

    Use Starwhale Runtime in swcli

    # Use the runtime when building a Starwhale Model
    swcli model build . --runtime pytorch
    # Use the runtime when building a Starwhale Dataset
    swcli dataset build --yaml /path/to/dataset.yaml --runtime pytorch
    # Run a model evaluation with the runtime
    swcli model run --uri mnist/version/v0 --dataset mnist --runtime pytorch

    Copy Starwhale Runtime to another instance

    You can copy the runtime to a server/cloud instance, which can then be used in the server/cloud instance or downloaded by other users.

    # Copy the runtime to a server instance named 'pre-k8s'
    ❯ swcli runtime copy pytorch cloud://pre-k8s/project/starwhale
    - - + + \ No newline at end of file diff --git a/0.6.6/getting-started/server/index.html b/0.6.6/getting-started/server/index.html index af60f63bf..4dc194a47 100644 --- a/0.6.6/getting-started/server/index.html +++ b/0.6.6/getting-started/server/index.html @@ -10,13 +10,13 @@ - - + +
    Version: 0.6.6

    Getting started with Starwhale Server

    Start Starwhale Server

    swcli server start

    For detailed information, see the installation guide.

    Create your first project

    Login to the server

    Open your browser and enter your server's URL in the address bar. Log in with your username (starwhale) and password (abcd1234).


    Create a new project

    Build the dataset, model, and runtime on your local machine

    Follow step 1 to step 4 in Getting started with Starwhale Standalone to create:

    • a Starwhale model named helloworld
    • a Starwhale dataset named mnist64
    • a Starwhale runtime named helloworld

    Copy the dataset, the model, and the runtime to the server

    swcli instance login --username <your username> --password <your password> --alias server <Your Server URL>

    swcli model copy helloworld server/project/demo
    swcli dataset copy mnist64 server/project/demo
    swcli runtime copy helloworld server/project/demo

    Use the Web UI to run an evaluation

Navigate to the "demo" project in your browser and create a new evaluation job.


    Congratulations! You have completed the Starwhale Server Getting Started Guide.


    Getting started with Starwhale Standalone

    When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.

We also provide a Jupyter Notebook example; you can try it in Google Colab or in your local VS Code/JupyterLab.

    Installing Starwhale Client

    python3 -m pip install starwhale

    For detailed information, see Starwhale Client Installation Guide.

    Downloading Examples

    Download Starwhale examples by cloning the Starwhale project via:

    GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/star-whale/starwhale.git --depth 1
    cd starwhale

To save download time, we skip git-lfs and fetch only the latest commit. We will use the MNIST example, the "hello world" of ML/DL, to start your Starwhale journey. The following steps are all performed in the starwhale directory.

    Core Workflow

    Building Starwhale Runtime

    Runtime example codes are in the example/helloworld directory.

    • Build the Starwhale runtime bundle:

      swcli -vvv runtime build --yaml example/helloworld/runtime.yaml
      tip

When you build a runtime for the first time, creating an isolated Python environment and downloading Python dependencies will take a lot of time. The command execution time is related to the network environment of the machine and the number of packages in runtime.yaml. Using an appropriate PyPI mirror and cache configuration in the ~/.pip/pip.conf file is a recommended practice.

      For users in the mainland of China, the following conf file is an option:

      [global]
      cache-dir = ~/.cache/pip
      index-url = https://pypi.tuna.tsinghua.edu.cn/simple
      extra-index-url = https://mirrors.aliyun.com/pypi/simple/
    • Check your local Starwhale Runtime:

      swcli runtime list
      swcli runtime info helloworld

    Building a Model

    Model example codes are in the example/helloworld directory.

    • Build a Starwhale model:

      swcli -vvv model build example/helloworld --name helloworld -m evaluation --runtime helloworld
    • Check your local Starwhale models:

      swcli model list
      swcli model info helloworld

    Building a Dataset

    Dataset example codes are in the example/helloworld directory.

    • Build a Starwhale dataset:

      swcli runtime activate helloworld
      python3 example/helloworld/dataset.py
      deactivate
    • Check your local Starwhale dataset:

      swcli dataset list
      swcli dataset info mnist64
      swcli dataset head mnist64

    Running an Evaluation Job

    • Create an evaluation job:

      swcli -vvv model run --uri helloworld --dataset mnist64 --runtime helloworld
    • Check the evaluation result

      swcli job list
      swcli job info $(swcli job list | grep mnist | grep success | awk '{print $1}' | head -n 1)

    Congratulations! You have completed the Starwhale Standalone Getting Started Guide.


    What is Starwhale

    Overview

Starwhale is an MLOps/LLMOps platform that provides R&D operation management capabilities for machine learning projects, establishing standardized model development, testing, deployment and operation processes, connecting business teams, AI teams and operation teams. It solves problems such as long model iteration cycles, poor team collaboration, and waste of human resources in the machine learning process. Starwhale provides three instance types, Standalone, Server and Cloud, to meet the needs of single-machine development, private cluster deployment, and multi-cloud services hosted by the Starwhale team.

    Starwhale is also an open source platform, using the Apache-2.0 license.

Products

    • Fundamentals:
      • Starwhale Model: Starwhale Model is a standard package format for machine learning models, which can be used for various purposes, such as model fine-tuning, model evaluation, and online services. Starwhale Model includes model files, inference code, configuration files, etc.
      • Starwhale Dataset: Starwhale Dataset enables efficient data storage, data loading, and data visualization, making it a data management tool for the ML/DL field.
      • Starwhale Runtime: Starwhale Runtime provides a reproducible and shareable runtime environment for running Python programs. With Starwhale Runtime, you can easily share with others and use it on Starwhale Server and Starwhale Cloud instances.
    • Model Evaluation:
      • Model Evaluation: Starwhale Model Evaluation allows users to implement complex, production-level, distributed model evaluation tasks with minimal Python code using the SDK.
      • Live Demo: Evaluate models online through a Web UI.
      • Reports: Create shareable, automatically integrated evaluation reports.
      • Tables: Provide multi-dimensional model evaluation result comparisons and displays, with support for multimedia data such as images, audio, and video. The tables can present all the data and artifacts recorded during the evaluation process using the Starwhale Python SDK.
    • LLM Fine-tuning: Provide a full toolchain for LLM fine-tuning, including model fine-tuning, batch evaluation comparison, online evaluation comparison, and model publishing.
    • Deployment Instances:
      • Starwhale Standalone: Deployed in a local development environment, managed by the swcli command-line tool, meeting development and debugging needs.
      • Starwhale Server: Deployed in a private data center, relying on a Kubernetes cluster, providing centralized, web-based, and secure services.
      • Starwhale Cloud: Hosted on a public cloud, with the access address https://cloud.starwhale.ai. The Starwhale team is responsible for maintenance, and no installation is required. You can start using it after registering an account.

    Typical Use Cases

    • Dataset Management: With the Starwhale Dataset Python SDK, you can easily import, create, distribute, and load datasets while achieving fine-grained version control and visualization.
    • Model Management: By using a simple packaging mechanism, you can generate Starwhale Model packages that include models, configuration files, and code, providing efficient distribution, version management, Model Registry, and visualization, making the daily management of model packages more straightforward.
    • Machine Learning Runtime Sharing: By exporting the development environment or writing a simple YAML, you can reproduce the environment in other instances, achieving a stable and consistent runtime. Starwhale Runtime abstracts and shields some underlying dependencies, so users don't need to master Dockerfile writing or CUDA installation, making it easy to define an environment that meets the requirements of machine learning programs.
    • Model Evaluation: With the Starwhale Evaluation Python SDK, you can implement efficient, large-scale, multi-dataset, and multi-stage model evaluations in a distributed cluster environment with minimal code, record data and artifacts generated during the evaluation process in Starwhale Tables, and provide various visualization methods.
    • Online Evaluation: Quickly create interactive Web UI online services for Starwhale models to perform rapid testing.
    • Model Fine-tuning: Provide a complete toolchain for fine-tuning large language models (LLMs), making the model fine-tuning process faster and more quantifiable.

    Starwhale is an open platform that can be used for individual functions or combined for use, with the core goal of providing a convenient tool for data scientists and machine learning engineers to improve work efficiency.

    Start Your Starwhale Journey


    Starwhale Model

Overview

    A Starwhale Model is a standard format for packaging machine learning models that can be used for various purposes, like model fine-tuning, model evaluation, and online serving. A Starwhale Model contains the model file, inference codes, configuration files, and any other files required to run the model.

    Create a Starwhale Model

    There are two ways to create a Starwhale Model: by swcli or by Python SDK.

    Create a Starwhale Model by swcli

    To create a Starwhale Model by swcli, you need to define a model.yaml, which describes some required information about the model package, and run the following command:

    swcli model build . --model-yaml /path/to/model.yaml

    For more information about the command and model.yaml, see the swcli reference. model.yaml is optional for model building.

    Create a Starwhale Model by Python SDK

from starwhale import model, predict

@predict
def predict_img(data):
    ...

model.build(name="mnist", modules=[predict_img])

    Model Management

    Model Management by swcli

Command | Description
swcli model list | List all Starwhale Models in a project
swcli model info | Show detail information about a Starwhale Model
swcli model copy | Copy a Starwhale Model to another location
swcli model remove | Remove a Starwhale Model
swcli model recover | Recover a previously removed Starwhale Model

    Model Management by WebUI

    Model History

    Starwhale Models are versioned. The general rules about versions are described in Resource versioning in Starwhale.

    Model History Management by swcli

Command | Description
swcli model history | List all versions of a Starwhale Model
swcli model info | Show detail information about a Starwhale Model version
swcli model diff | Compare two versions of a Starwhale Model
swcli model copy | Copy a Starwhale Model version to a new one
swcli model remove | Remove a Starwhale Model version
swcli model recover | Recover a previously removed Starwhale Model version

    Model Evaluation

    Model Evaluation by swcli

Command | Description
swcli model run | Create an evaluation with a Starwhale Model

    The Storage Format

    The Starwhale Model is a tarball file that contains the source directory.


    The model.yaml Specification

    tip

    model.yaml is optional for swcli model build.

    When building a Starwhale Model using the swcli model build command, you can specify a yaml file that follows a specific format via the --model-yaml parameter to simplify specifying build parameters.

    Even without specifying the --model-yaml parameter, swcli model build will automatically look for a model.yaml file under the ${workdir} directory and extract parameters from it. Parameters specified on the swcli model build command line take precedence over equivalent configurations in model.yaml, so you can think of model.yaml as a file-based representation of the build command line.

    When building a Starwhale Model using the Python SDK, the model.yaml file does not take effect.

    YAML Field Descriptions

Field | Description | Required | Type | Default
name | Name of the Starwhale Model, equivalent to the --name parameter. | No | String |
run.modules | Python modules searched during model build; multiple entry points for model execution can be specified, in Python importable path format. Equivalent to the --module parameter. | Yes | List[String] |
run.handler | Deprecated alias of run.modules; can only specify one entry point. | No | String |
version | The model.yaml format version; currently only "1.0" is supported. | No | String | 1.0
desc | Model description, equivalent to the --desc parameter. | No | String |

    Example


    name: helloworld

run:
  modules:
    - src.evaluator

    desc: "example yaml"

This example defines a Starwhale Model named helloworld, which searches src/evaluator.py under ${WORKDIR} of the swcli model build command for functions decorated with @evaluation.predict, @evaluation.evaluate or @handler, or classes inheriting from PipelineHandler. These functions or classes will be added to the list of runnable entry points for the Starwhale Model. When running the model via swcli model run or the Web UI, select the corresponding entry point (handler) to run.

model.yaml is optional; parameters defined in the yaml can also be specified via swcli command line parameters.


    swcli model build . --model-yaml model.yaml

    Is equivalent to:


    swcli model build . --name helloworld --module src.evaluator --desc "example yaml"


    Starwhale Dataset SDK

    dataset

Get a starwhale.Dataset object by creating a new dataset or loading an existing one.

@classmethod
def dataset(
    cls,
    uri: t.Union[str, Resource],
    create: str = _DatasetCreateMode.auto,
    readonly: bool = False,
) -> Dataset:

    Parameters

    • uri: (str or Resource, required)
      • The dataset uri or Resource object.
    • create: (str, optional)
      • The mode of dataset creating. The options are auto, empty and forbid.
        • auto mode: If the dataset already exists, creation is ignored. If it does not exist, the dataset is created automatically.
        • empty mode: If the dataset already exists, an Exception is raised; If it does not exist, an empty dataset is created. This mode ensures the creation of a new, empty dataset.
    • forbid mode: If the dataset already exists, nothing is done. If it does not exist, an Exception is raised. This mode ensures the existence of the dataset.
      • The default is auto.
    • readonly: (bool, optional)
      • For an existing dataset, you can specify the readonly=True argument to ensure the dataset is in readonly mode.
      • Default is False.

    Examples

    from starwhale import dataset, Image

    # create a new dataset named mnist, and add a row into the dataset
    # dataset("mnist") is equal to dataset("mnist", create="auto")
    ds = dataset("mnist")
ds.exists() # returns False, the "mnist" dataset does not exist yet.
    ds.append({"img": Image(), "label": 1})
    ds.commit()
    ds.close()

    # load a cloud instance dataset in readonly mode
    ds = dataset("cloud://remote-instance/project/starwhale/dataset/mnist", readonly=True)
labels = [row.features.label for row in ds]
    ds.close()

    # load a read/write dataset with a specified version
    ds = dataset("mnist/version/mrrdczdbmzsw")
    ds[0].features.label = 1
    ds.commit()
    ds.close()

    # create an empty dataset
    ds = dataset("mnist-empty", create="empty")

    # ensure the dataset existence
    ds = dataset("mnist-existed", create="forbid")

    class starwhale.Dataset

    starwhale.Dataset implements the abstraction of a Starwhale dataset, and can operate on datasets in Standalone/Server/Cloud instances.

    from_huggingface

    from_huggingface is a classmethod that can convert a Huggingface dataset into a Starwhale dataset.

def from_huggingface(
    cls,
    name: str,
    repo: str,
    subset: str | None = None,
    split: str | None = None,
    revision: str = "main",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    cache: bool = True,
    tags: t.List[str] | None = None,
) -> Dataset:

    Parameters

    • name: (str, required)
      • dataset name.
• repo: (str, required)
  • The huggingface dataset repo name.
    • subset: (str, optional)
      • The subset name. If the huggingface dataset has multiple subsets, you must specify the subset name.
    • split: (str, optional)
  • The split name. If the split name is not specified, all splits of the dataset will be built.
    • revision: (str, optional)
  • The huggingface datasets revision. The default value is main.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • cache: (bool, optional)
  • Whether to use the huggingface dataset cache (download + local hf dataset).
      • The default value is True.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

    from starwhale import Dataset
    myds = Dataset.from_huggingface("mnist", "mnist")
    print(myds[0])
    from starwhale import Dataset
    myds = Dataset.from_huggingface("mmlu", "cais/mmlu", subset="anatomy", split="auxiliary_train", revision="7456cfb")

    from_json

    from_json is a classmethod that can convert a json text into a Starwhale dataset.

@classmethod
def from_json(
    cls,
    name: str,
    json_text: str,
    field_selector: str = "",
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
) -> Dataset:

    Parameters

    • name: (str, required)
      • Dataset name.
    • json_text: (str, required)
      • A json string. The from_json function deserializes this string into Python objects to start building the Starwhale dataset.
    • field_selector: (str, optional)
  • The field from which you would like to extract dataset array items.
  • The default value is "", which indicates that the json object is an array containing all the items.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

    Examples

from starwhale import Dataset
myds = Dataset.from_json(
    name="translation",
    json_text='[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
)
print(myds[0].features.en)

from starwhale import Dataset
myds = Dataset.from_json(
    name="translation",
    json_text='{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}',
    field_selector="content.child_content"
)
print(myds[0].features["zh-cn"])

    from_folder

    from_folder is a classmethod that can read Image/Video/Audio data from a specified directory and automatically convert them into a Starwhale dataset. This function supports the following features:

    • It can recursively search the target directory and its subdirectories
    • Supports extracting three types of files:
      • image: Supports png/jpg/jpeg/webp/svg/apng image types. Image files will be converted to Starwhale.Image type.
      • video: Supports mp4/webm/avi video types. Video files will be converted to Starwhale.Video type.
      • audio: Supports mp3/wav audio types. Audio files will be converted to Starwhale.Audio type.
    • Each file corresponds to one record in the dataset, with the file stored in the file field.
    • If auto_label=True, the parent directory name will be used as the label for that record, stored in the label field. Files in the root directory will not be labeled.
    • If a txt file with the same name as an image/video/audio file exists, its content will be stored as the caption field in the dataset.
    • If metadata.csv or metadata.jsonl exists in the root directory, their content will be read automatically and associated to records by file path as meta information in the dataset.
      • metadata.csv and metadata.jsonl are mutually exclusive. An exception will be thrown if both exist.
      • Each record in metadata.csv and metadata.jsonl must contain a file_name field pointing to the file path.
      • metadata.csv and metadata.jsonl are optional for dataset building.

@classmethod
def from_folder(
    cls,
    folder: str | Path,
    kind: str | DatasetFolderSourceType,
    name: str | Resource = "",
    auto_label: bool = True,
    alignment_size: int | str = D_ALIGNMENT_SIZE,
    volume_size: int | str = D_FILE_VOLUME_SIZE,
    mode: DatasetChangeMode | str = DatasetChangeMode.PATCH,
    tags: t.List[str] | None = None,
) -> Dataset:

    Parameters

    • folder: (str|Path, required)
      • The folder path from which you would like to create this dataset.
    • kind: (str|DatasetFolderSourceType, required)
      • The dataset source type you would like to use, the choices are: image, video and audio.
  • Files of the specified kind are searched for recursively in folder. Other file types will be ignored.
    • name: (str|Resource, optional)
      • The dataset name you would like to use.
      • If not specified, the name is the folder name.
    • auto_label: (bool, optional)
      • Whether to auto label by the sub-folder name.
      • The default value is True.
    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.
    • mode: (str|DatasetChangeMode, optional)
      • The dataset change mode. The default value is patch. Mode choices are patch and overwrite.
    • tags: (List[str], optional)
      • The user custom tags of the dataset.

Examples

    • Example for the normal function calling

from starwhale import Dataset

# create a my-image-dataset dataset from the /path/to/image folder.
ds = Dataset.from_folder(
    folder="/path/to/image",
    kind="image",
    name="my-image-dataset"
)
    • Example for caption

      folder/dog/1.png
      folder/dog/1.txt

      1.txt content will be used as the caption of 1.png.

    • Example for metadata

      metadata.csv:

      file_name, caption
      1.png, dog
      2.png, cat

      metadata.jsonl:

      {"file_name": "1.png", "caption": "dog"}
      {"file_name": "2.png", "caption": "cat"}
    • Example for auto-labeling

      The following structure will create a dataset with 2 labels: "cat" and "dog", 4 images in total.

      folder/dog/1.png
      folder/cat/2.png
      folder/dog/3.png
      folder/cat/4.png

    __iter__

__iter__ is a method that iterates over the dataset rows.

from starwhale import dataset

ds = dataset("mnist")

for item in ds:
    print(item.index)
    print(item.features.label) # label and img are the features of mnist.
    print(item.features.img)

    batch_iter

batch_iter is a method that iterates over the dataset rows in batches.

def batch_iter(
    self, batch_size: int = 1, drop_not_full: bool = False
) -> t.Iterator[t.List[DataRow]]:

    Parameters

    • batch_size: (int, optional)
      • batch size. The default value is 1.
    • drop_not_full: (bool, optional)
  • Whether to discard the last batch if its size is smaller than batch_size.
      • The default value is False.

    Examples

from starwhale import dataset

ds = dataset("mnist")
for batch_rows in ds.batch_iter(batch_size=2):
    assert len(batch_rows) == 2
    print(batch_rows[0].features)

    __getitem__

    __getitem__ is a method that allows retrieving certain rows of data from the dataset, with usage similar to Python dict and list types.

from starwhale import dataset

ds = dataset("mock-str-index")

# if the index type is string
ds["str_key"] # get the DataRow by the "str_key" string key
ds["start":"end"] # get a slice of the dataset by the range ("start", "end")

ds = dataset("mock-int-index")
# if the index type is int
ds[1] # get the DataRow by the 1 int key
ds[1:10:2] # get a slice of the dataset by the range (1, 10), step is 2

    __setitem__

    __setitem__ is a method that allows updating rows of data in the dataset, with usage similar to Python dicts. __setitem__ supports multi-threaded parallel data insertion.

def __setitem__(
    self, key: t.Union[str, int], value: t.Union[DataRow, t.Tuple, t.Dict]
) -> None:

    Parameters

    • key: (int|str, required)
      • key is the index for each row in the dataset. The type is int or str, but a dataset only accepts one type.
    • value: (DataRow|tuple|dict, required)
      • value is the features for each row in the dataset, using a Python dict is generally recommended.

    Examples

    • Normal insertion

Insert two rows into the test dataset, with index test and test2 respectively:

from starwhale import dataset

with dataset("test") as ds:
    ds["test"] = {"txt": "abc", "int": 1}
    ds["test2"] = {"txt": "bcd", "int": 2}
    ds.commit()
    • Parallel insertion
from starwhale import dataset, Binary
from concurrent.futures import as_completed, ThreadPoolExecutor

ds = dataset("test")

def _do_append(_start: int) -> None:
    for i in range(_start, 100):
        ds.append((i, {"data": Binary(), "label": i}))

pool = ThreadPoolExecutor(max_workers=10)
tasks = [pool.submit(_do_append, i * 10) for i in range(0, 9)]
for task in as_completed(tasks):
    task.result()  # wait for all insertions to finish before committing

ds.commit()
ds.close()

    __delitem__

    __delitem__ is a method to delete certain rows of data from the dataset.

def __delitem__(self, key: _ItemType) -> None:

from starwhale import dataset

ds = dataset("existed-ds")
del ds[6:9]
del ds[0]
ds.commit()
ds.close()

    append

    append is a method to append data to a dataset, similar to the append method for Python lists.

• When appending a features dict, each row is automatically indexed with an int starting from 0 and incrementing.

  from starwhale import dataset, Image

  with dataset("new-ds") as ds:
      for i in range(0, 100):
          ds.append({"label": i, "image": Image(f"folder/{i}.png")})
      ds.commit()
• When appending a tuple of (index, features dict), the index of each data row in the dataset will not be handled automatically.

  from starwhale import dataset, Image

  with dataset("new-ds") as ds:
      for i in range(0, 100):
          ds.append((f"index-{i}", {"label": i, "image": Image(f"folder/{i}.png")}))

      ds.commit()

    extend

    extend is a method to bulk append data to a dataset, similar to the extend method for Python lists.

from starwhale import dataset, Text

ds = dataset("new-ds")
ds.extend([
    (f"label-{i}", {"text": Text(), "label": i}) for i in range(0, 10)
])
ds.commit()
ds.close()

    commit

    commit is a method that flushes the current cached data to storage when called, and generates a dataset version. This version can then be used to load the corresponding dataset content afterwards.

    For a dataset, if some data is added without calling commit, but close is called or the process exits directly instead, the data will still be written to the dataset, just without generating a new version.

@_check_readonly
def commit(
    self,
    tags: t.Optional[t.List[str]] = None,
    message: str = "",
    force_add_tags: bool = False,
    ignore_add_tags_errors: bool = False,
) -> str:

    Parameters

    • tags: (list(str), optional)
      • tag as a list
    • message: (str, optional)
      • commit message. The default value is empty.
    • force_add_tags: (bool, optional)
  • For server/cloud instances, when adding tags to this version, if a tag has already been applied to another dataset version, you can use the force_add_tags=True parameter to forcibly add the tag to this version; otherwise an exception will be thrown.
      • The default is False.
    • ignore_add_tags_errors: (bool, optional)
  • Ignore any exceptions thrown when adding tags.
      • The default is False.

    Examples

from starwhale import dataset

with dataset("mnist") as ds:
    ds.append({"label": 1})
    ds.commit(message="init commit")
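
A further sketch of the behavior described above, using a hypothetical dataset name: closing without committing still writes the data, but no new loadable version is generated.

from starwhale import dataset

# commit generates a version that can be loaded later
with dataset("commit-demo") as ds:
    ds.append({"label": 1})
    version = ds.commit(message="first version")

# close without commit: the data is written, but no new version is generated
ds = dataset("commit-demo")
ds.append({"label": 2})
ds.close()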

    readonly

    readonly is a property attribute indicating if the dataset is read-only, it returns a bool value.

    from starwhale import dataset
    ds = dataset("mnist", readonly=True)
    assert ds.readonly

    loading_version

    loading_version is a property attribute, string type.

    • When loading an existing dataset, the loading_version is the related dataset version.
    • When creating a non-existed dataset, the loading_version is equal to the pending_commit_version.

    pending_commit_version

pending_commit_version is a property attribute, string type. When you call the commit function, the pending_commit_version will be recorded in the Standalone, Server or Cloud instance.

    committed_version

committed_version is a property attribute, string type. After the commit function is called, the committed_version becomes available and is equal to the pending_commit_version. Accessing this attribute without calling commit first will raise an exception.
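
A short sketch of how the three version properties relate, using a hypothetical dataset name:

from starwhale import dataset

ds = dataset("version-demo")
ds.append({"label": 1})
# for a newly created dataset, loading_version equals pending_commit_version
assert ds.loading_version == ds.pending_commit_version
ds.commit()
# after commit, committed_version equals pending_commit_version
assert ds.committed_version == ds.pending_commit_version
ds.close()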

    remove

    remove is a method equivalent to the swcli dataset remove command, it can delete a dataset.

    def remove(self, force: bool = False) -> None:

    recover

recover is a method equivalent to the swcli dataset recover command; it can recover a soft-deleted dataset that has not yet been garbage collected.

    def recover(self, force: bool = False) -> None:

    summary

    summary is a method equivalent to the swcli dataset summary command, it returns summary information of the dataset.

    def summary(self) -> t.Optional[DatasetSummary]:

    history

    history is a method equivalent to the swcli dataset history command, it returns the history records of the dataset.

    def history(self) -> t.List[t.Dict]:
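
A usage sketch of these lifecycle methods, assuming a local dataset named mnist already exists; calling recover right after remove on the same object is an assumption here:

from starwhale import dataset

ds = dataset("mnist")
print(ds.summary())   # DatasetSummary for the current dataset
print(ds.history())   # list of dicts describing historical versions

ds.remove()   # soft-delete the dataset, same as `swcli dataset remove`
ds.recover()  # bring it back, as long as garbage collection has not run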

    flush

    flush is a method that flushes temporarily cached data from memory to persistent storage. The commit and close methods will automatically call flush.
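
For illustration, a sketch of calling flush periodically during a long-running build, with a hypothetical dataset name:

from starwhale import dataset

ds = dataset("flush-demo")
for i in range(1000):
    ds.append({"label": i})
    if i % 100 == 0:
        ds.flush()  # persist cached rows to storage without creating a version
ds.commit()
ds.close()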

    close

    close is a method that closes opened connections related to the dataset. Dataset also implements contextmanager, so datasets can be automatically closed using with syntax without needing to explicitly call close.

    from starwhale import dataset

    ds = dataset("mnist")
    ds.close()

    with dataset("mnist") as ds:
    print(ds[0])

head

head is a method to show the first n rows of a dataset, equivalent to the swcli dataset head command.

    def head(self, n: int = 5, skip_fetch_data: bool = False) -> List[DataRow]:

    fetch_one

    fetch_one is a method to get the first record in a dataset, similar to head(n=1)[0].
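
Examples

A minimal sketch, assuming a local mnist dataset already exists:

from starwhale import dataset

ds = dataset("mnist")
rows = ds.head(n=3)                             # first 3 rows, with data fetched
meta_rows = ds.head(n=3, skip_fetch_data=True)  # first 3 rows, metadata only
first = ds.fetch_one()                          # same as head(n=1)[0]
print(first.features)
ds.close()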

    list

    list is a class method to list Starwhale datasets under a project URI, equivalent to the swcli dataset list command.

@classmethod
def list(
    cls,
    project_uri: Union[str, Project] = "",
    fullname: bool = False,
    show_removed: bool = False,
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
) -> Tuple[DatasetListType, Dict[str, Any]]:
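
Examples

A minimal sketch; the server project URI below is a placeholder:

from starwhale import Dataset

# list datasets in the default local project
datasets, pagination = Dataset.list()

# list datasets in a server project, including soft-deleted ones
datasets, pagination = Dataset.list("cloud://server/project/starwhale", show_removed=True)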

    copy

    copy is a method to copy a dataset to another instance, equivalent to the swcli dataset copy command.

def copy(
    self,
    dest_uri: str,
    dest_local_project_uri: str = "",
    force: bool = False,
    mode: str = DatasetChangeMode.PATCH.value,
    ignore_tags: t.List[str] | None = None,
) -> None:

    Parameters

    • dest_uri: (str, required)
      • Dataset URI
    • dest_local_project_uri: (str, optional)
  • When copying a remote dataset to the local instance, this parameter can be set to specify the destination Project URI.
    • force: (bool, optional)
      • Whether to forcibly overwrite the dataset if there is already one with the same version on the target instance.
      • The default value is False.
  • When the tags are already used by another dataset version in the destination instance, you should use the force option or adjust the tags.
    • mode: (str, optional)
      • Dataset copy mode, default is 'patch'. Mode choices are: 'patch', 'overwrite'.
      • patch: Patch mode, only update the changed rows and columns for the remote dataset.
      • overwrite: Overwrite mode, update records and delete extraneous rows from the remote dataset.
    • ignore_tags (List[str], optional)
      • Ignore tags when copying.
  • By default, the dataset is copied with all user custom tags.
  • latest and ^v\d+$ are system built-in tags; they are ignored automatically.

    Examples

    from starwhale import dataset
    ds = dataset("mnist")
    ds.copy("cloud://remote-instance/project/starwhale")

    to_pytorch

    to_pytorch is a method that can convert a Starwhale dataset to a Pytorch torch.utils.data.Dataset, which can then be passed to torch.utils.data.DataLoader for use.

    It should be noted that the to_pytorch function returns a Pytorch IterableDataset.

def to_pytorch(
    self,
    transform: t.Optional[t.Callable] = None,
    drop_index: bool = True,
    skip_default_transform: bool = False,
) -> torch.utils.data.Dataset:

    Parameters

    • transform: (callable, optional)
      • A transform function for input data.
    • drop_index: (bool, optional)
      • Whether to drop the index column.
    • skip_default_transform: (bool, optional)
      • If transform is not set, by default the built-in Starwhale transform function will be used to transform the data. This can be disabled with the skip_default_transform parameter.

    Examples

import torch.utils.data as tdata
from starwhale import dataset

ds = dataset("mnist")

torch_ds = ds.to_pytorch()
torch_loader = tdata.DataLoader(torch_ds, batch_size=2)

import typing as t
import torch
import torch.utils.data as tdata
from starwhale import dataset, Text

with dataset("mnist") as ds:
    for i in range(0, 10):
        ds.append({"txt": Text(f"data-{i}"), "label": i})

    ds.commit()

def _custom_transform(data: t.Any) -> t.Any:
    data = data.copy()
    txt = data["txt"].to_str()
    data["txt"] = f"custom-{txt}"
    return data

torch_loader = tdata.DataLoader(
    dataset(ds.uri).to_pytorch(transform=_custom_transform), batch_size=1
)
item = next(iter(torch_loader))
assert isinstance(item["label"], torch.Tensor)
assert item["txt"][0] in ("custom-data-0", "custom-data-1")

    to_tensorflow

    to_tensorflow is a method that can convert a Starwhale dataset to a Tensorflow tensorflow.data.Dataset.

    def to_tensorflow(self, drop_index: bool = True) -> tensorflow.data.Dataset:

    Parameters

    • drop_index: (bool, optional)
      • Whether to drop the index column.

    Examples

    from starwhale import dataset
    import tensorflow as tf

    ds = dataset("mnist")
    tf_ds = ds.to_tensorflow(drop_index=True)
    assert isinstance(tf_ds, tf.data.Dataset)

    with_builder_blob_config

    with_builder_blob_config is a method to set blob-related attributes in a Starwhale dataset. It needs to be called before making data changes.

def with_builder_blob_config(
    self,
    volume_size: int | str | None = D_FILE_VOLUME_SIZE,
    alignment_size: int | str | None = D_ALIGNMENT_SIZE,
) -> Dataset:

    Parameters

    • alignment_size: (int|str, optional)
      • The blob alignment size.
      • The default value is 128 Bytes.
    • volume_size: (int|str, optional)
      • The maximum size of a dataset blob file. A new blob file will be generated when the size exceeds this limit.
      • The default value is 64MB.

    Examples

    from starwhale import dataset, Binary

    ds = dataset("mnist").with_builder_blob_config(volume_size="32M", alignment_size=128)
    ds.append({"data": Binary(b"123")})
    ds.commit()
    ds.close()

    with_loader_config

    with_loader_config is a method to set parameters for the Starwhale dataset loader process.

def with_loader_config(
    self,
    num_workers: t.Optional[int] = None,
    cache_size: t.Optional[int] = None,
    field_transformer: t.Optional[t.Dict] = None,
) -> Dataset:

    Parameters

• num_workers: (int, optional)
  • The number of workers for loading the dataset.
  • The default value is 2.
• cache_size: (int, optional)
  • The number of prefetched data rows.
  • The default value is 20.
• field_transformer: (dict, optional)
  • A dict for renaming dataset feature names.

    Examples

from starwhale import Dataset, dataset

Dataset.from_json(
    "translation",
    '[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]'
)
myds = dataset("translation").with_loader_config(field_transformer={"en": "en-us"})
assert myds[0].features["en-us"] == myds[0].features["en"]

from starwhale import Dataset, dataset

Dataset.from_json(
    "translation2",
    '[{"content":{"child_content":[{"en":"hello","zh-cn":"你好"},{"en":"how are you","zh-cn":"最近怎么样"}]}}]'
)
myds = dataset("translation2").with_loader_config(field_transformer={"content.child_content[0].en": "en-us"})
assert myds[0].features["en-us"] == myds[0].features["content"]["child_content"][0]["en"]

    Starwhale Model Evaluation SDK

    @evaluation.predict

    The @evaluation.predict decorator defines the inference process in the Starwhale Model Evaluation, similar to the map phase in MapReduce. It contains the following core features:

• On the Server instance, request the resources needed to run.
    • Automatically read the local or remote datasets, and pass the data in the datasets one by one or in batches to the function decorated by evaluation.predict.
    • By the replicas setting, implement distributed dataset consumption to horizontally scale and shorten the time required for the model evaluation tasks.
    • Automatically store the return values of the function and the input features of the dataset into the results table, for display in the Web UI and further use in the evaluate phase.
    • The decorated function is called once for each single piece of data or each batch, to complete the inference process.

    Parameters

    • resources: (dict, optional)
      • Defines the resources required by each predict task when running on the Server instance, including memory, cpu, and nvidia.com/gpu.
      • memory: The unit is Bytes, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"memory": {"request": 100 * 1024, "limit": 200 * 1024}}.
        • If only a single number is set, the Python SDK will automatically set request and limit to the same value, e.g. resources={"memory": 100 * 1024} is equivalent to resources={"memory": {"request": 100 * 1024, "limit": 100 * 1024}}.
      • cpu: The unit is the number of CPU cores, int and float types are supported.
        • Supports setting request and limit as a dictionary, e.g. resources={"cpu": {"request": 1, "limit": 2}}.
        • If only a single number is set, the SDK will automatically set request and limit to the same value, e.g. resources={"cpu": 1.5} is equivalent to resources={"cpu": {"request": 1.5, "limit": 1.5}}.
      • nvidia.com/gpu: The unit is the number of GPUs, int type is supported.
        • nvidia.com/gpu does not support setting request and limit, only a single number is supported.
      • Note: The resources parameter currently only takes effect on the Server instances. For the Cloud instances, the same can be achieved by selecting the corresponding resource pool when submitting the evaluation task. Standalone instances do not support this feature at all.
    • replicas: (int, optional)
      • The number of replicas to run predict.
      • predict defines a Step, in which there are multiple equivalent Tasks. Each Task runs on a Pod in Cloud/Server instances, and a Thread in Standalone instances.
      • When multiple replicas are specified, they are equivalent and will jointly consume the selected dataset to achieve distributed dataset consumption. It can be understood that a row in the dataset will only be read by one predict replica.
      • The default is 1.
    • batch_size: (int, optional)
      • Batch size for passing data from the dataset into the function.
      • The default is 1.
    • fail_on_error: (bool, optional)
      • Whether to interrupt the entire model evaluation when the decorated function throws an exception. If you expect some "exceptional" data to cause evaluation failures but don't want to interrupt the overall evaluation, you can set fail_on_error=False.
      • The default is True.
    • auto_log: (bool, optional)
      • Whether to automatically log the return values of the function and the input features of the dataset to the results table.
      • The default is True.
    • log_mode: (str, optional)
      • When auto_log=True, you can set log_mode to define logging the return values in plain or pickle format.
      • The default is pickle.
    • log_dataset_features: (List[str], optional)
      • When auto_log=True, you can selectively log certain features from the dataset via this parameter.
      • By default, all features will be logged.
    • needs: (List[Callable], optional)
      • Defines the prerequisites for this task to run, can use the needs syntax to implement DAG.
      • needs accepts functions decorated by @evaluation.predict, @evaluation.evaluate, and @handler.
      • The default is empty, i.e. does not depend on any other tasks.

    Input

    The decorated functions need to define some input parameters to accept dataset data, etc. They contain the following patterns:

    • data:

      • data is a dict type that can read the features of the dataset.
      • When batch_size=1 or batch_size is not set, the label feature can be read through data['label'] or data.label.
      • When batch_size is set to > 1, data is a list.
      from starwhale import evaluation

      @evaluation.predict
  def predict(data):
      print(data['label'])
      print(data.label)
    • data + external:

      • data is a dict type that can read the features of the dataset.
      • external is also a dict, including: index, index_with_dataset, dataset_info, context and dataset_uri keys. The attributes can be used for the further fine-grained processing.
        • index: The index of the dataset row.
        • index_with_dataset: The index with the dataset info.
        • dataset_info: starwhale.core.dataset.tabular.TabularDatasetInfo Class.
        • context: starwhale.Context Class.
    • dataset_uri: starwhale.base.uri.resource.Resource Class.
      from starwhale import evaluation

      @evaluation.predict
  def predict(data, external):
      print(data['label'])
      print(data.label)
      print(external["context"])
      print(external["dataset_uri"])
    • data + **kw:

      • data is a dict type that can read the features of the dataset.
      • kw is a dict that contains external.
      from starwhale import evaluation

      @evaluation.predict
  def predict(data, **kw):
      print(kw["external"]["context"])
      print(kw["external"]["dataset_uri"])
    • *args + **kwargs:

      • The first argument of args list is data.
      from starwhale import evaluation

      @evaluation.predict
  def predict(*args, **kw):
      print(args[0].label)
      print(args[0]["label"])
      print(kw["external"]["context"])
    • **kwargs:

      from starwhale import evaluation

      @evaluation.predict
  def predict(**kw):
      print(kw["data"].label)
      print(kw["data"]["label"])
      print(kw["external"]["context"])
    • *args:

      • *args does not contain external.
      from starwhale import evaluation

      @evaluation.predict
  def predict(*args):
      print(args[0].label)
      print(args[0]["label"])

    Examples

from starwhale import evaluation

@evaluation.predict
def predict_image(data):
    ...

@evaluation.predict(
    dataset="mnist/version/latest",
    batch_size=32,
    replicas=4,
    needs=[predict_image],
)
def predict_batch_images(batch_data):
    ...

@evaluation.predict(
    resources={"nvidia.com/gpu": 1,
               "cpu": {"request": 1, "limit": 2},
               "memory": 200 * 1024 * 1024},  # 200MB
    log_mode="plain",
)
def predict_with_resources(data):
    ...

@evaluation.predict(
    replicas=1,
    log_mode="plain",
    log_dataset_features=["txt", "img", "label"],
)
def predict_with_selected_features(data):
    ...

    @evaluation.evaluate

    @evaluation.evaluate is a decorator that defines the evaluation process in the Starwhale Model evaluation, similar to the reduce phase in MapReduce. It contains the following core features:

• On the Server instance, request the required resources.
    • Read the data recorded in the results table automatically during the predict phase, and pass it into the function as an iterator.
    • The evaluate phase will only run one replica, and cannot define the replicas parameter like the predict phase.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
      • In the common case, it will depend on a function decorated by @evaluation.predict.
    • use_predict_auto_log: (bool, optional)
      • Defaults to True, passes an iterator that can traverse the predict results to the function.

    Input

    • When use_predict_auto_log=True (default), pass an iterator that can traverse the predict results into the function.
      • The iterated object is a dictionary containing two keys: output and input.
        • output is the element returned by the predict stage function.
        • input is the features of the corresponding dataset during the inference process, which is a dictionary type.
    • When use_predict_auto_log=False, do not pass any parameters into the function.

    Examples

from starwhale import evaluation

@evaluation.evaluate(needs=[predict_image])
def evaluate_results(predict_result_iter):
    ...

@evaluation.evaluate(
    use_predict_auto_log=False,
    needs=[predict_image],
)
def evaluate_results():
    ...
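
As a further sketch, the following shows the iterator shape described in the Input section; it assumes the predict_image function from the examples above returns a predicted label and that the dataset has a label feature:

from starwhale import Evaluation, evaluation

@evaluation.evaluate(needs=[predict_image])
def evaluate_results(predict_result_iter):
    correct = total = 0
    for item in predict_result_iter:
        output = item["output"]    # return value of the predict function
        features = item["input"]   # features of the corresponding dataset row
        correct += int(output == features["label"])
        total += 1
    Evaluation.from_context().log_summary(accuracy=correct / total)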

    class Evaluation

    starwhale.Evaluation implements the abstraction for Starwhale Model Evaluation, and can perform operations like logging and scanning for Model Evaluation on Standalone/Server/Cloud instances, to record and retrieve metrics.

    __init__

    __init__ function initializes Evaluation object.

class Evaluation:
    def __init__(self, id: str, project: Project | str) -> None:

    Parameters

    • id: (str, required)
      • The UUID of Model Evaluation that is generated by Starwhale automatically.
    • project: (Project|str, required)
      • Project object or Project URI str.

    Example

    from starwhale import Evaluation

    standalone_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="self")
    server_e = Evaluation("fcd1206bf1694fce8053724861c7874c", project="cloud://server/project/starwhale:starwhale")
    cloud_e = Evaluation("2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/project/starwhale:llm-leaderboard")

    from_context

    from_context is a classmethod that obtains the Evaluation object under the current Context. from_context can only take effect under the task runtime environment. Calling this method in a non-task runtime environment will raise a RuntimeError exception, indicating that the Starwhale Context has not been properly set.

    @classmethod
    def from_context(cls) -> Evaluation:

    Example

    from starwhale import Evaluation

with Evaluation.from_context() as e:
    e.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})

    log

    log is a method that logs evaluation metrics to a specific table, which can then be viewed on the Server/Cloud instance's web page or through the scan method.

def log(
    self, category: str, id: t.Union[str, int], metrics: t.Dict[str, t.Any]
) -> None:

    Parameters

    • category: (str, required)
      • The category of the logged metrics, which will be used as the suffix of the Starwhale Datastore table name.
      • Each category corresponds to a Starwhale Datastore table. These tables will be isolated by the evaluation task ID and will not affect each other.
    • id: (str|int, required)
      • The ID of the logged record, unique within the table.
      • For the same table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • A dict to log metrics in key-value format.
      • Keys are of str type.
      • Values can be constant types like int, float, str, bytes, bool, or compound types like tuple, list, dict. It also supports logging Artifacts types like Starwhale.Image, Starwhale.Video, Starwhale.Audio, Starwhale.Text, Starwhale.Binary.
        • When the value contains dict type, the Starwhale SDK will automatically flatten the dict for better visualization and metric comparison.
        • For example, if metrics is {"test": {"loss": 0.99, "prob": [0.98,0.99]}, "image": [Image, Image]}, it will be stored as {"test/loss": 0.99, "test/prob": [0.98, 0.99], "image/0": Image, "image/1": Image} after flattening.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation.from_context()

    evaluation_store.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log("ppl", "1", {"a": "test", "b": 1})

    scan

    scan is a method that returns an iterator for reading data from certain model evaluation tables.

def scan(
    self,
    category: str,
    start: t.Any = None,
    end: t.Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
) -> t.Iterator:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    results = [data for data in evaluation_store.scan("label/0")]

    flush

    flush is a method that can immediately flush the metrics logged by the log method to the datastore and oss storage. If the flush method is not called, Evaluation will automatically flush data to storage when it is finally closed.

def flush(self, category: str, artifacts_flush: bool = True) -> None:

    Parameters

    • category: (str, required)
      • Same meaning as the category parameter in the log method.
    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
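
Example

A minimal sketch, reusing the evaluation id from the examples in this document:

from starwhale import Evaluation

evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
evaluation_store.log("label/1", 1, {"loss": 0.99, "accuracy": 0.98})
evaluation_store.flush("label/1")  # persist the "label/1" table immediately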

    log_result

log_result is a method that logs evaluation metrics to the results table, equivalent to calling the log method with category set to results. The results table is generally used to store inference results. By default, @starwhale.predict will store the return value of the decorated function in the results table, and you can also store results manually using log_result.

    def log_result(self, id: t.Union[str, int], metrics: t.Dict[str, t.Any]) -> None:

    Parameters

    • id: (str|int, required)
      • The ID of the record, unique within the results table.
      • For the results table, only str or int can be used as the ID type.
    • metrics: (dict, required)
      • Same definition as the metrics parameter in the log method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})

    scan_results

    scan_results is a method that returns an iterator for reading data from the results table.

def scan_results(
    self,
    start: t.Any = None,
    end: t.Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
) -> t.Iterator:

    Parameters

    • start: (Any, optional)
      • Start key, if not specified, start from the first data item in the table.
      • Same definition as the start parameter in the scan method.
    • end: (Any, optional)
      • End key, if not specified, iterate to the end of the table.
      • Same definition as the end parameter in the scan method.
    • keep_none: (bool, optional)
      • Whether to return columns with None values, not returned by default.
      • Same definition as the keep_none parameter in the scan method.
    • end_inclusive: (bool, optional)
      • Whether to include the row corresponding to end, not included by default.
      • Same definition as the end_inclusive parameter in the scan method.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")

    evaluation_store.log_result(1, {"loss": 0.99, "accuracy": 0.98})
    evaluation_store.log_result(2, {"loss": 0.98, "accuracy": 0.99})
    results = [data for data in evaluation_store.scan_results()]

    flush_results

flush_results is a method that can immediately flush the metrics logged by the log_result method to the datastore and oss storage. If the flush_results method is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_results(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    log_summary

    log_summary is a method that logs certain metrics to the summary table. The evaluation page on Server/Cloud instances displays data from the summary table.

    Each time it is called, Starwhale will automatically update with the unique ID of this evaluation as the row ID of the table. This function can be called multiple times during one evaluation to update different columns.

    Each project has one summary table. All evaluation tasks under that project will write summary information to this table for easy comparison between evaluations of different models.

    def log_summary(self, *args: t.Any, **kw: t.Any) -> None:

Like the log method, log_summary automatically flattens the dict.

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")

    evaluation_store.log_summary(loss=0.99)
    evaluation_store.log_summary(loss=0.99, accuracy=0.99)
    evaluation_store.log_summary({"loss": 0.99, "accuracy": 0.99})

    get_summary

    get_summary is a method that returns the information logged by log_summary.

    def get_summary(self) -> t.Dict:
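Example

A minimal sketch that reads back the summary just written; the id and project are placeholders reused from the earlier examples:

from starwhale import Evaluation

evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
evaluation_store.log_summary(loss=0.99, accuracy=0.99)
summary = evaluation_store.get_summary()  # expected to contain the flattened summary columns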

    flush_summary

flush_summary is a method that immediately flushes the metrics logged by the log_summary method to the datastore and OSS storage. If flush_summary is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_summary(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    flush_all

flush_all is a method that immediately flushes the metrics logged by the log, log_result and log_summary methods to the datastore and OSS storage. If flush_all is not called, Evaluation will automatically flush data to storage when it is finally closed.

    def flush_all(self, artifacts_flush: bool = True) -> None:

    Parameters

    • artifacts_flush: (bool, optional)
      • Whether to dump artifact data to blob files and upload them to related storage. Default is True.
      • Same definition as the artifacts_flush parameter in the flush method.

    get_tables

    get_tables is a method that returns the names of all tables generated during model evaluation. Note that this function does not return the summary table name.

    def get_tables(self) -> t.List[str]:
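Example

A minimal sketch listing the evaluation tables after some metrics have been logged; the id and project are placeholders reused from the earlier examples:

from starwhale import Evaluation

evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="self")
evaluation_store.log_result(1, {"loss": 0.99})
print(evaluation_store.get_tables())  # table names generated by this evaluation, excluding the summary table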

    close

    close is a method to close the Evaluation object. close will automatically flush data to storage when called. Evaluation also implements __enter__ and __exit__ methods, which can simplify manual close calls using with syntax.

    def close(self) -> None:

    Example

    from starwhale import Evaluation

    evaluation_store = Evaluation(id="2ddab20df9e9430dbd73853d773a9ff6", project="https://cloud.starwhale.cn/projects/349")
    evaluation_store.log_summary(loss=0.99)
    evaluation_store.close()

    # auto close when the with-context exits.
with Evaluation.from_context() as e:
    e.log_summary(loss=0.99)

    @handler

    @handler is a decorator that provides the following functionalities:

    • On a Server instance, it requests the required resources to run.
    • It can control the number of replicas.
    • Multiple handlers can form a DAG through dependency relationships to control the execution workflow.
    • It can expose ports externally to run like a web handler.

@fine_tune, @evaluation.predict and @evaluation.evaluate can be considered applications of @handler in certain specific scenarios. @handler is the underlying implementation of these decorators and is more fundamental and flexible.

    @classmethod
    def handler(
    cls,
    resources: t.Optional[t.Dict[str, t.Any]] = None,
    replicas: int = 1,
    needs: t.Optional[t.List[t.Callable]] = None,
    name: str = "",
    expose: int = 0,
    require_dataset: bool = False,
    ) -> t.Callable:

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.
    • replicas: (int, optional)
      • Consistent with the replicas parameter definition in @evaluation.predict.
    • name: (str, optional)
      • The name displayed for the handler.
      • If not specified, use the decorated function's name.
    • expose: (int, optional)
      • The port exposed externally. When running a web handler, the exposed port needs to be declared.
      • The default is 0, meaning no port is exposed.
      • Currently only one port can be exposed.
    • require_dataset: (bool, optional)
      • Defines whether this handler requires a dataset when running.
• If require_dataset=True, the user is required to input a dataset when creating an evaluation task on the Server/Cloud instance web page. If require_dataset=False, the user does not need to specify a dataset on the web page.
      • The default is False.

    Examples

    from starwhale import handler
    import gradio

@handler(resources={"cpu": 1, "nvidia.com/gpu": 1}, replicas=3)
def my_handler():
    ...

@handler(needs=[my_handler])
def my_another_handler():
    ...

@handler(expose=7860)
def chatbot():
    with gradio.Blocks() as server:
        ...
        server.launch(server_name="0.0.0.0", server_port=7860)

    @fine_tune

    fine_tune is a decorator that defines the fine-tuning process for model training.

    Some restrictions and usage suggestions:

    • fine_tune has only one replica.
    • fine_tune requires dataset input.
    • Generally, the dataset is obtained through Context.get_runtime_context() at the start of fine_tune.
    • Generally, at the end of fine_tune, the fine-tuned Starwhale model package is generated through starwhale.model.build, which will be automatically copied to the corresponding evaluation project.

    Parameters

    • resources: (dict, optional)
      • Consistent with the resources parameter definition in @evaluation.predict.
    • needs: (List[Callable], optional)
      • Consistent with the needs parameter definition in @evaluation.predict.

    Examples

    from starwhale import model as starwhale_model
from starwhale import dataset, fine_tune, Context

@fine_tune(resources={"nvidia.com/gpu": 1})
def llama_fine_tuning():
    ctx = Context.get_runtime_context()

    if len(ctx.dataset_uris) == 2:
        # TODO: use more graceful way to get train and eval dataset
        train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
        eval_dataset = dataset(ctx.dataset_uris[1], readonly=True, create="forbid")
    elif len(ctx.dataset_uris) == 1:
        train_dataset = dataset(ctx.dataset_uris[0], readonly=True, create="forbid")
        eval_dataset = None
    else:
        raise ValueError("Only support 1 or 2 datasets(train and eval dataset) for now")

    # user training code
    train_llama(
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )

    model_name = get_model_name()
    starwhale_model.build(name=f"llama-{model_name}-qlora-ft")

    @multi_classification

    The @multi_classification decorator uses the sklearn lib to analyze results for multi-classification problems, outputting the confusion matrix, ROC, AUC etc., and writing them to related tables in the Starwhale Datastore.

    When using it, certain requirements are placed on the return value of the decorated function, which should be (label, result) or (label, result, probability_matrix).

    def multi_classification(
    confusion_matrix_normalize: str = "all",
    show_hamming_loss: bool = True,
    show_cohen_kappa_score: bool = True,
    show_roc_auc: bool = True,
    all_labels: t.Optional[t.List[t.Any]] = None,
    ) -> t.Any:

    Parameters

    • confusion_matrix_normalize: (str, optional)
• Accepts three values:
        • true: rows
        • pred: columns
        • all: rows+columns
    • show_hamming_loss: (bool, optional)
      • Whether to calculate the Hamming loss.
      • The default is True.
    • show_cohen_kappa_score: (bool, optional)
      • Whether to calculate the Cohen kappa score.
      • The default is True.
    • show_roc_auc: (bool, optional)
      • Whether to calculate ROC/AUC. To calculate, the function needs to return a (label, result, probability_matrix) tuple, otherwise a (label, result) tuple is sufficient.
      • The default is True.
    • all_labels: (List, optional)
      • Defines all the labels.

    Examples


@multi_classification(
    confusion_matrix_normalize="all",
    show_hamming_loss=True,
    show_cohen_kappa_score=True,
    show_roc_auc=True,
    all_labels=[i for i in range(0, 10)],
)
def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]:
    label, result, probability_matrix = [], [], []
    return label, result, probability_matrix

@multi_classification(
    confusion_matrix_normalize="all",
    show_hamming_loss=True,
    show_cohen_kappa_score=True,
    show_roc_auc=False,
    all_labels=[i for i in range(0, 10)],
)
def evaluate(ppl_result) -> t.Tuple[t.List[int], t.List[int]]:
    label, result = [], []
    return label, result

    PipelineHandler

    The PipelineHandler class provides a default model evaluation workflow definition that requires users to implement the predict and evaluate functions.

    The PipelineHandler is equivalent to using the @evaluation.predict and @evaluation.evaluate decorators together - the usage looks different but the underlying model evaluation process is the same.

    Note that PipelineHandler currently does not support defining resources parameters.

    Users need to implement the following functions:

    • predict: Defines the inference process, equivalent to a function decorated with @evaluation.predict.

    • evaluate: Defines the evaluation process, equivalent to a function decorated with @evaluation.evaluate.

    from typing import Any, Iterator
    from abc import ABCMeta, abstractmethod

class PipelineHandler(metaclass=ABCMeta):
    def __init__(
        self,
        predict_batch_size: int = 1,
        ignore_error: bool = False,
        predict_auto_log: bool = True,
        predict_log_mode: str = PredictLogMode.PICKLE.value,
        predict_log_dataset_features: t.Optional[t.List[str]] = None,
        **kwargs: t.Any,
    ) -> None:
        self.context = Context.get_runtime_context()
        ...

    def predict(self, data: Any, **kw: Any) -> Any:
        raise NotImplementedError

    def evaluate(self, ppl_result: Iterator) -> Any:
        raise NotImplementedError

    Parameters

    • predict_batch_size: (int, optional)
      • Equivalent to the batch_size parameter in @evaluation.predict.
      • Default is 1.
    • ignore_error: (bool, optional)
      • Equivalent to the fail_on_error parameter in @evaluation.predict.
      • Default is False.
    • predict_auto_log: (bool, optional)
      • Equivalent to the auto_log parameter in @evaluation.predict.
      • Default is True.
    • predict_log_mode: (str, optional)
      • Equivalent to the log_mode parameter in @evaluation.predict.
      • Default is pickle.
• predict_log_dataset_features: (List[str], optional)
      • Equivalent to the log_dataset_features parameter in @evaluation.predict.
      • Default is None, which records all features.

    PipelineHandler.run Decorator

    The PipelineHandler.run decorator can be used to describe resources for the predict and evaluate methods, supporting definitions of replicas and resources:

    • The PipelineHandler.run decorator can only decorate predict and evaluate methods in subclasses inheriting from PipelineHandler.
    • The predict method can set the replicas parameter. The replicas value for the evaluate method is always 1.
    • The resources parameter is defined and used in the same way as the resources parameter in @evaluation.predict or @evaluation.evaluate.
    • The PipelineHandler.run decorator is optional.
• The PipelineHandler.run decorator only takes effect on Server and Cloud instances; it has no effect on Standalone instances, which do not support resource definitions.
    @classmethod
    def run(
    cls, resources: t.Optional[t.Dict[str, t.Any]] = None, replicas: int = 1
    ) -> t.Callable:

    Examples

    import typing as t

    import torch
    from starwhale import PipelineHandler

class Example(PipelineHandler):
    def __init__(self) -> None:
        super().__init__()
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model = self._load_model(self.device)

    @PipelineHandler.run(replicas=4, resources={"memory": 1 * 1024 * 1024 * 1024, "nvidia.com/gpu": 1})  # 1G Memory, 1 GPU
    def predict(self, data: t.Dict):
        data_tensor = self._pre(data.img)
        output = self.model(data_tensor)
        return self._post(output)

    @PipelineHandler.run(resources={"memory": 1 * 1024 * 1024 * 1024})  # 1G Memory
    def evaluate(self, ppl_result):
        result, label, pr = [], [], []
        for _data in ppl_result:
            label.append(_data["input"]["label"])
            result.extend(_data["output"][0])
            pr.extend(_data["output"][1])
        return label, result, pr

    def _pre(self, input: Image) -> torch.Tensor:
        ...

    def _post(self, input):
        ...

    def _load_model(self, device):
        ...

    Context

    The context information passed during model evaluation, including Project, Task ID, etc. The Context content is automatically injected and can be used in the following ways:

    • Inherit the PipelineHandler class and use the self.context object.
    • Get it through Context.get_runtime_context().

    Note that Context can only be used during model evaluation, otherwise the program will throw an exception.

    Currently Context can get the following values:

    • project: str
      • Project name.
    • version: str
      • Unique ID of model evaluation.
    • step: str
      • Step name.
    • total: int
      • Total number of Tasks under the Step.
    • index: int
      • Task index number, starting from 0.
    • dataset_uris: List[str]
      • List of Starwhale dataset URIs.

    Examples


    from starwhale import Context, PipelineHandler

def func():
    ctx = Context.get_runtime_context()
    print(ctx.project)
    print(ctx.version)
    print(ctx.step)
    ...

class Example(PipelineHandler):

    def predict(self, data: t.Dict):
        print(self.context.project)
        print(self.context.version)
        print(self.context.step)

    @starwhale.api.service.api

@starwhale.api.service.api is a decorator that provides a simple, Gradio-based Web Handler input definition. When a Web Service is launched with the swcli model serve command, it accepts external requests and returns inference results to the user, enabling online evaluation.

    Examples

import typing as t

import gradio
from starwhale import Image
from starwhale.api.service import api

def predict_image(img):
    ...

@api(gradio.File(), gradio.Label())
def predict_view(file: t.Any) -> t.Any:
    with open(file.name, "rb") as f:
        data = Image(f.read(), shape=(28, 28, 1))
    _, prob = predict_image({"img": data})
    return {i: p for i, p in enumerate(prob)}

    starwhale.api.service.Service

    If you want to customize the web service implementation, you can subclass Service and override the serve method.

class CustomService(Service):
    def serve(self, addr: str, port: int, handler_list: t.List[str] = None) -> None:
        ...

svc = CustomService()

@svc.api(...)
def handler(data):
    ...

    Notes:

    • Handlers added with PipelineHandler.add_api and the api decorator or Service.api can work together
    • If using a custom Service, you need to instantiate the custom Service class in the model

    Custom Request and Response

    Request and Response are handler preprocessing and postprocessing classes for receiving user requests and returning results. They can be simply understood as pre and post logic for the handler.

Starwhale provides built-in Request implementations for Dataset types and a JSON Response. Users can also customize the logic as follows:

    import typing as t

    from starwhale.api.service import (
    Request,
    Service,
    Response,
    )

class CustomInput(Request):
    def load(self, req: t.Any) -> t.Any:
        return req

class CustomOutput(Response):
    def __init__(self, prefix: str) -> None:
        self.prefix = prefix

    def dump(self, req: str) -> bytes:
        return f"{self.prefix} {req}".encode("utf-8")

svc = Service()

@svc.api(request=CustomInput(), response=CustomOutput("hello"))
def foo(data: t.Any) -> t.Any:
    ...
    Version: 0.6.6

    Starwhale Job SDK

    job

    Get a starwhale.Job object through the Job URI parameter, which represents a Job on Standalone/Server/Cloud instances.

    @classmethod
    def job(
    cls,
    uri: str,
    ) -> Job:

    Parameters

    • uri: (str, required)
      • Job URI format.

    Usage Example

    from starwhale import job

    # get job object of uri=https://server/job/1
    j1 = job("https://server/job/1")

    # get job from standalone instance
    j2 = job("local/project/self/job/xm5wnup")
    j3 = job("xm5wnup")

    class starwhale.Job

    starwhale.Job abstracts Starwhale Job and enables some information retrieval operations on the job.

    list

    list is a classmethod that can list the jobs under a project.

    @classmethod
    def list(
    cls,
    project: str = "",
    page_index: int = DEFAULT_PAGE_IDX,
    page_size: int = DEFAULT_PAGE_SIZE,
    ) -> Tuple[List[Job], Dict]:

    Parameters

    • project: (str, optional)
      • Project URI, can be projects on Standalone/Server/Cloud instances.
• If project is not specified, the project selected by swcli project select will be used.
    • page_index: (int, optional)
      • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the page number.
        • Default is 1.
        • Page numbers start from 1.
      • Standalone instances do not support paging. This parameter has no effect.
• page_size: (int, optional)
  • When getting the jobs list from Server/Cloud instances, paging is supported. This parameter specifies the number of jobs returned per page.
    • Default is DEFAULT_PAGE_SIZE.
  • Standalone instances do not support paging. This parameter has no effect.

    Usage Example

    from starwhale import Job

    # list jobs of current selected project
    jobs, pagination_info = Job.list()

    # list jobs of starwhale/public project in the cloud.starwhale.cn instance
    jobs, pagination_info = Job.list("https://cloud.starwhale.cn/project/starwhale:public")

    # list jobs of id=1 project in the server instance, page index is 2, page size is 10
    jobs, pagination_info = Job.list("https://server/project/1", page_index=2, page_size=10)

    get

get is a classmethod that gets information about a specific job and returns a starwhale.Job object. It has the same functionality and parameter definitions as the starwhale.job function.

    Usage Example

    from starwhale import Job

    # get job object of uri=https://server/job/1
    j1 = Job.get("https://server/job/1")

    # get job from standalone instance
    j2 = Job.get("local/project/self/job/xm5wnup")
    j3 = Job.get("xm5wnup")

    summary

    summary is a property that returns the data written to the summary table during the job execution, in dict type.

    @property
    def summary(self) -> Dict[str, Any]:

    Usage Example

from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.summary)

    tables

    tables is a property that returns the names of tables created during the job execution (not including the summary table, which is created automatically at the project level), in list type.

    @property
    def tables(self) -> List[str]:

    Usage Example

from starwhale import job

    j1 = job("https://server/job/1")

    print(j1.tables)

    get_table_rows

    get_table_rows is a method that returns records from a data table according to the table name and other parameters, in iterator type.

    def get_table_rows(
    self,
    name: str,
    start: Any = None,
    end: Any = None,
    keep_none: bool = False,
    end_inclusive: bool = False,
    ) -> Iterator[Dict[str, Any]]:

    Parameters

    • name: (str, required)
• Datastore table name. Any of the table names obtained through the tables property can be used.
    • start: (Any, optional)
      • The starting ID value of the returned records.
      • Default is None, meaning start from the beginning of the table.
    • end: (Any, optional)
      • The ending ID value of the returned records.
      • Default is None, meaning until the end of the table.
      • If both start and end are None, all records in the table will be returned as an iterator.
    • keep_none: (bool, optional)
      • Whether to return records with None values.
      • Default is False.
    • end_inclusive: (bool, optional)
      • When end is set, whether the iteration includes the end record.
      • Default is False.

    Usage Example

    from starwhale import job

    j = job("local/project/self/job/xm5wnup")

    table_name = j.tables[0]

for row in j.get_table_rows(table_name):
    print(row)

    rows = list(j.get_table_rows(table_name, start=0, end=100))

    # return the first record from the results table
    result = list(j.get_table_rows('results', start=0, end=1))[0]

    status

    status is a property that returns the current real-time state of the Job as a string. The possible states are CREATED, READY, PAUSED, RUNNING, CANCELLING, CANCELED, SUCCESS, FAIL, and UNKNOWN.

    @property
    def status(self) -> str:
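Usage Example

A minimal sketch that reads the current job state; the job URI is a placeholder:

from starwhale import job

j = job("https://server/job/1")
print(j.status)  # e.g. "RUNNING" or "SUCCESS"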

    create

    create is a classmethod that can create tasks on a Standalone instance or Server/Cloud instance, including tasks for Model Evaluation, Fine-tuning, Online Serving, and Developing. The function returns a Job object.

    • create determines which instance the generated task runs on through the project parameter, including Standalone and Server/Cloud instances.
    • On a Standalone instance, create creates a synchronously executed task.
    • On a Server/Cloud instance, create creates an asynchronously executed task.
    @classmethod
    def create(
    cls,
    project: Project | str,
    model: Resource | str,
    run_handler: str,
    datasets: t.List[str | Resource] | None = None,
    runtime: Resource | str | None = None,
    resource_pool: str = DEFAULT_RESOURCE_POOL,
    ttl: int = 0,
    dev_mode: bool = False,
    dev_mode_password: str = "",
    dataset_head: int = 0,
    overwrite_specs: t.Dict[str, t.Any] | None = None,
    ) -> Job:

    Parameters

    Parameters apply to all instances:

    • project: (Project|str, required)
      • A Project object or Project URI string.
    • model: (Resource|str, required)
      • Model URI string or Resource object of Model type, representing the Starwhale model package to run.
    • run_handler: (str, required)
      • The name of the runnable handler in the Starwhale model package, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
    • datasets: (List[str | Resource], optional)
• Datasets required for the Starwhale model package to run; this parameter is optional.

    Parameters only effective for Standalone instances:

    • dataset_head: (int, optional)
• Generally used for debugging scenarios; only the first N records of the dataset are consumed by the Starwhale model.

    Parameters only effective for Server/Cloud instances:

    • runtime: (Resource | str, optional)
      • Runtime URI string or Resource object of Runtime type, representing the Starwhale runtime required to run the task.
      • When not specified, it will try to use the built-in runtime of the Starwhale model package.
      • When creating tasks under a Standalone instance, the Python interpreter environment used by the Python script is used as its own runtime. Specifying a runtime via the runtime parameter is not supported. If you need to specify a runtime, you can use the swcli model run command.
    • resource_pool: (str, optional)
      • Specify which resource pool the task runs in, default to the default resource pool.
    • ttl: (int, optional)
      • Maximum lifetime of the task, will be killed after timeout.
      • The unit is seconds.
      • By default, ttl is 0, meaning no timeout limit, and the task will run as expected.
      • When ttl is less than 0, it also means no timeout limit.
    • dev_mode: (bool, optional)
      • Whether to set debug mode. After turning on this mode, you can enter the related environment through VSCode Web.
      • Debug mode is off by default.
    • dev_mode_password: (str, optional)
      • Login password for VSCode Web in debug mode.
      • Default is empty, in which case the task's UUID will be used as the password, which can be obtained via job.info().job.uuid.
    • overwrite_specs: (Dict[str, Any], optional)
      • Support setting the replicas and resources fields of the handler.
      • If empty, use the values set in the corresponding handler of the model package.
      • The key of overwrite_specs is the name of the handler, e.g. the evaluate handler of mnist: mnist.evaluator:MNISTInference.evaluate.
      • The value of overwrite_specs is the set value, in dictionary format, supporting settings for replicas and resources, e.g. {"replicas": 1, "resources": {"memory": "1GiB"}}.

    Examples

    • create a Cloud Instance job
    from starwhale import Job
    project = "https://cloud.starwhale.cn/project/starwhale:public"
job = Job.create(
    project=project,
    model=f"{project}/model/mnist/version/v0",
    run_handler="mnist.evaluator:MNISTInference.evaluate",
    datasets=[f"{project}/dataset/mnist/version/v0"],
    runtime=f"{project}/runtime/pytorch",
    overwrite_specs={
        "mnist.evaluator:MNISTInference.evaluate": {"resources": "4GiB"},
        "mnist.evaluator:MNISTInference.predict": {"resources": "8GiB", "replicas": 10},
    },
)
    print(job.status)
    • create a Standalone Instance job
    from starwhale import Job
job = Job.create(
    project="self",
    model="mnist",
    run_handler="mnist.evaluator:MNISTInference.evaluate",
    datasets=["mnist"],
)
    print(job.status)
    Version: 0.6.6

    Starwhale Model SDK

    model.build

    model.build is a function that can build the Starwhale model, equivalent to the swcli model build command.

    def build(
    modules: t.Optional[t.List[t.Any]] = None,
    workdir: t.Optional[_path_T] = None,
    name: t.Optional[str] = None,
    project_uri: str = "",
    desc: str = "",
    remote_project_uri: t.Optional[str] = None,
    add_all: bool = False,
    tags: t.List[str] | None = None,
    ) -> None:

    Parameters

    • modules: (List[str|object], optional)
• The search modules support objects (function, class or module) or strings (e.g. "to.path.module", "to.path.module:object").
      • If the argument is not specified, the search modules are the imported modules.
    • name: (str, optional)
      • Starwhale Model name.
      • The default is the current work dir (cwd) name.
    • workdir: (str, Pathlib.Path, optional)
      • The path of the rootdir. The default workdir is the current working dir.
      • All files in the workdir will be packaged. If you want to ignore some files, you can add .swignore file in the workdir.
    • project_uri: (str, optional)
      • The project uri of the Starwhale Model.
      • If the argument is not specified, the project_uri is the config value of swcli project select command.
    • desc: (str, optional)
      • The description of the Starwhale Model.
    • remote_project_uri: (str, optional)
• Project URI of a remote instance. After the Starwhale model is built, it will be automatically copied to the remote instance.
    • add_all: (bool, optional)
• Add all files in the working directory to the model package. When disabled, Python cache files and virtual environment files are excluded. The .swignore file still takes effect.
      • The default value is False.
    • tags: (List[str], optional)
      • The tags for the model version.
      • latest and ^v\d+$ tags are reserved tags.

    Examples

    from starwhale import model

    # class search handlers
    from .user.code.evaluator import ExamplePipelineHandler
    model.build([ExamplePipelineHandler])

    # function search handlers
    from .user.code.evaluator import predict_image
    model.build([predict_image])

    # module handlers, @handler decorates function in this module
    from .user.code import evaluator
    model.build([evaluator])

    # str search handlers
    model.build(["user.code.evaluator:ExamplePipelineHandler"])
    model.build(["user.code1", "user.code2"])

    # no search handlers, use imported modules
    model.build()

    # add user custom tags
    model.build(tags=["t1", "t2"])
    Version: 0.6.6

    Other SDK

    __version__

    Version of Starwhale Python SDK and swcli, string constant.

    >>> from starwhale import __version__
    >>> print(__version__)
    0.5.7

    init_logger

Initialize the Starwhale logger and traceback. The verbose parameter defaults to 0.

    • 0: show only errors, traceback only shows 1 frame.
    • 1: show errors + warnings, traceback shows 5 frames.
    • 2: show errors + warnings + info, traceback shows 10 frames.
    • 3: show errors + warnings + info + debug, traceback shows 100 frames.
    • >=4: show errors + warnings + info + debug + trace, traceback shows 1000 frames.
    def init_logger(verbose: int = 0) -> None:
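Examples

A minimal sketch that raises the log level to debug before calling other SDK functions:

from starwhale import init_logger

# 3 = errors + warnings + info + debug, traceback shows 100 frames
init_logger(3)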

    login

Log in to a Server/Cloud instance. It is equivalent to running the swcli instance login command. Logging in to a Standalone instance is meaningless.

    def login(
    instance: str,
    alias: str = "",
    username: str = "",
    password: str = "",
    token: str = "",
    ) -> None:

    Parameters

    • instance: (str, required)
      • The http url of the server/cloud instance.
    • alias: (str, optional)
      • An alias for the instance to simplify the instance part of the Starwhale URI.
      • If not specified, the hostname part of the instance http url will be used.
    • username: (str, optional)
    • password: (str, optional)
    • token: (str, optional)
      • You can only choose one of username + password or token to login to the instance.

    Examples

    from starwhale import login

    # login to Starwhale Cloud instance by token
    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")

    # login to Starwhale Server instance by username and password
    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")

    logout

Log out of a Server/Cloud instance. It is equivalent to running the swcli instance logout command. Logging out of a Standalone instance is meaningless.

    def logout(instance: str) -> None:

    Examples

    from starwhale import login, logout

    login(instance="https://cloud.starwhale.cn", alias="cloud-cn", token="xxx")
    # logout by the alias
    logout("cloud-cn")

    login(instance="http://controller.starwhale.svc", alias="dev", username="starwhale", password="abcd1234")
    # logout by the instance http url
    logout("http://controller.starwhale.svc")
    Version: 0.6.6

    Python SDK Overview

    Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.

    Classes

    • PipelineHandler: Provides default model evaluation process definition, requires implementation of predict and evaluate methods.
    • Context: Passes context information during model evaluation, including Project, Task ID etc.
    • class Dataset: Starwhale Dataset class.
    • class starwhale.api.service.Service: The base class of online evaluation.
    • class Job: Starwhale Job class.
    • class Evaluation: Starwhale Evaluation class.

    Functions

    • @multi_classification: Decorator for multi-class problems to simplify evaluate result calculation and storage for better evaluation presentation.
    • @handler: Decorator to define a running entity with resource attributes (mem/cpu/gpu). You can control replica count. Handlers can form DAGs through dependencies to control execution flow.
    • @evaluation.predict: Decorator to define inference process in model evaluation, similar to map phase in MapReduce.
    • @evaluation.evaluate: Decorator to define evaluation process in model evaluation, similar to reduce phase in MapReduce.
    • model.build: Build Starwhale model.
    • @fine_tune: Decorator to define model fine-tuning process.
    • init_logger: Set log level, implement 5-level logging.
    • dataset: Get starwhale.Dataset object, by creating new datasets or loading existing datasets.
    • @starwhale.api.service.api: Decorator to provide a simple Web Handler input definition based on Gradio.
    • login: Log in to the server/cloud instance.
    • logout: Log out of the server/cloud instance.
    • job: Get starwhale.Job object by the Job URI.
    • @PipelineHandler.run: Decorator to define the resources for the predict and evaluate methods in PipelineHandler subclasses.

    Data Types

    • COCOObjectAnnotation: Provides COCO format definitions.
    • BoundingBox: Bounding box type, currently in LTWH format - left_x, top_y, width and height.
    • ClassLabel: Describes the number and types of labels.
    • Image: Image type.
    • GrayscaleImage: Grayscale image type, e.g. MNIST digit images, a special case of Image type.
    • Audio: Audio type.
    • Video: Video type.
    • Text: Text type, default utf-8 encoding, for storing large texts.
    • Binary: Binary type, stored in bytes, for storing large binary content.
    • Line: Line type.
    • Point: Point type.
    • Polygon: Polygon type.
    • Link: Link type, for creating remote-link data.
    • MIMEType: Describes multimedia types supported by Starwhale, used in mime_type attribute of Image, Video etc for better Dataset Viewer.

    Other

    • __version__: Version of Starwhale Python SDK and swcli, string constant.

    Further reading

    Version: 0.6.6

    Starwhale Data Types

    COCOObjectAnnotation

    It provides definitions following the COCO format.

    COCOObjectAnnotation(
    id: int,
    image_id: int,
    category_id: int,
    segmentation: Union[t.List, t.Dict],
    area: Union[float, int],
    bbox: Union[BoundingBox, t.List[float]],
    iscrowd: int,
    )
Parameters

• id: Object id, usually a globally incrementing id.
• image_id: Image id, usually the id of the image.
• category_id: Category id, usually the id of the class in object detection.
• segmentation: Object contour representation, in Polygon (polygon vertices) or RLE format.
• area: Object area.
• bbox: The bounding box, either a BoundingBox or a list of floats.
• iscrowd: 0 indicates a single object, 1 indicates two unseparated objects.

    Examples

    def _make_coco_annotations(
    self, mask_fpath: Path, image_id: int
    ) -> t.List[COCOObjectAnnotation]:
    mask_img = PILImage.open(str(mask_fpath))

    mask = np.array(mask_img)
    object_ids = np.unique(mask)[1:]
    binary_mask = mask == object_ids[:, None, None]
    # TODO: tune permute without pytorch
    binary_mask_tensor = torch.as_tensor(binary_mask, dtype=torch.uint8)
    binary_mask_tensor = (
    binary_mask_tensor.permute(0, 2, 1).contiguous().permute(0, 2, 1)
    )

    coco_annotations = []
    for i in range(0, len(object_ids)):
    _pos = np.where(binary_mask[i])
    _xmin, _ymin = float(np.min(_pos[1])), float(np.min(_pos[0]))
    _xmax, _ymax = float(np.max(_pos[1])), float(np.max(_pos[0]))
    _bbox = BoundingBox(
    x=_xmin, y=_ymin, width=_xmax - _xmin, height=_ymax - _ymin
    )

    rle: t.Dict = coco_mask.encode(binary_mask_tensor[i].numpy()) # type: ignore
    rle["counts"] = rle["counts"].decode("utf-8")

    coco_annotations.append(
    COCOObjectAnnotation(
    id=self.object_id,
    image_id=image_id,
    category_id=1, # PennFudan Dataset only has one class-PASPersonStanding
    segmentation=rle,
    area=_bbox.width * _bbox.height,
    bbox=_bbox,
    iscrowd=0, # suppose all instances are not crowd
    )
    )
    self.object_id += 1

    return coco_annotations

    GrayscaleImage

    GrayscaleImage provides a grayscale image type. It is a special case of the Image type, for example the digit images in MNIST.

    GrayscaleImage(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
Parameters

• fp: Image path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• shape: Image width and height; the default channel is 1.
• as_mask: Whether the image is used as a mask image.
• mask_uri: URI of the original image for the mask.

    Examples

for i in range(0, min(data_number, label_number)):
    _data = data_file.read(image_size)
    _label = struct.unpack(">B", label_file.read(1))[0]
    yield GrayscaleImage(
        _data,
        display_name=f"{i}",
        shape=(height, width, 1),
    ), {"label": _label}

    GrayscaleImage Functions

GrayscaleImage.to_bytes

to_bytes(encoding: str = "utf-8") -> bytes

    GrayscaleImage.carry_raw_data

    carry_raw_data() -> GrayscaleImage

    GrayscaleImage.astype

    astype() -> Dict[str, t.Any]

    BoundingBox

    BoundingBox provides a bounding box type, currently in LTWH format:

    • left_x: x-coordinate of left edge
    • top_y: y-coordinate of top edge
    • width: width of bounding box
    • height: height of bounding box

    So it represents the bounding box using the coordinates of its left, top, width and height. This is a common format for specifying bounding boxes in computer vision tasks.

    BoundingBox(
    x: float,
    y: float,
    width: float,
    height: float
    )
Parameters

• x: x-coordinate of the left edge (left_x).
• y: y-coordinate of the top edge (top_y).
• width: Width of the bounding box.
• height: Height of the bounding box.
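Examples

A minimal sketch of constructing a bounding box in LTWH format; the coordinates are arbitrary sample values:

from starwhale import BoundingBox

bbox = BoundingBox(x=10.0, y=20.0, width=100.0, height=50.0)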

    ClassLabel

Describes the number and types of labels.

    ClassLabel(
    names: List[Union[int, float, str]]
    )
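Examples

A minimal sketch describing the label space of a ten-class dataset; the label values are arbitrary:

from starwhale import ClassLabel

labels = ClassLabel(names=list(range(10)))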

    Image

    Image Type.

    Image(
    fp: _TArtifactFP = "",
    display_name: str = "",
    shape: Optional[_TShape] = None,
    mime_type: Optional[MIMEType] = None,
    as_mask: bool = False,
    mask_uri: str = "",
    )
Parameters

• fp: Image path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• shape: Image width, height and channels.
• mime_type: MIMEType supported types.
• as_mask: Whether the image is used as a mask image.
• mask_uri: URI of the original image for the mask.

    The main difference from GrayscaleImage is that Image supports multi-channel RGB images by specifying shape as (W, H, C).

    Examples

import io
import typing as t
import pickle
from pathlib import Path
from PIL import Image as PILImage
from starwhale import Image, MIMEType

    def _iter_item(paths: t.List[Path]) -> t.Generator[t.Tuple[t.Any, t.Dict], None, None]:
    for path in paths:
    with path.open("rb") as f:
    content = pickle.load(f, encoding="bytes")
    for data, label, filename in zip(
    content[b"data"], content[b"labels"], content[b"filenames"]
    ):
    annotations = {
    "label": label,
    "label_display_name": dataset_meta["label_names"][label],
    }

    image_array = data.reshape(3, 32, 32).transpose(1, 2, 0)
    image_bytes = io.BytesIO()
    PILImage.fromarray(image_array).save(image_bytes, format="PNG")

    yield Image(
    fp=image_bytes.getvalue(),
    display_name=filename.decode(),
    shape=image_array.shape,
    mime_type=MIMEType.PNG,
    ), annotations

    Image Functions

Image.to_bytes

to_bytes(encoding: str = "utf-8") -> bytes

    Image.carry_raw_data

carry_raw_data() -> Image

    Image.astype

    astype() -> Dict[str, t.Any]

    Video

    Video type.

    Video(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
    )
Parameters

• fp: Video path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• mime_type: MIMEType supported types.

    Examples

    import typing as t
    from pathlib import Path

    from starwhale import Video, MIMEType

    root_dir = Path(__file__).parent.parent
    dataset_dir = root_dir / "data" / "UCF-101"
    test_ds_path = [root_dir / "data" / "test_list.txt"]

    def iter_ucf_item() -> t.Generator:
    for path in test_ds_path:
    with path.open() as f:
    for line in f.readlines():
    _, label, video_sub_path = line.split()

    data_path = dataset_dir / video_sub_path
    data = Video(
    data_path,
    display_name=video_sub_path,
    shape=(1,),
    mime_type=MIMEType.WEBM,
    )

    yield f"{label}_{video_sub_path}", {
    "video": data,
    "label": label,
    }

    Audio

    Audio type.

    Audio(
    fp: _TArtifactFP = "",
    display_name: str = "",
    mime_type: Optional[MIMEType] = None,
    )
Parameters

• fp: Audio path, IO object, or file content bytes.
• display_name: Display name shown in Dataset Viewer.
• mime_type: MIMEType supported types.

    Examples

    import typing as t
    from starwhale import Audio

    def iter_item() -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
    for path in validation_ds_paths:
    with path.open() as f:
    for item in f.readlines():
    item = item.strip()
    if not item:
    continue

    data_path = dataset_dir / item
    data = Audio(
    data_path, display_name=item, shape=(1,), mime_type=MIMEType.WAV
    )

    speaker_id, utterance_num = data_path.stem.split("_nohash_")
    annotations = {
    "label": data_path.parent.name,
    "speaker_id": speaker_id,
    "utterance_num": int(utterance_num),
    }
    yield data, annotations

    Audio Functions

Audio.to_bytes

to_bytes(encoding: str = "utf-8") -> bytes

    Audio.carry_raw_data

    carry_raw_data() -> Audio

    Audio.astype

    astype() -> Dict[str, t.Any]

    Text

Text type; the default encoding is utf-8.

    Text(
    content: str,
    encoding: str = "utf-8",
    )
Parameters

• content: The text content.
• encoding: Encoding format of the text.

    Examples

    import typing as t
    from pathlib import Path
    from starwhale import Text

    def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]:
    root_dir = Path(__file__).parent.parent / "data"

    with (root_dir / "fra-test.txt").open("r") as f:
    for line in f.readlines():
    line = line.strip()
    if not line or line.startswith("CC-BY"):
    continue

    _data, _label, *_ = line.split("\t")
    data = Text(_data, encoding="utf-8")
    annotations = {"label": _label}
    yield data, annotations

    Text Functions

Text.to_bytes

to_bytes(encoding: str = "utf-8") -> bytes

    Text.carry_raw_data

    carry_raw_data() -> Text

    Text.astype

    astype() -> Dict[str, t.Any]

    Text.to_str

    to_str() -> str

    Binary

    Binary provides a binary data type, stored as bytes.

    Binary(
    fp: _TArtifactFP = "",
    mime_type: MIMEType = MIMEType.UNDEFINED,
    )
Parameters

• fp: Path, IO object, or file content bytes.
• mime_type: MIMEType supported types.
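Examples

A minimal sketch wrapping raw bytes as a Binary artifact; the payload is an arbitrary sample value:

from starwhale import Binary, MIMEType

data = Binary(b"\x00\x01\x02", mime_type=MIMEType.UNDEFINED)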

    Binary Functions

Binary.to_bytes

to_bytes(encoding: str = "utf-8") -> bytes

    Binary.carry_raw_data

    carry_raw_data() -> Binary

    Binary.astype

    astype() -> Dict[str, t.Any]

Link

Link provides a link type to create remote-link datasets in Starwhale.

    Link(
    uri: str,
    auth: Optional[LinkAuth] = DefaultS3LinkAuth,
    offset: int = 0,
    size: int = -1,
    data_type: Optional[BaseArtifact] = None,
    )
Parameters

• uri: URI of the original data, currently supports localFS and S3 protocols.
• auth: Link auth information.
• offset: Data offset relative to the file pointed to by uri.
• size: Data size.
• data_type: Actual data type pointed to by the link, currently supports Binary, Image, Text, Audio and Video.
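Examples

A minimal sketch linking to remote data instead of copying it into the dataset; the S3 URI and the image shape are placeholder values:

from starwhale import Link, Image

link = Link(
    uri="s3://bucket/path/to/image.png",
    offset=0,
    size=-1,
    data_type=Image(shape=(28, 28, 3)),
)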

    Link.astype

    astype() -> Dict[str, t.Any]

    MIMEType

    MIMEType describes the multimedia types supported by Starwhale, implemented using Python Enum. It is used in the mime_type attribute of Image, Video etc to enable better Dataset Viewer support.

    class MIMEType(Enum):
    PNG = "image/png"
    JPEG = "image/jpeg"
    WEBP = "image/webp"
    SVG = "image/svg+xml"
    GIF = "image/gif"
    APNG = "image/apng"
    AVIF = "image/avif"
    PPM = "image/x-portable-pixmap"
    MP4 = "video/mp4"
    AVI = "video/avi"
    WEBM = "video/webm"
    WAV = "audio/wav"
    MP3 = "audio/mp3"
    PLAIN = "text/plain"
    CSV = "text/csv"
    HTML = "text/html"
    GRAYSCALE = "x/grayscale"
    UNDEFINED = "x/undefined"

    Line

from starwhale import dataset, Point, Line

with dataset("collections") as ds:
    line_points = [
        Point(x=0.0, y=1.0),
        Point(x=0.0, y=100.0),
    ]
    ds.append({"line": line_points})
    ds.commit()

    Point

from starwhale import dataset, Point

with dataset("collections") as ds:
    ds.append(Point(x=0.0, y=100.0))
    ds.commit()

    Polygon

from starwhale import dataset, Point, Polygon

with dataset("collections") as ds:
    polygon_points = [
        Point(x=0.0, y=1.0),
        Point(x=0.0, y=100.0),
        Point(x=2.0, y=1.0),
        Point(x=2.0, y=100.0),
    ]
    ds.append({"polygon": polygon_points})
    ds.commit()
    Version: 0.6.6

    swcli dataset

    Overview

    swcli [GLOBAL OPTIONS] dataset [OPTIONS] <SUBCOMMAND> [ARGS]...

    The dataset command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • head
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • summary
    • tag

    swcli dataset build

    swcli [GLOBAL OPTIONS] dataset build [OPTIONS]

Build a Starwhale Dataset. This command only supports building standalone datasets.

    Options

• Data source options:

• -if or --image or --image-folder (String, optional): Build dataset from an image folder; the folder should contain the image files.
• -af or --audio or --audio-folder (String, optional): Build dataset from an audio folder; the folder should contain the audio files.
• -vf or --video or --video-folder (String, optional): Build dataset from a video folder; the folder should contain the video files.
• -h or --handler or --python-handler (String, optional): Build dataset from a python executor handler; the handler format is [module path]:[class or func name].
• -f or --yaml or --dataset-yaml (optional, default: dataset.yaml in cwd): Build dataset from a dataset.yaml file. Defaults to the dataset.yaml in the work directory (cwd).
• -jf or --json (String, optional): Build dataset from a json or jsonl file; the option is a json file path or an http download url. The json content structure should be a list[dict] or tuple[dict].
• -hf or --huggingface (String, optional): Build dataset from a huggingface dataset; the option is a huggingface repo name.
• -c or --csv (String, optional): Build dataset from csv files. The option is a csv file path, dir path or an http download url. The option can be used multiple times.

Data source options are mutually exclusive; only one option is accepted. If none is set, the swcli dataset build command will use dataset yaml mode to build the dataset with the dataset.yaml in the cwd.

    • Other options:
• -pt or --patch (Boolean, Global, default: True; one of --patch and --overwrite is required): Patch mode, only update the changed rows and columns for the existing dataset.
• -ow or --overwrite (Boolean, Global, default: False; one of --patch and --overwrite is required): Overwrite mode, update records and delete extraneous rows from the existing dataset.
• -n or --name (String, Global, optional): Dataset name.
• -p or --project (String, Global, optional, default: the currently selected project): Project URI; the dataset will be stored in the specified project.
• -d or --desc (String, Global, optional): Dataset description.
• -as or --alignment-size (String, Global, optional, default: 128B): swds-bin format dataset: alignment size.
• -vs or --volume-size (String, Global, optional, default: 64MB): swds-bin format dataset: volume size.
• -r or --runtime (String, Global, optional): Runtime URI.
• -w or --workdir (String, Python Handler Mode, optional, default: cwd): Work dir to search the handler.
• --auto-label/--no-auto-label (Boolean, Image/Video/Audio Folder Mode, optional, default: True): Whether to auto label by the sub-folder name.
• --field-selector (String, JSON File Mode, optional): The field from which you would like to extract dataset array items. The field is split by the dot(.) symbol.
• --subset (String, Huggingface Mode, optional): Huggingface dataset subset name. If the subset name is not specified, all subsets will be built.
• --split (String, Huggingface Mode, optional): Huggingface dataset split name. If the split name is not specified, all splits will be built.
• --revision (String, Huggingface Mode, optional, default: main): Version of the dataset script to load. Defaults to 'main'. The option value accepts a tag name, branch name, or commit hash.
• --add-hf-info/--no-add-hf-info (Boolean, Huggingface Mode, optional, default: True): Whether to add huggingface dataset info to the dataset rows; currently supports adding subset and split into the dataset rows. Subset uses the _hf_subset field name, split uses the _hf_split field name.
• --cache/--no-cache (Boolean, Huggingface Mode, optional, default: True): Whether to use the huggingface dataset cache (download + local hf dataset).
• -t or --tag (String, Global, optional): Dataset tags; the option can be used multiple times.
• --encoding (String, CSV/JSON/JSONL Mode, optional): File encoding.
• --dialect (String, CSV Mode, optional, default: excel): The csv file dialect, the default is excel. Currently supports excel, excel-tab and unix formats.
• --delimiter (String, CSV Mode, optional, default: ','): A one-character string used to separate fields for the csv file.
• --quotechar (String, CSV Mode, optional, default: '"'): A one-character string used to quote fields containing special characters, such as the delimiter or quotechar, or which contain new-line characters.
• --skipinitialspace/--no-skipinitialspace (Bool, CSV Mode, optional, default: False): Whether to skip spaces after the delimiter for the csv file.
• --strict/--no-strict (Bool, CSV Mode, optional, default: False): When True, raise an exception if the csv is not well formed.

    Examples for dataset building

    #- from dataset.yaml
    swcli dataset build # build dataset from dataset.yaml in the current work directory(pwd)
    swcli dataset build --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, all the involved files are related to the dataset.yaml file.
    swcli dataset build --overwrite --yaml /path/to/dataset.yaml # build dataset from /path/to/dataset.yaml, and overwrite the existed dataset.
    swcli dataset build --tag tag1 --tag tag2

    #- from handler
    swcli dataset build --handler mnist.dataset:iter_mnist_item # build dataset from mnist.dataset:iter_mnist_item handler, the workdir is the current work directory(pwd).
    # build dataset from mnist.dataset:LinkRawDatasetProcessExecutor handler, the workdir is example/mnist
    swcli dataset build --handler mnist.dataset:LinkRawDatasetProcessExecutor --workdir example/mnist

    #- from image folder
    swcli dataset build --image-folder /path/to/image/folder # build dataset from /path/to/image/folder, search all image type files.

    #- from audio folder
    swcli dataset build --audio-folder /path/to/audio/folder # build dataset from /path/to/audio/folder, search all audio type files.

    #- from video folder
    swcli dataset build --video-folder /path/to/video/folder # build dataset from /path/to/video/folder, search all video type files.

    #- from json/jsonl file
    swcli dataset build --json /path/to/example.json
    swcli dataset build --json http://example.com/example.json
    swcli dataset build --json /path/to/example.json --field-selector a.b.c # extract the json_content["a"]["b"]["c"] field from the json file.
    swcli dataset build --name qald9 --json https://raw.githubusercontent.com/ag-sc/QALD/master/9/data/qald-9-test-multilingual.json --field-selector questions
    swcli dataset build --json /path/to/test01.jsonl --json /path/to/test02.jsonl
    swcli dataset build --json https://modelscope.cn/api/v1/datasets/damo/100PoisonMpts/repo\?Revision\=master\&FilePath\=train.jsonl

    #- from huggingface dataset
    swcli dataset build --huggingface mnist
    swcli dataset build -hf mnist --no-cache
    swcli dataset build -hf cais/mmlu --subset anatomy --split auxiliary_train --revision 7456cfb

    #- from csv files
    swcli dataset build --csv /path/to/example.csv
    swcli dataset build --csv /path/to/example.csv --csv-file /path/to/example2.csv
    swcli dataset build --csv /path/to/csv-dir
    swcli dataset build --csv http://example.com/example.csv
    swcli dataset build --name product-desc-modelscope --csv https://modelscope.cn/api/v1/datasets/lcl193798/product_description_generation/repo\?Revision\=master\&FilePath\=test.csv --encoding=utf-8-sig

    swcli dataset copy

    swcli [GLOBAL OPTIONS] dataset copy [OPTIONS] <SRC> <DEST>

    dataset copy copies from SRC to DEST.

    SRC and DEST are both dataset URIs.

When copying a Starwhale Dataset, all custom user-defined tags will be copied by default. You can use the --ignore-tag option to ignore certain tags. In addition, the latest and ^v\d+$ tags are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

• --force or -f (Boolean, optional, default: False): If true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update the tags to this version.
• -p or --patch (Boolean, default: True; one of --patch and --overwrite is required): Patch mode, only update the changed rows and columns for the remote dataset.
• -o or --overwrite (Boolean, default: False; one of --patch and --overwrite is required): Overwrite mode, update records and delete extraneous rows from the remote dataset.
• -i or --ignore-tag (String, optional): Tags to ignore when copying. The option can be used multiple times.

    Examples for dataset copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a new dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp --patch cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with the cloud instance dataset name 'mnist-cloud'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local default project(self) with a dataset name 'mnist-local'
    swcli dataset cp --overwrite cloud://pre-k8s/project/dataset/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud dataset to local project(myproject) with a dataset name 'mnist-local'
    swcli dataset cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with a new dataset name 'mnist-cloud'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local dataset to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli dataset cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local dataset to cloud instance(pre-k8s) mnist project with standalone instance dataset name 'mnist-local'
    swcli dataset cp local/project/myproject/dataset/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli dataset cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1 --force

    swcli dataset diff

    swcli [GLOBAL OPTIONS] dataset diff [OPTIONS] <DATASET VERSION> <DATASET VERSION>

    dataset diff compares the difference between two versions of the same dataset.

    DATASET VERSION is a dataset URI.

    OptionRequiredTypeDefaultsDescription
    --show-detailsNBooleanFalseIf true, outputs the detail information.
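
    The example below is a sketch based on the options above; the mnist dataset name and the v1/v2 version names are placeholders.

    #- compare two versions of the mnist dataset and show the details
    swcli dataset diff mnist/version/v1 mnist/version/v2 --show-details
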
    swcli dataset head

    swcli [GLOBAL OPTIONS] dataset head [OPTIONS] <DATASET VERSION>

    Print the first n rows of the dataset. DATASET VERSION is a dataset URI.

    OptionRequiredTypeDefaultsDescription
    -n or --rowsNInt5Print the first NUM rows of the dataset.
    -srd or --show-raw-dataNBooleanFalseFetch raw data content from objectstore.
    -st or --show-typesNBooleanFalseShow data types.

    Examples for dataset head

    #- print the first 5 rows of the mnist dataset
    swcli dataset head -n 5 mnist

    #- print the first 10 rows of the mnist(v0 version) dataset and show raw data
    swcli dataset head -n 10 mnist/v0 --show-raw-data

    #- print the data types of the mnist dataset
    swcli dataset head mnist --show-types

    #- print the remote cloud dataset's first 5 rows
    swcli dataset head cloud://cloud-cn/project/test/dataset/mnist -n 5

    #- print the first 5 rows in the json format
    swcli -o json dataset head -n 5 mnist

    swcli dataset history

    swcli [GLOBAL OPTIONS] dataset history [OPTIONS] <DATASET>

    dataset history outputs all history versions of the specified Starwhale Dataset.

    DATASET is a dataset URI.

    OptionRequiredTypeDefaultsDescription
    --fullnameNBooleanFalseShow the full version name. Only the first 12 characters are shown if this option is false.
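
    A sketch of a possible invocation; the mnist dataset name is a placeholder.

    #- show all versions of the mnist dataset with full version names
    swcli dataset history mnist --fullname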

    swcli dataset info

    swcli [GLOBAL OPTIONS] dataset info [OPTIONS] <DATASET>

    dataset info outputs detailed information about the specified Starwhale Dataset version.

    DATASET is a dataset URI.
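
    A sketch of possible invocations; the mnist dataset name is a placeholder.

    #- show detailed info about the latest version of the mnist dataset
    swcli dataset info mnist/version/latest
    #- output the info in json format
    swcli -o json dataset info mnist/version/latest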

    swcli dataset list

    swcli [GLOBAL OPTIONS] dataset list [OPTIONS]

    dataset list shows all Starwhale Datasets.

    OptionRequiredTypeDefaultsDescription
    --projectNStringThe URI of the project to list. Use the default project if not specified.
    --fullnameNBooleanFalseShow the full version name. Only the first 12 characters are shown if this option is false.
    --show-removed or -srNBooleanFalseIf true, include datasets that are removed but not garbage collected.
    --pageNInteger1The starting page number. Server and cloud instances only.
    --sizeNInteger20The number of items in one page. Server and cloud instances only.
    --filter or -flNStringShow only Starwhale Datasets that match specified filters. This option can be used multiple times in one command.
    FilterTypeDescriptionExample
    nameKey-ValueThe name prefix of datasets--filter name=mnist
    ownerKey-ValueThe dataset owner name--filter owner=starwhale
    latestFlagIf specified, it shows only the latest version.--filter latest
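
    A sketch of possible invocations built from the options and filters above; the mnist name prefix is a placeholder.

    #- list datasets in the default project
    swcli dataset list
    #- list only the latest versions of datasets whose names start with 'mnist'
    swcli dataset list --filter name=mnist --filter latest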

    swcli dataset recover

    swcli [GLOBAL OPTIONS] dataset recover [OPTIONS] <DATASET>

    dataset recover recovers previously removed Starwhale Datasets or versions.

    DATASET is a dataset URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Datasets or versions cannot be recovered, nor can those removed with the --force option.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, overwrite the Starwhale Dataset or version with the same name or version id.
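
    A sketch of possible invocations; the mnist dataset name and the xxxx version id are placeholders.

    #- recover all removed versions of the mnist dataset
    swcli dataset recover mnist
    #- recover a specific removed version, overwriting an existing one if necessary
    swcli dataset recover mnist/version/xxxx --force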

    swcli dataset remove

    swcli [GLOBAL OPTIONS] dataset remove [OPTIONS] <DATASET>

    dataset remove removes the specified Starwhale Dataset or version.

    DATASET is a dataset URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Datasets or versions can be recovered by swcli dataset recover before garbage collection. Use the --force option to persistently remove a Starwhale Dataset or version.

    Removed Starwhale Datasets or versions can be listed by swcli dataset list --show-removed.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, persistently delete the Starwhale Dataset or version. It can not be recovered.
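
    A sketch of possible invocations; the mnist dataset name is a placeholder.

    #- remove a specific version of the mnist dataset; it can be recovered before garbage collection
    swcli dataset remove mnist/version/v0
    #- permanently remove all versions of the mnist dataset
    swcli dataset remove mnist --force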

    swcli dataset summary

    swcli [GLOBAL OPTIONS]  dataset summary <DATASET>

    Show dataset summary. DATASET is a dataset URI.
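
    A sketch of possible invocations; the mnist dataset name is a placeholder.

    swcli dataset summary mnist
    swcli dataset summary mnist/version/latest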

    swcli dataset tag

    swcli [GLOBAL OPTIONS] dataset tag [OPTIONS] <DATASET> [TAGS]...

    dataset tag attaches a tag to a specified Starwhale Dataset version. The tag command also supports listing and removing tags. A tag can be used in a dataset URI in place of the version id.

    DATASET is a dataset URI.

    Each dataset version can have any number of tags, but duplicated tag names are not allowed in the same dataset.

    dataset tag only works for the Standalone Instance.

    OptionRequiredTypeDefaultsDescription
    --remove or -rNBooleanFalseremove the tag if true
    --quiet or -qNBooleanFalseignore errors, for example, removing tags that do not exist.
    --force-add or -fNBooleanFalseWhen adding tags on server/cloud instances, an error is reported if the tag is already used by another dataset version. In this case, you can force the update with the --force-add option.

    Examples for dataset tag

    #- list tags of the mnist dataset
    swcli dataset tag mnist

    #- add tags for the mnist dataset
    swcli dataset tag mnist t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist/version/latest t1 --force-add
    swcli dataset tag mnist t1 --quiet

    #- remove tags for the mnist dataset
    swcli dataset tag mnist -r t1 t2
    swcli dataset tag cloud://cloud.starwhale.cn/project/public:starwhale/dataset/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/index.html b/0.6.6/reference/swcli/index.html index 3f2f8926e..265584c2a 100644 --- a/0.6.6/reference/swcli/index.html +++ b/0.6.6/reference/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Overview

    Usage

    swcli [OPTIONS] <COMMAND> [ARGS]...
    note

    sw and starwhale are aliases for swcli.

    Global Options

    OptionDescription
    --versionShow the Starwhale Client version
    -v or --verboseShow verbose logs. The option can be repeated; more -v options produce more logs.
    --helpShow the help message.
    caution

    Global options must be put immediately after swcli, and before any command.
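
    A sketch of how global options are placed; repeated -v flags increase verbosity.

    #- global options come right after swcli, before any command
    swcli -vvv dataset list
    swcli --version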

    Commands

    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/instance/index.html b/0.6.6/reference/swcli/instance/index.html index d46ff47b9..0dac23c71 100644 --- a/0.6.6/reference/swcli/instance/index.html +++ b/0.6.6/reference/swcli/instance/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    swcli instance

    Overview

    swcli [GLOBAL OPTIONS] instance [OPTIONS] <SUBCOMMAND> [ARGS]

    The instance command includes the following subcommands:

    • info
    • list (ls)
    • login
    • logout
    • use (select)

    swcli instance info

    swcli [GLOBAL OPTIONS] instance info [OPTIONS] <INSTANCE>

    instance info outputs detailed information about the specified Starwhale Instance.

    INSTANCE is an instance URI.

    swcli instance list

    swcli [GLOBAL OPTIONS] instance list [OPTIONS]

    instance list shows all Starwhale Instances.

    swcli instance login

    swcli [GLOBAL OPTIONS] instance login [OPTIONS] <INSTANCE>

    instance login connects to a Server/Cloud instance and makes the specified instance default.

    INSTANCE is an instance URI.

    OptionRequiredTypeDefaultsDescription
    --usernameNStringThe login username.
    --passwordNStringThe login password.
    --tokenNStringThe login token.
    --aliasYStringThe alias of the instance. You can use it anywhere that requires an instance URI.

    --username and --password can not be used together with --token.
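
    A sketch of possible logins; the server address, credentials, and aliases are placeholders.

    #- log in with username and password and register the instance under the alias 'server1'
    swcli instance login --username starwhale --password abcd1234 --alias server1 http://127.0.0.1:8082
    #- log in with a token instead of username and password
    swcli instance login --token <your token> --alias cloud-cn https://cloud.starwhale.cn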

    swcli instance logout

    swcli [GLOBAL OPTIONS] instance logout [INSTANCE]

    instance logout disconnects from the Server/Cloud instance, and clears information stored in the local storage.

    INSTANCE is an instance URI. If it is omitted, the default instance is used instead.

    swcli instance use

    swcli [GLOBAL OPTIONS] instance use <INSTANCE>

    instance use makes the specified instance the default.

    INSTANCE is an instance URI.
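
    A sketch of possible invocations; 'pre-k8s' is a placeholder alias of a previously logged-in instance.

    #- switch the default instance to the standalone instance
    swcli instance use local
    #- switch to a server/cloud instance by its alias
    swcli instance use pre-k8s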

    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/job/index.html b/0.6.6/reference/swcli/job/index.html index 179d4982d..9f5d0f165 100644 --- a/0.6.6/reference/swcli/job/index.html +++ b/0.6.6/reference/swcli/job/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    swcli job

    Overview

    swcli [GLOBAL OPTIONS] job [OPTIONS] <SUBCOMMAND> [ARGS]...

    The job command includes the following subcommands:

    • cancel
    • info
    • list(ls)
    • pause
    • recover
    • remove(rm)
    • resume

    swcli job cancel

    swcli [GLOBAL OPTIONS] job cancel [OPTIONS] <JOB>

    job cancel stops the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, kill the Starwhale Job by force.

    swcli job info

    swcli [GLOBAL OPTIONS] job info [OPTIONS] <JOB>

    job info outputs detailed information about the specified Starwhale Job.

    JOB is a job URI.

    swcli job list

    swcli [GLOBAL OPTIONS] job list [OPTIONS]

    job list shows all Starwhale Jobs.

    OptionRequiredTypeDefaultsDescription
    --projectNStringThe URI of the project to list. Use the default project if not specified.
    --show-removed or -srNBooleanFalseIf true, include jobs that are removed but not garbage collected.
    --pageNInteger1The starting page number. Server and cloud instances only.
    --sizeNInteger20The number of items in one page. Server and cloud instances only.
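
    A sketch of possible invocations; the project name is a placeholder.

    #- list jobs in the default project
    swcli job list
    #- list jobs in a specific project, including removed ones
    swcli job list --project local/project/myproject --show-removed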

    swcli job pause

    swcli [GLOBAL OPTIONS] job pause [OPTIONS] <JOB>

    job pause pauses the specified job. Paused jobs can be resumed by job resume. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.

    From Starwhale's perspective, pause is almost the same as cancel, except that the job reuses the old job id when resumed. It is the job developer's responsibility to save all data periodically and load it when the job is resumed. The job id is usually used as the key of the checkpoint.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, kill the Starwhale Job by force.

    swcli job resume

    swcli [GLOBAL OPTIONS] job resume [OPTIONS] <JOB>

    job resume resumes the specified job. On Standalone instance, this command only takes effect for containerized jobs.

    JOB is a job URI.
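
    A sketch of the pause/resume/cancel flow. The xxxx job id is a placeholder, and the URI form assumes jobs follow the same local/project/<project>/job/<id> pattern as other resources.

    swcli job pause local/project/self/job/xxxx
    swcli job resume local/project/self/job/xxxx
    swcli job cancel local/project/self/job/xxxx --force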

    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/model/index.html b/0.6.6/reference/swcli/model/index.html index c827c14da..0d0158654 100644 --- a/0.6.6/reference/swcli/model/index.html +++ b/0.6.6/reference/swcli/model/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    swcli model

    Overview

    swcli [GLOBAL OPTIONS] model [OPTIONS] <SUBCOMMAND> [ARGS]...

    The model command includes the following subcommands:

    • build
    • copy(cp)
    • diff
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • run
    • serve
    • tag

    swcli model build

    swcli [GLOBAL OPTIONS] model build [OPTIONS] <WORKDIR>

    model build will put the whole WORKDIR into the model, except files that match patterns defined in .swignore.

    model build will import modules specified by --module to generate the required configurations to run the model. If your module depends on third-party libraries, we strongly recommend you use the --runtime option; otherwise, you need to ensure that the python environment used by swcli has these libraries installed.

    OptionRequiredTypeDefaultsDescription
    --project or -pNStringthe default projectthe project URI
    --model-yaml or -fNString${workdir}/model.yamlmodel yaml path, default use ${workdir}/model.yaml file. model.yaml is optional for model build.
    --module or -mNStringPython modules to be imported during the build process. Starwhale will export model handlers from these modules to the model package. This option can be set multiple times.
    --runtime or -rNStringthe URI of the Starwhale Runtime to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in the swcli's current python environment.
    --name or -nNStringmodel package name
    --desc or -dNStringmodel package description
    --package-runtime / --no-package-runtimeNBooleanTrueWhen using the --runtime option, the corresponding Starwhale runtime becomes the built-in runtime of the Starwhale model by default. This behavior can be disabled with the --no-package-runtime option.
    --add-allNBooleanFalseAdd all files in the working directory to the model package. When this option is disabled, Python cache files and virtual environment files are excluded. The .swignore file still takes effect.
    -t or --tagNGlobalString

    Examples for model build

    # build by the model.yaml in current directory and model package will package all the files from the current directory.
    swcli model build .
    # search model run decorators from mnist.evaluate, mnist.train and mnist.predict modules, then package all the files from the current directory to model package.
    swcli model build . --module mnist.evaluate --module mnist.train --module mnist.predict
    # build model package in the Starwhale Runtime environment.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1
    # forbid to package Starwhale Runtime into the model.
    swcli model build . --module mnist.evaluate --runtime pytorch/version/v1 --no-package-runtime
    # build model package with tags.
    swcli model build . --tag tag1 --tag tag2

    swcli model copy

    swcli [GLOBAL OPTIONS] model copy [OPTIONS] <SRC> <DEST>

    model copy copies from SRC to DEST for Starwhale Model sharing.

    SRC and DEST are both model URIs.

    When copying Starwhale Model, all custom user-defined tags will be copied by default. You can use the --ignore-tag option to ignore certain tags. In addition, latest and ^v\d+$ are Starwhale built-in tags that are only used within the instance itself and will not be copied to other instances.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update those tags to this version.
    -i or --ignore-tagNStringIgnore tags to copy. The option can be used multiple times.

    Examples for model copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a new model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with the cloud instance model name 'mnist-cloud'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local default project(self) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/model/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud model to local project(myproject) with a model name 'mnist-local'
    swcli model cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with a new model name 'mnist-cloud'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local model to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli model cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local model to cloud instance(pre-k8s) mnist project with standalone instance model name 'mnist-local'
    swcli model cp local/project/myproject/model/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli model cp mnist cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli model diff

    swcli [GLOBAL OPTIONS] model diff [OPTIONS] <MODEL VERSION> <MODEL VERSION>

    model diff compares the difference between two versions of the same model.

    MODEL VERSION is a model URI.

    OptionRequiredTypeDefaultsDescription
    --show-detailsNBooleanFalseIf true, outputs the detail information.
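
    A sketch of a possible invocation; the mnist model name and the v0/v1 versions are placeholders.

    #- compare two versions of the mnist model and show the details
    swcli model diff mnist/version/v0 mnist/version/v1 --show-details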

    swcli model extract

    swcli [GLOBAL OPTIONS] model extract [OPTIONS] <MODEL> <TARGET_DIR>

    The model extract command can extract a Starwhale model to a specified directory for further customization.

    MODEL is a model URI.

    OptionRequiredTypeDefaultDescription
    --force or -fNBooleanFalseIf this option is used, it will forcibly overwrite existing extracted model files in the target directory.

    Examples for model extract

    #- extract mnist model package to current directory
    swcli model extract mnist/version/xxxx .

    #- extract mnist model package to current directory and force to overwrite the files
    swcli model extract mnist/version/xxxx . -f

    swcli model history

    swcli [GLOBAL OPTIONS] model history [OPTIONS] <MODEL>

    model history outputs all history versions of the specified Starwhale Model.

    MODEL is a model URI.

    OptionRequiredTypeDefaultsDescription
    --fullnameNBooleanFalseShow the full version name. Only the first 12 characters are shown if this option is false.
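
    A sketch of a possible invocation; the mnist model name is a placeholder.

    #- show all versions of the mnist model with full version names
    swcli model history mnist --fullname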

    swcli model info

    swcli [GLOBAL OPTIONS] model info [OPTIONS] <MODEL>

    model info outputs detailed information about the specified Starwhale Model version.

    MODEL is a model URI.

    OptionRequiredTypeDefaultsDescription
    --output-filter or -ofNChoice of [basic/model_yaml/manifest/files/handlers/all]basicFilter the output content. Only standalone instance supports this option.

    Examples for model info

    swcli model info mnist # show basic info from the latest version of model
    swcli model info mnist/version/v0 # show basic info from the v0 version of model
    swcli model info mnist/version/latest --output-filter=all # show all info
    swcli model info mnist -of basic # show basic info
    swcli model info mnist -of model_yaml # show model.yaml
    swcli model info mnist -of handlers # show model runnable handlers info
    swcli model info mnist -of files # show model package files tree
    swcli -o json model info mnist -of all # show all info in json format

    swcli model list

    swcli [GLOBAL OPTIONS] model list [OPTIONS]

    model list shows all Starwhale Models.

    OptionRequiredTypeDefaultsDescription
    --projectNStringThe URI of the project to list. Use the default project if not specified.
    --fullnameNBooleanFalseShow the full version name. Only the first 12 characters are shown if this option is false.
    --show-removedNBooleanFalseIf true, include packages that are removed but not garbage collected.
    --pageNInteger1The starting page number. Server and cloud instances only.
    --sizeNInteger20The number of items in one page. Server and cloud instances only.
    --filter or -flNStringShow only Starwhale Models that match specified filters. This option can be used multiple times in one command.
    FilterTypeDescriptionExample
    nameKey-ValueThe name prefix of models--filter name=mnist
    ownerKey-ValueThe model owner name--filter owner=starwhale
    latestFlagIf specified, it shows only the latest version.--filter latest

    swcli model recover

    swcli [GLOBAL OPTIONS] model recover [OPTIONS] <MODEL>

    model recover recovers previously removed Starwhale Models or versions.

    MODEL is a model URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Models or versions cannot be recovered, nor can those removed with the --force option.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, overwrite the Starwhale Model or version with the same name or version id.
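
    A sketch of possible invocations; the mnist model name and the xxxx version id are placeholders.

    #- recover all removed versions of the mnist model
    swcli model recover mnist
    #- recover a specific removed version, overwriting an existing one if necessary
    swcli model recover mnist/version/xxxx --force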

    swcli model remove

    swcli [GLOBAL OPTIONS] model remove [OPTIONS] <MODEL>

    model remove removes the specified Starwhale Model or version.

    MODEL is a model URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Models or versions can be recovered by swcli model recover before garbage collection. Use the --force option to persistently remove a Starwhale Model or version.

    Removed Starwhale Models or versions can be listed by swcli model list --show-removed.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, persistently delete the Starwhale Model or version. It can not be recovered.
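
    A sketch of possible invocations; the mnist model name is a placeholder.

    #- remove a specific version of the mnist model; it can be recovered before garbage collection
    swcli model remove mnist/version/v0
    #- permanently remove all versions of the mnist model
    swcli model remove mnist --force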

    swcli model run

    swcli [GLOBAL OPTIONS] model run [OPTIONS]

    model run executes a model handler. Model run supports two modes: model URI and local development. Model URI mode needs a pre-built Starwhale Model package. Local development mode only needs the model source directory.

    OptionRequiredTypeDefaultsDescription
    --workdir or -wNStringFor local development mode, the path of the model source directory.
    --uri or -uNStringFor model URI mode, the model URI string.
    --handler or -hNStringRunnable handler index or name. The default is None, which means the first handler is used.
    --module or -mNStringThe name of the Python module to import. This parameter can be set multiple times.
    --runtime or -rNStringthe Starwhale Runtime URI to use when running this command. If this option is used, this command will run in an independent python environment specified by the Starwhale Runtime; otherwise, it will run directly in the swcli's current python environment.
    --model-yaml or -fNString${MODEL_DIR}/model.yamlThe path to the model.yaml. model.yaml is optional for model run.
    --run-project or -pNStringDefault projectProject URI, indicates the model run results will be stored in the corresponding project.
    --dataset or -dNStringDataset URI, the Starwhale dataset required for model running. This parameter can be set multiple times.
    --dataset-head or -dhNInteger0[ONLY STANDALONE] For debugging purposes, every prediction task will consume at most the first n rows from every dataset. When the value is less than or equal to 0, all samples will be used.
    --in-containerNBooleanFalseUse docker container to run the model. This option is only available for standalone instances. For server and cloud instances, a docker image is always used. If the runtime is a docker image, this option is always implied.
    --forbid-snapshot or -fsNBooleanFalseIn model URI mode, each model run uses a new snapshot directory. Setting this option will directly use the model's workdir as the run directory. In local dev mode, this option does not take effect; each run uses the directory specified by --workdir.
    -- --user-arbitrary-argsNStringSpecify the args you defined in your handlers.

    Examples for model run

    # --> run by model uri
    # run the first handler from model uri
    swcli model run -u mnist/version/latest
    # run index id(1) handler from model uri
    swcli model run --uri mnist/version/latest --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from model uri
    swcli model run --uri mnist/version/latest --handler mnist.evaluator:MNISTInference.cmp

    # --> run from the working directory, without building a model package first. This makes local debugging easy.
    # run the first handler from the working directory, use the model.yaml in the working directory
    swcli model run -w .
    # run index id(1) handler from the working directory, search mnist.evaluator module and model.yaml handlers(if existed) to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler 1
    # run index fullname(mnist.evaluator:MNISTInference.cmp) handler from the working directory, search mnist.evaluator module to get runnable handlers
    swcli model run --workdir . --module mnist.evaluator --handler mnist.evaluator:MNISTInference.cmp
    # run the f handler in th.py from the working directory with the args defined in th:f
    # @handler()
    # def f(
    # x=ListInput(IntInput()),
    # y=2,
    # mi=MyInput(),
    # ds=DatasetInput(required=True),
    # ctx=ContextInput(),
    # )
    swcli model run -w . -m th --handler th:f -- -x 2 -x=1 --mi=blab-la --ds mnist

    # --> run with dataset of head 10
    swcli model run --uri mnist --dataset-head 10 --dataset mnist

    swcli model serve

    swcli [GLOBAL OPTIONS] model serve [OPTIONS]

    The model serve command can run the model as a web server, and provide a simple web interaction interface.

    OptionRequiredTypeDefaultsDescription
    --workdir or -wNStringIn local dev mode, specify the directory of the model code.
    --uri or -uNStringIn model URI mode, specify the model URI.
    --runtime or -rNStringThe URI of the Starwhale runtime to use when running this command. If specified, the command will run in the isolated Python environment defined in the Starwhale runtime. Otherwise it will run directly in the current Python environment of swcli.
    --model-yaml or -fNString${MODEL_DIR}/model.yamlThe path to the model.yaml. model.yaml is optional for model serve.
    --module or -mNStringName of the Python module to import. This parameter can be set multiple times.
    --hostNString127.0.0.1The address for the service to listen on.
    --portNInteger8080The port for the service to listen on.

    Examples for model serve

    swcli model serve -u mnist
    swcli model serve --uri mnist/version/latest --runtime pytorch/version/latest

    swcli model serve --workdir . --runtime pytorch/version/v0
    swcli model serve --workdir . --runtime pytorch/version/v1 --host 0.0.0.0 --port 8080
    swcli model serve --workdir . --runtime pytorch --module mnist.evaluator

    swcli model tag

    swcli [GLOBAL OPTIONS] model tag [OPTIONS] <MODEL> [TAGS]...

    model tag attaches a tag to a specified Starwhale Model version. The tag command also supports listing and removing tags. A tag can be used in a model URI in place of the version id.

    MODEL is a model URI.

    Each model version can have any number of tags, but duplicated tag names are not allowed in the same model.

    model tag only works for the Standalone Instance.

    OptionRequiredTypeDefaultsDescription
    --remove or -rNBooleanFalseremove the tag if true
    --quiet or -qNBooleanFalseignore errors, for example, removing tags that do not exist.
    --force-add or -fNBooleanFalseWhen adding tags on server/cloud instances, an error is reported if the tag is already used by another model version. In this case, you can force the update with the --force-add option.

    Examples for model tag

    #- list tags of the mnist model
    swcli model tag mnist

    #- add tags for the mnist model
    swcli model tag mnist t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist/version/latest t1 --force-add
    swcli model tag mnist t1 --quiet

    #- remove tags for the mnist model
    swcli model tag mnist -r t1 t2
    swcli model tag cloud://cloud.starwhale.cn/project/public:starwhale/model/mnist --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/project/index.html b/0.6.6/reference/swcli/project/index.html index 62cc00d55..8fccb7a39 100644 --- a/0.6.6/reference/swcli/project/index.html +++ b/0.6.6/reference/swcli/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    swcli project

    Overview

    swcli [GLOBAL OPTIONS] project [OPTIONS] <SUBCOMMAND> [ARGS]...

    The project command includes the following subcommands:

    • create(add, new)
    • info
    • list(ls)
    • recover
    • remove(rm)
    • use(select)

    swcli project create

    swcli [GLOBAL OPTIONS] project create <PROJECT>

    project create creates a new project.

    PROJECT is a project URI.
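
    A sketch of possible invocations; the project name and the 'pre-k8s' instance alias are placeholders.

    #- create a project in the standalone instance
    swcli project create myproject
    #- create a project on a logged-in server instance
    swcli project create cloud://pre-k8s/project/myproject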

    swcli project info

    swcli [GLOBAL OPTIONS] project info [OPTIONS] <PROJECT>

    project info outputs detailed information about the specified Starwhale Project.

    PROJECT is a project URI.

    swcli project list

    swcli [GLOBAL OPTIONS] project list [OPTIONS]

    project list shows all Starwhale Projects.

    OptionRequiredTypeDefaultsDescription
    --instanceNStringThe URI of the instance to list. If this option is omitted, use the default instance.
    --show-removedNBooleanFalseIf true, include projects that are removed but not garbage collected.
    --pageNInteger1The starting page number. Server and cloud instances only.
    --sizeNInteger20The number of items in one page. Server and cloud instances only.
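
    A sketch of possible invocations; the 'pre-k8s' instance alias is a placeholder.

    #- list projects in the default instance
    swcli project list
    #- list projects in a specific instance, including removed ones
    swcli project list --instance pre-k8s --show-removed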

    swcli project recover

    swcli [GLOBAL OPTIONS] project recover [OPTIONS] <PROJECT>

    project recover recovers previously removed Starwhale Projects.

    PROJECT is a project URI.

    Garbage-collected Starwhale Projects cannot be recovered, nor can those removed with the --force option.

    swcli project remove

    swcli [GLOBAL OPTIONS] project remove [OPTIONS] <PROJECT>

    project remove removes the specified Starwhale Project.

    PROJECT is a project URI.

    Removed Starwhale Projects can be recovered by swcli project recover before garbage collection. Use the --force option to persistently remove a Starwhale Project.

    Removed Starwhale Projects can be listed by swcli project list --show-removed.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, persistently delete the Starwhale Project. It can not be recovered.

    swcli project use

    swcli [GLOBAL OPTIONS] project use <PROJECT>

    project use makes the specified project the default. You must log in first to use a project on a Server/Cloud instance.
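
    A sketch of possible invocations; the project names and the 'pre-k8s' instance alias are placeholders.

    #- use a standalone project as the default
    swcli project use myproject
    #- use a project on a logged-in server instance as the default
    swcli project use cloud://pre-k8s/project/mnist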

    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/runtime/index.html b/0.6.6/reference/swcli/runtime/index.html index dbb65d018..ebc9ccb78 100644 --- a/0.6.6/reference/swcli/runtime/index.html +++ b/0.6.6/reference/swcli/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    swcli runtime

    Overview

    swcli [GLOBAL OPTIONS] runtime [OPTIONS] <SUBCOMMAND> [ARGS]...

    The runtime command includes the following subcommands:

    • activate(actv)
    • build
    • copy(cp)
    • dockerize
    • extract
    • history
    • info
    • list(ls)
    • recover
    • remove(rm)
    • tag

    swcli runtime activate

    swcli [GLOBAL OPTIONS] runtime activate [OPTIONS] <RUNTIME>

    Like source venv/bin/activate or conda activate xxx, runtime activate sets up a new Python environment according to the settings of the specified runtime. When the current shell is closed or switched to another one, you need to reactivate the runtime. RUNTIME is a Runtime URI.

    If you want to quit the activated runtime environment, run deactivate in the venv environment or conda deactivate in the conda environment.

    The runtime activate command will build an isolated Python environment and download the relevant Python packages according to the definition of the Starwhale runtime when activating the environment for the first time. This process may take a long time.
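
    A sketch of possible invocations; the pytorch runtime name is a placeholder.

    #- activate the pytorch runtime
    swcli runtime activate pytorch
    #- activate a specific version of the pytorch runtime
    swcli runtime activate pytorch/version/latest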

    swcli runtime build

    swcli [GLOBAL OPTIONS] runtime build [OPTIONS]

    The runtime build command can build a shareable and reproducible runtime environment suitable for ML/DL from various environments or from a runtime.yaml file.

    Parameters

    • Parameters related to runtime building methods:
    OptionRequiredTypeDefaultsDescription
    -c or --condaNStringFind the corresponding conda environment by conda env name, export Python dependencies to generate Starwhale runtime.
    -cp or --conda-prefixNStringFind the corresponding conda environment by conda env prefix path, export Python dependencies to generate Starwhale runtime.
    -v or --venvNStringFind the corresponding venv environment by venv directory address, export Python dependencies to generate Starwhale runtime.
    -s or --shellNStringExport Python dependencies according to current shell environment to generate Starwhale runtime.
    -y or --yamlNruntime.yaml in cwd directoryBuild Starwhale runtime according to user-defined runtime.yaml.
    -d or --dockerNStringUse the docker image as Starwhale runtime.

    The parameters for runtime building methods are mutually exclusive; only one method can be specified. If none is specified, the --yaml method is used to read runtime.yaml in the current working directory to build the Starwhale runtime.

    • Other parameters:
    OptionRequiredScopeTypeDefaultsDescription
    --project or -pNGlobalStringDefault projectProject URI
    -del or --disable-env-lockNruntime.yaml modeBooleanFalseIf set, do not install the dependencies in runtime.yaml or lock the version information of related dependencies. Dependencies are locked by default.
    -nc or --no-cacheNruntime.yaml modeBooleanFalseIf set, delete the isolated environment and install related dependencies from scratch. By default, dependencies are installed in the existing isolated environment.
    --cudaNconda/venv/shell modeChoice[11.3/11.4/11.5/11.6/11.7/]CUDA version, CUDA will not be used by default.
    --cudnnNconda/venv/shell modeChoice[8/]cuDNN version, cuDNN will not be used by default.
    --archNconda/venv/shell modeChoice[amd64/arm64/noarch]noarchArchitecture
    -dpo or --dump-pip-optionsNGlobalBooleanFalseDump pip config options from the ~/.pip/pip.conf file.
    -dcc or --dump-condarcNGlobalBooleanFalseDump conda config from the ~/.condarc file.
    -t or --tagNGlobalStringRuntime tags, the option can be used multiple times.

    Examples for Starwhale Runtime building

    #- from runtime.yaml:
    swcli runtime build # use the current directory as the workdir and use the default runtime.yaml file
    swcli runtime build -y example/pytorch/runtime.yaml # use example/pytorch/runtime.yaml as the runtime.yaml file
    swcli runtime build --yaml runtime.yaml # use runtime.yaml at the current directory as the runtime.yaml file
    swcli runtime build --tag tag1 --tag tag2

    #- from conda name:
    swcli runtime build -c pytorch # lock pytorch conda environment and use `pytorch` as the runtime name
    swcli runtime build --conda pytorch --name pytorch-runtime # use `pytorch-runtime` as the runtime name
    swcli runtime build --conda pytorch --cuda 11.4 # specify the cuda version
    swcli runtime build --conda pytorch --arch noarch # specify the system architecture

    #- from conda prefix path:
    swcli runtime build --conda-prefix /home/starwhale/anaconda3/envs/pytorch # get conda prefix path by `conda info --envs` command

    #- from venv prefix path:
    swcli runtime build -v /home/starwhale/.virtualenvs/pytorch
    swcli runtime build --venv /home/starwhale/.local/share/virtualenvs/pytorch --arch amd64

    #- from docker image:
    swcli runtime build --docker pytorch/pytorch:1.9.0-cuda11.1-cudnn8-runtime # use the docker image as the runtime directly

    #- from shell:
    swcli runtime build -s --cuda 11.4 --cudnn 8 # specify the cuda and cudnn version
    swcli runtime build --shell --name pytorch-runtime # lock the current shell environment and use `pytorch-runtime` as the runtime name

    swcli runtime copy

    swcli [GLOBAL OPTIONS] runtime copy [OPTIONS] <SRC> <DEST>

    runtime copy copies from SRC to DEST. SRC and DEST are both Runtime URIs.

    When copying Starwhale Runtime, all custom user-defined tags will be copied by default. You can use the --ignore-tag option to ignore certain tags. In addition, latest and ^v\d+$ are built-in Starwhale system tags that are only used within the instance itself and will not be copied to other instances.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, DEST will be overwritten if it exists. In addition, if the tags carried during copying have already been used by other versions, this option can be used to forcibly update those tags to this version.
    -i or --ignore-tagNStringIgnore tags to copy. The option can be used multiple times.

    Examples for Starwhale Runtime copy

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a new runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq local/project/myproject/mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq .

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with the cloud instance runtime name 'mnist-cloud'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq . -dlp myproject

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local default project(self) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/runtime/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local

    #- copy cloud instance(pre-k8s) mnist project's mnist-cloud runtime to local project(myproject) with a runtime name 'mnist-local'
    swcli runtime cp cloud://pre-k8s/project/mnist/mnist-cloud/version/ge3tkylgha2tenrtmftdgyjzni3dayq mnist-local -dlp myproject

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with a new runtime name 'mnist-cloud'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist/mnist-cloud

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy standalone instance(local) default project(self)'s mnist-local runtime to cloud instance(pre-k8s) mnist project without 'cloud://' prefix
    swcli runtime cp mnist-local/version/latest pre-k8s/project/mnist

    #- copy standalone instance(local) project(myproject)'s mnist-local runtime to cloud instance(pre-k8s) mnist project with standalone instance runtime name 'mnist-local'
    swcli runtime cp local/project/myproject/runtime/mnist-local/version/latest cloud://pre-k8s/project/mnist

    #- copy without some tags
    swcli runtime cp pytorch cloud://cloud.starwhale.cn/project/starwhale:public --ignore-tag t1

    swcli runtime dockerize

    swcli [GLOBAL OPTIONS] runtime dockerize [OPTIONS] <RUNTIME>

    runtime dockerize generates a docker image based on the specified runtime. Starwhale uses docker buildx to create the image. Docker 19.03 or later is required to run this command.

    RUNTIME is a Runtime URI.

    OptionRequiredTypeDefaultsDescription
    --tag or -tNStringThe tag of the docker image. This option can be repeated multiple times.
    --pushNBooleanFalseIf true, push the image to the docker registry
    --platformNStringamd64The target platform, which can be either amd64 or arm64. This option can be repeated multiple times to create a multi-platform image.
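
    A sketch of possible invocations; the pytorch runtime name and the image tag are placeholders.

    #- build a docker image for the pytorch runtime
    swcli runtime dockerize pytorch/version/latest --tag mycustom.com/starwhale/pytorch:latest
    #- build a multi-platform image and push it to the registry
    swcli runtime dockerize pytorch/version/latest --tag mycustom.com/starwhale/pytorch:latest --platform amd64 --platform arm64 --push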

    swcli runtime extract

    swcli [GLOBAL OPTIONS] runtime extract [OPTIONS] <RUNTIME>

    Starwhale runtimes are distributed as compressed packages. The runtime extract command can be used to extract the runtime package for further customization and modification.

    OptionRequiredTypeDefaultDescription
    --force or -fNBooleanFalseWhether to delete and re-extract if there is already an extracted Starwhale runtime in the target directory.
    --target-dirNStringCustom extraction directory. If not specified, it will be extracted to the default Starwhale runtime workdir. The command log will show the directory location.
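
    A sketch of possible invocations; the pytorch runtime name and the target directory are placeholders.

    #- extract the pytorch runtime package to a custom directory
    swcli runtime extract pytorch/version/latest --target-dir ./pytorch-runtime
    #- re-extract and overwrite an existing extraction
    swcli runtime extract pytorch/version/latest --target-dir ./pytorch-runtime --force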

    swcli runtime history

    swcli [GLOBAL OPTIONS] runtime history [OPTIONS] <RUNTIME>

    runtime history outputs all history versions of the specified Starwhale Runtime.

    RUNTIME is a Runtime URI.

    OptionRequiredTypeDefaultsDescription
    --fullnameNBooleanFalseShow the full version name. Only the first 12 characters are shown if this option is false.
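
    A sketch of a possible invocation; the pytorch runtime name is a placeholder.

    #- show all versions of the pytorch runtime with full version names
    swcli runtime history pytorch --fullname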

    swcli runtime info

    swcli [GLOBAL OPTIONS] runtime info [OPTIONS] <RUNTIME>

    runtime info outputs detailed information about a specified Starwhale Runtime version.

    RUNTIME is a Runtime URI.

    OptionRequiredTypeDefaultsDescription
    --output-filter or -ofNChoice of [basic/runtime_yaml/manifest/lock/all]basicFilter the output content. Only standalone instance supports this option.

    Examples for Starwhale Runtime info

    swcli runtime info pytorch # show basic info from the latest version of runtime
    swcli runtime info pytorch/version/v0 # show basic info
    swcli runtime info pytorch/version/v0 --output-filter basic # show basic info
    swcli runtime info pytorch/version/v1 -of runtime_yaml # show runtime.yaml content
    swcli runtime info pytorch/version/v1 -of lock # show auto lock file content
    swcli runtime info pytorch/version/v1 -of manifest # show _manifest.yaml content
    swcli runtime info pytorch/version/v1 -of all # show all info of the runtime

    swcli runtime list

    swcli [GLOBAL OPTIONS] runtime list [OPTIONS]

    runtime list shows all Starwhale Runtimes.

    OptionRequiredTypeDefaultsDescription
    --projectNStringThe URI of the project to list. Use the default project if not specified.
    --fullnameNBooleanFalseShow the full version name. Only the first 12 characters are shown if this option is false.
    --show-removed or -srNBooleanFalseIf true, include runtimes that are removed but not garbage collected.
    --pageNInteger1The starting page number. Server and cloud instances only.
    --sizeNInteger20The number of items in one page. Server and cloud instances only.
    --filter or -flNStringShow only Starwhale Runtimes that match specified filters. This option can be used multiple times in one command.
    FilterTypeDescriptionExample
    nameKey-ValueThe name prefix of runtimes--filter name=pytorch
    ownerKey-ValueThe runtime owner name--filter owner=starwhale
    latestFlagIf specified, it shows only the latest version.--filter latest
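
    A sketch of possible invocations built from the options and filters above; the pytorch name prefix is a placeholder.

    #- list runtimes in the default project
    swcli runtime list
    #- list only the latest versions of runtimes whose names start with 'pytorch'
    swcli runtime list --filter name=pytorch --filter latest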

    swcli runtime recover

    swcli [GLOBAL OPTIONS] runtime recover [OPTIONS] <RUNTIME>

    runtime recover can recover previously removed Starwhale Runtimes or versions.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all removed versions are recovered.

    Garbage-collected Starwhale Runtimes or versions cannot be recovered, nor can those removed with the --force option.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, overwrite the Starwhale Runtime or version with the same name or version id.

    swcli runtime remove

    swcli [GLOBAL OPTIONS] runtime remove [OPTIONS] <RUNTIME>

    runtime remove removes the specified Starwhale Runtime or version.

    RUNTIME is a Runtime URI. If the version part of the URI is omitted, all versions are removed.

    Removed Starwhale Runtimes or versions can be recovered by swcli runtime recover before garbage collection. Use the --force option to persistently remove a Starwhale Runtime or version.

    Removed Starwhale Runtimes or versions can be listed by swcli runtime list --show-removed.

    OptionRequiredTypeDefaultsDescription
    --force or -fNBooleanFalseIf true, persistently delete the Starwhale Runtime or version. It can not be recovered.
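
    A sketch of the remove/recover flow; the pytorch runtime name and the v1 version are placeholders.

    #- remove a specific version of the pytorch runtime
    swcli runtime remove pytorch/version/v1
    #- list removed runtimes and recover the version before garbage collection
    swcli runtime list --show-removed
    swcli runtime recover pytorch/version/v1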

    swcli runtime tag

    swcli [GLOBAL OPTIONS] runtime tag [OPTIONS] <RUNTIME> [TAGS]...

    runtime tag attaches a tag to a specified Starwhale Runtime version. The tag command also supports listing and removing tags. A tag can be used in a runtime URI in place of the version id.

    RUNTIME is a Runtime URI.

    Each runtime version can have any number of tags, but duplicated tag names are not allowed in the same runtime.

    runtime tag only works for the Standalone Instance.

    OptionRequiredTypeDefaultsDescription
    --remove or -rNBooleanFalseRemove the tag if true
    --quiet or -qNBooleanFalseIgnore errors, for example, removing tags that do not exist.
    --force-add or -fNBooleanFalseWhen adding tags on server/cloud instances, an error is reported if the tag is already used by another runtime version. In this case, you can force the update with the --force-add option.

    Examples for runtime tag

    #- list tags of the pytorch runtime
    swcli runtime tag pytorch

    #- add tags for the pytorch runtime
    swcli runtime tag mnist t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch/version/latest t1 --force-add
    swcli runtime tag mnist t1 --quiet

    #- remove tags for the pytorch runtime
    swcli runtime tag mnist -r t1 t2
    swcli runtime tag cloud://cloud.starwhale.cn/project/public:starwhale/runtime/pytorch --remove t1
    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/server/index.html b/0.6.6/reference/swcli/server/index.html index 112609c35..874cddcfe 100644 --- a/0.6.6/reference/swcli/server/index.html +++ b/0.6.6/reference/swcli/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    swcli server

    Overview

    swcli [GLOBAL OPTIONS] server <SUBCOMMAND> [ARGS]...

    The server command includes the following subcommands:

    • start
    • stop
    • status (ps)

    swcli server start

    swcli [GLOBAL OPTIONS] server start [OPTIONS]

    The server start command uses Docker and Docker-Compose to quickly start the Starwhale Server in a local environment.

    • Requirements: Docker >= 19.03, Docker-Compose >= v2. You can use the swcli check command to check.
    • You need to use swcli server stop to shut down the Starwhale Server.
    • For containers started by server start, the restart policy is restart=always. Even if the machine restarts, related containers will start automatically.
    • server start renders docker compose yaml files in the ~/.starwhale/.server directory. docker compose commands can use this file for richer operations like viewing logs: docker compose -f ~/.starwhale/.server/docker-compose.yaml logs -f.

    Options

    OptionRequiredTypeDefaultsDescription
    -h or --hostNString127.0.0.1IP address bound by the Starwhale Server startup port, default is 127.0.0.1. If you want other machines to access it, you can set it to 0.0.0.0
    -p or --portNInt8082Port bound by the Starwhale Server.
    -e or --envNStringSet environment variables for Starwhale Server startup or runtime use, e.g. the SW_PYPI_INDEX_URL and SW_PYPI_EXTRA_INDEX_URL environment variables can change the Starwhale Server's PyPI source.
    -i or --server-imageNStringDocker image for the Starwhale Server. If not specified, the Starwhale Server image corresponding to the swcli command line version will be used.
    --detach/--no-detachNBool--detachRun Starwhale Server in the background.
    --dry-runNBoolFalseRender the compose yaml file and dry-run docker compose.

    Server start examples

    # Start Starwhale Server with default settings, then you can visit http://127.0.0.1:8082 to use Starwhale Server.
    swcli server start

    # Start Starwhale Server with custom Server image.
    swcli server start -i docker-registry.starwhale.cn/star-whale/server:latest

    # Start Starwhale Server with custom host and port.
    swcli server start --port 18082 --host 0.0.0.0

    # Start Starwhale Server in the foreground and custom environment variables for pypi.
    swcli server start --no-detach -e SW_PYPI_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple -e SW_PYPI_EXTRA_INDEX_URL=https://mirrors.aliyun.com/pypi/simple

    swcli server stop

    swcli [GLOBAL OPTIONS] server stop

    The server stop command will stop containers started by swcli server start and close the Starwhale Server service.

    swcli server status

    swcli [GLOBAL OPTIONS] server status

    The server status command shows the status of Starwhale Server related containers. The swcli server ps command has the same effect.

    - - + + \ No newline at end of file diff --git a/0.6.6/reference/swcli/utilities/index.html b/0.6.6/reference/swcli/utilities/index.html index cde7f3cb7..0972860be 100644 --- a/0.6.6/reference/swcli/utilities/index.html +++ b/0.6.6/reference/swcli/utilities/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Utility Commands

    swcli gc

    swcli [GLOBAL OPTIONS] gc [OPTIONS]

    gc clears removed projects, models, datasets, and runtimes according to the internal garbage collection policy.

    OptionRequiredTypeDefaultsDescription
    --dry-runNBooleanFalseIf true, outputs objects to be removed instead of clearing them.
    --yesNBooleanFalseBypass confirmation prompts.
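
    A sketch of possible invocations based on the options above.

    #- preview what would be removed without actually removing anything
    swcli gc --dry-run
    #- run garbage collection without confirmation prompts
    swcli gc --yes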

    swcli check

    swcli [GLOBAL OPTIONS] check

    Check if the external dependencies of the swcli command meet the requirements. Currently mainly checks Docker and Conda.

    swcli completion install

    swcli [GLOBAL OPTIONS] completion install <SHELL_NAME>

    Install autocompletion for swcli commands. Currently supports bash, zsh and fish. If SHELL_NAME is not specified, it will try to automatically detect the current shell type.
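
    A sketch of possible invocations.

    #- detect the current shell automatically
    swcli completion install
    #- install completion for a specific shell
    swcli completion install zsh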

    swcli config edit

    swcli [GLOBAL OPTIONS] config edit

    Edit the Starwhale configuration file at ~/.config/starwhale/config.yaml.

    swcli ui

    swcli [GLOBAL OPTIONS] ui <INSTANCE>

    Open the web page for the corresponding instance.
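
    A sketch of a possible invocation; 'pre-k8s' is a placeholder alias of a previously logged-in instance.

    swcli ui pre-k8s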

    - - + + \ No newline at end of file diff --git a/0.6.6/runtime/index.html b/0.6.6/runtime/index.html index 8ec19a171..cbfe38a6f 100644 --- a/0.6.6/runtime/index.html +++ b/0.6.6/runtime/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Starwhale Runtime

    overview

    Overview

    Starwhale Runtime aims to provide a reproducible and sharable running environment for python programs. You can easily share your working environment with your teammates or outsiders, and vice versa. Furthermore, you can run your programs on Starwhale Server or Starwhale Cloud without bothering with the dependencies.

    Starwhale works well with virtualenv, conda, and docker. If you are using one of them, it is straightforward to create a Starwhale Runtime based on your current environment.

    Multiple Starwhale Runtimes on your local machine can be switched freely with one command. You can work on different projects without messing up the environment. Starwhale Runtime consists of two parts: the base image and the dependencies.

    The base image

    The base is a docker image with Python, CUDA, and cuDNN installed. Starwhale provides various base images for you to choose from; see the following list:

    • Computer system architecture:
      • X86 (amd64)
      • Arm (aarch64)
    • Operating system:
      • Ubuntu 20.04 LTS (ubuntu:20.04)
    • Python:
      • 3.7
      • 3.8
      • 3.9
      • 3.10
      • 3.11
    • CUDA:
      • CUDA 11.3 + cuDNN 8.4
      • CUDA 11.4 + cuDNN 8.4
      • CUDA 11.5 + cuDNN 8.4
      • CUDA 11.6 + cuDNN 8.4
      • CUDA 11.7
    - - + + \ No newline at end of file diff --git a/0.6.6/runtime/yaml/index.html b/0.6.6/runtime/yaml/index.html index bc3ed23b7..8ea626dfc 100644 --- a/0.6.6/runtime/yaml/index.html +++ b/0.6.6/runtime/yaml/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    The runtime.yaml Specification

    runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. runtime.yaml is required for the yaml mode of the swcli runtime build command.

    Examples

    The simplest example

    dependencies:
      - pip:
          - numpy
    name: simple-test

    Define a Starwhale Runtime that uses venv as the Python virtual environment for package isolation, and installs the numpy dependency.
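
    A runtime can then be built from this file with the swcli runtime build command described above; a sketch, assuming runtime.yaml sits in the current directory:

    swcli runtime build --yaml runtime.yaml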

    The llama2 example

    name: llama2
    mode: venv
    environment:
      arch: noarch
      os: ubuntu:20.04
      cuda: 11.7
      python: "3.10"
    dependencies:
      - pip:
          - torch
          - fairscale
          - fire
          - sentencepiece
          - gradio >= 3.37.0
          # external starwhale dependencies
          - starwhale[serve] >= 0.5.5

    The full definition example

    # [required]The name of Starwhale Runtime
    name: demo
    # [optional]The mode of Starwhale Runtime: venv or conda. Default is venv.
    mode: venv
    # [optional]The configurations of pip and conda.
    configs:
      # If you do not use conda, ignore this field.
      conda:
        condarc: # custom condarc config file
          channels:
            - defaults
          show_channel_urls: true
          default_channels:
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/r
            - https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/msys2
          custom_channels:
            conda-forge: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            pytorch-lts: https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud
            nvidia: https://mirrors.aliyun.com/anaconda/cloud
          ssl_verify: false
          default_threads: 10
      pip:
        # pip config set global.index-url
        index_url: https://example.org/
        # pip config set global.extra-index-url
        extra_index_url: https://another.net/
        # pip config set install.trusted-host
        trusted_host:
          - example.org
          - another.net
    # [optional] The definition of the environment.
    environment:
      # Now it must be ubuntu:20.04
      os: ubuntu:20.04
      # CUDA version. possible values: 11.3, 11.4, 11.5, 11.6, 11.7
      cuda: 11.4
      # Python version. possible values: 3.7, 3.8, 3.9, 3.10, 3.11
      python: 3.8
      # Define your custom base image
      docker:
        image: mycustom.com/docker/image:tag
    # [required] The dependencies of the Starwhale Runtime.
    dependencies:
      # If this item is present, conda env create -f conda.yml will be executed
      - conda.yaml
      # If this item is present, pip install -r requirements.txt will be executed before installing other pip packages
      - requirements.txt
      # Packages to be installed with conda. venv mode will ignore the conda field.
      - conda:
          - numpy
          - requests
      # Packages to be installed with pip. The format is the same as requirements.txt
      - pip:
          - pillow
          - numpy
          - deepspeed==0.9.0
          - safetensors==0.3.0
          - transformers @ git+https://github.com/huggingface/transformers.git@3c3108972af74246bc3a0ecf3259fd2eafbacdef
          - peft @ git+https://github.com/huggingface/peft.git@fcff23f005fc7bfb816ad1f55360442c170cd5f5
          - accelerate @ git+https://github.com/huggingface/accelerate.git@eba6eb79dc2ab652cd8b44b37165a4852768a8ac
      # Additional wheels packages to be installed when restoring the runtime
      - wheels:
          - dummy-0.0.0-py3-none-any.whl
      # Additional files to be included in the runtime
      - files:
          - dest: bin/prepare.sh
            name: prepare
            src: scripts/prepare.sh
      # Run some custom commands
      - commands:
          - apt-get install -y libgl1
          - touch /tmp/runtime-command-run.flag
    - - + + \ No newline at end of file diff --git a/0.6.6/server/guides/server_admin/index.html b/0.6.6/server/guides/server_admin/index.html index 9f9465f17..7be07102e 100644 --- a/0.6.6/server/guides/server_admin/index.html +++ b/0.6.6/server/guides/server_admin/index.html @@ -10,14 +10,14 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Controller Admin Settings

    Superuser Password Reset

    In case you forget the superuser's password, you can use the SQL statement below to reset it to abcd1234:

    update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1

    After that, you can log in to the console and change the password to whatever you want.
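
    For example, assuming the metadata database uses the default name starwhale (see the SW_METADATA_STORAGE_DB setting in the environment example later in this guide), the statement above could be applied with the standard mysql client. Host and user below are placeholders for your own deployment:

    mysql -h <mysql-host> -P 3306 -u <mysql-user> -p starwhale -e \
      "update user_info set user_pwd='ee9533077d01d2d65a4efdb41129a91e', user_pwd_salt='6ea18d595773ccc2beacce26' where id=1;"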

    System Settings

    You can customize the system to make it easier to use by leveraging the system settings. Here is an example:

    dockerSetting:
      registryForPull: "docker-registry.starwhale.cn/star-whale"
      registryForPush: ""
      userName: ""
      password: ""
      insecure: true
    pypiSetting:
      indexUrl: ""
      extraIndexUrl: ""
      trustedHost: ""
      retries: 10
      timeout: 90
    imageBuild:
      resourcePool: ""
      image: ""
      clientVersion: ""
      pythonVersion: ""
    datasetBuild:
      resourcePool: ""
      image: ""
      clientVersion: ""
      pythonVersion: ""
    resourcePoolSetting:
      - name: "default"
        nodeSelector: null
        resources:
          - name: "cpu"
            max: null
            min: null
            defaults: 5.0
          - name: "memory"
            max: null
            min: null
            defaults: 3145728.0
          - name: "nvidia.com/gpu"
            max: null
            min: null
            defaults: null
        tolerations: null
        metadata: null
        isPrivate: null
        visibleUserIds: null
    storageSetting:
      - type: "minio"
        tokens:
          bucket: "users"
          ak: "starwhale"
          sk: "starwhale"
          endpoint: "http://10.131.0.1:9000"
          region: "local"
          hugeFileThreshold: "10485760"
          hugeFilePartSize: "5242880"
      - type: "s3"
        tokens:
          bucket: "users"
          ak: "starwhale"
          sk: "starwhale"
          endpoint: "http://10.131.0.1:9000"
          region: "local"
          hugeFileThreshold: "10485760"
          hugeFilePartSize: "5242880"

    Image Registry

    Tasks dispatched by the server are based on Docker images. Pulling these images can be slow if your network connection is poor. Starwhale Server supports custom image registries, including dockerSetting.registryForPull and dockerSetting.registryForPush.

    Resource Pool

    The resourcePoolSetting allows you to manage your cluster in groups. It is currently implemented with the Kubernetes nodeSelector: you can label the machines in your Kubernetes cluster and group them into a resourcePool in Starwhale.
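
    For instance, a GPU pool could be created by labeling the GPU nodes and pointing the pool's nodeSelector at that label. The node names and the label key/value below are only illustrative:

    # label the GPU nodes in the Kubernetes cluster (node names are placeholders)
    kubectl label node gpu-node-01 gpu-node-02 starwhale-pool=gpu

    # then, in the matching resourcePoolSetting entry, set:
    #   nodeSelector:
    #     starwhale-pool: "gpu"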

    Remote Storage

    The storageSetting allows you to manage the storage services that the server can access.

    storageSetting:
      - type: s3
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://s3.region.amazonaws.com # optional
            region: region of the service # required when endpoint is empty
            hugeFileThreshold: 10485760 # files larger than 10MB use multipart upload
            hugeFilePartSize: 5242880 # 5MB part size for multipart upload
      - type: minio
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://10.131.0.1:9000 # required
            region: local # optional
            hugeFileThreshold: 10485760 # files larger than 10MB use multipart upload
            hugeFilePartSize: 5242880 # 5MB part size for multipart upload
      - type: aliyun
        tokens:
          - bucket: starwhale # required
            ak: access_key # required
            sk: secret_key # required
            endpoint: http://10.131.0.2:9000 # required
            region: local # optional
            hugeFileThreshold: 10485760 # files larger than 10MB use multipart upload
            hugeFilePartSize: 5242880 # 5MB part size for multipart upload

    Every storageSetting item has a corresponding implementation of the StorageAccessService interface. Starwhale has four built-in implementations:

    • StorageAccessServiceAliyun matches type in (aliyun,oss)
    • StorageAccessServiceMinio matches type in (minio)
    • StorageAccessServiceS3 matches type in (s3)
    • StorageAccessServiceFile matches type in (fs, file)

    Each implementation has different requirements for tokens: endpoint is required when the type is aliyun or minio; region is required when the type is s3 and endpoint is empty; the fs/file type requires tokens named rootDir and serviceProvider. Please refer to the code for more details.

    - - + + \ No newline at end of file diff --git a/0.6.6/server/index.html b/0.6.6/server/index.html index 0b1f43d20..ccc3d1095 100644 --- a/0.6.6/server/index.html +++ b/0.6.6/server/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/docker-compose/index.html b/0.6.6/server/installation/docker-compose/index.html index 39cbaaef6..f9026d24b 100644 --- a/0.6.6/server/installation/docker-compose/index.html +++ b/0.6.6/server/installation/docker-compose/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Install Starwhale Server with Docker Compose

    Prerequisites

    Usage

    Start up the server

    wget https://raw.githubusercontent.com/star-whale/starwhale/main/docker/compose/compose.yaml
    GLOBAL_IP=${your_accessible_ip_for_server} ; docker compose up

    GLOBAL_IP is the IP address of the Controller, which must be reachable by every swcli, both inside Docker containers and on other user machines.

    compose.yaml contains the Starwhale Controller, MySQL, and MinIO services. You can create a compose.override.yaml file which, as its name implies, contains configuration overrides for compose.yaml. The available configurations are specified here.
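
    As a sketch, a compose.override.yaml could be used to publish the Controller on a different host port. The service name server and the container port 8082 are assumptions; verify them against the downloaded compose.yaml and the referenced configuration list:

    cat > compose.override.yaml <<'EOF'
    services:
      server:
        ports:
          - "8083:8082"
    EOF

    GLOBAL_IP=${your_accessible_ip_for_server} ; docker compose up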

    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/docker/index.html b/0.6.6/server/installation/docker/index.html index 8ef768d86..d482bc5ed 100644 --- a/0.6.6/server/installation/docker/index.html +++ b/0.6.6/server/installation/docker/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Install Starwhale Server with Docker

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • A running MySQL 8.0+ instance to store metadata.
    • An S3-compatible object storage service to save datasets, models, and other artifacts.

    Please make sure pods on the Kubernetes cluster can access the port exposed by the Starwhale Server installation.

    Prepare an env file for Docker

    Starwhale Server can be configured by environment variables.

    An env file template for Docker is here. You may create your own env file by modifying the template.

    Prepare a kubeconfig file [Optional][SW_SCHEDULER=k8s]

    The kubeconfig file is used for accessing the Kubernetes cluster. For more information about kubeconfig files, see the Official Kubernetes Documentation.

    If you have a local kubectl command-line tool installed, you can run kubectl config view to see your current configuration.

    Run the Docker image

    docker run -it -d --name starwhale-server -p 8082:8082 \
    --restart unless-stopped \
    --mount type=bind,source=<path to your kubeconfig file>,destination=/root/.kube/config,readonly \
    --env-file <path to your env file> \
    ghcr.io/star-whale/server:0.5.6

    For users in the mainland of China, use docker image: docker-registry.starwhale.cn/star-whale/server.
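
    After starting the container, you can check that it stays healthy with standard Docker commands before opening http://<your-host>:8082 in a browser:

    docker ps --filter name=starwhale-server
    docker logs -f starwhale-server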

    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/index.html b/0.6.6/server/installation/index.html index bddf26029..b6bbf0ff0 100644 --- a/0.6.6/server/installation/index.html +++ b/0.6.6/server/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Starwhale Server Installation Guide

    Starwhale Server is delivered as a Docker image, which can be run with Docker directly or deployed to a Kubernetes cluster or Minikube.

    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/k8s-cluster/index.html b/0.6.6/server/installation/k8s-cluster/index.html index 7de97104b..54e0304ed 100644 --- a/0.6.6/server/installation/k8s-cluster/index.html +++ b/0.6.6/server/installation/k8s-cluster/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Install Starwhale Server to Kubernetes Cluster

    In a private deployment scenario, Starwhale Server can be deployed to a Kubernetes cluster using Helm. Starwhale Server relies on two fundamental infrastructure dependencies: MySQL database and object storage.

    • For production environments, it is recommended to provide an external, highly available MySQL database and object storage.
    • For trial or testing environments, the standalone versions of MySQL and MinIO, included in the Starwhale Charts, can be utilized.

    Prerequisites

    • A running Kubernetes 1.19+ cluster to run tasks.
    • Kubernetes Ingress provides HTTP(S) routing.
    • Helm 3.2.0+.
    • [Production Required] A running MySQL 8.0+ instance to store metadata.
    • [Production Required] An S3-compatible object storage system to save datasets, models, and other artifacts. Tested compatible object storage services include MinIO, AWS S3, and Aliyun OSS.

    Helm Charts

    Downloading Helm Charts

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update

    Editing values.yaml (production required)

    In a production environment, it is recommended to configure parameters like the MySQL database, object storage, domain names, and memory allocation by editing values.yaml based on actual deployment needs. Below is a sample values.yaml for reference:

    # Image registry setting. For mainland China, "docker-registry.starwhale.cn" is recommended;
    # other network environments can ignore this setting and will use ghcr.io: https://github.com/orgs/star-whale/packages.
    image:
      registry: docker-registry.starwhale.cn
      org: star-whale

    # External MySQL service required in production; the MySQL version needs to be 8.0 or greater.
    externalMySQL:
      host: 10.0.1.100 # Database IP address or domain that is accessible within the Kubernetes cluster
      port: 3306
      username: "your-username"
      password: "your-password"
      database: starwhale # The database must be created beforehand; the name can be chosen freely and the default charset is fine. The database user specified above needs read/write permissions on this database.

    # External S3-compatible object storage service required in production
    externalOSS:
      host: ks3-cn-beijing.ksyuncs.com # Object storage IP address or domain that is accessible from both the Kubernetes cluster and Standalone instances
      port: 80
      accessKey: "your-ak"
      secretKey: "your-sk"
      defaultBuckets: test-gp # The bucket must be created beforehand; the name can be chosen freely. The ak/sk specified above needs read/write permissions on this bucket.
      region: BEIJING # Region of the object storage service, defaults to local

    # If external object storage is specified in production, the built-in single-instance MinIO is not needed
    minio:
      enabled: false

    # If external MySQL is specified in production, the built-in single-instance MySQL is not needed
    mysql:
      enabled: false

    controller:
      containerPort: 8082
      storageType: "ksyun" # Type of object storage service: minio/s3/ksyun/baidu/tencent/aliyun

    ingress:
      enabled: true
      ingressClassName: nginx # Corresponds to the Ingress Controller in the Kubernetes cluster
      host: server-domain-name # Externally accessible domain name for the Server
      path: /

    # At least 32GB memory and 8 CPU cores are recommended for Starwhale Server in production
    resources:
      controller:
        limits:
          memory: 32G
          cpu: 8
        requests:
          memory: 32G
          cpu: 8

    # Downloading Python packages defined in a Starwhale Runtime requires a PyPI mirror matching your actual network environment.
    # This can also be modified later on the Server system settings page.
    mirror:
      pypi:
        enabled: true
        indexUrl: "https://mirrors.aliyun.com/pypi/simple/"
        extraIndexUrl: "https://pypi.tuna.tsinghua.edu.cn/simple/"
        trustedHost: "mirrors.aliyun.com pypi.tuna.tsinghua.edu.cn"

    Deploying/Upgrading Starwhale Server

    The following command can be used for both initial deployment and upgrades. It will automatically create a Kubernetes namespace called "starwhale". values.custom.yaml is the values.yaml file written according to the actual needs of the cluster.

    helm upgrade --devel --install starwhale starwhale/starwhale --namespace starwhale --create-namespace -f values.custom.yaml

    If you have a local kubectl command-line tool installed, you can run kubectl get pods -n starwhale to check if all pods are running.

    Uninstalling Starwhale Server

    helm delete starwhale --namespace starwhale
    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/minikube/index.html b/0.6.6/server/installation/minikube/index.html index 70ea33524..04fc0fcce 100644 --- a/0.6.6/server/installation/minikube/index.html +++ b/0.6.6/server/installation/minikube/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Install Starwhale Server with Minikube

    Prerequisites

    Starting Minikube

    minikube start --addons ingress

    For users in the mainland of China, please run the following commands:

    minikube start --kubernetes-version=1.25.3 --image-repository=docker-registry.starwhale.cn/minikube --base-image=docker-registry.starwhale.cn/minikube/k8s-minikube/kicbase:v0.0.42

    minikube addons enable ingress --images="KubeWebhookCertgenPatch=ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0,KubeWebhookCertgenCreate=ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0,IngressController=ingress-nginx/controller:v1.9.4"

    The docker registry docker-registry.starwhale.cn/minikube currently only caches the images for Kubernetes 1.25.3. Alternatively, you can use the Aliyun mirror:

    minikube start --image-mirror-country=cn

    minikube addons enable ingress --images="KubeWebhookCertgenPatch=kube-webhook-certgen:v20231011-8b53cabe0,KubeWebhookCertgenCreate=kube-webhook-certgen:v20231011-8b53cabe0,IngressController=nginx-ingress-controller:v1.9.4" --registries="KubeWebhookCertgenPatch=registry.cn-hangzhou.aliyuncs.com/google_containers,KubeWebhookCertgenCreate=registry.cn-hangzhou.aliyuncs.com/google_containers,IngressController=registry.cn-hangzhou.aliyuncs.com/google_containers"

    If there is no kubectl binary on your machine, you can use minikube kubectl or set the alias kubectl="minikube kubectl --".

    Installing Starwhale Server

    helm repo add starwhale https://star-whale.github.io/charts
    helm repo update
    helm pull starwhale/starwhale --untar --untardir ./charts

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.global.yaml

    For users in the mainland of China, use values.minikube.cn.yaml:

    helm upgrade --install starwhale ./charts/starwhale -n starwhale --create-namespace -f ./charts/starwhale/values.minikube.cn.yaml

    After the installation is successful, the following prompt message appears:

        Release "starwhale" has been upgraded. Happy Helming!
    NAME: starwhale
    LAST DEPLOYED: Tue Feb 14 16:25:03 2023
    NAMESPACE: starwhale
    STATUS: deployed
    REVISION: 14
    NOTES:
    ******************************************
    Chart Name: starwhale
    Chart Version: 0.5.6
    App Version: latest
    Starwhale Image:
    - server: ghcr.io/star-whale/server:latest

    ******************************************
    Controller:
    - visit: http://controller.starwhale.svc
    Minio:
    - web visit: http://minio.starwhale.svc
    - admin visit: http://minio-admin.starwhale.svc
    MySQL:
    - port-forward:
    - run: kubectl port-forward --namespace starwhale svc/mysql 3306:3306
    - visit: mysql -h 127.0.0.1 -P 3306 -ustarwhale -pstarwhale
    Please run the following command for the domains searching:
    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc " | sudo tee -a /etc/hosts
    ******************************************
    Login Info:
    - starwhale: u:starwhale, p:abcd1234
    - minio admin: u:minioadmin, p:minioadmin

    *_* Enjoy to use Starwhale Platform. *_*

    Checking Starwhale Server status

    Keep checking the minikube service status until all deployments are running (this usually takes 3~5 minutes):

    kubectl get deployments -n starwhale
    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    controller   1/1     1            1           5m
    minio        1/1     1            1           5m
    mysql        1/1     1            1           5m

    Visiting for local

    Make the Starwhale controller accessible locally with the following command:

    echo "$(sudo minikube ip) controller.starwhale.svc minio.starwhale.svc  minio-admin.starwhale.svc " | sudo tee -a /etc/hosts

    Then you can visit http://controller.starwhale.svc in your local web browser.

    Visiting for others

    • Step 1: in the Starwhale Server machine

      for temporary use with socat command:

      # install socat at first, ref: https://howtoinstall.co/en/socat
      sudo socat TCP4-LISTEN:80,fork,reuseaddr,bind=0.0.0.0 TCP4:`minikube ip`:80

      When you kill the socat process, the shared access will be blocked. iptables may be a better choice for long-term use.

    • Step 2: in the other machines

      # for macOS or Linux environments, run the command in the shell.
      echo "${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc" | sudo tee -a /etc/hosts

      # for Windows environment, run the command in the PowerShell with administrator permission.
      Add-Content -Path C:\Windows\System32\drivers\etc\hosts -Value "`n${your_machine_ip} controller.starwhale.svc minio.starwhale.svc minio-admin.starwhale.svc"
    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/server-start/index.html b/0.6.6/server/installation/server-start/index.html index ff430cf99..e2cdf6527 100644 --- a/0.6.6/server/installation/server-start/index.html +++ b/0.6.6/server/installation/server-start/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Launch Starwhale Server with the "swcli server start" command

    Prerequisites

    If you are unsure whether your dependencies meet the requirements, you can run the swcli check command to verify them. The normal output looks like this:

    ❯ swcli check
    ✅ Docker 24.0.7
    ✅ Docker Compose 2.21.0
    ✅ Conda 22.9.0

    Launch Starwhale Server

    swcli server start

    After executing the command, swcli pulls the Starwhale Server Docker image that matches the swcli version and starts the Starwhale Server related container services. Finally, it opens http://127.0.0.1:8008 in the browser, where you can log in to the Starwhale Server with the default username starwhale and password abcd1234.

    When the server is successfully started, you will see a prompt similar to the following:

    ❯ swcli server start
    🛸 render compose yaml file: /home/tianwei/.starwhale/.server/docker-compose.yaml
    🏓 start Starwhale Server by docker compose
    Container starwhale_local-db-1 Created
    Container starwhale_local-server-1 Recreate
    Container starwhale_local-server-1 Recreated
    Container starwhale_local-db-1 Starting
    Container starwhale_local-db-1 Started
    Container starwhale_local-db-1 Waiting
    Container starwhale_local-db-1 Healthy
    Container starwhale_local-server-1 Starting
    Container starwhale_local-server-1 Started
    Starwhale Server is running in the background.
    🍎 stop: swcli server stop
    🍌 check status: swcli server status
    🍉 more compose command: docker compose -f /home/tianwei/.starwhale/.server/docker-compose.yaml sub-command
    🥕 visit web:

    If there are any issues during the startup process, you can view the logs with the docker compose -f ~/.starwhale/.server/docker-compose.yaml logs command, or check the service status with the swcli server status command.
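
    Put together, a quick troubleshooting sequence might look like the following. The service name server in the logs command is an assumption based on the container names shown in the start output above:

    swcli server status
    docker compose -f ~/.starwhale/.server/docker-compose.yaml ps
    docker compose -f ~/.starwhale/.server/docker-compose.yaml logs -f server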

    Stop Starwhale Server

    swcli server stop

    After executing the command, it will stop the previously launched Starwhale Server service.

    - - + + \ No newline at end of file diff --git a/0.6.6/server/installation/starwhale_env/index.html b/0.6.6/server/installation/starwhale_env/index.html index 9f54a4cef..fa43f819d 100644 --- a/0.6.6/server/installation/starwhale_env/index.html +++ b/0.6.6/server/installation/starwhale_env/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Starwhale Server Environment Example

    ################################################################################
    # *** Required ***
    # The external Starwhale server URL. For example: https://cloud.starwhale.ai
    SW_INSTANCE_URI=

    # The listening port of Starwhale Server
    SW_CONTROLLER_PORT=8082

    # The maximum upload file size. This setting affects datasets and models uploading when copied from outside.
    SW_UPLOAD_MAX_FILE_SIZE=20480MB
    ################################################################################
    # The base URL of the Python Package Index to use when creating a runtime environment.
    SW_PYPI_INDEX_URL=http://10.131.0.1/repository/pypi-hosted/simple/

    # Extra URLs of package indexes to use in addition to the base url.
    SW_PYPI_EXTRA_INDEX_URL=

    # Space separated hostnames. When any host specified in the base URL or extra URLs does not have a valid SSL
    # certification, use this option to trust it anyway.
    SW_PYPI_TRUSTED_HOST=
    ################################################################################
    # The JWT token expiration time. When the token expires, the server will request the user to login again.
    SW_JWT_TOKEN_EXPIRE_MINUTES=43200

    # *** Required ***
    # The JWT secret key. All strings are valid, but we strongly recommend you to use a random string with at least 16 characters.
    SW_JWT_SECRET=
    ################################################################################
    # The scheduler controller to use. Valid values are:
    # docker: Controller schedule jobs by leveraging docker
    # k8s: Controller schedule jobs by leveraging Kubernetes
    SW_SCHEDULER=k8s

    # The Kubernetes namespace to use when running a task when SW_SCHEDULER is k8s
    SW_K8S_NAME_SPACE=default

    # The path on the Kubernetes host node's filesystem to cache Python packages. Use the setting only if you have
    # the permission to use host node's filesystem. The runtime environment setup process may be accelerated when the host
    # path cache is used. Leave it blank if you do not want to use it.
    SW_K8S_HOST_PATH_FOR_CACHE=

    # The ip for the containers created by Controller when SW_SCHEDULER is docker
    SW_DOCKER_CONTAINER_NODE_IP=127.0.0.1
    ###############################################################################
    # *** Required ***
    # The object storage system type. Valid values are:
    # s3: [AWS S3](https://aws.amazon.com/s3) or other s3-compatible object storage systems
    # aliyun: [Aliyun OSS](https://www.alibabacloud.com/product/object-storage-service)
    # minio: [MinIO](https://min.io)
    # file: Local filesystem
    SW_STORAGE_TYPE=

    # The path prefix for all data saved on the storage system.
    SW_STORAGE_PREFIX=
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is file.

    # The root directory to save data.
    # This setting is only used when SW_STORAGE_TYPE is file.
    SW_STORAGE_FS_ROOT_DIR=/usr/local/starwhale
    ################################################################################
    # The following settings are only used when SW_STORAGE_TYPE is not file.

    # *** Required ***
    # The name of the bucket to save data.
    SW_STORAGE_BUCKET=

    # *** Required ***
    # The endpoint URL of the object storage service.
    # This setting is only used when SW_STORAGE_TYPE is s3 or aliyun.
    SW_STORAGE_ENDPOINT=

    # *** Required ***
    # The access key used to access the object storage system.
    SW_STORAGE_ACCESSKEY=

    # *** Required ***
    # The secret access key used to access the object storage system.
    SW_STORAGE_SECRETKEY=

    # *** Optional ***
    # The region of the object storage system.
    SW_STORAGE_REGION=

    # Starwhale Server will use multipart upload when uploading a large file. This setting specifies the part size.
    SW_STORAGE_PART_SIZE=5MB
    ################################################################################
    # MySQL settings

    # *** Required ***
    # The hostname/IP of the MySQL server.
    SW_METADATA_STORAGE_IP=

    # The port of the MySQL server.
    SW_METADATA_STORAGE_PORT=3306

    # *** Required ***
    # The database used by Starwhale Server
    SW_METADATA_STORAGE_DB=starwhale

    # *** Required ***
    # The username of the MySQL server.
    SW_METADATA_STORAGE_USER=

    # *** Required ***
    # The password of the MySQL server.
    SW_METADATA_STORAGE_PASSWORD=
    ################################################################################

    # The cache directory for the WAL files. Point it to a mounted volume or host path with enough space.
    # If not set, the WAL files will be saved in the docker runtime layer, and will be lost when the container is restarted.
    SW_DATASTORE_WAL_LOCAL_CACHE_DIR=
    - - + + \ No newline at end of file diff --git a/0.6.6/server/project/index.html b/0.6.6/server/project/index.html index 9e1c5d8c6..e0175735c 100644 --- a/0.6.6/server/project/index.html +++ b/0.6.6/server/project/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    How to Organize and Manage Resources with Starwhale Projects

    Project is the basic unit for organizing and managing resources (such as models, datasets, runtime environments, etc.). You can create and manage projects based on your needs. For example, you can create projects by business team, product line, or models. One user can create and participate in one or more projects.

    Project type

    There are two types of projects:

    • Private project: The project (and related resources in the project) is only visible to project members with permission. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    • Public project: The project (and related resources in the project) is visible to all Starwhale users. Project members can view or edit the project (as well as associated resources in the project). For more information on roles, please take a look at Roles and permissions in Starwhale.

    Create a project

    1. Click the Create button in the upper right corner of the project list page;
    2. Enter a name for the project. Avoid duplicate names. For more information, please see Names in Starwhale;
    3. Select the Project Type, which defaults to private and can be set to public as needed;
    4. Fill in the description;
    5. To finish, click the Submit button.

    Edit a project

    The name, privacy and description of a project can be edited.

    1. Go to the project list page and find the project that needs to be edited by searching for the project name, then click the Edit Project button;
    2. Edit the items that need to be edited;
    3. Click Submit to save the edited content;
    4. If you're editing multiple projects, repeat steps 1 through 3.

    View a project

    My projects

    On the project list page, only my projects are displayed by default. My projects refers to the projects in which the current user participates as a project member or owner.

    Project sorting

    On the project list page, projects can be sorted by "Recently visited", "Project creation time from new to old", or "Project creation time from old to new", according to your needs.

    Delete a project

    Once a project is deleted, all related resources (such as datasets, models, runtimes, evaluations, etc.) will be deleted and cannot be restored.

    1. Enter the project list page and search for the project name to find the project that needs to be deleted. Hover your mouse over the project you want to delete, then click the Delete button;
    2. Follow the prompts, enter the relevant information, click Confirm to delete the project, or click Cancel to cancel the deletion;
    3. If you are deleting multiple projects, repeat the above steps.

    Manage project members

    Only users with the admin role can assign people to the project. By default, the project creator has the project owner role.

    Add a member

    1. Click Manage Members to go to the project member list page;
    2. Click the Add Member button in the upper right corner.
    3. Enter the Username you want to add, select a project role for the user in the project.
    4. Click submit to complete.
    5. If you're adding multiple members, repeat steps 1 through 4.

    Remove a member

    1. On the project list page or project overview tab, click Manage Members to go to the project member list page.
    2. Search for the username you want to delete, then click the Delete button.
    3. Click Yes to delete the user from this project, click No to cancel the deletion.
    4. If you're removing multiple members, repeat steps 1 through 3.

    Edit a member's role

    1. Hover your mouse over the project you want to edit, then click Manage Members to go to the project member list page.
    2. Find the username you want to adjust through searching, click the Project Role drop-down menu, and select a new project role. For more information on roles, please take a look at Roles and permissions in Starwhale.
    - - + + \ No newline at end of file diff --git a/0.6.6/swcli/config/index.html b/0.6.6/swcli/config/index.html index 458ac0f4d..c4a6c1a21 100644 --- a/0.6.6/swcli/config/index.html +++ b/0.6.6/swcli/config/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Configuration

    Standalone Instance is installed on the user's laptop or development server, providing isolation at the level of Linux/macOS users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to modify the config.yaml file manually.

    The ~/.config/starwhale/config.yaml file has permissions set to 0o600 to ensure security, as it contains sensitive information such as encryption keys. Users are advised not to change the file permissions. You can customize your swcli with swcli config edit:

    swcli config edit
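
    Before editing, you can confirm where the configuration lives and that its permissions are restricted, using standard shell commands:

    # the file should be owned by you with mode 0600 (rw-------)
    ls -l ~/.config/starwhale/config.yaml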

    config.yaml example

    The typical config.yaml file is as follows:

    • The default instance is local.
    • cloud-cn/cloud-k8s/pre-k8s are the server/cloud instances, local is the standalone instance.
    • The local storage root directory for the Standalone Instance is set to /home/liutianwei/.starwhale.
    current_instance: local
    instances:
      cloud-cn:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-28 18:41:05 CST
        uri: https://cloud.starwhale.cn
        user_name: starwhale
        user_role: normal
      cloud-k8s:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-19 16:10:01 CST
        uri: http://cloud.pre.intra.starwhale.ai
        user_name: starwhale
        user_role: normal
      local:
        current_project: self
        type: standalone
        updated_at: 2022-06-09 16:14:02 CST
        uri: local
        user_name: liutianwei
      pre-k8s:
        sw_token: ${TOKEN}
        type: cloud
        updated_at: 2022-09-19 18:06:50 CST
        uri: http://console.pre.intra.starwhale.ai
        user_name: starwhale
        user_role: normal
    link_auths:
      - ak: starwhale
        bucket: users
        connect_timeout: 10.0
        endpoint: http://10.131.0.1:9000
        read_timeout: 100.0
        sk: starwhale
        type: s3
    storage:
      root: /home/liutianwei/.starwhale
    version: '2.0'

    config.yaml definition

    Parameter | Description | Type | Default Value | Required
    current_instance | The name of the default instance to use. It is usually set using the swcli instance select command. | String | self | Yes
    instances | Managed instances, including Standalone, Server and Cloud Instances. There must be at least one Standalone Instance named "local" and one or more Server/Cloud Instances. You can log in to a new instance with swcli instance login and log out from an instance with swcli instance logout. | Dict | Standalone Instance named "local" | Yes
    instances.{instance-alias-name}.sw_token | Login token for Server/Cloud Instances. It is only effective for Server/Cloud Instances. Subsequent swcli operations on Server/Cloud Instances will use this token. Note that tokens have an expiration time, typically set to one month, which can be configured within the Server/Cloud Instance. | String | | Cloud - Yes, Standalone - No
    instances.{instance-alias-name}.type | Type of the instance, currently can only be "cloud" or "standalone". | Choice[string] | | Yes
    instances.{instance-alias-name}.uri | For Server/Cloud Instances, the URI is an http/https address. For Standalone Instances, the URI is set to "local". | String | | Yes
    instances.{instance-alias-name}.user_name | User's name. | String | | Yes
    instances.{instance-alias-name}.current_project | Default Project under the current instance. It will be used to fill the "project" field in the URI representation by default. You can set it using the swcli project select command. | String | | Yes
    instances.{instance-alias-name}.user_role | User's role. | String | normal | Yes
    instances.{instance-alias-name}.updated_at | The last updated time for this instance configuration. | Time format string | | Yes
    storage | Settings related to local storage. | Dict | | Yes
    storage.root | The root directory for Standalone Instance's local storage. Typically, if there is insufficient space in the home directory and you manually move data files to another location, you can modify this field. | String | ~/.starwhale | Yes
    version | The version of config.yaml, currently only supports 2.0. | String | 2.0 | Yes

    You can use starwhale.Link to point to your assets. The URI in the Link can be whatever you need (only S3-like and HTTP schemes are currently implemented), such as s3://10.131.0.1:9000/users/path. However, Links may need to be authenticated; you can configure the auth info in link_auths.

    link_auths:
      - type: s3
        ak: starwhale
        bucket: users
        region: local
        connect_timeout: 10.0
        endpoint: http://10.131.0.1:9000
        read_timeout: 100.0
        sk: starwhale

    Items in link_auths match the URIs in Links automatically. An s3-typed link_auth matches Links by looking up the bucket and endpoint.

    - - + + \ No newline at end of file diff --git a/0.6.6/swcli/index.html b/0.6.6/swcli/index.html index 6b5acd056..f41a6314a 100644 --- a/0.6.6/swcli/index.html +++ b/0.6.6/swcli/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Starwhale Client (swcli) User Guide

    The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure Python 3 (requires Python 3.7 ~ 3.11), so it can be easily installed with the pip command. Currently, swcli only supports Linux and macOS; Windows support is coming soon.

    - - + + \ No newline at end of file diff --git a/0.6.6/swcli/installation/index.html b/0.6.6/swcli/installation/index.html index 3c07bbb54..12b1865ba 100644 --- a/0.6.6/swcli/installation/index.html +++ b/0.6.6/swcli/installation/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Installation Guide

    We can use swcli to complete all tasks for Starwhale Instances. swcli is written in pure Python 3 and can be installed easily with the pip command. Here are some installation tips that can help you get a cleaner swcli Python environment without dependency conflicts.

    Installing Advice

    DO NOT install Starwhale in your system's global Python environment; it can cause Python dependency conflicts.

    Quick install

    python3 -m pip install starwhale

    Prerequisites

    • Python 3.7 ~ 3.11
    • Linux or macOS
    • Conda (optional)

    In the Ubuntu system, you can run the following commands:

    sudo apt-get install python3 python3-venv python3-pip

    # If you want to install multiple Python versions
    sudo add-apt-repository -y ppa:deadsnakes/ppa
    sudo apt-get update
    sudo apt-get install -y python3.7 python3.8 python3.9 python3-pip python3-venv python3.8-venv python3.7-venv python3.9-venv

    swcli works on macOS. If you run into issues with the default system Python 3 on macOS, try installing Python 3 through Homebrew:

    brew install python3

    Install swcli

    Install with venv

    python3 -m venv ~/.cache/venv/starwhale
    source ~/.cache/venv/starwhale/bin/activate
    python3 -m pip install starwhale

    swcli --version

    sudo ln -sf "$(which swcli)" /usr/local/bin/

    Install with conda

    conda create --name starwhale --yes  python=3.9
    conda activate starwhale
    python3 -m pip install starwhale

    swcli --version

    sudo ln -sf "$(which swcli)" /usr/local/bin/

    👏 Now, you can use swcli in the global environment.

    Install for the special scenarios

    # for Audio processing
    python -m pip install starwhale[audio]

    # for Image processing
    python -m pip install starwhale[pillow]

    # for swcli model server command
    python -m pip install starwhale[server]

    # for built-in online serving
    python -m pip install starwhale[online-serve]

    # install all dependencies
    python -m pip install starwhale[all]

    Update swcli

    #for venv
    python3 -m pip install --upgrade starwhale

    #for conda
    conda run -n starwhale python3 -m pip install --upgrade starwhale

    Uninstall swcli

    python3 -m pip uninstall starwhale

    rm -rf ~/.config/starwhale
    rm -rf ~/.starwhale
    - - + + \ No newline at end of file diff --git a/0.6.6/swcli/swignore/index.html b/0.6.6/swcli/swignore/index.html index a9a3c9f25..b1db682f4 100644 --- a/0.6.6/swcli/swignore/index.html +++ b/0.6.6/swcli/swignore/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    About the .swignore file

    The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. It is mainly used in the Starwhale Model building process. By default, the swcli model build command or the starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.

    PATTERN FORMAT

    • Each line in a swignore file specifies a pattern, which matches files and directories.
    • A blank line matches no files, so it can serve as a separator for readability.
    • An asterisk * matches anything except a slash.
    • A line starting with # serves as a comment.
    • Wildcard expressions are supported, for example: *.jpg, *.png.

    Auto Ignored files or dirs

    If you want to include the auto-ignored files or dirs, you can add the --add-all option to the swcli model build command (see the sketch after the list below).

    • __pycache__/
    • *.py[cod]
    • *$py.class
    • venv installation dir
    • conda installation dir
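
    A build command that keeps those auto-ignored entries in the model package might look like this; the working-directory argument "." is a placeholder for your model project directory:

    swcli model build . --add-all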

    Example

    Here is the .swignore file used in the MNIST example:

    venv/*
    .git/*
    .history*
    .vscode/*
    .venv/*
    data/*
    .idea/*
    *.py[cod]
    - - + + \ No newline at end of file diff --git a/0.6.6/swcli/uri/index.html b/0.6.6/swcli/uri/index.html index 48fe0186d..1ba680ba6 100644 --- a/0.6.6/swcli/uri/index.html +++ b/0.6.6/swcli/uri/index.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content
    Version: 0.6.6

    Starwhale Resources URI

    tip

    Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.

    concepts-org.jpg

    Instance URI

    Instance URI can be either:

    • local: standalone instance.
    • [http(s)://]<hostname or ip>[:<port>]: cloud instance with HTTP address.
    • [cloud://]<cloud alias>: cloud or server instance with an alias name, which can be configured in the instance login phase.
    caution

    "local" is different from "localhost". The former means the local standalone instance without a controller, while the latter implies a controller listening at the default port 8082 on the localhost.

    Example:

    # log in Starwhale Cloud; the alias is swcloud
    swcli instance login --username <your account name> --password <your password> https://cloud.starwhale.ai --alias swcloud

    # copy a model from the local instance to the cloud instance
    swcli model copy mnist/version/latest swcloud/project/<your account name>:demo

    # copy a runtime to a Starwhale Server instance: http://localhost:8081
    swcli runtime copy pytorch/version/v1 http://localhost:8081/project/<your account name>:demo

    Project URI

    Project URI is in the format [<Instance URI>/project/]<project name>. If the instance URI is not specified, use the current instance instead.

    Example:

    swcli project select self   # select the self project in the current instance
    swcli project info local/project/self # inspect self project info in the local instance

    Model/Dataset/Runtime URI

    • Model URI: [<Project URI>/model/]<model name>[/version/<version id|tag>].
    • Dataset URI: [<Project URI>/dataset/]<dataset name>[/version/<version id|tag>].
    • Runtime URI: [<Project URI>/runtime/]<runtime name>[/version/<version id|tag>].
    tip
    • swcli supports human-friendly short version id. You can type the first few characters of the version id, provided it is at least four characters long and unambiguous. However, the recover command must use the complete version id.
    • If the project URI is not specified, the default project will be used.
    • You can always use the version tag instead of the version id.

    Example:

    swcli model info mnist/version/hbtdenjxgm4ggnrtmftdgyjzm43tioi  # inspect model info, model name: mnist, version:hbtdenjxgm4ggnrtmftdgyjzm43tioi
    swcli model remove mnist/version/hbtdenj # short version
    swcli model info mnist # inspect mnist model info
    swcli model run mnist --runtime pytorch-mnist --dataset mnist # use the default latest tag

    Job URI

    • format: [<Project URI>/job/]<job id>.
    • If the project URI is not specified, the default project will be used.

    Example:

    swcli job info mezdayjzge3w   # Inspect mezdayjzge3w version in default instance and default project
    swcli job info local/project/self/job/mezday # Inspect the local instance, self project, with short job id:mezday

    The default instance

    When the instance part of a project URI is omitted, the default instance is used instead. The default instance is the one selected by the swcli instance login or swcli instance use command.

    The default project

    When the project parts of Model/Dataset/Runtime/Evaluation URIs are omitted, the default project is used instead. The default project is the one selected by the swcli project use command.

    - - + + \ No newline at end of file diff --git a/404.html b/404.html index a8045a547..3ab79d0d3 100644 --- a/404.html +++ b/404.html @@ -10,13 +10,13 @@ - - + +
    Skip to main content

    Page Not Found

    We could not find what you were looking for.

    Please contact the owner of the site that linked you to the original URL and let them know their link is broken.

    - - + + \ No newline at end of file diff --git a/assets/js/5ee7b1bc.b265632d.js b/assets/js/5ee7b1bc.b265632d.js new file mode 100644 index 000000000..8c0af406a --- /dev/null +++ b/assets/js/5ee7b1bc.b265632d.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[69810],{3905:(e,t,n)=>{n.d(t,{Zo:()=>d,kt:()=>u});var a=n(67294);function r(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function i(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);t&&(a=a.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,a)}return n}function o(e){for(var t=1;t=0||(r[n]=e[n]);return r}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(a=0;a=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(r[n]=e[n])}return r}var s=a.createContext({}),c=function(e){var t=a.useContext(s),n=t;return e&&(n="function"==typeof e?e(t):o(o({},t),e)),n},d=function(e){var t=c(e.components);return a.createElement(s.Provider,{value:t},e.children)},p={inlineCode:"code",wrapper:function(e){var t=e.children;return a.createElement(a.Fragment,{},t)}},m=a.forwardRef((function(e,t){var n=e.components,r=e.mdxType,i=e.originalType,s=e.parentName,d=l(e,["components","mdxType","originalType","parentName"]),m=c(n),u=r,h=m["".concat(s,".").concat(u)]||m[u]||p[u]||i;return n?a.createElement(h,o(o({ref:t},d),{},{components:n})):a.createElement(h,o({ref:t},d))}));function u(e,t){var n=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var i=n.length,o=new Array(i);o[0]=m;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:r,o[1]=l;for(var c=2;c{n.r(t),n.d(t,{assets:()=>s,contentTitle:()=>o,default:()=>p,frontMatter:()=>i,metadata:()=>l,toc:()=>c});var a=n(83117),r=(n(67294),n(3905));const i={title:"Starwhale Glossary"},o=void 0,l={unversionedId:"concepts/glossary",id:"concepts/glossary",title:"Starwhale Glossary",description:"On this page you find a list of important terminology used throughout the Starwhale documentation.",source:"@site/docs/concepts/glossary.md",sourceDirName:"concepts",slug:"/concepts/glossary",permalink:"/next/concepts/glossary",draft:!1,editUrl:"https://github.com/star-whale/docs/tree/main/docs/concepts/glossary.md",tags:[],version:"current",frontMatter:{title:"Starwhale Glossary"},sidebar:"mainSidebar",previous:{title:"Starwhale Common Concepts",permalink:"/next/concepts/"},next:{title:"Names in Starwhale",permalink:"/next/concepts/names"}},s={},c=[],d={toc:c};function p(e){let{components:t,...n}=e;return(0,r.kt)("wrapper",(0,a.Z)({},d,n,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("p",null,"On this page you find a list of important terminology used throughout the Starwhale documentation."),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Dataset"),": An abstraction of datasets in the machine learning field by Starwhale, which implements dataset construction, sharing, loading, version control and visualization to meet the requirements of processes like model training and evaluation."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Model"),": A standard package format for models in machine learning defined by Starwhale, including model weight files, code and configurations, etc. 
It meets requirements like model evaluation, fine-tuning in processes like model package construction, sharing, version control and running."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Runtime"),": An abstraction of program running environments in the machine learning field by Starwhale. It shields details like Dockerfile writing and CUDA installation and realizes a reproducible, shareable Python running environment."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Instance"),": Each deployment of Starwhale is called an instance. All instances can be managed by the ",(0,r.kt)("inlineCode",{parentName:"li"},"swcli"),". There are 3 types of Starwhale instances: Starwhale Standalone, Starwhale Server and Starwhale Cloud. Starwhale tries to keep concepts consistent across different types of instances. In this way, people can easily exchange data and migrate between them."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Standalone"),": One of the 3 Starwhale instance types. Aimed at independent developers, deployed in local development environments and managed through the ",(0,r.kt)("inlineCode",{parentName:"li"},"swcli")," command line tool to meet development, debugging needs etc."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Server"),": One of the 3 Starwhale instance types. Aimed at team users, deployed in private data centers, relies on Kubernetes clusters, provides centralized, interactive, secure services."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Cloud"),": One of the 3 Starwhale instance types. Hosted public cloud service, available at ",(0,r.kt)("a",{parentName:"li",href:"https://cloud.starwhale.cn"},"https://cloud.starwhale.cn"),", operated and maintained by the Starwhale team, no installation needed, ready to use."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},(0,r.kt)("inlineCode",{parentName:"strong"},"swcli")),": A Starwhale command line tool written in Python, used to manage model packages, datasets and runtimes on different instances."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"datastore"),": An infrastructure in Starwhale, provides storage and access methods like Big Table, meets requirements like storage and retrieval of datasets and evaluation data."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"Starwhale Project"),": The basic unit to organize different resources (e.g. models, datasets etc)."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},(0,r.kt)("inlineCode",{parentName:"strong"},".swignore")," file"),": Similar to .gitignore, .dockerignore files, used to define ignoring some files or folders. The Starwhale model building process will try to read this file and decide which files to ignore."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},(0,r.kt)("inlineCode",{parentName:"strong"},"model.yaml")," file"),": A descriptive file defining how to build a Starwhale Model, optional."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},(0,r.kt)("inlineCode",{parentName:"strong"},"dataset.yaml")," file"),": A descriptive file defining how to build a Starwhale Dataset, needs to work with some Python scripts. 
Used by ",(0,r.kt)("inlineCode",{parentName:"li"},"swcli dataset build")," command, optional."),(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},(0,r.kt)("inlineCode",{parentName:"strong"},"runtime.yaml")," file"),": A descriptive file defining a Starwhale Runtime, used by ",(0,r.kt)("inlineCode",{parentName:"li"},"swcli runtime build")," command, optional.")))}p.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/923434ee.0297ac0e.js b/assets/js/923434ee.6b7f6b44.js similarity index 53% rename from assets/js/923434ee.0297ac0e.js rename to assets/js/923434ee.6b7f6b44.js index 5b59199dc..7fc804e2c 100644 --- a/assets/js/923434ee.0297ac0e.js +++ b/assets/js/923434ee.6b7f6b44.js @@ -1 +1 @@ -"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[1678],{3905:(e,t,n)=>{n.d(t,{Zo:()=>m,kt:()=>d});var a=n(67294);function r(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function o(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);t&&(a=a.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,a)}return n}function i(e){for(var t=1;t=0||(r[n]=e[n]);return r}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(a=0;a=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(r[n]=e[n])}return r}var s=a.createContext({}),c=function(e){var t=a.useContext(s),n=t;return e&&(n="function"==typeof e?e(t):i(i({},t),e)),n},m=function(e){var t=c(e.components);return a.createElement(s.Provider,{value:t},e.children)},p={inlineCode:"code",wrapper:function(e){var t=e.children;return a.createElement(a.Fragment,{},t)}},u=a.forwardRef((function(e,t){var n=e.components,r=e.mdxType,o=e.originalType,s=e.parentName,m=l(e,["components","mdxType","originalType","parentName"]),u=c(n),d=r,h=u["".concat(s,".").concat(d)]||u[d]||p[d]||o;return n?a.createElement(h,i(i({ref:t},m),{},{components:n})):a.createElement(h,i({ref:t},m))}));function d(e,t){var n=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var o=n.length,i=new Array(o);i[0]=u;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:r,i[1]=l;for(var c=2;c{n.r(t),n.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>p,frontMatter:()=>o,metadata:()=>l,toc:()=>c});var a=n(83117),r=(n(67294),n(3905));const o={title:"Names in Starwhale"},i=void 0,l={unversionedId:"concepts/names",id:"concepts/names",title:"Names in Starwhale",description:"Names mean project names, model names, dataset names, runtime names, and tag names.",source:"@site/docs/concepts/names.md",sourceDirName:"concepts",slug:"/concepts/names",permalink:"/next/concepts/names",draft:!1,editUrl:"https://github.com/star-whale/docs/tree/main/docs/concepts/names.md",tags:[],version:"current",frontMatter:{title:"Names in Starwhale"},sidebar:"mainSidebar",previous:{title:"Starwhale Common Concepts",permalink:"/next/concepts/"},next:{title:"Project in Starwhale",permalink:"/next/concepts/project"}},s={},c=[{value:"Names Limitation",id:"names-limitation",level:2},{value:"Names uniqueness requirement",id:"names-uniqueness-requirement",level:2}],m={toc:c};function p(e){let{components:t,...n}=e;return(0,r.kt)("wrapper",(0,a.Z)({},m,n,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("p",null,"Names mean project names, model names, dataset names, runtime names, and tag names."),(0,r.kt)("h2",{id:"names-limitation"},"Names 
Limitation"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},"Names are case-insensitive."),(0,r.kt)("li",{parentName:"ul"},"A name MUST only consist of letters ",(0,r.kt)("inlineCode",{parentName:"li"},"A-Z a-z"),", digits ",(0,r.kt)("inlineCode",{parentName:"li"},"0-9"),", the hyphen character ",(0,r.kt)("inlineCode",{parentName:"li"},"-"),", the dot character ",(0,r.kt)("inlineCode",{parentName:"li"},"."),", and the underscore character ",(0,r.kt)("inlineCode",{parentName:"li"},"_"),"."),(0,r.kt)("li",{parentName:"ul"},"A name should always start with a letter or the ",(0,r.kt)("inlineCode",{parentName:"li"},"_")," character."),(0,r.kt)("li",{parentName:"ul"},"The maximum length of a name is 80.")),(0,r.kt)("h2",{id:"names-uniqueness-requirement"},"Names uniqueness requirement"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"The resource name should be a unique string within its owner"),". For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project."),(0,r.kt)("li",{parentName:"ul"},'The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.'),(0,r.kt)("li",{parentName:"ul"},'Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.'),(0,r.kt)("li",{parentName:"ul"},'Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".'),(0,r.kt)("li",{parentName:"ul"},'Garbage-collected resources\' names can be reused. 
For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".')))}p.isMDXComponent=!0}}]); \ No newline at end of file +"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[1678],{3905:(e,t,n)=>{n.d(t,{Zo:()=>m,kt:()=>d});var a=n(67294);function r(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function o(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var a=Object.getOwnPropertySymbols(e);t&&(a=a.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,a)}return n}function i(e){for(var t=1;t=0||(r[n]=e[n]);return r}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(a=0;a=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(r[n]=e[n])}return r}var s=a.createContext({}),c=function(e){var t=a.useContext(s),n=t;return e&&(n="function"==typeof e?e(t):i(i({},t),e)),n},m=function(e){var t=c(e.components);return a.createElement(s.Provider,{value:t},e.children)},p={inlineCode:"code",wrapper:function(e){var t=e.children;return a.createElement(a.Fragment,{},t)}},u=a.forwardRef((function(e,t){var n=e.components,r=e.mdxType,o=e.originalType,s=e.parentName,m=l(e,["components","mdxType","originalType","parentName"]),u=c(n),d=r,h=u["".concat(s,".").concat(d)]||u[d]||p[d]||o;return n?a.createElement(h,i(i({ref:t},m),{},{components:n})):a.createElement(h,i({ref:t},m))}));function d(e,t){var n=arguments,r=t&&t.mdxType;if("string"==typeof e||r){var o=n.length,i=new Array(o);i[0]=u;var l={};for(var s in t)hasOwnProperty.call(t,s)&&(l[s]=t[s]);l.originalType=e,l.mdxType="string"==typeof e?e:r,i[1]=l;for(var c=2;c{n.r(t),n.d(t,{assets:()=>s,contentTitle:()=>i,default:()=>p,frontMatter:()=>o,metadata:()=>l,toc:()=>c});var a=n(83117),r=(n(67294),n(3905));const o={title:"Names in Starwhale"},i=void 0,l={unversionedId:"concepts/names",id:"concepts/names",title:"Names in Starwhale",description:"Names mean project names, model names, dataset names, runtime names, and tag names.",source:"@site/docs/concepts/names.md",sourceDirName:"concepts",slug:"/concepts/names",permalink:"/next/concepts/names",draft:!1,editUrl:"https://github.com/star-whale/docs/tree/main/docs/concepts/names.md",tags:[],version:"current",frontMatter:{title:"Names in Starwhale"},sidebar:"mainSidebar",previous:{title:"Starwhale Glossary",permalink:"/next/concepts/glossary"},next:{title:"Project in Starwhale",permalink:"/next/concepts/project"}},s={},c=[{value:"Names Limitation",id:"names-limitation",level:2},{value:"Names uniqueness requirement",id:"names-uniqueness-requirement",level:2}],m={toc:c};function p(e){let{components:t,...n}=e;return(0,r.kt)("wrapper",(0,a.Z)({},m,n,{components:t,mdxType:"MDXLayout"}),(0,r.kt)("p",null,"Names mean project names, model names, dataset names, runtime names, and tag names."),(0,r.kt)("h2",{id:"names-limitation"},"Names Limitation"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},"Names are case-insensitive."),(0,r.kt)("li",{parentName:"ul"},"A name MUST only consist of letters ",(0,r.kt)("inlineCode",{parentName:"li"},"A-Z a-z"),", digits ",(0,r.kt)("inlineCode",{parentName:"li"},"0-9"),", the hyphen character ",(0,r.kt)("inlineCode",{parentName:"li"},"-"),", the dot character ",(0,r.kt)("inlineCode",{parentName:"li"},"."),", and the underscore character 
",(0,r.kt)("inlineCode",{parentName:"li"},"_"),"."),(0,r.kt)("li",{parentName:"ul"},"A name should always start with a letter or the ",(0,r.kt)("inlineCode",{parentName:"li"},"_")," character."),(0,r.kt)("li",{parentName:"ul"},"The maximum length of a name is 80.")),(0,r.kt)("h2",{id:"names-uniqueness-requirement"},"Names uniqueness requirement"),(0,r.kt)("ul",null,(0,r.kt)("li",{parentName:"ul"},(0,r.kt)("strong",{parentName:"li"},"The resource name should be a unique string within its owner"),". For example, the project name should be unique in the owner instance, and the model name should be unique in the owner project."),(0,r.kt)("li",{parentName:"ul"},'The resource name can not be used by any other resource of the same kind in their owner, including those removed ones. For example, Project "apple" can not have two models named "Alice", even if one of them is already removed.'),(0,r.kt)("li",{parentName:"ul"},'Different kinds of resources can have the same name. For example, a project and a model can be called "Alice" simultaneously.'),(0,r.kt)("li",{parentName:"ul"},'Resources with different owners can have the same name. For example, a model in project "Apple" and a model in project "Banana" can have the same name "Alice".'),(0,r.kt)("li",{parentName:"ul"},'Garbage-collected resources\' names can be reused. For example, after the model with the name "Alice" in project "Apple" is removed and garbage collected, the project can have a new model with the same name "Alice".')))}p.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/935f2afb.83bfd1f1.js b/assets/js/935f2afb.83bfd1f1.js deleted file mode 100644 index b25745364..000000000 --- a/assets/js/935f2afb.83bfd1f1.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[80053],{1109:e=>{e.exports=JSON.parse('{"pluginId":"default","version":"current","label":"WIP","banner":"unreleased","badge":true,"noIndex":false,"className":"docs-version-current","isLast":false,"docsSidebars":{"mainSidebar":[{"type":"link","label":"What is Starwhale","href":"/next/","docId":"what-is-starwhale"},{"type":"category","label":"Getting Started","items":[{"type":"link","label":"Getting started with Starwhale Standalone","href":"/next/getting-started/standalone","docId":"getting-started/standalone"},{"type":"link","label":"Getting started with Starwhale Server","href":"/next/getting-started/server","docId":"getting-started/server"},{"type":"link","label":"Getting started with Starwhale Cloud","href":"/next/getting-started/cloud","docId":"getting-started/cloud"}],"collapsed":true,"collapsible":true,"href":"/next/getting-started/"},{"type":"category","label":"Examples","items":[{"type":"link","label":"Starwhale\'s Helloworld Example - Evaluating the KNN Algorithm on Handwritten Digit Recognition Tasks","href":"/next/examples/helloworld","docId":"examples/helloworld"}],"collapsed":true,"collapsible":true,"href":"/next/examples/"},{"type":"category","label":"Concepts","items":[{"type":"link","label":"Names in Starwhale","href":"/next/concepts/names","docId":"concepts/names"},{"type":"link","label":"Project in Starwhale","href":"/next/concepts/project","docId":"concepts/project"},{"type":"link","label":"Roles and permissions in Starwhale","href":"/next/concepts/roles-permissions","docId":"concepts/roles-permissions"},{"type":"link","label":"Resource versioning in 
Starwhale","href":"/next/concepts/versioning","docId":"concepts/versioning"}],"collapsed":true,"collapsible":true,"href":"/next/concepts/"},{"type":"category","label":"User Guides","items":[{"type":"category","label":"Starwhale Client(swcli) User Guide","items":[{"type":"link","label":"Installation Guide","href":"/next/swcli/installation","docId":"swcli/installation"},{"type":"link","label":"Starwhale Resources URI","href":"/next/swcli/uri","docId":"swcli/uri"},{"type":"link","label":"About the .swignore file","href":"/next/swcli/swignore","docId":"swcli/swignore"},{"type":"link","label":"Configuration","href":"/next/swcli/config","docId":"swcli/config"}],"collapsed":true,"collapsible":true,"href":"/next/swcli/"},{"type":"category","label":"Starwhale Server User Guide","collapsed":true,"items":[{"type":"category","label":"Installation Guide","collapsed":true,"items":[{"type":"link","label":"Launch Starwhale Server with the \\"swcli server start\\" command","href":"/next/server/installation/server-start","docId":"server/installation/server-start"},{"type":"link","label":"Install Starwhale Server with Minikube","href":"/next/server/installation/minikube","docId":"server/installation/minikube"},{"type":"link","label":"Install Starwhale Server to Kubernetes Cluster","href":"/next/server/installation/k8s-cluster","docId":"server/installation/k8s-cluster"},{"type":"category","label":"Install Starwhale Server with Docker","collapsed":true,"items":[{"type":"link","label":"Starwhale Server Environment Example","href":"/next/server/installation/starwhale_env","docId":"server/installation/starwhale_env"},{"type":"link","label":"Install Starwhale Server with Docker Compose","href":"/next/server/installation/docker-compose","docId":"server/installation/docker-compose"}],"collapsible":true,"href":"/next/server/installation/docker"}],"collapsible":true,"href":"/next/server/installation/"},{"type":"link","label":"Controller Admin Settings","href":"/next/server/guides/server_admin","docId":"server/guides/server_admin"},{"type":"link","label":"How to Organize and Manage Resources with Starwhale Projects","href":"/next/server/project","docId":"server/project"}],"collapsible":true,"href":"/next/server/"},{"type":"category","label":"Starwhale Cloud User Guide","collapsed":true,"items":[{"type":"category","label":"Cloud Billing","collapsed":true,"items":[{"type":"link","label":"Billing Details","href":"/next/cloud/billing/bills","docId":"cloud/billing/bills"},{"type":"link","label":"Recharge and refund","href":"/next/cloud/billing/recharge","docId":"cloud/billing/recharge"},{"type":"link","label":"Refund","href":"/next/cloud/billing/refund","docId":"cloud/billing/refund"},{"type":"link","label":"Voucher","href":"/next/cloud/billing/voucher","docId":"cloud/billing/voucher"}],"collapsible":true,"href":"/next/cloud/billing/"}],"collapsible":true,"href":"/next/cloud/"},{"type":"category","label":"Starwhale Model","collapsed":true,"items":[{"type":"link","label":"The model.yaml Specification","href":"/next/model/yaml","docId":"model/yaml"}],"collapsible":true,"href":"/next/model/"},{"type":"category","label":"Starwhale Runtime","collapsed":true,"items":[{"type":"link","label":"The runtime.yaml Specification","href":"/next/runtime/yaml","docId":"runtime/yaml"}],"collapsible":true,"href":"/next/runtime/"},{"type":"category","label":"Starwhale Dataset","collapsed":true,"items":[{"type":"link","label":"The dataset.yaml 
Specification","href":"/next/dataset/yaml","docId":"dataset/yaml"}],"collapsible":true,"href":"/next/dataset/"},{"type":"category","label":"Starwhale Model Evaluation","collapsed":true,"items":[{"type":"category","label":"Heterogeneous Devices","collapsed":true,"items":[{"type":"link","label":"Devices as Kubernetes nodes","href":"/next/evaluation/heterogeneous/node-able","docId":"evaluation/heterogeneous/node-able"},{"type":"link","label":"Virtual Kubelet as Kubernetes nodes","href":"/next/evaluation/heterogeneous/virtual-node","docId":"evaluation/heterogeneous/virtual-node"}],"collapsible":true,"href":"/next/evaluation/heterogeneous/node-able"}],"collapsible":true,"href":"/next/evaluation/"}],"collapsed":true,"collapsible":true},{"type":"category","label":"Reference","items":[{"type":"category","label":"Starwhale Client","items":[{"type":"link","label":"swcli instance","href":"/next/reference/swcli/instance","docId":"reference/swcli/instance"},{"type":"link","label":"swcli project","href":"/next/reference/swcli/project","docId":"reference/swcli/project"},{"type":"link","label":"swcli model","href":"/next/reference/swcli/model","docId":"reference/swcli/model"},{"type":"link","label":"swcli dataset","href":"/next/reference/swcli/dataset","docId":"reference/swcli/dataset"},{"type":"link","label":"swcli runtime","href":"/next/reference/swcli/runtime","docId":"reference/swcli/runtime"},{"type":"link","label":"swcli job","href":"/next/reference/swcli/job","docId":"reference/swcli/job"},{"type":"link","label":"swcli server","href":"/next/reference/swcli/server","docId":"reference/swcli/server"},{"type":"link","label":"Utility Commands","href":"/next/reference/swcli/utilities","docId":"reference/swcli/utilities"}],"collapsed":true,"collapsible":true,"href":"/next/reference/swcli/"},{"type":"category","label":"Python SDK","items":[{"type":"link","label":"Starwhale Dataset SDK","href":"/next/reference/sdk/dataset","docId":"reference/sdk/dataset"},{"type":"link","label":"Starwhale Data Types","href":"/next/reference/sdk/type","docId":"reference/sdk/type"},{"type":"link","label":"Starwhale Model Evaluation SDK","href":"/next/reference/sdk/evaluation","docId":"reference/sdk/evaluation"},{"type":"link","label":"Starwhale Model SDK","href":"/next/reference/sdk/model","docId":"reference/sdk/model"},{"type":"link","label":"Starwhale Job SDK","href":"/next/reference/sdk/job","docId":"reference/sdk/job"},{"type":"link","label":"swcli server","href":"/next/reference/swcli/server","docId":"reference/swcli/server"},{"type":"link","label":"Other SDK","href":"/next/reference/sdk/other","docId":"reference/sdk/other"}],"collapsed":true,"collapsible":true,"href":"/next/reference/sdk/overview"}],"collapsed":true,"collapsible":true},{"type":"link","label":"FAQs","href":"/next/faq/","docId":"faq/index"},{"type":"category","label":"Community","items":[{"type":"link","label":"Contribute to Starwhale","href":"/next/community/contribute","docId":"community/contribute"}],"collapsed":true,"collapsible":true}]},"docs":{"cloud/billing/billing":{"id":"cloud/billing/billing","title":"Billing Overview","description":"","sidebar":"mainSidebar"},"cloud/billing/bills":{"id":"cloud/billing/bills","title":"Billing Details","description":"","sidebar":"mainSidebar"},"cloud/billing/recharge":{"id":"cloud/billing/recharge","title":"Recharge and 
refund","description":"","sidebar":"mainSidebar"},"cloud/billing/refund":{"id":"cloud/billing/refund","title":"Refund","description":"","sidebar":"mainSidebar"},"cloud/billing/voucher":{"id":"cloud/billing/voucher","title":"Voucher","description":"","sidebar":"mainSidebar"},"cloud/index":{"id":"cloud/index","title":"Starwhale Cloud User Guide","description":"Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is .","sidebar":"mainSidebar"},"community/contribute":{"id":"community/contribute","title":"Contribute to Starwhale","description":"Getting Involved/Contributing","sidebar":"mainSidebar"},"concepts/index":{"id":"concepts/index","title":"Starwhale Common Concepts","description":"This section explains some basic concepts in Starwhale.","sidebar":"mainSidebar"},"concepts/names":{"id":"concepts/names","title":"Names in Starwhale","description":"Names mean project names, model names, dataset names, runtime names, and tag names.","sidebar":"mainSidebar"},"concepts/project":{"id":"concepts/project","title":"Project in Starwhale","description":"\\"Project\\" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.","sidebar":"mainSidebar"},"concepts/roles-permissions":{"id":"concepts/roles-permissions","title":"Roles and permissions in Starwhale","description":"Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions, and Starwhale Standalone does not.The Administrator role is automatically created and assigned to the user \\"admin\\". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.","sidebar":"mainSidebar"},"concepts/versioning":{"id":"concepts/versioning","title":"Resource versioning in Starwhale","description":"- Starwhale manages the history of all models, datasets, and runtimes. 
Every update to a specific resource appends a new version of the history.","sidebar":"mainSidebar"},"dataset/index":{"id":"dataset/index","title":"Starwhale Dataset User Guide","description":"overview","sidebar":"mainSidebar"},"dataset/yaml":{"id":"dataset/yaml","title":"The dataset.yaml Specification","description":"dataset.yaml is optional for the swcli dataset build command.","sidebar":"mainSidebar"},"evaluation/heterogeneous/node-able":{"id":"evaluation/heterogeneous/node-able","title":"Devices as Kubernetes nodes","description":"Characteristics","sidebar":"mainSidebar"},"evaluation/heterogeneous/virtual-node":{"id":"evaluation/heterogeneous/virtual-node","title":"Virtual Kubelet as Kubernetes nodes","description":"Introduction","sidebar":"mainSidebar"},"evaluation/index":{"id":"evaluation/index","title":"Starwhale Model Evaluation","description":"Design Overview","sidebar":"mainSidebar"},"examples/helloworld":{"id":"examples/helloworld","title":"Starwhale\'s Helloworld Example - Evaluating the KNN Algorithm on Handwritten Digit Recognition Tasks","description":"This tutorial will start with the installation of the Starwhale Client, and then introduce the process of writing evaluation code, creating datasets, debugging on Standalone instances, and finally running evaluations on Server instances.","sidebar":"mainSidebar"},"examples/index":{"id":"examples/index","title":"Examples","description":"- \ud83d\udd25 Helloworld: Cloud, Code","sidebar":"mainSidebar"},"faq/index":{"id":"faq/index","title":"FAQs","description":"Error \\"413 Client Error: Request Entity Too Large\\" when Copying Starwhale Models to Server","sidebar":"mainSidebar"},"getting-started/cloud":{"id":"getting-started/cloud","title":"Getting started with Starwhale Cloud","description":"Starwhale Cloud is hosted on Aliyun with the domain name . In the futher, we will launch the service on AWS with the domain name . It\'s important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.","sidebar":"mainSidebar"},"getting-started/index":{"id":"getting-started/index","title":"Getting started","description":"Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).","sidebar":"mainSidebar"},"getting-started/runtime":{"id":"getting-started/runtime","title":"Getting Started with Starwhale Runtime","description":"This article demonstrates how to build a Starwhale Runtime of the Pytorch environment and how to use it. 
This runtime can meet the dependency requirements of the six examples in Starwhale example/runtime/pytorch."},"getting-started/server":{"id":"getting-started/server","title":"Getting started with Starwhale Server","description":"Start Starwhale Server","sidebar":"mainSidebar"},"getting-started/standalone":{"id":"getting-started/standalone","title":"Getting started with Starwhale Standalone","description":"When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.","sidebar":"mainSidebar"},"model/index":{"id":"model/index","title":"Starwhale Model","description":"overview","sidebar":"mainSidebar"},"model/yaml":{"id":"model/yaml","title":"The model.yaml Specification","description":"model.yaml is optional for swcli model build.","sidebar":"mainSidebar"},"reference/sdk/dataset":{"id":"reference/sdk/dataset","title":"Starwhale Dataset SDK","description":"dataset","sidebar":"mainSidebar"},"reference/sdk/evaluation":{"id":"reference/sdk/evaluation","title":"Starwhale Model Evaluation SDK","description":"@evaluation.predict","sidebar":"mainSidebar"},"reference/sdk/job":{"id":"reference/sdk/job","title":"Starwhale Job SDK","description":"job","sidebar":"mainSidebar"},"reference/sdk/model":{"id":"reference/sdk/model","title":"Starwhale Model SDK","description":"model.build","sidebar":"mainSidebar"},"reference/sdk/other":{"id":"reference/sdk/other","title":"Other SDK","description":"\\\\version","sidebar":"mainSidebar"},"reference/sdk/overview":{"id":"reference/sdk/overview","title":"Python SDK Overview","description":"Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.","sidebar":"mainSidebar"},"reference/sdk/type":{"id":"reference/sdk/type","title":"Starwhale Data Types","description":"COCOObjectAnnotation","sidebar":"mainSidebar"},"reference/swcli/dataset":{"id":"reference/swcli/dataset","title":"swcli dataset","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/index":{"id":"reference/swcli/index","title":"Overview","description":"Usage","sidebar":"mainSidebar"},"reference/swcli/instance":{"id":"reference/swcli/instance","title":"swcli instance","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/job":{"id":"reference/swcli/job","title":"swcli job","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/model":{"id":"reference/swcli/model","title":"swcli model","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/project":{"id":"reference/swcli/project","title":"swcli project","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/runtime":{"id":"reference/swcli/runtime","title":"swcli runtime","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/server":{"id":"reference/swcli/server","title":"swcli server","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/utilities":{"id":"reference/swcli/utilities","title":"Utility Commands","description":"swcli gc","sidebar":"mainSidebar"},"runtime/index":{"id":"runtime/index","title":"Starwhale Runtime","description":"overview","sidebar":"mainSidebar"},"runtime/yaml":{"id":"runtime/yaml","title":"The runtime.yaml Specification","description":"runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. 
runtime.yaml is required for the yaml mode of the swcli runtime build command.","sidebar":"mainSidebar"},"server/guides/server_admin":{"id":"server/guides/server_admin","title":"Controller Admin Settings","description":"Superuser Password Reset","sidebar":"mainSidebar"},"server/index":{"id":"server/index","title":"Starwhale Server User Guide","description":"To install/update/uninstall Starwhale Server, see the Starwhale Server Installation Guide.","sidebar":"mainSidebar"},"server/installation/docker":{"id":"server/installation/docker","title":"Install Starwhale Server with Docker","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/docker-compose":{"id":"server/installation/docker-compose","title":"Install Starwhale Server with Docker Compose","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/index":{"id":"server/installation/index","title":"Starwhale Server Installation Guide","description":"Starwhale Server is delivered as a Docker image, which can be run with Docker directly or deployed to a Kubernetes cluster or Minikube.","sidebar":"mainSidebar"},"server/installation/k8s-cluster":{"id":"server/installation/k8s-cluster","title":"Install Starwhale Server to Kubernetes Cluster","description":"In a private deployment scenario, Starwhale Server can be deployed to a Kubernetes cluster using Helm. Starwhale Server relies on two fundamental infrastructure dependencies: MySQL database and object storage.","sidebar":"mainSidebar"},"server/installation/minikube":{"id":"server/installation/minikube","title":"Install Starwhale Server with Minikube","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/server-start":{"id":"server/installation/server-start","title":"Launch Starwhale Server with the \\"swcli server start\\" command","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/starwhale_env":{"id":"server/installation/starwhale_env","title":"Starwhale Server Environment Example","description":"","sidebar":"mainSidebar"},"server/project":{"id":"server/project","title":"How to Organize and Manage Resources with Starwhale Projects","description":"Project is the basic unit for organizing and managing resources (such as models, datasets, runtime environments, etc.). You can create and manage projects based on your needs. For example, you can create projects by business team, product line, or models. One user can create and participate in one or more projects.","sidebar":"mainSidebar"},"swcli/config":{"id":"swcli/config","title":"Configuration","description":"Standalone Instance is installed on the user\'s laptop or development server, providing isolation at the level of Linux/macOX users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.","sidebar":"mainSidebar"},"swcli/index":{"id":"swcli/index","title":"Starwhale Client (swcli) User Guide","description":"The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure python3 (require Python 3.7 | 3.11) so that it can be easily installed by the pip command. 
Currently, swcli only supports Linux and macOS, Windows is coming soon.","sidebar":"mainSidebar"},"swcli/installation":{"id":"swcli/installation","title":"Installation Guide","description":"We can use swcli to complete all tasks for Starwhale Instances. swcli is written by pure python3, which can be installed easily by the pip command.Here are some installation tips that can help you get a cleaner, unambiguous, no dependency conflicts swcli python environment.","sidebar":"mainSidebar"},"swcli/swignore":{"id":"swcli/swignore","title":"About the .swignore file","description":"The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore files mainly used in the Starwhale Model building process. By default, the swcli model build command or starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.","sidebar":"mainSidebar"},"swcli/uri":{"id":"swcli/uri","title":"Starwhale Resources URI","description":"Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.","sidebar":"mainSidebar"},"what-is-starwhale":{"id":"what-is-starwhale","title":"What is Starwhale","description":"Overview","sidebar":"mainSidebar"}}}')}}]); \ No newline at end of file diff --git a/assets/js/935f2afb.91eb5493.js b/assets/js/935f2afb.91eb5493.js new file mode 100644 index 000000000..ef45b6a83 --- /dev/null +++ b/assets/js/935f2afb.91eb5493.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[80053],{1109:e=>{e.exports=JSON.parse('{"pluginId":"default","version":"current","label":"WIP","banner":"unreleased","badge":true,"noIndex":false,"className":"docs-version-current","isLast":false,"docsSidebars":{"mainSidebar":[{"type":"link","label":"What is Starwhale","href":"/next/","docId":"what-is-starwhale"},{"type":"category","label":"Getting Started","items":[{"type":"link","label":"Getting started with Starwhale Standalone","href":"/next/getting-started/standalone","docId":"getting-started/standalone"},{"type":"link","label":"Getting started with Starwhale Server","href":"/next/getting-started/server","docId":"getting-started/server"},{"type":"link","label":"Getting started with Starwhale Cloud","href":"/next/getting-started/cloud","docId":"getting-started/cloud"}],"collapsed":true,"collapsible":true,"href":"/next/getting-started/"},{"type":"category","label":"Examples","items":[{"type":"link","label":"Starwhale\'s Helloworld Example - Evaluating the KNN Algorithm on Handwritten Digit Recognition Tasks","href":"/next/examples/helloworld","docId":"examples/helloworld"}],"collapsed":true,"collapsible":true,"href":"/next/examples/"},{"type":"category","label":"Concepts","items":[{"type":"link","label":"Starwhale Glossary","href":"/next/concepts/glossary","docId":"concepts/glossary"},{"type":"link","label":"Names in Starwhale","href":"/next/concepts/names","docId":"concepts/names"},{"type":"link","label":"Project in Starwhale","href":"/next/concepts/project","docId":"concepts/project"},{"type":"link","label":"Roles and permissions in Starwhale","href":"/next/concepts/roles-permissions","docId":"concepts/roles-permissions"},{"type":"link","label":"Resource versioning in 
Starwhale","href":"/next/concepts/versioning","docId":"concepts/versioning"}],"collapsed":true,"collapsible":true,"href":"/next/concepts/"},{"type":"category","label":"User Guides","items":[{"type":"category","label":"Starwhale Client(swcli) User Guide","items":[{"type":"link","label":"Installation Guide","href":"/next/swcli/installation","docId":"swcli/installation"},{"type":"link","label":"Starwhale Resources URI","href":"/next/swcli/uri","docId":"swcli/uri"},{"type":"link","label":"About the .swignore file","href":"/next/swcli/swignore","docId":"swcli/swignore"},{"type":"link","label":"Configuration","href":"/next/swcli/config","docId":"swcli/config"}],"collapsed":true,"collapsible":true,"href":"/next/swcli/"},{"type":"category","label":"Starwhale Server User Guide","collapsed":true,"items":[{"type":"category","label":"Installation Guide","collapsed":true,"items":[{"type":"link","label":"Launch Starwhale Server with the \\"swcli server start\\" command","href":"/next/server/installation/server-start","docId":"server/installation/server-start"},{"type":"link","label":"Install Starwhale Server with Minikube","href":"/next/server/installation/minikube","docId":"server/installation/minikube"},{"type":"link","label":"Install Starwhale Server to Kubernetes Cluster","href":"/next/server/installation/k8s-cluster","docId":"server/installation/k8s-cluster"},{"type":"category","label":"Install Starwhale Server with Docker","collapsed":true,"items":[{"type":"link","label":"Starwhale Server Environment Example","href":"/next/server/installation/starwhale_env","docId":"server/installation/starwhale_env"},{"type":"link","label":"Install Starwhale Server with Docker Compose","href":"/next/server/installation/docker-compose","docId":"server/installation/docker-compose"}],"collapsible":true,"href":"/next/server/installation/docker"}],"collapsible":true,"href":"/next/server/installation/"},{"type":"link","label":"Controller Admin Settings","href":"/next/server/guides/server_admin","docId":"server/guides/server_admin"},{"type":"link","label":"How to Organize and Manage Resources with Starwhale Projects","href":"/next/server/project","docId":"server/project"}],"collapsible":true,"href":"/next/server/"},{"type":"category","label":"Starwhale Cloud User Guide","collapsed":true,"items":[{"type":"category","label":"Cloud Billing","collapsed":true,"items":[{"type":"link","label":"Billing Details","href":"/next/cloud/billing/bills","docId":"cloud/billing/bills"},{"type":"link","label":"Recharge and refund","href":"/next/cloud/billing/recharge","docId":"cloud/billing/recharge"},{"type":"link","label":"Refund","href":"/next/cloud/billing/refund","docId":"cloud/billing/refund"},{"type":"link","label":"Voucher","href":"/next/cloud/billing/voucher","docId":"cloud/billing/voucher"}],"collapsible":true,"href":"/next/cloud/billing/"}],"collapsible":true,"href":"/next/cloud/"},{"type":"category","label":"Starwhale Model","collapsed":true,"items":[{"type":"link","label":"The model.yaml Specification","href":"/next/model/yaml","docId":"model/yaml"}],"collapsible":true,"href":"/next/model/"},{"type":"category","label":"Starwhale Runtime","collapsed":true,"items":[{"type":"link","label":"The runtime.yaml Specification","href":"/next/runtime/yaml","docId":"runtime/yaml"}],"collapsible":true,"href":"/next/runtime/"},{"type":"category","label":"Starwhale Dataset","collapsed":true,"items":[{"type":"link","label":"The dataset.yaml 
Specification","href":"/next/dataset/yaml","docId":"dataset/yaml"}],"collapsible":true,"href":"/next/dataset/"},{"type":"category","label":"Starwhale Model Evaluation","collapsed":true,"items":[{"type":"category","label":"Heterogeneous Devices","collapsed":true,"items":[{"type":"link","label":"Devices as Kubernetes nodes","href":"/next/evaluation/heterogeneous/node-able","docId":"evaluation/heterogeneous/node-able"},{"type":"link","label":"Virtual Kubelet as Kubernetes nodes","href":"/next/evaluation/heterogeneous/virtual-node","docId":"evaluation/heterogeneous/virtual-node"}],"collapsible":true,"href":"/next/evaluation/heterogeneous/node-able"}],"collapsible":true,"href":"/next/evaluation/"}],"collapsed":true,"collapsible":true},{"type":"category","label":"Reference","items":[{"type":"category","label":"Starwhale Client","items":[{"type":"link","label":"swcli instance","href":"/next/reference/swcli/instance","docId":"reference/swcli/instance"},{"type":"link","label":"swcli project","href":"/next/reference/swcli/project","docId":"reference/swcli/project"},{"type":"link","label":"swcli model","href":"/next/reference/swcli/model","docId":"reference/swcli/model"},{"type":"link","label":"swcli dataset","href":"/next/reference/swcli/dataset","docId":"reference/swcli/dataset"},{"type":"link","label":"swcli runtime","href":"/next/reference/swcli/runtime","docId":"reference/swcli/runtime"},{"type":"link","label":"swcli job","href":"/next/reference/swcli/job","docId":"reference/swcli/job"},{"type":"link","label":"swcli server","href":"/next/reference/swcli/server","docId":"reference/swcli/server"},{"type":"link","label":"Utility Commands","href":"/next/reference/swcli/utilities","docId":"reference/swcli/utilities"}],"collapsed":true,"collapsible":true,"href":"/next/reference/swcli/"},{"type":"category","label":"Python SDK","items":[{"type":"link","label":"Starwhale Dataset SDK","href":"/next/reference/sdk/dataset","docId":"reference/sdk/dataset"},{"type":"link","label":"Starwhale Data Types","href":"/next/reference/sdk/type","docId":"reference/sdk/type"},{"type":"link","label":"Starwhale Model Evaluation SDK","href":"/next/reference/sdk/evaluation","docId":"reference/sdk/evaluation"},{"type":"link","label":"Starwhale Model SDK","href":"/next/reference/sdk/model","docId":"reference/sdk/model"},{"type":"link","label":"Starwhale Job SDK","href":"/next/reference/sdk/job","docId":"reference/sdk/job"},{"type":"link","label":"swcli server","href":"/next/reference/swcli/server","docId":"reference/swcli/server"},{"type":"link","label":"Other SDK","href":"/next/reference/sdk/other","docId":"reference/sdk/other"}],"collapsed":true,"collapsible":true,"href":"/next/reference/sdk/overview"}],"collapsed":true,"collapsible":true},{"type":"link","label":"FAQs","href":"/next/faq/","docId":"faq/index"},{"type":"category","label":"Community","items":[{"type":"link","label":"Contribute to Starwhale","href":"/next/community/contribute","docId":"community/contribute"}],"collapsed":true,"collapsible":true}]},"docs":{"cloud/billing/billing":{"id":"cloud/billing/billing","title":"Billing Overview","description":"","sidebar":"mainSidebar"},"cloud/billing/bills":{"id":"cloud/billing/bills","title":"Billing Details","description":"","sidebar":"mainSidebar"},"cloud/billing/recharge":{"id":"cloud/billing/recharge","title":"Recharge and 
refund","description":"","sidebar":"mainSidebar"},"cloud/billing/refund":{"id":"cloud/billing/refund","title":"Refund","description":"","sidebar":"mainSidebar"},"cloud/billing/voucher":{"id":"cloud/billing/voucher","title":"Voucher","description":"","sidebar":"mainSidebar"},"cloud/index":{"id":"cloud/index","title":"Starwhale Cloud User Guide","description":"Starwhale Cloud is a service hosted on public cloud and operated by the Starwhale team. The access url is .","sidebar":"mainSidebar"},"community/contribute":{"id":"community/contribute","title":"Contribute to Starwhale","description":"Getting Involved/Contributing","sidebar":"mainSidebar"},"concepts/glossary":{"id":"concepts/glossary","title":"Starwhale Glossary","description":"On this page you find a list of important terminology used throughout the Starwhale documentation.","sidebar":"mainSidebar"},"concepts/index":{"id":"concepts/index","title":"Starwhale Common Concepts","description":"This section explains some basic concepts in Starwhale.","sidebar":"mainSidebar"},"concepts/names":{"id":"concepts/names","title":"Names in Starwhale","description":"Names mean project names, model names, dataset names, runtime names, and tag names.","sidebar":"mainSidebar"},"concepts/project":{"id":"concepts/project","title":"Project in Starwhale","description":"\\"Project\\" is the basic unit for organizing different resources like models, datasets, etc. You may use projects for different purposes. For example, you can create a project for a data scientist team, a product line, or a specific model. Users usually work on one or more projects in their daily lives.","sidebar":"mainSidebar"},"concepts/roles-permissions":{"id":"concepts/roles-permissions","title":"Roles and permissions in Starwhale","description":"Roles are used to assign permissions to users. Only Starwhale Server/Cloud has roles and permissions, and Starwhale Standalone does not.The Administrator role is automatically created and assigned to the user \\"admin\\". Some sensitive operations can only be performed by users with the Administrator role, for example, creating accounts in Starwhale Server.","sidebar":"mainSidebar"},"concepts/versioning":{"id":"concepts/versioning","title":"Resource versioning in Starwhale","description":"- Starwhale manages the history of all models, datasets, and runtimes. 
Every update to a specific resource appends a new version of the history.","sidebar":"mainSidebar"},"dataset/index":{"id":"dataset/index","title":"Starwhale Dataset User Guide","description":"overview","sidebar":"mainSidebar"},"dataset/yaml":{"id":"dataset/yaml","title":"The dataset.yaml Specification","description":"dataset.yaml is optional for the swcli dataset build command.","sidebar":"mainSidebar"},"evaluation/heterogeneous/node-able":{"id":"evaluation/heterogeneous/node-able","title":"Devices as Kubernetes nodes","description":"Characteristics","sidebar":"mainSidebar"},"evaluation/heterogeneous/virtual-node":{"id":"evaluation/heterogeneous/virtual-node","title":"Virtual Kubelet as Kubernetes nodes","description":"Introduction","sidebar":"mainSidebar"},"evaluation/index":{"id":"evaluation/index","title":"Starwhale Model Evaluation","description":"Design Overview","sidebar":"mainSidebar"},"examples/helloworld":{"id":"examples/helloworld","title":"Starwhale\'s Helloworld Example - Evaluating the KNN Algorithm on Handwritten Digit Recognition Tasks","description":"This tutorial will start with the installation of the Starwhale Client, and then introduce the process of writing evaluation code, creating datasets, debugging on Standalone instances, and finally running evaluations on Server instances.","sidebar":"mainSidebar"},"examples/index":{"id":"examples/index","title":"Examples","description":"- \ud83d\udd25 Helloworld: Cloud, Code","sidebar":"mainSidebar"},"faq/index":{"id":"faq/index","title":"FAQs","description":"Error \\"413 Client Error: Request Entity Too Large\\" when Copying Starwhale Models to Server","sidebar":"mainSidebar"},"getting-started/cloud":{"id":"getting-started/cloud","title":"Getting started with Starwhale Cloud","description":"Starwhale Cloud is hosted on Aliyun with the domain name . In the futher, we will launch the service on AWS with the domain name . It\'s important to note that these are two separate instances that are not interconnected, and accounts and data are not shared. You can choose either one to get started.","sidebar":"mainSidebar"},"getting-started/index":{"id":"getting-started/index","title":"Getting started","description":"Each deployment of Starwhale is called an instance. All instances can be managed by the Starwhale Client (swcli).","sidebar":"mainSidebar"},"getting-started/runtime":{"id":"getting-started/runtime","title":"Getting Started with Starwhale Runtime","description":"This article demonstrates how to build a Starwhale Runtime of the Pytorch environment and how to use it. 
This runtime can meet the dependency requirements of the six examples in Starwhale example/runtime/pytorch."},"getting-started/server":{"id":"getting-started/server","title":"Getting started with Starwhale Server","description":"Start Starwhale Server","sidebar":"mainSidebar"},"getting-started/standalone":{"id":"getting-started/standalone","title":"Getting started with Starwhale Standalone","description":"When the Starwhale Client (swcli) is installed, you are ready to use Starwhale Standalone.","sidebar":"mainSidebar"},"model/index":{"id":"model/index","title":"Starwhale Model","description":"overview","sidebar":"mainSidebar"},"model/yaml":{"id":"model/yaml","title":"The model.yaml Specification","description":"model.yaml is optional for swcli model build.","sidebar":"mainSidebar"},"reference/sdk/dataset":{"id":"reference/sdk/dataset","title":"Starwhale Dataset SDK","description":"dataset","sidebar":"mainSidebar"},"reference/sdk/evaluation":{"id":"reference/sdk/evaluation","title":"Starwhale Model Evaluation SDK","description":"@evaluation.predict","sidebar":"mainSidebar"},"reference/sdk/job":{"id":"reference/sdk/job","title":"Starwhale Job SDK","description":"job","sidebar":"mainSidebar"},"reference/sdk/model":{"id":"reference/sdk/model","title":"Starwhale Model SDK","description":"model.build","sidebar":"mainSidebar"},"reference/sdk/other":{"id":"reference/sdk/other","title":"Other SDK","description":"\\\\version","sidebar":"mainSidebar"},"reference/sdk/overview":{"id":"reference/sdk/overview","title":"Python SDK Overview","description":"Starwhale provides a series of Python SDKs to help manage datasets, models, evaluations etc. Using the Starwhale Python SDK can make it easier to complete your ML/DL development tasks.","sidebar":"mainSidebar"},"reference/sdk/type":{"id":"reference/sdk/type","title":"Starwhale Data Types","description":"COCOObjectAnnotation","sidebar":"mainSidebar"},"reference/swcli/dataset":{"id":"reference/swcli/dataset","title":"swcli dataset","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/index":{"id":"reference/swcli/index","title":"Overview","description":"Usage","sidebar":"mainSidebar"},"reference/swcli/instance":{"id":"reference/swcli/instance","title":"swcli instance","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/job":{"id":"reference/swcli/job","title":"swcli job","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/model":{"id":"reference/swcli/model","title":"swcli model","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/project":{"id":"reference/swcli/project","title":"swcli project","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/runtime":{"id":"reference/swcli/runtime","title":"swcli runtime","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/server":{"id":"reference/swcli/server","title":"swcli server","description":"Overview","sidebar":"mainSidebar"},"reference/swcli/utilities":{"id":"reference/swcli/utilities","title":"Utility Commands","description":"swcli gc","sidebar":"mainSidebar"},"runtime/index":{"id":"runtime/index","title":"Starwhale Runtime","description":"overview","sidebar":"mainSidebar"},"runtime/yaml":{"id":"runtime/yaml","title":"The runtime.yaml Specification","description":"runtime.yaml is the configuration file that defines the properties of the Starwhale Runtime. 
runtime.yaml is required for the yaml mode of the swcli runtime build command.","sidebar":"mainSidebar"},"server/guides/server_admin":{"id":"server/guides/server_admin","title":"Controller Admin Settings","description":"Superuser Password Reset","sidebar":"mainSidebar"},"server/index":{"id":"server/index","title":"Starwhale Server User Guide","description":"To install/update/uninstall Starwhale Server, see the Starwhale Server Installation Guide.","sidebar":"mainSidebar"},"server/installation/docker":{"id":"server/installation/docker","title":"Install Starwhale Server with Docker","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/docker-compose":{"id":"server/installation/docker-compose","title":"Install Starwhale Server with Docker Compose","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/index":{"id":"server/installation/index","title":"Starwhale Server Installation Guide","description":"Starwhale Server is delivered as a Docker image, which can be run with Docker directly or deployed to a Kubernetes cluster or Minikube.","sidebar":"mainSidebar"},"server/installation/k8s-cluster":{"id":"server/installation/k8s-cluster","title":"Install Starwhale Server to Kubernetes Cluster","description":"In a private deployment scenario, Starwhale Server can be deployed to a Kubernetes cluster using Helm. Starwhale Server relies on two fundamental infrastructure dependencies: MySQL database and object storage.","sidebar":"mainSidebar"},"server/installation/minikube":{"id":"server/installation/minikube","title":"Install Starwhale Server with Minikube","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/server-start":{"id":"server/installation/server-start","title":"Launch Starwhale Server with the \\"swcli server start\\" command","description":"Prerequisites","sidebar":"mainSidebar"},"server/installation/starwhale_env":{"id":"server/installation/starwhale_env","title":"Starwhale Server Environment Example","description":"","sidebar":"mainSidebar"},"server/project":{"id":"server/project","title":"How to Organize and Manage Resources with Starwhale Projects","description":"Project is the basic unit for organizing and managing resources (such as models, datasets, runtime environments, etc.). You can create and manage projects based on your needs. For example, you can create projects by business team, product line, or models. One user can create and participate in one or more projects.","sidebar":"mainSidebar"},"swcli/config":{"id":"swcli/config","title":"Configuration","description":"Standalone Instance is installed on the user\'s laptop or development server, providing isolation at the level of Linux/macOX users. Users can install the Starwhale Python package using the pip command and execute any swcli command. After that, they can view their Starwhale configuration in ~/.config/starwhale/config.yaml. In the vast majority of cases, users do not need to manually modify the config.yaml file.","sidebar":"mainSidebar"},"swcli/index":{"id":"swcli/index","title":"Starwhale Client (swcli) User Guide","description":"The Starwhale Client (swcli) is a command-line tool that enables you to interact with Starwhale instances. You can use swcli to complete almost all tasks in Starwhale. swcli is written in pure python3 (require Python 3.7 | 3.11) so that it can be easily installed by the pip command. 
Currently, swcli only supports Linux and macOS, Windows is coming soon.","sidebar":"mainSidebar"},"swcli/installation":{"id":"swcli/installation","title":"Installation Guide","description":"We can use swcli to complete all tasks for Starwhale Instances. swcli is written by pure python3, which can be installed easily by the pip command.Here are some installation tips that can help you get a cleaner, unambiguous, no dependency conflicts swcli python environment.","sidebar":"mainSidebar"},"swcli/swignore":{"id":"swcli/swignore","title":"About the .swignore file","description":"The .swignore file is similar to .gitignore, .dockerignore, and other files used to define ignored files or dirs. The .swignore files mainly used in the Starwhale Model building process. By default, the swcli model build command or starwhale.model.build() Python SDK will traverse all files in the specified directory and automatically exclude certain known files or directories that are not suitable for inclusion in the model package.","sidebar":"mainSidebar"},"swcli/uri":{"id":"swcli/uri","title":"Starwhale Resources URI","description":"Resource URI is widely used in Starwhale client commands. The URI can refer to a resource in the local instance or any other resource in a remote instance. In this way, the Starwhale client can easily manipulate any resource.","sidebar":"mainSidebar"},"what-is-starwhale":{"id":"what-is-starwhale","title":"What is Starwhale","description":"Overview","sidebar":"mainSidebar"}}}')}}]); \ No newline at end of file diff --git a/assets/js/c2728190.3680afbb.js b/assets/js/c2728190.3680afbb.js deleted file mode 100644 index 9846f020e..000000000 --- a/assets/js/c2728190.3680afbb.js +++ /dev/null @@ -1 +0,0 @@ -"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[5689],{3905:(e,t,n)=>{n.d(t,{Zo:()=>p,kt:()=>f});var r=n(67294);function a(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function o(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function i(e){for(var t=1;t=0||(a[n]=e[n]);return a}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(a[n]=e[n])}return a}var l=r.createContext({}),s=function(e){var t=r.useContext(l),n=t;return e&&(n="function"==typeof e?e(t):i(i({},t),e)),n},p=function(e){var t=s(e.components);return r.createElement(l.Provider,{value:t},e.children)},m={inlineCode:"code",wrapper:function(e){var t=e.children;return r.createElement(r.Fragment,{},t)}},u=r.forwardRef((function(e,t){var n=e.components,a=e.mdxType,o=e.originalType,l=e.parentName,p=c(e,["components","mdxType","originalType","parentName"]),u=s(n),f=a,d=u["".concat(l,".").concat(f)]||u[f]||m[f]||o;return n?r.createElement(d,i(i({ref:t},p),{},{components:n})):r.createElement(d,i({ref:t},p))}));function f(e,t){var n=arguments,a=t&&t.mdxType;if("string"==typeof e||a){var o=n.length,i=new Array(o);i[0]=u;var c={};for(var l in t)hasOwnProperty.call(t,l)&&(c[l]=t[l]);c.originalType=e,c.mdxType="string"==typeof e?e:a,i[1]=c;for(var s=2;s{n.r(t),n.d(t,{assets:()=>l,contentTitle:()=>i,default:()=>m,frontMatter:()=>o,metadata:()=>c,toc:()=>s});var r=n(83117),a=(n(67294),n(3905));const o={title:"Starwhale Common Concepts"},i=void 
0,c={unversionedId:"concepts/index",id:"concepts/index",title:"Starwhale Common Concepts",description:"This section explains some basic concepts in Starwhale.",source:"@site/docs/concepts/index.md",sourceDirName:"concepts",slug:"/concepts/",permalink:"/next/concepts/",draft:!1,editUrl:"https://github.com/star-whale/docs/tree/main/docs/concepts/index.md",tags:[],version:"current",frontMatter:{title:"Starwhale Common Concepts"},sidebar:"mainSidebar",previous:{title:"Starwhale's Helloworld Example - Evaluating the KNN Algorithm on Handwritten Digit Recognition Tasks",permalink:"/next/examples/helloworld"},next:{title:"Names in Starwhale",permalink:"/next/concepts/names"}},l={},s=[],p={toc:s};function m(e){let{components:t,...n}=e;return(0,a.kt)("wrapper",(0,r.Z)({},p,n,{components:t,mdxType:"MDXLayout"}),(0,a.kt)("p",null,"This section explains some basic concepts in Starwhale."),(0,a.kt)("ul",null,(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"names"},"Names in Starwhale")),(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"project"},"Project in Starwhale")),(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"roles-permissions"},"Roles and permissions in Starwhale")),(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"versioning"},"Resource versioning in Starwhale"))))}m.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/c2728190.96957c60.js b/assets/js/c2728190.96957c60.js new file mode 100644 index 000000000..4641cc7bf --- /dev/null +++ b/assets/js/c2728190.96957c60.js @@ -0,0 +1 @@ +"use strict";(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[5689],{3905:(e,t,r)=>{r.d(t,{Zo:()=>p,kt:()=>f});var n=r(67294);function a(e,t,r){return t in e?Object.defineProperty(e,t,{value:r,enumerable:!0,configurable:!0,writable:!0}):e[t]=r,e}function o(e,t){var r=Object.keys(e);if(Object.getOwnPropertySymbols){var n=Object.getOwnPropertySymbols(e);t&&(n=n.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),r.push.apply(r,n)}return r}function i(e){for(var t=1;t=0||(a[r]=e[r]);return a}(e,t);if(Object.getOwnPropertySymbols){var o=Object.getOwnPropertySymbols(e);for(n=0;n=0||Object.prototype.propertyIsEnumerable.call(e,r)&&(a[r]=e[r])}return a}var c=n.createContext({}),s=function(e){var t=n.useContext(c),r=t;return e&&(r="function"==typeof e?e(t):i(i({},t),e)),r},p=function(e){var t=s(e.components);return n.createElement(c.Provider,{value:t},e.children)},u={inlineCode:"code",wrapper:function(e){var t=e.children;return n.createElement(n.Fragment,{},t)}},m=n.forwardRef((function(e,t){var r=e.components,a=e.mdxType,o=e.originalType,c=e.parentName,p=l(e,["components","mdxType","originalType","parentName"]),m=s(r),f=a,d=m["".concat(c,".").concat(f)]||m[f]||u[f]||o;return r?n.createElement(d,i(i({ref:t},p),{},{components:r})):n.createElement(d,i({ref:t},p))}));function f(e,t){var r=arguments,a=t&&t.mdxType;if("string"==typeof e||a){var o=r.length,i=new Array(o);i[0]=m;var l={};for(var c in t)hasOwnProperty.call(t,c)&&(l[c]=t[c]);l.originalType=e,l.mdxType="string"==typeof e?e:a,i[1]=l;for(var s=2;s{r.r(t),r.d(t,{assets:()=>c,contentTitle:()=>i,default:()=>u,frontMatter:()=>o,metadata:()=>l,toc:()=>s});var n=r(83117),a=(r(67294),r(3905));const o={title:"Starwhale Common Concepts"},i=void 0,l={unversionedId:"concepts/index",id:"concepts/index",title:"Starwhale Common Concepts",description:"This section explains some basic concepts in 
Starwhale.",source:"@site/docs/concepts/index.md",sourceDirName:"concepts",slug:"/concepts/",permalink:"/next/concepts/",draft:!1,editUrl:"https://github.com/star-whale/docs/tree/main/docs/concepts/index.md",tags:[],version:"current",frontMatter:{title:"Starwhale Common Concepts"},sidebar:"mainSidebar",previous:{title:"Starwhale's Helloworld Example - Evaluating the KNN Algorithm on Handwritten Digit Recognition Tasks",permalink:"/next/examples/helloworld"},next:{title:"Starwhale Glossary",permalink:"/next/concepts/glossary"}},c={},s=[],p={toc:s};function u(e){let{components:t,...r}=e;return(0,a.kt)("wrapper",(0,n.Z)({},p,r,{components:t,mdxType:"MDXLayout"}),(0,a.kt)("p",null,"This section explains some basic concepts in Starwhale."),(0,a.kt)("ul",null,(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"names"},"Names in Starwhale")),(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"project"},"Project in Starwhale")),(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"roles-permissions"},"Roles and permissions in Starwhale")),(0,a.kt)("li",{parentName:"ul"},(0,a.kt)("a",{parentName:"li",href:"versioning"},"Resource versioning in Starwhale"))))}u.isMDXComponent=!0}}]); \ No newline at end of file diff --git a/assets/js/main.5e6ca8f1.js b/assets/js/main.5e6ca8f1.js new file mode 100644 index 000000000..c7da17dbe --- /dev/null +++ b/assets/js/main.5e6ca8f1.js @@ -0,0 +1,2 @@ +/*! For license information please see main.5e6ca8f1.js.LICENSE.txt */ +(self.webpackChunkstarwhale_docs=self.webpackChunkstarwhale_docs||[]).push([[40179],{34334:(e,t,n)=>{"use strict";function r(e){var t,n,i="";if("string"==typeof e||"number"==typeof e)i+=e;else if("object"==typeof e)if(Array.isArray(e))for(t=0;ti});const i=function(){for(var e,t,n=0,i="";n{"use strict";n.d(t,{Z:()=>a});var r=function(){var e=/(?:^|\s)lang(?:uage)?-([\w-]+)(?=\s|$)/i,t=0,n={},r={util:{encode:function e(t){return t instanceof i?new i(t.type,e(t.content),t.alias):Array.isArray(t)?t.map(e):t.replace(/&/g,"&").replace(/=u.reach);x+=S.value.length,S=S.next){var k=S.value;if(t.length>e.length)return;if(!(k instanceof i)){var E,T=1;if(v){if(!(E=a(_,x,e,g))||E.index>=e.length)break;var C=E.index,A=E.index+E[0].length,P=x;for(P+=S.value.length;C>=P;)P+=(S=S.next).value.length;if(x=P-=S.value.length,S.value instanceof i)continue;for(var N=S;N!==t.tail&&(Pu.reach&&(u.reach=R);var j=S.prev;if(L&&(j=c(t,j,L),x+=L.length),l(t,j,T),S=c(t,j,new i(p,h?r.tokenize(O,h):O,y,O)),I&&c(t,S,I),T>1){var M={cause:p+","+f,reach:R};o(e,t,n,S.prev,x,M),u&&M.reach>u.reach&&(u.reach=M.reach)}}}}}}function s(){var e={value:null,prev:null,next:null},t={value:null,prev:e,next:null};e.next=t,this.head=e,this.tail=t,this.length=0}function c(e,t,n){var r=t.next,i={value:n,prev:t,next:r};return t.next=i,r.prev=i,e.length++,i}function l(e,t,n){for(var 
r=t.next,i=0;i"+a.content+""},r}(),i=r;r.default=r,i.languages.markup={comment:{pattern://,greedy:!0},prolog:{pattern:/<\?[\s\S]+?\?>/,greedy:!0},doctype:{pattern:/"'[\]]|"[^"]*"|'[^']*')+(?:\[(?:[^<"'\]]|"[^"]*"|'[^']*'|<(?!!--)|)*\]\s*)?>/i,greedy:!0,inside:{"internal-subset":{pattern:/(^[^\[]*\[)[\s\S]+(?=\]>$)/,lookbehind:!0,greedy:!0,inside:null},string:{pattern:/"[^"]*"|'[^']*'/,greedy:!0},punctuation:/^$|[[\]]/,"doctype-tag":/^DOCTYPE/i,name:/[^\s<>'"]+/}},cdata:{pattern://i,greedy:!0},tag:{pattern:/<\/?(?!\d)[^\s>\/=$<%]+(?:\s(?:\s*[^\s>\/=]+(?:\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))|(?=[\s/>])))+)?\s*\/?>/,greedy:!0,inside:{tag:{pattern:/^<\/?[^\s>\/]+/,inside:{punctuation:/^<\/?/,namespace:/^[^\s>\/:]+:/}},"special-attr":[],"attr-value":{pattern:/=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+)/,inside:{punctuation:[{pattern:/^=/,alias:"attr-equals"},/"|'/]}},punctuation:/\/?>/,"attr-name":{pattern:/[^\s>\/]+/,inside:{namespace:/^[^\s>\/:]+:/}}}},entity:[{pattern:/&[\da-z]{1,8};/i,alias:"named-entity"},/&#x?[\da-f]{1,8};/i]},i.languages.markup.tag.inside["attr-value"].inside.entity=i.languages.markup.entity,i.languages.markup.doctype.inside["internal-subset"].inside=i.languages.markup,i.hooks.add("wrap",(function(e){"entity"===e.type&&(e.attributes.title=e.content.replace(/&/,"&"))})),Object.defineProperty(i.languages.markup.tag,"addInlined",{value:function(e,t){var n={};n["language-"+t]={pattern:/(^$)/i,lookbehind:!0,inside:i.languages[t]},n.cdata=/^$/i;var r={"included-cdata":{pattern://i,inside:n}};r["language-"+t]={pattern:/[\s\S]+/,inside:i.languages[t]};var a={};a[e]={pattern:RegExp(/(<__[^>]*>)(?:))*\]\]>|(?!)/.source.replace(/__/g,(function(){return e})),"i"),lookbehind:!0,greedy:!0,inside:r},i.languages.insertBefore("markup","cdata",a)}}),Object.defineProperty(i.languages.markup.tag,"addAttribute",{value:function(e,t){i.languages.markup.tag.inside["special-attr"].push({pattern:RegExp(/(^|["'\s])/.source+"(?:"+e+")"+/\s*=\s*(?:"[^"]*"|'[^']*'|[^\s'">=]+(?=[\s>]))/.source,"i"),lookbehind:!0,inside:{"attr-name":/^[^\s=]+/,"attr-value":{pattern:/=[\s\S]+/,inside:{value:{pattern:/(^=\s*(["']|(?!["'])))\S[\s\S]*(?=\2$)/,lookbehind:!0,alias:[t,"language-"+t],inside:i.languages[t]},punctuation:[{pattern:/^=/,alias:"attr-equals"},/"|'/]}}}})}}),i.languages.html=i.languages.markup,i.languages.mathml=i.languages.markup,i.languages.svg=i.languages.markup,i.languages.xml=i.languages.extend("markup",{}),i.languages.ssml=i.languages.xml,i.languages.atom=i.languages.xml,i.languages.rss=i.languages.xml,function(e){var 
t="\\b(?:BASH|BASHOPTS|BASH_ALIASES|BASH_ARGC|BASH_ARGV|BASH_CMDS|BASH_COMPLETION_COMPAT_DIR|BASH_LINENO|BASH_REMATCH|BASH_SOURCE|BASH_VERSINFO|BASH_VERSION|COLORTERM|COLUMNS|COMP_WORDBREAKS|DBUS_SESSION_BUS_ADDRESS|DEFAULTS_PATH|DESKTOP_SESSION|DIRSTACK|DISPLAY|EUID|GDMSESSION|GDM_LANG|GNOME_KEYRING_CONTROL|GNOME_KEYRING_PID|GPG_AGENT_INFO|GROUPS|HISTCONTROL|HISTFILE|HISTFILESIZE|HISTSIZE|HOME|HOSTNAME|HOSTTYPE|IFS|INSTANCE|JOB|LANG|LANGUAGE|LC_ADDRESS|LC_ALL|LC_IDENTIFICATION|LC_MEASUREMENT|LC_MONETARY|LC_NAME|LC_NUMERIC|LC_PAPER|LC_TELEPHONE|LC_TIME|LESSCLOSE|LESSOPEN|LINES|LOGNAME|LS_COLORS|MACHTYPE|MAILCHECK|MANDATORY_PATH|NO_AT_BRIDGE|OLDPWD|OPTERR|OPTIND|ORBIT_SOCKETDIR|OSTYPE|PAPERSIZE|PATH|PIPESTATUS|PPID|PS1|PS2|PS3|PS4|PWD|RANDOM|REPLY|SECONDS|SELINUX_INIT|SESSION|SESSIONTYPE|SESSION_MANAGER|SHELL|SHELLOPTS|SHLVL|SSH_AUTH_SOCK|TERM|UID|UPSTART_EVENTS|UPSTART_INSTANCE|UPSTART_JOB|UPSTART_SESSION|USER|WINDOWID|XAUTHORITY|XDG_CONFIG_DIRS|XDG_CURRENT_DESKTOP|XDG_DATA_DIRS|XDG_GREETER_DATA_DIR|XDG_MENU_PREFIX|XDG_RUNTIME_DIR|XDG_SEAT|XDG_SEAT_PATH|XDG_SESSION_DESKTOP|XDG_SESSION_ID|XDG_SESSION_PATH|XDG_SESSION_TYPE|XDG_VTNR|XMODIFIERS)\\b",n={pattern:/(^(["']?)\w+\2)[ \t]+\S.*/,lookbehind:!0,alias:"punctuation",inside:null},r={bash:n,environment:{pattern:RegExp("\\$"+t),alias:"constant"},variable:[{pattern:/\$?\(\([\s\S]+?\)\)/,greedy:!0,inside:{variable:[{pattern:/(^\$\(\([\s\S]+)\)\)/,lookbehind:!0},/^\$\(\(/],number:/\b0x[\dA-Fa-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee]-?\d+)?/,operator:/--|\+\+|\*\*=?|<<=?|>>=?|&&|\|\||[=!+\-*/%<>^&|]=?|[?~:]/,punctuation:/\(\(?|\)\)?|,|;/}},{pattern:/\$\((?:\([^)]+\)|[^()])+\)|`[^`]+`/,greedy:!0,inside:{variable:/^\$\(|^`|\)$|`$/}},{pattern:/\$\{[^}]+\}/,greedy:!0,inside:{operator:/:[-=?+]?|[!\/]|##?|%%?|\^\^?|,,?/,punctuation:/[\[\]]/,environment:{pattern:RegExp("(\\{)"+t),lookbehind:!0,alias:"constant"}}},/\$(?:\w+|[#?*!@$])/],entity:/\\(?:[abceEfnrtv\\"]|O?[0-7]{1,3}|U[0-9a-fA-F]{8}|u[0-9a-fA-F]{4}|x[0-9a-fA-F]{1,2})/};e.languages.bash={shebang:{pattern:/^#!\s*\/.*/,alias:"important"},comment:{pattern:/(^|[^"{\\$])#.*/,lookbehind:!0},"function-name":[{pattern:/(\bfunction\s+)[\w-]+(?=(?:\s*\(?:\s*\))?\s*\{)/,lookbehind:!0,alias:"function"},{pattern:/\b[\w-]+(?=\s*\(\s*\)\s*\{)/,alias:"function"}],"for-or-select":{pattern:/(\b(?:for|select)\s+)\w+(?=\s+in\s)/,alias:"variable",lookbehind:!0},"assign-left":{pattern:/(^|[\s;|&]|[<>]\()\w+(?=\+?=)/,inside:{environment:{pattern:RegExp("(^|[\\s;|&]|[<>]\\()"+t),lookbehind:!0,alias:"constant"}},alias:"variable",lookbehind:!0},string:[{pattern:/((?:^|[^<])<<-?\s*)(\w+)\s[\s\S]*?(?:\r?\n|\r)\2/,lookbehind:!0,greedy:!0,inside:r},{pattern:/((?:^|[^<])<<-?\s*)(["'])(\w+)\2\s[\s\S]*?(?:\r?\n|\r)\3/,lookbehind:!0,greedy:!0,inside:{bash:n}},{pattern:/(^|[^\\](?:\\\\)*)"(?:\\[\s\S]|\$\([^)]+\)|\$(?!\()|`[^`]+`|[^"\\`$])*"/,lookbehind:!0,greedy:!0,inside:r},{pattern:/(^|[^$\\])'[^']*'/,lookbehind:!0,greedy:!0},{pattern:/\$'(?:[^'\\]|\\[\s\S])*'/,greedy:!0,inside:{entity:r.entity}}],environment:{pattern:RegExp("\\$?"+t),alias:"constant"},variable:r.variable,function:{pattern:/(^|[\s;|&]|[<>]\()(?:add|apropos|apt|apt-cache|apt-get|aptitude|aspell|automysqlbackup|awk|basename|bash|bc|bconsole|bg|bzip2|cal|cat|cfdisk|chgrp|chkconfig|chmod|chown|chroot|cksum|clear|cmp|column|comm|composer|cp|cron|crontab|csplit|curl|cut|date|dc|dd|ddrescue|debootstrap|df|diff|diff3|dig|dir|dircolors|dirname|dirs|dmesg|docker|docker-compose|du|egrep|eject|env|ethtool|expand|expect|expr|fdformat|fdisk|fg|fgrep|file|find|fmt|fold|format
|free|fsck|ftp|fuser|gawk|git|gparted|grep|groupadd|groupdel|groupmod|groups|grub-mkconfig|gzip|halt|head|hg|history|host|hostname|htop|iconv|id|ifconfig|ifdown|ifup|import|install|ip|jobs|join|kill|killall|less|link|ln|locate|logname|logrotate|look|lpc|lpr|lprint|lprintd|lprintq|lprm|ls|lsof|lynx|make|man|mc|mdadm|mkconfig|mkdir|mke2fs|mkfifo|mkfs|mkisofs|mknod|mkswap|mmv|more|most|mount|mtools|mtr|mutt|mv|nano|nc|netstat|nice|nl|node|nohup|notify-send|npm|nslookup|op|open|parted|passwd|paste|pathchk|ping|pkill|pnpm|podman|podman-compose|popd|pr|printcap|printenv|ps|pushd|pv|quota|quotacheck|quotactl|ram|rar|rcp|reboot|remsync|rename|renice|rev|rm|rmdir|rpm|rsync|scp|screen|sdiff|sed|sendmail|seq|service|sftp|sh|shellcheck|shuf|shutdown|sleep|slocate|sort|split|ssh|stat|strace|su|sudo|sum|suspend|swapon|sync|tac|tail|tar|tee|time|timeout|top|touch|tr|traceroute|tsort|tty|umount|uname|unexpand|uniq|units|unrar|unshar|unzip|update-grub|uptime|useradd|userdel|usermod|users|uudecode|uuencode|v|vcpkg|vdir|vi|vim|virsh|vmstat|wait|watch|wc|wget|whereis|which|who|whoami|write|xargs|xdg-open|yarn|yes|zenity|zip|zsh|zypper)(?=$|[)\s;|&])/,lookbehind:!0},keyword:{pattern:/(^|[\s;|&]|[<>]\()(?:case|do|done|elif|else|esac|fi|for|function|if|in|select|then|until|while)(?=$|[)\s;|&])/,lookbehind:!0},builtin:{pattern:/(^|[\s;|&]|[<>]\()(?:\.|:|alias|bind|break|builtin|caller|cd|command|continue|declare|echo|enable|eval|exec|exit|export|getopts|hash|help|let|local|logout|mapfile|printf|pwd|read|readarray|readonly|return|set|shift|shopt|source|test|times|trap|type|typeset|ulimit|umask|unalias|unset)(?=$|[)\s;|&])/,lookbehind:!0,alias:"class-name"},boolean:{pattern:/(^|[\s;|&]|[<>]\()(?:false|true)(?=$|[)\s;|&])/,lookbehind:!0},"file-descriptor":{pattern:/\B&\d\b/,alias:"important"},operator:{pattern:/\d?<>|>\||\+=|=[=~]?|!=?|<<[<-]?|[&\d]?>>|\d[<>]&?|[<>][&=]?|&[>&]?|\|[&|]?/,inside:{"file-descriptor":{pattern:/^\d/,alias:"important"}}},punctuation:/\$?\(\(?|\)\)?|\.\.|[{}[\];\\]/,number:{pattern:/(^|\s)(?:[1-9]\d*|0)(?:[.,]\d+)?\b/,lookbehind:!0}},n.inside=e.languages.bash;for(var i=["comment","function-name","for-or-select","assign-left","string","environment","function","keyword","builtin","boolean","file-descriptor","operator","punctuation","number"],a=r.variable[1].inside,o=0;o]=?|[!=]=?=?|--?|\+\+?|&&?|\|\|?|[?*/~^%]/,punctuation:/[{}[\];(),.:]/},i.languages.c=i.languages.extend("clike",{comment:{pattern:/\/\/(?:[^\r\n\\]|\\(?:\r\n?|\n|(?![\r\n])))*|\/\*[\s\S]*?(?:\*\/|$)/,greedy:!0},string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"/,greedy:!0},"class-name":{pattern:/(\b(?:enum|struct)\s+(?:__attribute__\s*\(\([\s\S]*?\)\)\s*)?)\w+|\b[a-z]\w*_t\b/,lookbehind:!0},keyword:/\b(?:_Alignas|_Alignof|_Atomic|_Bool|_Complex|_Generic|_Imaginary|_Noreturn|_Static_assert|_Thread_local|__attribute__|asm|auto|break|case|char|const|continue|default|do|double|else|enum|extern|float|for|goto|if|inline|int|long|register|return|short|signed|sizeof|static|struct|switch|typedef|typeof|union|unsigned|void|volatile|while)\b/,function:/\b[a-z_]\w*(?=\s*\()/i,number:/(?:\b0x(?:[\da-f]+(?:\.[\da-f]*)?|\.[\da-f]+)(?:p[+-]?\d+)?|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:e[+-]?\d+)?)[ful]{0,4}/i,operator:/>>=?|<<=?|->|([-+&|:])\1|[?:~]|[-+*/%&|^!=<>]=?/}),i.languages.insertBefore("c","string",{char:{pattern:/'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n]){0,32}'/,greedy:!0}}),i.languages.insertBefore("c","string",{macro:{pattern:/(^[\t 
]*)#\s*[a-z](?:[^\r\n\\/]|\/(?!\*)|\/\*(?:[^*]|\*(?!\/))*\*\/|\\(?:\r\n|[\s\S]))*/im,lookbehind:!0,greedy:!0,alias:"property",inside:{string:[{pattern:/^(#\s*include\s*)<[^>]+>/,lookbehind:!0},i.languages.c.string],char:i.languages.c.char,comment:i.languages.c.comment,"macro-name":[{pattern:/(^#\s*define\s+)\w+\b(?!\()/i,lookbehind:!0},{pattern:/(^#\s*define\s+)\w+\b(?=\()/i,lookbehind:!0,alias:"function"}],directive:{pattern:/^(#\s*)[a-z]+/,lookbehind:!0,alias:"keyword"},"directive-hash":/^#/,punctuation:/##|\\(?=[\r\n])/,expression:{pattern:/\S[\s\S]*/,inside:i.languages.c}}}}),i.languages.insertBefore("c","function",{constant:/\b(?:EOF|NULL|SEEK_CUR|SEEK_END|SEEK_SET|__DATE__|__FILE__|__LINE__|__TIMESTAMP__|__TIME__|__func__|stderr|stdin|stdout)\b/}),delete i.languages.c.boolean,function(e){var t=/\b(?:alignas|alignof|asm|auto|bool|break|case|catch|char|char16_t|char32_t|char8_t|class|co_await|co_return|co_yield|compl|concept|const|const_cast|consteval|constexpr|constinit|continue|decltype|default|delete|do|double|dynamic_cast|else|enum|explicit|export|extern|final|float|for|friend|goto|if|import|inline|int|int16_t|int32_t|int64_t|int8_t|long|module|mutable|namespace|new|noexcept|nullptr|operator|override|private|protected|public|register|reinterpret_cast|requires|return|short|signed|sizeof|static|static_assert|static_cast|struct|switch|template|this|thread_local|throw|try|typedef|typeid|typename|uint16_t|uint32_t|uint64_t|uint8_t|union|unsigned|using|virtual|void|volatile|wchar_t|while)\b/,n=/\b(?!)\w+(?:\s*\.\s*\w+)*\b/.source.replace(//g,(function(){return t.source}));e.languages.cpp=e.languages.extend("c",{"class-name":[{pattern:RegExp(/(\b(?:class|concept|enum|struct|typename)\s+)(?!)\w+/.source.replace(//g,(function(){return t.source}))),lookbehind:!0},/\b[A-Z]\w*(?=\s*::\s*\w+\s*\()/,/\b[A-Z_]\w*(?=\s*::\s*~\w+\s*\()/i,/\b\w+(?=\s*<(?:[^<>]|<(?:[^<>]|<[^<>]*>)*>)*>\s*::\s*\w+\s*\()/],keyword:t,number:{pattern:/(?:\b0b[01']+|\b0x(?:[\da-f']+(?:\.[\da-f']*)?|\.[\da-f']+)(?:p[+-]?[\d']+)?|(?:\b[\d']+(?:\.[\d']*)?|\B\.[\d']+)(?:e[+-]?[\d']+)?)[ful]{0,4}/i,greedy:!0},operator:/>>=?|<<=?|->|--|\+\+|&&|\|\||[?:~]|<=>|[-+*/%&|^!=<>]=?|\b(?:and|and_eq|bitand|bitor|not|not_eq|or|or_eq|xor|xor_eq)\b/,boolean:/\b(?:false|true)\b/}),e.languages.insertBefore("cpp","string",{module:{pattern:RegExp(/(\b(?:import|module)\s+)/.source+"(?:"+/"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|<[^<>\r\n]*>/.source+"|"+/(?:\s*:\s*)?|:\s*/.source.replace(//g,(function(){return n}))+")"),lookbehind:!0,greedy:!0,inside:{string:/^[<"][\s\S]+/,operator:/:/,punctuation:/\./}},"raw-string":{pattern:/R"([^()\\ ]{0,16})\([\s\S]*?\)\1"/,alias:"string",greedy:!0}}),e.languages.insertBefore("cpp","keyword",{"generic-function":{pattern:/\b(?!operator\b)[a-z_]\w*\s*<(?:[^<>]|<[^<>]*>)*>(?=\s*\()/i,inside:{function:/^\w+/,generic:{pattern:/<[\s\S]+/,alias:"class-name",inside:e.languages.cpp}}}}),e.languages.insertBefore("cpp","operator",{"double-colon":{pattern:/::/,alias:"punctuation"}}),e.languages.insertBefore("cpp","class-name",{"base-clause":{pattern:/(\b(?:class|struct)\s+\w+\s*:\s*)[^;{}"'\s]+(?:\s+[^;{}"'\s]+)*(?=\s*[;{])/,lookbehind:!0,greedy:!0,inside:e.languages.extend("cpp",{})}}),e.languages.insertBefore("inside","double-colon",{"class-name":/\b[a-z_]\w*\b(?!\s*::)/i},e.languages.cpp["base-clause"])}(i),function(e){var 
t=/(?:"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"|'(?:\\(?:\r\n|[\s\S])|[^'\\\r\n])*')/;e.languages.css={comment:/\/\*[\s\S]*?\*\//,atrule:{pattern:/@[\w-](?:[^;{\s]|\s+(?![\s{]))*(?:;|(?=\s*\{))/,inside:{rule:/^@[\w-]+/,"selector-function-argument":{pattern:/(\bselector\s*\(\s*(?![\s)]))(?:[^()\s]|\s+(?![\s)])|\((?:[^()]|\([^()]*\))*\))+(?=\s*\))/,lookbehind:!0,alias:"selector"},keyword:{pattern:/(^|[^\w-])(?:and|not|only|or)(?![\w-])/,lookbehind:!0}}},url:{pattern:RegExp("\\burl\\((?:"+t.source+"|"+/(?:[^\\\r\n()"']|\\[\s\S])*/.source+")\\)","i"),greedy:!0,inside:{function:/^url/i,punctuation:/^\(|\)$/,string:{pattern:RegExp("^"+t.source+"$"),alias:"url"}}},selector:{pattern:RegExp("(^|[{}\\s])[^{}\\s](?:[^{};\"'\\s]|\\s+(?![\\s{])|"+t.source+")*(?=\\s*\\{)"),lookbehind:!0},string:{pattern:t,greedy:!0},property:{pattern:/(^|[^-\w\xA0-\uFFFF])(?!\s)[-_a-z\xA0-\uFFFF](?:(?!\s)[-\w\xA0-\uFFFF])*(?=\s*:)/i,lookbehind:!0},important:/!important\b/i,function:{pattern:/(^|[^-a-z0-9])[-a-z0-9]+(?=\()/i,lookbehind:!0},punctuation:/[(){};:,]/},e.languages.css.atrule.inside.rest=e.languages.css;var n=e.languages.markup;n&&(n.tag.addInlined("style","css"),n.tag.addAttribute("style","css"))}(i),function(e){var t,n=/("|')(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/;e.languages.css.selector={pattern:e.languages.css.selector.pattern,lookbehind:!0,inside:t={"pseudo-element":/:(?:after|before|first-letter|first-line|selection)|::[-\w]+/,"pseudo-class":/:[-\w]+/,class:/\.[-\w]+/,id:/#[-\w]+/,attribute:{pattern:RegExp("\\[(?:[^[\\]\"']|"+n.source+")*\\]"),greedy:!0,inside:{punctuation:/^\[|\]$/,"case-sensitivity":{pattern:/(\s)[si]$/i,lookbehind:!0,alias:"keyword"},namespace:{pattern:/^(\s*)(?:(?!\s)[-*\w\xA0-\uFFFF])*\|(?!=)/,lookbehind:!0,inside:{punctuation:/\|$/}},"attr-name":{pattern:/^(\s*)(?:(?!\s)[-\w\xA0-\uFFFF])+/,lookbehind:!0},"attr-value":[n,{pattern:/(=\s*)(?:(?!\s)[-\w\xA0-\uFFFF])+(?=\s*$)/,lookbehind:!0}],operator:/[|~*^$]?=/}},"n-th":[{pattern:/(\(\s*)[+-]?\d*[\dn](?:\s*[+-]\s*\d+)?(?=\s*\))/,lookbehind:!0,inside:{number:/[\dn]+/,operator:/[+-]/}},{pattern:/(\(\s*)(?:even|odd)(?=\s*\))/i,lookbehind:!0}],combinator:/>|\+|~|\|\|/,punctuation:/[(),]/}},e.languages.css.atrule.inside["selector-function-argument"].inside=t,e.languages.insertBefore("css","property",{variable:{pattern:/(^|[^-\w\xA0-\uFFFF])--(?!\s)[-_a-z\xA0-\uFFFF](?:(?!\s)[-\w\xA0-\uFFFF])*/i,lookbehind:!0}});var 
r={pattern:/(\b\d+)(?:%|[a-z]+(?![\w-]))/,lookbehind:!0},i={pattern:/(^|[^\w.-])-?(?:\d+(?:\.\d+)?|\.\d+)/,lookbehind:!0};e.languages.insertBefore("css","function",{operator:{pattern:/(\s)[+\-*\/](?=\s)/,lookbehind:!0},hexcode:{pattern:/\B#[\da-f]{3,8}\b/i,alias:"color"},color:[{pattern:/(^|[^\w-])(?:AliceBlue|AntiqueWhite|Aqua|Aquamarine|Azure|Beige|Bisque|Black|BlanchedAlmond|Blue|BlueViolet|Brown|BurlyWood|CadetBlue|Chartreuse|Chocolate|Coral|CornflowerBlue|Cornsilk|Crimson|Cyan|DarkBlue|DarkCyan|DarkGoldenRod|DarkGr[ae]y|DarkGreen|DarkKhaki|DarkMagenta|DarkOliveGreen|DarkOrange|DarkOrchid|DarkRed|DarkSalmon|DarkSeaGreen|DarkSlateBlue|DarkSlateGr[ae]y|DarkTurquoise|DarkViolet|DeepPink|DeepSkyBlue|DimGr[ae]y|DodgerBlue|FireBrick|FloralWhite|ForestGreen|Fuchsia|Gainsboro|GhostWhite|Gold|GoldenRod|Gr[ae]y|Green|GreenYellow|HoneyDew|HotPink|IndianRed|Indigo|Ivory|Khaki|Lavender|LavenderBlush|LawnGreen|LemonChiffon|LightBlue|LightCoral|LightCyan|LightGoldenRodYellow|LightGr[ae]y|LightGreen|LightPink|LightSalmon|LightSeaGreen|LightSkyBlue|LightSlateGr[ae]y|LightSteelBlue|LightYellow|Lime|LimeGreen|Linen|Magenta|Maroon|MediumAquaMarine|MediumBlue|MediumOrchid|MediumPurple|MediumSeaGreen|MediumSlateBlue|MediumSpringGreen|MediumTurquoise|MediumVioletRed|MidnightBlue|MintCream|MistyRose|Moccasin|NavajoWhite|Navy|OldLace|Olive|OliveDrab|Orange|OrangeRed|Orchid|PaleGoldenRod|PaleGreen|PaleTurquoise|PaleVioletRed|PapayaWhip|PeachPuff|Peru|Pink|Plum|PowderBlue|Purple|Red|RosyBrown|RoyalBlue|SaddleBrown|Salmon|SandyBrown|SeaGreen|SeaShell|Sienna|Silver|SkyBlue|SlateBlue|SlateGr[ae]y|Snow|SpringGreen|SteelBlue|Tan|Teal|Thistle|Tomato|Transparent|Turquoise|Violet|Wheat|White|WhiteSmoke|Yellow|YellowGreen)(?![\w-])/i,lookbehind:!0},{pattern:/\b(?:hsl|rgb)\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*\)\B|\b(?:hsl|rgb)a\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*,\s*(?:0|0?\.\d+|1)\s*\)\B/i,inside:{unit:r,number:i,function:/[\w-]+(?=\()/,punctuation:/[(),]/}}],entity:/\\[\da-f]{1,8}/i,unit:r,number:i})}(i),i.languages.javascript=i.languages.extend("clike",{"class-name":[i.languages.clike["class-name"],{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$A-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\.(?:constructor|prototype))/,lookbehind:!0}],keyword:[{pattern:/((?:^|\})\s*)catch\b/,lookbehind:!0},{pattern:/(^|[^.]|\.\.\.\s*)\b(?:as|assert(?=\s*\{)|async(?=\s*(?:function\b|\(|[$\w\xA0-\uFFFF]|$))|await|break|case|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally(?=\s*(?:\{|$))|for|from(?=\s*(?:['"]|$))|function|(?:get|set)(?=\s*(?:[#\[$\w\xA0-\uFFFF]|$))|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)\b/,lookbehind:!0}],function:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*(?:\.\s*(?:apply|bind|call)\s*)?\()/,number:{pattern:RegExp(/(^|[^\w$])/.source+"(?:"+/NaN|Infinity/.source+"|"+/0[bB][01]+(?:_[01]+)*n?/.source+"|"+/0[oO][0-7]+(?:_[0-7]+)*n?/.source+"|"+/0[xX][\dA-Fa-f]+(?:_[\dA-Fa-f]+)*n?/.source+"|"+/\d+(?:_\d+)*n/.source+"|"+/(?:\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\.\d+(?:_\d+)*)(?:[Ee][+-]?\d+(?:_\d+)*)?/.source+")"+/(?![\w$])/.source),lookbehind:!0},operator:/--|\+\+|\*\*=?|=>|&&=?|\|\|=?|[!=]==|<<=?|>>>?=?|[-+*/%&|^!=<>]=?|\.{3}|\?\?=?|\?\.?|[~:]/}),i.languages.javascript["class-name"][0].pattern=/(\b(?:class|extends|implements|instanceof|interface|new)\s+)[\w.\\]+/,i.languages.insertBefore("javascript","keyword",{regex:{patte
rn:/((?:^|[^$\w\xA0-\uFFFF."'\])\s]|\b(?:return|yield))\s*)\/(?:\[(?:[^\]\\\r\n]|\\.)*\]|\\.|[^/\\\[\r\n])+\/[dgimyus]{0,7}(?=(?:\s|\/\*(?:[^*]|\*(?!\/))*\*\/)*(?:$|[\r\n,.;:})\]]|\/\/))/,lookbehind:!0,greedy:!0,inside:{"regex-source":{pattern:/^(\/)[\s\S]+(?=\/[a-z]*$)/,lookbehind:!0,alias:"language-regex",inside:i.languages.regex},"regex-delimiter":/^\/|\/$/,"regex-flags":/^[a-z]+$/}},"function-variable":{pattern:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*[=:]\s*(?:async\s*)?(?:\bfunction\b|(?:\((?:[^()]|\([^()]*\))*\)|(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)\s*=>))/,alias:"function"},parameter:[{pattern:/(function(?:\s+(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*)?\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\))/,lookbehind:!0,inside:i.languages.javascript},{pattern:/(^|[^$\w\xA0-\uFFFF])(?!\s)[_$a-z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*=>)/i,lookbehind:!0,inside:i.languages.javascript},{pattern:/(\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*=>)/,lookbehind:!0,inside:i.languages.javascript},{pattern:/((?:\b|\s|^)(?!(?:as|async|await|break|case|catch|class|const|continue|debugger|default|delete|do|else|enum|export|extends|finally|for|from|function|get|if|implements|import|in|instanceof|interface|let|new|null|of|package|private|protected|public|return|set|static|super|switch|this|throw|try|typeof|undefined|var|void|while|with|yield)(?![$\w\xA0-\uFFFF]))(?:(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*)\(\s*|\]\s*\(\s*)(?!\s)(?:[^()\s]|\s+(?![\s)])|\([^()]*\))+(?=\s*\)\s*\{)/,lookbehind:!0,inside:i.languages.javascript}],constant:/\b[A-Z](?:[A-Z_]|\dx?)*\b/}),i.languages.insertBefore("javascript","string",{hashbang:{pattern:/^#!.*/,greedy:!0,alias:"comment"},"template-string":{pattern:/`(?:\\[\s\S]|\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}|(?!\$\{)[^\\`])*`/,greedy:!0,inside:{"template-punctuation":{pattern:/^`|`$/,alias:"string"},interpolation:{pattern:/((?:^|[^\\])(?:\\{2})*)\$\{(?:[^{}]|\{(?:[^{}]|\{[^}]*\})*\})+\}/,lookbehind:!0,inside:{"interpolation-punctuation":{pattern:/^\$\{|\}$/,alias:"punctuation"},rest:i.languages.javascript}},string:/[\s\S]+/}},"string-property":{pattern:/((?:^|[,{])[ \t]*)(["'])(?:\\(?:\r\n|[\s\S])|(?!\2)[^\\\r\n])*\2(?=\s*:)/m,lookbehind:!0,greedy:!0,alias:"property"}}),i.languages.insertBefore("javascript","operator",{"literal-property":{pattern:/((?:^|[,{])[ \t]*)(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*(?=\s*:)/m,lookbehind:!0,alias:"property"}}),i.languages.markup&&(i.languages.markup.tag.addInlined("script","javascript"),i.languages.markup.tag.addAttribute(/on(?:abort|blur|change|click|composition(?:end|start|update)|dblclick|error|focus(?:in|out)?|key(?:down|up)|load|mouse(?:down|enter|leave|move|out|over|up)|reset|resize|scroll|select|slotchange|submit|unload|wheel)/.source,"javascript")),i.languages.js=i.languages.javascript,function(e){var 
t=/#(?!\{).+/,n={pattern:/#\{[^}]+\}/,alias:"variable"};e.languages.coffeescript=e.languages.extend("javascript",{comment:t,string:[{pattern:/'(?:\\[\s\S]|[^\\'])*'/,greedy:!0},{pattern:/"(?:\\[\s\S]|[^\\"])*"/,greedy:!0,inside:{interpolation:n}}],keyword:/\b(?:and|break|by|catch|class|continue|debugger|delete|do|each|else|extend|extends|false|finally|for|if|in|instanceof|is|isnt|let|loop|namespace|new|no|not|null|of|off|on|or|own|return|super|switch|then|this|throw|true|try|typeof|undefined|unless|until|when|while|window|with|yes|yield)\b/,"class-member":{pattern:/@(?!\d)\w+/,alias:"variable"}}),e.languages.insertBefore("coffeescript","comment",{"multiline-comment":{pattern:/###[\s\S]+?###/,alias:"comment"},"block-regex":{pattern:/\/{3}[\s\S]*?\/{3}/,alias:"regex",inside:{comment:t,interpolation:n}}}),e.languages.insertBefore("coffeescript","string",{"inline-javascript":{pattern:/`(?:\\[\s\S]|[^\\`])*`/,inside:{delimiter:{pattern:/^`|`$/,alias:"punctuation"},script:{pattern:/[\s\S]+/,alias:"language-javascript",inside:e.languages.javascript}}},"multiline-string":[{pattern:/'''[\s\S]*?'''/,greedy:!0,alias:"string"},{pattern:/"""[\s\S]*?"""/,greedy:!0,alias:"string",inside:{interpolation:n}}]}),e.languages.insertBefore("coffeescript","keyword",{property:/(?!\d)\w+(?=\s*:(?!:))/}),delete e.languages.coffeescript["template-string"],e.languages.coffee=e.languages.coffeescript}(i),function(e){var t=/[*&][^\s[\]{},]+/,n=/!(?:<[\w\-%#;/?:@&=+$,.!~*'()[\]]+>|(?:[a-zA-Z\d-]*!)?[\w\-%#;/?:@&=+$.~*'()]+)?/,r="(?:"+n.source+"(?:[ \t]+"+t.source+")?|"+t.source+"(?:[ \t]+"+n.source+")?)",i=/(?:[^\s\x00-\x08\x0e-\x1f!"#%&'*,\-:>?@[\]`{|}\x7f-\x84\x86-\x9f\ud800-\udfff\ufffe\uffff]|[?:-])(?:[ \t]*(?:(?![#:])|:))*/.source.replace(//g,(function(){return/[^\s\x00-\x08\x0e-\x1f,[\]{}\x7f-\x84\x86-\x9f\ud800-\udfff\ufffe\uffff]/.source})),a=/"(?:[^"\\\r\n]|\\.)*"|'(?:[^'\\\r\n]|\\.)*'/.source;function o(e,t){t=(t||"").replace(/m/g,"")+"m";var n=/([:\-,[{]\s*(?:\s<>[ \t]+)?)(?:<>)(?=[ \t]*(?:$|,|\]|\}|(?:[\r\n]\s*)?#))/.source.replace(/<>/g,(function(){return r})).replace(/<>/g,(function(){return e}));return RegExp(n,t)}e.languages.yaml={scalar:{pattern:RegExp(/([\-:]\s*(?:\s<>[ \t]+)?[|>])[ \t]*(?:((?:\r?\n|\r)[ \t]+)\S[^\r\n]*(?:\2[^\r\n]+)*)/.source.replace(/<>/g,(function(){return r}))),lookbehind:!0,alias:"string"},comment:/#.*/,key:{pattern:RegExp(/((?:^|[:\-,[{\r\n?])[ \t]*(?:<>[ \t]+)?)<>(?=\s*:\s)/.source.replace(/<>/g,(function(){return r})).replace(/<>/g,(function(){return"(?:"+i+"|"+a+")"}))),lookbehind:!0,greedy:!0,alias:"atrule"},directive:{pattern:/(^[ \t]*)%.+/m,lookbehind:!0,alias:"important"},datetime:{pattern:o(/\d{4}-\d\d?-\d\d?(?:[tT]|[ \t]+)\d\d?:\d{2}:\d{2}(?:\.\d*)?(?:[ \t]*(?:Z|[-+]\d\d?(?::\d{2})?))?|\d{4}-\d{2}-\d{2}|\d\d?:\d{2}(?::\d{2}(?:\.\d*)?)?/.source),lookbehind:!0,alias:"number"},boolean:{pattern:o(/false|true/.source,"i"),lookbehind:!0,alias:"important"},null:{pattern:o(/null|~/.source,"i"),lookbehind:!0,alias:"important"},string:{pattern:o(a),lookbehind:!0,greedy:!0},number:{pattern:o(/[+-]?(?:0x[\da-f]+|0o[0-7]+|(?:\d+(?:\.\d*)?|\.\d+)(?:e[+-]?\d+)?|\.inf|\.nan)/.source,"i"),lookbehind:!0},tag:n,important:t,punctuation:/---|[:[\]{}\-,|>?]|\.\.\./},e.languages.yml=e.languages.yaml}(i),function(e){var t=/(?:\\.|[^\\\n\r]|(?:\n|\r\n?)(?![\r\n]))/.source;function n(e){return e=e.replace(//g,(function(){return t})),RegExp(/((?:^|[^\\])(?:\\{2})*)/.source+"(?:"+e+")")}var 
r=/(?:\\.|``(?:[^`\r\n]|`(?!`))+``|`[^`\r\n]+`|[^\\|\r\n`])+/.source,i=/\|?__(?:\|__)+\|?(?:(?:\n|\r\n?)|(?![\s\S]))/.source.replace(/__/g,(function(){return r})),a=/\|?[ \t]*:?-{3,}:?[ \t]*(?:\|[ \t]*:?-{3,}:?[ \t]*)+\|?(?:\n|\r\n?)/.source;e.languages.markdown=e.languages.extend("markup",{}),e.languages.insertBefore("markdown","prolog",{"front-matter-block":{pattern:/(^(?:\s*[\r\n])?)---(?!.)[\s\S]*?[\r\n]---(?!.)/,lookbehind:!0,greedy:!0,inside:{punctuation:/^---|---$/,"front-matter":{pattern:/\S+(?:\s+\S+)*/,alias:["yaml","language-yaml"],inside:e.languages.yaml}}},blockquote:{pattern:/^>(?:[\t ]*>)*/m,alias:"punctuation"},table:{pattern:RegExp("^"+i+a+"(?:"+i+")*","m"),inside:{"table-data-rows":{pattern:RegExp("^("+i+a+")(?:"+i+")*$"),lookbehind:!0,inside:{"table-data":{pattern:RegExp(r),inside:e.languages.markdown},punctuation:/\|/}},"table-line":{pattern:RegExp("^("+i+")"+a+"$"),lookbehind:!0,inside:{punctuation:/\||:?-{3,}:?/}},"table-header-row":{pattern:RegExp("^"+i+"$"),inside:{"table-header":{pattern:RegExp(r),alias:"important",inside:e.languages.markdown},punctuation:/\|/}}}},code:[{pattern:/((?:^|\n)[ \t]*\n|(?:^|\r\n?)[ \t]*\r\n?)(?: {4}|\t).+(?:(?:\n|\r\n?)(?: {4}|\t).+)*/,lookbehind:!0,alias:"keyword"},{pattern:/^```[\s\S]*?^```$/m,greedy:!0,inside:{"code-block":{pattern:/^(```.*(?:\n|\r\n?))[\s\S]+?(?=(?:\n|\r\n?)^```$)/m,lookbehind:!0},"code-language":{pattern:/^(```).+/,lookbehind:!0},punctuation:/```/}}],title:[{pattern:/\S.*(?:\n|\r\n?)(?:==+|--+)(?=[ \t]*$)/m,alias:"important",inside:{punctuation:/==+$|--+$/}},{pattern:/(^\s*)#.+/m,lookbehind:!0,alias:"important",inside:{punctuation:/^#+|#+$/}}],hr:{pattern:/(^\s*)([*-])(?:[\t ]*\2){2,}(?=\s*$)/m,lookbehind:!0,alias:"punctuation"},list:{pattern:/(^\s*)(?:[*+-]|\d+\.)(?=[\t ].)/m,lookbehind:!0,alias:"punctuation"},"url-reference":{pattern:/!?\[[^\]]+\]:[\t ]+(?:\S+|<(?:\\.|[^>\\])+>)(?:[\t ]+(?:"(?:\\.|[^"\\])*"|'(?:\\.|[^'\\])*'|\((?:\\.|[^)\\])*\)))?/,inside:{variable:{pattern:/^(!?\[)[^\]]+/,lookbehind:!0},string:/(?:"(?:\\.|[^"\\])*"|'(?:\\.|[^'\\])*'|\((?:\\.|[^)\\])*\))$/,punctuation:/^[\[\]!:]|[<>]/},alias:"url"},bold:{pattern:n(/\b__(?:(?!_)|_(?:(?!_))+_)+__\b|\*\*(?:(?!\*)|\*(?:(?!\*))+\*)+\*\*/.source),lookbehind:!0,greedy:!0,inside:{content:{pattern:/(^..)[\s\S]+(?=..$)/,lookbehind:!0,inside:{}},punctuation:/\*\*|__/}},italic:{pattern:n(/\b_(?:(?!_)|__(?:(?!_))+__)+_\b|\*(?:(?!\*)|\*\*(?:(?!\*))+\*\*)+\*/.source),lookbehind:!0,greedy:!0,inside:{content:{pattern:/(^.)[\s\S]+(?=.$)/,lookbehind:!0,inside:{}},punctuation:/[*_]/}},strike:{pattern:n(/(~~?)(?:(?!~))+\2/.source),lookbehind:!0,greedy:!0,inside:{content:{pattern:/(^~~?)[\s\S]+(?=\1$)/,lookbehind:!0,inside:{}},punctuation:/~~?/}},"code-snippet":{pattern:/(^|[^\\`])(?:``[^`\r\n]+(?:`[^`\r\n]+)*``(?!`)|`[^`\r\n]+`(?!`))/,lookbehind:!0,greedy:!0,alias:["code","keyword"]},url:{pattern:n(/!?\[(?:(?!\]))+\](?:\([^\s)]+(?:[\t ]+"(?:\\.|[^"\\])*")?\)|[ \t]?\[(?:(?!\]))+\])/.source),lookbehind:!0,greedy:!0,inside:{operator:/^!/,content:{pattern:/(^\[)[^\]]+(?=\])/,lookbehind:!0,inside:{}},variable:{pattern:/(^\][ \t]?\[)[^\]]+(?=\]$)/,lookbehind:!0},url:{pattern:/(^\]\()[^\s)]+/,lookbehind:!0},string:{pattern:/(^[ 
\t]+)"(?:\\.|[^"\\])*"(?=\)$)/,lookbehind:!0}}}}),["url","bold","italic","strike"].forEach((function(t){["url","bold","italic","strike","code-snippet"].forEach((function(n){t!==n&&(e.languages.markdown[t].inside.content.inside[n]=e.languages.markdown[n])}))})),e.hooks.add("after-tokenize",(function(e){"markdown"!==e.language&&"md"!==e.language||function e(t){if(t&&"string"!=typeof t)for(var n=0,r=t.length;n",quot:'"'},c=String.fromCodePoint||String.fromCharCode;e.languages.md=e.languages.markdown}(i),i.languages.graphql={comment:/#.*/,description:{pattern:/(?:"""(?:[^"]|(?!""")")*"""|"(?:\\.|[^\\"\r\n])*")(?=\s*[a-z_])/i,greedy:!0,alias:"string",inside:{"language-markdown":{pattern:/(^"(?:"")?)(?!\1)[\s\S]+(?=\1$)/,lookbehind:!0,inside:i.languages.markdown}}},string:{pattern:/"""(?:[^"]|(?!""")")*"""|"(?:\\.|[^\\"\r\n])*"/,greedy:!0},number:/(?:\B-|\b)\d+(?:\.\d+)?(?:e[+-]?\d+)?\b/i,boolean:/\b(?:false|true)\b/,variable:/\$[a-z_]\w*/i,directive:{pattern:/@[a-z_]\w*/i,alias:"function"},"attr-name":{pattern:/\b[a-z_]\w*(?=\s*(?:\((?:[^()"]|"(?:\\.|[^\\"\r\n])*")*\))?:)/i,greedy:!0},"atom-input":{pattern:/\b[A-Z]\w*Input\b/,alias:"class-name"},scalar:/\b(?:Boolean|Float|ID|Int|String)\b/,constant:/\b[A-Z][A-Z_\d]*\b/,"class-name":{pattern:/(\b(?:enum|implements|interface|on|scalar|type|union)\s+|&\s*|:\s*|\[)[A-Z_]\w*/,lookbehind:!0},fragment:{pattern:/(\bfragment\s+|\.{3}\s*(?!on\b))[a-zA-Z_]\w*/,lookbehind:!0,alias:"function"},"definition-mutation":{pattern:/(\bmutation\s+)[a-zA-Z_]\w*/,lookbehind:!0,alias:"function"},"definition-query":{pattern:/(\bquery\s+)[a-zA-Z_]\w*/,lookbehind:!0,alias:"function"},keyword:/\b(?:directive|enum|extend|fragment|implements|input|interface|mutation|on|query|repeatable|scalar|schema|subscription|type|union)\b/,operator:/[!=|&]|\.{3}/,"property-query":/\w+(?=\s*\()/,object:/\w+(?=\s*\{)/,punctuation:/[!(){}\[\]:=,]/,property:/\w+/},i.hooks.add("after-tokenize",(function(e){if("graphql"===e.language)for(var t=e.tokens.filter((function(e){return"string"!=typeof e&&"comment"!==e.type&&"scalar"!==e.type})),n=0;n0)){var s=p(/^\{$/,/^\}$/);if(-1===s)continue;for(var c=n;c=0&&m(l,"variable-input")}}}}function d(e){return t[n+e]}function u(e,t){t=t||0;for(var n=0;n?|<|>)?|>[>=]?|\b(?:AND|BETWEEN|DIV|ILIKE|IN|IS|LIKE|NOT|OR|REGEXP|RLIKE|SOUNDS LIKE|XOR)\b/i,punctuation:/[;[\]()`,.]/},function(e){var t=e.languages.javascript["template-string"],n=t.pattern.source,r=t.inside.interpolation,i=r.inside["interpolation-punctuation"],a=r.pattern.source;function o(t,r){if(e.languages[t])return{pattern:RegExp("((?:"+r+")\\s*)"+n),lookbehind:!0,greedy:!0,inside:{"template-punctuation":{pattern:/^`|`$/,alias:"string"},"embedded-code":{pattern:/[\s\S]+/,alias:t}}}}function s(e,t){return"___"+t.toUpperCase()+"_"+e+"___"}function c(t,n,r){var i={code:t,grammar:n,language:r};return e.hooks.run("before-tokenize",i),i.tokens=e.tokenize(i.code,i.grammar),e.hooks.run("after-tokenize",i),i.tokens}function l(t){var n={};n["interpolation-punctuation"]=i;var a=e.tokenize(t,n);if(3===a.length){var o=[1,1];o.push.apply(o,c(a[1],e.languages.javascript,"javascript")),a.splice.apply(a,o)}return new e.Token("interpolation",a,r.alias,t)}function d(t,n,r){var i=e.tokenize(t,{interpolation:{pattern:RegExp(a),lookbehind:!0}}),o=0,d={},u=c(i.map((function(e){if("string"==typeof e)return e;for(var n,i=e.content;-1!==t.indexOf(n=s(o++,r)););return d[n]=i,n})).join(""),n,r),p=Object.keys(d);return o=0,function e(t){for(var n=0;n=p.length)return;var r=t[n];if("string"==typeof r||"string"==typeof 
r.content){var i=p[o],a="string"==typeof r?r:r.content,s=a.indexOf(i);if(-1!==s){++o;var c=a.substring(0,s),u=l(d[i]),m=a.substring(s+i.length),f=[];if(c&&f.push(c),f.push(u),m){var b=[m];e(b),f.push.apply(f,b)}"string"==typeof r?(t.splice.apply(t,[n,1].concat(f)),n+=f.length-1):r.content=f}}else{var h=r.content;Array.isArray(h)?e(h):e([h])}}}(u),new e.Token(r,u,"language-"+r,t)}e.languages.javascript["template-string"]=[o("css",/\b(?:styled(?:\([^)]*\))?(?:\s*\.\s*\w+(?:\([^)]*\))*)*|css(?:\s*\.\s*(?:global|resolve))?|createGlobalStyle|keyframes)/.source),o("html",/\bhtml|\.\s*(?:inner|outer)HTML\s*\+?=/.source),o("svg",/\bsvg/.source),o("markdown",/\b(?:markdown|md)/.source),o("graphql",/\b(?:gql|graphql(?:\s*\.\s*experimental)?)/.source),o("sql",/\bsql/.source),t].filter(Boolean);var u={javascript:!0,js:!0,typescript:!0,ts:!0,jsx:!0,tsx:!0};function p(e){return"string"==typeof e?e:Array.isArray(e)?e.map(p).join(""):p(e.content)}e.hooks.add("after-tokenize",(function(t){t.language in u&&function t(n){for(var r=0,i=n.length;r]|<(?:[^<>]|<[^<>]*>)*>)*>)?/,lookbehind:!0,greedy:!0,inside:null},builtin:/\b(?:Array|Function|Promise|any|boolean|console|never|number|string|symbol|unknown)\b/}),e.languages.typescript.keyword.push(/\b(?:abstract|declare|is|keyof|readonly|require)\b/,/\b(?:asserts|infer|interface|module|namespace|type)\b(?=\s*(?:[{_$a-zA-Z\xA0-\uFFFF]|$))/,/\btype\b(?=\s*(?:[\{*]|$))/),delete e.languages.typescript.parameter,delete e.languages.typescript["literal-property"];var t=e.languages.extend("typescript",{});delete t["class-name"],e.languages.typescript["class-name"].inside=t,e.languages.insertBefore("typescript","function",{decorator:{pattern:/@[$\w\xA0-\uFFFF]+/,inside:{at:{pattern:/^@/,alias:"operator"},function:/^[\s\S]+/}},"generic-function":{pattern:/#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*\s*<(?:[^<>]|<(?:[^<>]|<[^<>]*>)*>)*>(?=\s*\()/,greedy:!0,inside:{function:/^#?(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*/,generic:{pattern:/<[\s\S]+/,alias:"class-name",inside:t}}}}),e.languages.ts=e.languages.typescript}(i),function(e){function t(e,t){return 
RegExp(e.replace(//g,(function(){return/(?!\s)[_$a-zA-Z\xA0-\uFFFF](?:(?!\s)[$\w\xA0-\uFFFF])*/.source})),t)}e.languages.insertBefore("javascript","function-variable",{"method-variable":{pattern:RegExp("(\\.\\s*)"+e.languages.javascript["function-variable"].pattern.source),lookbehind:!0,alias:["function-variable","method","function","property-access"]}}),e.languages.insertBefore("javascript","function",{method:{pattern:RegExp("(\\.\\s*)"+e.languages.javascript.function.source),lookbehind:!0,alias:["function","property-access"]}}),e.languages.insertBefore("javascript","constant",{"known-class-name":[{pattern:/\b(?:(?:Float(?:32|64)|(?:Int|Uint)(?:8|16|32)|Uint8Clamped)?Array|ArrayBuffer|BigInt|Boolean|DataView|Date|Error|Function|Intl|JSON|(?:Weak)?(?:Map|Set)|Math|Number|Object|Promise|Proxy|Reflect|RegExp|String|Symbol|WebAssembly)\b/,alias:"class-name"},{pattern:/\b(?:[A-Z]\w*)Error\b/,alias:"class-name"}]}),e.languages.insertBefore("javascript","keyword",{imports:{pattern:t(/(\bimport\b\s*)(?:(?:\s*,\s*(?:\*\s*as\s+|\{[^{}]*\}))?|\*\s*as\s+|\{[^{}]*\})(?=\s*\bfrom\b)/.source),lookbehind:!0,inside:e.languages.javascript},exports:{pattern:t(/(\bexport\b\s*)(?:\*(?:\s*as\s+)?(?=\s*\bfrom\b)|\{[^{}]*\})/.source),lookbehind:!0,inside:e.languages.javascript}}),e.languages.javascript.keyword.unshift({pattern:/\b(?:as|default|export|from|import)\b/,alias:"module"},{pattern:/\b(?:await|break|catch|continue|do|else|finally|for|if|return|switch|throw|try|while|yield)\b/,alias:"control-flow"},{pattern:/\bnull\b/,alias:["null","nil"]},{pattern:/\bundefined\b/,alias:"nil"}),e.languages.insertBefore("javascript","operator",{spread:{pattern:/\.{3}/,alias:"operator"},arrow:{pattern:/=>/,alias:"operator"}}),e.languages.insertBefore("javascript","punctuation",{"property-access":{pattern:t(/(\.\s*)#?/.source),lookbehind:!0},"maybe-class-name":{pattern:/(^|[^$\w\xA0-\uFFFF])[A-Z][$\w\xA0-\uFFFF]+/,lookbehind:!0},dom:{pattern:/\b(?:document|(?:local|session)Storage|location|navigator|performance|window)\b/,alias:"variable"},console:{pattern:/\bconsole(?=\s*\.)/,alias:"class-name"}});for(var n=["function","function-variable","method","method-variable","property-access"],r=0;r*\.{3}(?:[^{}]|)*\})/.source;function a(e,t){return e=e.replace(//g,(function(){return n})).replace(//g,(function(){return r})).replace(//g,(function(){return i})),RegExp(e,t)}i=a(i).source,e.languages.jsx=e.languages.extend("markup",t),e.languages.jsx.tag.pattern=a(/<\/?(?:[\w.:-]+(?:+(?:[\w.:$-]+(?:=(?:"(?:\\[\s\S]|[^\\"])*"|'(?:\\[\s\S]|[^\\'])*'|[^\s{'"/>=]+|))?|))**\/?)?>/.source),e.languages.jsx.tag.inside.tag.pattern=/^<\/?[^\s>\/]*/,e.languages.jsx.tag.inside["attr-value"].pattern=/=(?!\{)(?:"(?:\\[\s\S]|[^\\"])*"|'(?:\\[\s\S]|[^\\'])*'|[^\s'">]+)/,e.languages.jsx.tag.inside.tag.inside["class-name"]=/^[A-Z]\w*(?:\.[A-Z]\w*)*$/,e.languages.jsx.tag.inside.comment=t.comment,e.languages.insertBefore("inside","attr-name",{spread:{pattern:a(//.source),inside:e.languages.jsx}},e.languages.jsx.tag),e.languages.insertBefore("inside","special-attr",{script:{pattern:a(/=/.source),alias:"language-javascript",inside:{"script-punctuation":{pattern:/^=(?=\{)/,alias:"punctuation"},rest:e.languages.jsx}}},e.languages.jsx.tag);var o=function(e){return e?"string"==typeof e?e:"string"==typeof e.content?e.content:e.content.map(o).join(""):""},s=function(t){for(var 
n=[],r=0;r0&&n[n.length-1].tagName===o(i.content[0].content[1])&&n.pop():"/>"===i.content[i.content.length-1].content||n.push({tagName:o(i.content[0].content[1]),openedBraces:0}):n.length>0&&"punctuation"===i.type&&"{"===i.content?n[n.length-1].openedBraces++:n.length>0&&n[n.length-1].openedBraces>0&&"punctuation"===i.type&&"}"===i.content?n[n.length-1].openedBraces--:a=!0),(a||"string"==typeof i)&&n.length>0&&0===n[n.length-1].openedBraces){var c=o(i);r0&&("string"==typeof t[r-1]||"plain-text"===t[r-1].type)&&(c=o(t[r-1])+c,t.splice(r-1,1),r--),t[r]=new e.Token("plain-text",c,null,c)}i.content&&"string"!=typeof i.content&&s(i.content)}};e.hooks.add("after-tokenize",(function(e){"jsx"!==e.language&&"tsx"!==e.language||s(e.tokens)}))}(i),function(e){e.languages.diff={coord:[/^(?:\*{3}|-{3}|\+{3}).*$/m,/^@@.*@@$/m,/^\d.*$/m]};var t={"deleted-sign":"-","deleted-arrow":"<","inserted-sign":"+","inserted-arrow":">",unchanged:" ",diff:"!"};Object.keys(t).forEach((function(n){var r=t[n],i=[];/^\w+$/.test(n)||i.push(/\w+/.exec(n)[0]),"diff"===n&&i.push("bold"),e.languages.diff[n]={pattern:RegExp("^(?:["+r+"].*(?:\r\n?|\n|(?![\\s\\S])))+","m"),alias:i,inside:{line:{pattern:/(.)(?=[\s\S]).*(?:\r\n?|\n)?/,lookbehind:!0},prefix:{pattern:/[\s\S]/,alias:/\w+/.exec(n)[0]}}}})),Object.defineProperty(e.languages.diff,"PREFIXES",{value:t})}(i),i.languages.git={comment:/^#.*/m,deleted:/^[-\u2013].*/m,inserted:/^\+.*/m,string:/("|')(?:\\.|(?!\1)[^\\\r\n])*\1/,command:{pattern:/^.*\$ git .*$/m,inside:{parameter:/\s--?\w+/}},coord:/^@@.*@@$/m,"commit-sha1":/^commit \w{40}$/m},i.languages.go=i.languages.extend("clike",{string:{pattern:/(^|[^\\])"(?:\\.|[^"\\\r\n])*"|`[^`]*`/,lookbehind:!0,greedy:!0},keyword:/\b(?:break|case|chan|const|continue|default|defer|else|fallthrough|for|func|go(?:to)?|if|import|interface|map|package|range|return|select|struct|switch|type|var)\b/,boolean:/\b(?:_|false|iota|nil|true)\b/,number:[/\b0(?:b[01_]+|o[0-7_]+)i?\b/i,/\b0x(?:[a-f\d_]+(?:\.[a-f\d_]*)?|\.[a-f\d_]+)(?:p[+-]?\d+(?:_\d+)*)?i?(?!\w)/i,/(?:\b\d[\d_]*(?:\.[\d_]*)?|\B\.\d[\d_]*)(?:e[+-]?[\d_]+)?i?(?!\w)/i],operator:/[*\/%^!=]=?|\+[=+]?|-[=-]?|\|[=|]?|&(?:=|&|\^=?)?|>(?:>=?|=)?|<(?:<=?|=|-)?|:=|\.\.\./,builtin:/\b(?:append|bool|byte|cap|close|complex|complex(?:64|128)|copy|delete|error|float(?:32|64)|u?int(?:8|16|32|64)?|imag|len|make|new|panic|print(?:ln)?|real|recover|rune|string|uintptr)\b/}),i.languages.insertBefore("go","string",{char:{pattern:/'(?:\\.|[^'\\\r\n]){0,10}'/,greedy:!0}}),delete i.languages.go["class-name"],function(e){function t(e,t){return"___"+e.toUpperCase()+t+"___"}Object.defineProperties(e.languages["markup-templating"]={},{buildPlaceholders:{value:function(n,r,i,a){if(n.language===r){var o=n.tokenStack=[];n.code=n.code.replace(i,(function(e){if("function"==typeof a&&!a(e))return e;for(var i,s=o.length;-1!==n.code.indexOf(i=t(r,s));)++s;return o[s]=e,i})),n.grammar=e.languages.markup}}},tokenizePlaceholders:{value:function(n,r){if(n.language===r&&n.tokenStack){n.grammar=e.languages[r];var i=0,a=Object.keys(n.tokenStack);!function o(s){for(var c=0;c=a.length);c++){var l=s[c];if("string"==typeof l||l.content&&"string"==typeof l.content){var d=a[i],u=n.tokenStack[d],p="string"==typeof l?l:l.content,m=t(r,d),f=p.indexOf(m);if(f>-1){++i;var b=p.substring(0,f),h=new e.Token(r,e.tokenize(u,n.grammar),"language-"+r,u),g=p.substring(f+m.length),v=[];b&&v.push.apply(v,o([b])),v.push(h),g&&v.push.apply(v,o([g])),"string"==typeof l?s.splice.apply(s,[c,1].concat(v)):l.content=v}}else l.content&&o(l.content)}return 
s}(n.tokens)}}}})}(i),function(e){e.languages.handlebars={comment:/\{\{![\s\S]*?\}\}/,delimiter:{pattern:/^\{\{\{?|\}\}\}?$/,alias:"punctuation"},string:/(["'])(?:\\.|(?!\1)[^\\\r\n])*\1/,number:/\b0x[\dA-Fa-f]+\b|(?:\b\d+(?:\.\d*)?|\B\.\d+)(?:[Ee][+-]?\d+)?/,boolean:/\b(?:false|true)\b/,block:{pattern:/^(\s*(?:~\s*)?)[#\/]\S+?(?=\s*(?:~\s*)?$|\s)/,lookbehind:!0,alias:"keyword"},brackets:{pattern:/\[[^\]]+\]/,inside:{punctuation:/\[|\]/,variable:/[\s\S]+/}},punctuation:/[!"#%&':()*+,.\/;<=>@\[\\\]^`{|}~]/,variable:/[^!"#%&'()*+,\/;<=>@\[\\\]^`{|}~\s]+/},e.hooks.add("before-tokenize",(function(t){e.languages["markup-templating"].buildPlaceholders(t,"handlebars",/\{\{\{[\s\S]+?\}\}\}|\{\{[\s\S]+?\}\}/g)})),e.hooks.add("after-tokenize",(function(t){e.languages["markup-templating"].tokenizePlaceholders(t,"handlebars")})),e.languages.hbs=e.languages.handlebars}(i),i.languages.json={property:{pattern:/(^|[^\\])"(?:\\.|[^\\"\r\n])*"(?=\s*:)/,lookbehind:!0,greedy:!0},string:{pattern:/(^|[^\\])"(?:\\.|[^\\"\r\n])*"(?!\s*:)/,lookbehind:!0,greedy:!0},comment:{pattern:/\/\/.*|\/\*[\s\S]*?(?:\*\/|$)/,greedy:!0},number:/-?\b\d+(?:\.\d+)?(?:e[+-]?\d+)?\b/i,punctuation:/[{}[\],]/,operator:/:/,boolean:/\b(?:false|true)\b/,null:{pattern:/\bnull\b/,alias:"keyword"}},i.languages.webmanifest=i.languages.json,i.languages.less=i.languages.extend("css",{comment:[/\/\*[\s\S]*?\*\//,{pattern:/(^|[^\\])\/\/.*/,lookbehind:!0}],atrule:{pattern:/@[\w-](?:\((?:[^(){}]|\([^(){}]*\))*\)|[^(){};\s]|\s+(?!\s))*?(?=\s*\{)/,inside:{punctuation:/[:()]/}},selector:{pattern:/(?:@\{[\w-]+\}|[^{};\s@])(?:@\{[\w-]+\}|\((?:[^(){}]|\([^(){}]*\))*\)|[^(){};@\s]|\s+(?!\s))*?(?=\s*\{)/,inside:{variable:/@+[\w-]+/}},property:/(?:@\{[\w-]+\}|[\w-])+(?:\+_?)?(?=\s*:)/,operator:/[+\-*\/]/}),i.languages.insertBefore("less","property",{variable:[{pattern:/@[\w-]+\s*:/,inside:{punctuation:/:/}},/@@?[\w-]+/],"mixin-usage":{pattern:/([{;]\s*)[.#](?!\d)[\w-].*?(?=[(;])/,lookbehind:!0,alias:"function"}}),i.languages.makefile={comment:{pattern:/(^|[^\\])#(?:\\(?:\r\n|[\s\S])|[^\\\r\n])*/,lookbehind:!0},string:{pattern:/(["'])(?:\\(?:\r\n|[\s\S])|(?!\1)[^\\\r\n])*\1/,greedy:!0},"builtin-target":{pattern:/\.[A-Z][^:#=\s]+(?=\s*:(?!=))/,alias:"builtin"},target:{pattern:/^(?:[^:=\s]|[ \t]+(?![\s:]))+(?=\s*:(?!=))/m,alias:"symbol",inside:{variable:/\$+(?:(?!\$)[^(){}:#=\s]+|(?=[({]))/}},variable:/\$+(?:(?!\$)[^(){}:#=\s]+|\([@*%<^+?][DF]\)|(?=[({]))/,keyword:/-include\b|\b(?:define|else|endef|endif|export|ifn?def|ifn?eq|include|override|private|sinclude|undefine|unexport|vpath)\b/,function:{pattern:/(\()(?:abspath|addsuffix|and|basename|call|dir|error|eval|file|filter(?:-out)?|findstring|firstword|flavor|foreach|guile|if|info|join|lastword|load|notdir|or|origin|patsubst|realpath|shell|sort|strip|subst|suffix|value|warning|wildcard|word(?:list|s)?)(?=[ \t])/,lookbehind:!0},operator:/(?:::|[?:+!])?=|[|@]/,punctuation:/[:;(){}]/},i.languages.objectivec=i.languages.extend("c",{string:{pattern:/@?"(?:\\(?:\r\n|[\s\S])|[^"\\\r\n])*"/,greedy:!0},keyword:/\b(?:asm|auto|break|case|char|const|continue|default|do|double|else|enum|extern|float|for|goto|if|in|inline|int|long|register|return|self|short|signed|sizeof|static|struct|super|switch|typedef|typeof|union|unsigned|void|volatile|while)\b|(?:@interface|@end|@implementation|@protocol|@class|@public|@protected|@private|@property|@try|@catch|@finally|@throw|@synthesize|@dynamic|@selector)\b/,operator:/-[->]?|\+\+?|!=?|<>?=?|==?|&&?|\|\|?|[~^%?*\/@]/}),delete 
i.languages.objectivec["class-name"],i.languages.objc=i.languages.objectivec,i.languages.ocaml={comment:{pattern:/\(\*[\s\S]*?\*\)/,greedy:!0},char:{pattern:/'(?:[^\\\r\n']|\\(?:.|[ox]?[0-9a-f]{1,3}))'/i,greedy:!0},string:[{pattern:/"(?:\\(?:[\s\S]|\r\n)|[^\\\r\n"])*"/,greedy:!0},{pattern:/\{([a-z_]*)\|[\s\S]*?\|\1\}/,greedy:!0}],number:[/\b(?:0b[01][01_]*|0o[0-7][0-7_]*)\b/i,/\b0x[a-f0-9][a-f0-9_]*(?:\.[a-f0-9_]*)?(?:p[+-]?\d[\d_]*)?(?!\w)/i,/\b\d[\d_]*(?:\.[\d_]*)?(?:e[+-]?\d[\d_]*)?(?!\w)/i],directive:{pattern:/\B#\w+/,alias:"property"},label:{pattern:/\B~\w+/,alias:"property"},"type-variable":{pattern:/\B'\w+/,alias:"function"},variant:{pattern:/`\w+/,alias:"symbol"},keyword:/\b(?:as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|for|fun|function|functor|if|in|include|inherit|initializer|lazy|let|match|method|module|mutable|new|nonrec|object|of|open|private|rec|sig|struct|then|to|try|type|val|value|virtual|when|where|while|with)\b/,boolean:/\b(?:false|true)\b/,"operator-like-punctuation":{pattern:/\[[<>|]|[>|]\]|\{<|>\}/,alias:"punctuation"},operator:/\.[.~]|:[=>]|[=<>@^|&+\-*\/$%!?~][!$%&*+\-.\/:<=>?@^|~]*|\b(?:and|asr|land|lor|lsl|lsr|lxor|mod|or)\b/,punctuation:/;;|::|[(){}\[\].,:;#]|\b_\b/},i.languages.python={comment:{pattern:/(^|[^\\])#.*/,lookbehind:!0,greedy:!0},"string-interpolation":{pattern:/(?:f|fr|rf)(?:("""|''')[\s\S]*?\1|("|')(?:\\.|(?!\2)[^\\\r\n])*\2)/i,greedy:!0,inside:{interpolation:{pattern:/((?:^|[^{])(?:\{\{)*)\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}]|\{(?!\{)(?:[^{}])+\})+\})+\}/,lookbehind:!0,inside:{"format-spec":{pattern:/(:)[^:(){}]+(?=\}$)/,lookbehind:!0},"conversion-option":{pattern:/![sra](?=[:}]$)/,alias:"punctuation"},rest:null}},string:/[\s\S]+/}},"triple-quoted-string":{pattern:/(?:[rub]|br|rb)?("""|''')[\s\S]*?\1/i,greedy:!0,alias:"string"},string:{pattern:/(?:[rub]|br|rb)?("|')(?:\\.|(?!\1)[^\\\r\n])*\1/i,greedy:!0},function:{pattern:/((?:^|\s)def[ \t]+)[a-zA-Z_]\w*(?=\s*\()/g,lookbehind:!0},"class-name":{pattern:/(\bclass\s+)\w+/i,lookbehind:!0},decorator:{pattern:/(^[\t 
]*)@\w+(?:\.\w+)*/m,lookbehind:!0,alias:["annotation","punctuation"],inside:{punctuation:/\./}},keyword:/\b(?:_(?=\s*:)|and|as|assert|async|await|break|case|class|continue|def|del|elif|else|except|exec|finally|for|from|global|if|import|in|is|lambda|match|nonlocal|not|or|pass|print|raise|return|try|while|with|yield)\b/,builtin:/\b(?:__import__|abs|all|any|apply|ascii|basestring|bin|bool|buffer|bytearray|bytes|callable|chr|classmethod|cmp|coerce|compile|complex|delattr|dict|dir|divmod|enumerate|eval|execfile|file|filter|float|format|frozenset|getattr|globals|hasattr|hash|help|hex|id|input|int|intern|isinstance|issubclass|iter|len|list|locals|long|map|max|memoryview|min|next|object|oct|open|ord|pow|property|range|raw_input|reduce|reload|repr|reversed|round|set|setattr|slice|sorted|staticmethod|str|sum|super|tuple|type|unichr|unicode|vars|xrange|zip)\b/,boolean:/\b(?:False|None|True)\b/,number:/\b0(?:b(?:_?[01])+|o(?:_?[0-7])+|x(?:_?[a-f0-9])+)\b|(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)(?:e[+-]?\d+(?:_\d+)*)?j?(?!\w)/i,operator:/[-+%=]=?|!=|:=|\*\*?=?|\/\/?=?|<[<=>]?|>[=>]?|[&|^~]/,punctuation:/[{}[\];(),.:]/},i.languages.python["string-interpolation"].inside.interpolation.inside.rest=i.languages.python,i.languages.py=i.languages.python,i.languages.reason=i.languages.extend("clike",{string:{pattern:/"(?:\\(?:\r\n|[\s\S])|[^\\\r\n"])*"/,greedy:!0},"class-name":/\b[A-Z]\w*/,keyword:/\b(?:and|as|assert|begin|class|constraint|do|done|downto|else|end|exception|external|for|fun|function|functor|if|in|include|inherit|initializer|lazy|let|method|module|mutable|new|nonrec|object|of|open|or|private|rec|sig|struct|switch|then|to|try|type|val|virtual|when|while|with)\b/,operator:/\.{3}|:[:=]|\|>|->|=(?:==?|>)?|<=?|>=?|[|^?'#!~`]|[+\-*\/]\.?|\b(?:asr|land|lor|lsl|lsr|lxor|mod)\b/}),i.languages.insertBefore("reason","class-name",{char:{pattern:/'(?:\\x[\da-f]{2}|\\o[0-3][0-7][0-7]|\\\d{3}|\\.|[^'\\\r\n])'/,greedy:!0},constructor:/\b[A-Z]\w*\b(?!\s*\.)/,label:{pattern:/\b[a-z]\w*(?=::)/,alias:"symbol"}}),delete i.languages.reason.function,function(e){e.languages.sass=e.languages.extend("css",{comment:{pattern:/^([ \t]*)\/[\/*].*(?:(?:\r?\n|\r)\1[ \t].+)*/m,lookbehind:!0,greedy:!0}}),e.languages.insertBefore("sass","atrule",{"atrule-line":{pattern:/^(?:[ \t]*)[@+=].+/m,greedy:!0,inside:{atrule:/(?:@[\w-]+|[+=])/}}}),delete e.languages.sass.atrule;var t=/\$[-\w]+|#\{\$[-\w]+\}/,n=[/[+*\/%]|[=!]=|<=?|>=?|\b(?:and|not|or)\b/,{pattern:/(\s)-(?=\s)/,lookbehind:!0}];e.languages.insertBefore("sass","property",{"variable-line":{pattern:/^[ \t]*\$.+/m,greedy:!0,inside:{punctuation:/:/,variable:t,operator:n}},"property-line":{pattern:/^[ \t]*(?:[^:\s]+ *:.*|:[^:\s].*)/m,greedy:!0,inside:{property:[/[^:\s]+(?=\s*:)/,{pattern:/(:)[^:\s]+/,lookbehind:!0}],punctuation:/:/,variable:t,operator:n,important:e.languages.sass.important}}}),delete e.languages.sass.property,delete e.languages.sass.important,e.languages.insertBefore("sass","punctuation",{selector:{pattern:/^([ \t]*)\S(?:,[^,\r\n]+|[^,\r\n]*)(?:,[^,\r\n]+)*(?:,(?:\r?\n|\r)\1[ 
\t]+\S(?:,[^,\r\n]+|[^,\r\n]*)(?:,[^,\r\n]+)*)*/m,lookbehind:!0,greedy:!0}})}(i),i.languages.scss=i.languages.extend("css",{comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0},atrule:{pattern:/@[\w-](?:\([^()]+\)|[^()\s]|\s+(?!\s))*?(?=\s+[{;])/,inside:{rule:/@[\w-]+/}},url:/(?:[-a-z]+-)?url(?=\()/i,selector:{pattern:/(?=\S)[^@;{}()]?(?:[^@;{}()\s]|\s+(?!\s)|#\{\$[-\w]+\})+(?=\s*\{(?:\}|\s|[^}][^:{}]*[:{][^}]))/,inside:{parent:{pattern:/&/,alias:"important"},placeholder:/%[-\w]+/,variable:/\$[-\w]+|#\{\$[-\w]+\}/}},property:{pattern:/(?:[-\w]|\$[-\w]|#\{\$[-\w]+\})+(?=\s*:)/,inside:{variable:/\$[-\w]+|#\{\$[-\w]+\}/}}}),i.languages.insertBefore("scss","atrule",{keyword:[/@(?:content|debug|each|else(?: if)?|extend|for|forward|function|if|import|include|mixin|return|use|warn|while)\b/i,{pattern:/( )(?:from|through)(?= )/,lookbehind:!0}]}),i.languages.insertBefore("scss","important",{variable:/\$[-\w]+|#\{\$[-\w]+\}/}),i.languages.insertBefore("scss","function",{"module-modifier":{pattern:/\b(?:as|hide|show|with)\b/i,alias:"keyword"},placeholder:{pattern:/%[-\w]+/,alias:"selector"},statement:{pattern:/\B!(?:default|optional)\b/i,alias:"keyword"},boolean:/\b(?:false|true)\b/,null:{pattern:/\bnull\b/,alias:"keyword"},operator:{pattern:/(\s)(?:[-+*\/%]|[=!]=|<=?|>=?|and|not|or)(?=\s)/,lookbehind:!0}}),i.languages.scss.atrule.inside.rest=i.languages.scss,function(e){var t={pattern:/(\b\d+)(?:%|[a-z]+)/,lookbehind:!0},n={pattern:/(^|[^\w.-])-?(?:\d+(?:\.\d+)?|\.\d+)/,lookbehind:!0},r={comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0},url:{pattern:/\burl\((["']?).*?\1\)/i,greedy:!0},string:{pattern:/("|')(?:(?!\1)[^\\\r\n]|\\(?:\r\n|[\s\S]))*\1/,greedy:!0},interpolation:null,func:null,important:/\B!(?:important|optional)\b/i,keyword:{pattern:/(^|\s+)(?:(?:else|for|if|return|unless)(?=\s|$)|@[\w-]+)/,lookbehind:!0},hexcode:/#[\da-f]{3,6}/i,color:[/\b(?:AliceBlue|AntiqueWhite|Aqua|Aquamarine|Azure|Beige|Bisque|Black|BlanchedAlmond|Blue|BlueViolet|Brown|BurlyWood|CadetBlue|Chartreuse|Chocolate|Coral|CornflowerBlue|Cornsilk|Crimson|Cyan|DarkBlue|DarkCyan|DarkGoldenRod|DarkGr[ae]y|DarkGreen|DarkKhaki|DarkMagenta|DarkOliveGreen|DarkOrange|DarkOrchid|DarkRed|DarkSalmon|DarkSeaGreen|DarkSlateBlue|DarkSlateGr[ae]y|DarkTurquoise|DarkViolet|DeepPink|DeepSkyBlue|DimGr[ae]y|DodgerBlue|FireBrick|FloralWhite|ForestGreen|Fuchsia|Gainsboro|GhostWhite|Gold|GoldenRod|Gr[ae]y|Green|GreenYellow|HoneyDew|HotPink|IndianRed|Indigo|Ivory|Khaki|Lavender|LavenderBlush|LawnGreen|LemonChiffon|LightBlue|LightCoral|LightCyan|LightGoldenRodYellow|LightGr[ae]y|LightGreen|LightPink|LightSalmon|LightSeaGreen|LightSkyBlue|LightSlateGr[ae]y|LightSteelBlue|LightYellow|Lime|LimeGreen|Linen|Magenta|Maroon|MediumAquaMarine|MediumBlue|MediumOrchid|MediumPurple|MediumSeaGreen|MediumSlateBlue|MediumSpringGreen|MediumTurquoise|MediumVioletRed|MidnightBlue|MintCream|MistyRose|Moccasin|NavajoWhite|Navy|OldLace|Olive|OliveDrab|Orange|OrangeRed|Orchid|PaleGoldenRod|PaleGreen|PaleTurquoise|PaleVioletRed|PapayaWhip|PeachPuff|Peru|Pink|Plum|PowderBlue|Purple|Red|RosyBrown|RoyalBlue|SaddleBrown|Salmon|SandyBrown|SeaGreen|SeaShell|Sienna|Silver|SkyBlue|SlateBlue|SlateGr[ae]y|Snow|SpringGreen|SteelBlue|Tan|Teal|Thistle|Tomato|Transparent|Turquoise|Violet|Wheat|White|WhiteSmoke|Yellow|YellowGreen)\b/i,{pattern:/\b(?:hsl|rgb)\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*\)\B|\b(?:hsl|rgb)a\(\s*\d{1,3}\s*,\s*\d{1,3}%?\s*,\s*\d{1,3}%?\s*,\s*(?:0|0?\.\d+|1)\s*\)\B/i,inside:{unit:t,number:n,function:/[\w-]+(?=\()/,punc
tuation:/[(),]/}}],entity:/\\[\da-f]{1,8}/i,unit:t,boolean:/\b(?:false|true)\b/,operator:[/~|[+!\/%<>?=]=?|[-:]=|\*[*=]?|\.{2,3}|&&|\|\||\B-\B|\b(?:and|in|is(?: a| defined| not|nt)?|not|or)\b/],number:n,punctuation:/[{}()\[\];:,]/};r.interpolation={pattern:/\{[^\r\n}:]+\}/,alias:"variable",inside:{delimiter:{pattern:/^\{|\}$/,alias:"punctuation"},rest:r}},r.func={pattern:/[\w-]+\([^)]*\).*/,inside:{function:/^[^(]+/,rest:r}},e.languages.stylus={"atrule-declaration":{pattern:/(^[ \t]*)@.+/m,lookbehind:!0,inside:{atrule:/^@[\w-]+/,rest:r}},"variable-declaration":{pattern:/(^[ \t]*)[\w$-]+\s*.?=[ \t]*(?:\{[^{}]*\}|\S.*|$)/m,lookbehind:!0,inside:{variable:/^\S+/,rest:r}},statement:{pattern:/(^[ \t]*)(?:else|for|if|return|unless)[ \t].+/m,lookbehind:!0,inside:{keyword:/^\S+/,rest:r}},"property-declaration":{pattern:/((?:^|\{)([ \t]*))(?:[\w-]|\{[^}\r\n]+\})+(?:\s*:\s*|[ \t]+)(?!\s)[^{\r\n]*(?:;|[^{\r\n,]$(?!(?:\r?\n|\r)(?:\{|\2[ \t])))/m,lookbehind:!0,inside:{property:{pattern:/^[^\s:]+/,inside:{interpolation:r.interpolation}},rest:r}},selector:{pattern:/(^[ \t]*)(?:(?=\S)(?:[^{}\r\n:()]|::?[\w-]+(?:\([^)\r\n]*\)|(?![\w-]))|\{[^}\r\n]+\})+)(?:(?:\r?\n|\r)(?:\1(?:(?=\S)(?:[^{}\r\n:()]|::?[\w-]+(?:\([^)\r\n]*\)|(?![\w-]))|\{[^}\r\n]+\})+)))*(?:,$|\{|(?=(?:\r?\n|\r)(?:\{|\1[ \t])))/m,lookbehind:!0,inside:{interpolation:r.interpolation,comment:r.comment,punctuation:/[{},]/}},func:r.func,string:r.string,comment:{pattern:/(^|[^\\])(?:\/\*[\s\S]*?\*\/|\/\/.*)/,lookbehind:!0,greedy:!0},interpolation:r.interpolation,punctuation:/[{}()\[\];:.]/}}(i),function(e){var t=e.util.clone(e.languages.typescript);e.languages.tsx=e.languages.extend("jsx",t),delete e.languages.tsx.parameter,delete e.languages.tsx["literal-property"];var n=e.languages.tsx.tag;n.pattern=RegExp(/(^|[^\w$]|(?=<\/))/.source+"(?:"+n.pattern.source+")",n.pattern.flags),n.lookbehind=!0}(i),i.languages.wasm={comment:[/\(;[\s\S]*?;\)/,{pattern:/;;.*/,greedy:!0}],string:{pattern:/"(?:\\[\s\S]|[^"\\])*"/,greedy:!0},keyword:[{pattern:/\b(?:align|offset)=/,inside:{operator:/=/}},{pattern:/\b(?:(?:f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|neg?|nearest|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|sqrt|store(?:8|16|32)?|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))?|memory\.(?:grow|size))\b/,inside:{punctuation:/\./}},/\b(?:anyfunc|block|br(?:_if|_table)?|call(?:_indirect)?|data|drop|elem|else|end|export|func|get_(?:global|local)|global|if|import|local|loop|memory|module|mut|nop|offset|param|result|return|select|set_(?:global|local)|start|table|tee_local|then|type|unreachable)\b/],variable:/\$[\w!#$%&'*+\-./:<=>?@\\^`|~]+/,number:/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/,punctuation:/[()]/};const a=i},87459:(e,t,n)=>{"use strict";function r(e){var t,n,i="";if("string"==typeof e||"number"==typeof e)i+=e;else if("object"==typeof e)if(Array.isArray(e))for(t=0;ti});const i=function(){for(var e,t,n=0,i="";n{"use strict";n.d(t,{Z:()=>m});var r=n(67294),i=n(83117),a=n(68356),o=n.n(a),s=n(16887);const 
c={"00ebe6f9":[()=>n.e(23420).then(n.bind(n,53968)),"@site/versioned_docs/version-0.6.4/evaluation/heterogeneous/node-able.md",53968],"010dbd2c":[()=>n.e(93860).then(n.bind(n,98363)),"@site/versioned_docs/version-0.6.6/cloud/billing/voucher.md",98363],"01198576":[()=>n.e(53535).then(n.bind(n,88901)),"@site/versioned_docs/version-0.6.7/reference/swcli/job.md",88901],"01a85c17":[()=>Promise.all([n.e(40532),n.e(64013)]).then(n.bind(n,91223)),"@theme/BlogTagsListPage",91223],"01fa99ed":[()=>n.e(27546).then(n.bind(n,48109)),"@site/versioned_docs/version-0.6.5/concepts/roles-permissions.md",48109],"023934a0":[()=>n.e(30758).then(n.bind(n,57019)),"@site/versioned_docs/version-0.5.10/evaluation/index.md",57019],"03431110":[()=>n.e(11257).then(n.bind(n,81920)),"@site/versioned_docs/version-0.6.4/server/installation/starwhale_env.md",81920],"0430ce14":[()=>n.e(36251).then(n.bind(n,13363)),"@site/versioned_docs/version-0.5.10/cloud/billing/billing.md",13363],"0513b4b7":[()=>n.e(41128).then(n.bind(n,48640)),"@site/versioned_docs/version-0.5.10/cloud/billing/refund.md",48640],"06024424":[()=>n.e(59859).then(n.bind(n,36971)),"@site/docs/server/installation/minikube.md",36971],"06309b2a":[()=>n.e(72416).then(n.bind(n,30490)),"@site/versioned_docs/version-0.6.0/reference/sdk/job.md",30490],"0661cc41":[()=>n.e(21756).then(n.bind(n,58095)),"@site/versioned_docs/version-0.6.0/concepts/roles-permissions.md",58095],"07b3ca0b":[()=>n.e(1094).then(n.bind(n,41270)),"@site/versioned_docs/version-0.5.12/reference/swcli/job.md",41270],"07e54361":[()=>n.e(51793).then(n.bind(n,91101)),"@site/versioned_docs/version-0.5.10/getting-started/cloud.md",91101],"080f50dc":[()=>n.e(43194).then(n.bind(n,54831)),"@site/versioned_docs/version-0.6.5/model/index.md",54831],"083b1e30":[()=>n.e(78098).then(n.bind(n,75865)),"@site/versioned_docs/version-0.6.5/reference/sdk/other.md",75865],"085af76e":[()=>n.e(99995).then(n.bind(n,4390)),"@site/versioned_docs/version-0.6.6/reference/sdk/other.md",4390],"0b0df7a2":[()=>n.e(92157).then(n.bind(n,18463)),"@site/versioned_docs/version-0.6.0/reference/swcli/runtime.md",18463],"0bd945fa":[()=>n.e(73559).then(n.bind(n,27243)),"@site/versioned_docs/version-0.6.0/swcli/uri.md",27243],"0c3559ad":[()=>n.e(12685).then(n.bind(n,60709)),"@site/versioned_docs/version-0.5.10/reference/sdk/evaluation.md",60709],"0c5dac11":[()=>n.e(62427).then(n.bind(n,62466)),"@site/versioned_docs/version-0.6.6/runtime/yaml.md",62466],"0ca68b49":[()=>n.e(69726).then(n.bind(n,69678)),"@site/docs/cloud/billing/recharge.md",69678],"0d3ce289":[()=>n.e(27551).then(n.bind(n,60648)),"@site/versioned_docs/version-0.6.7/reference/sdk/job.md",60648],"0dbd89b2":[()=>n.e(23321).then(n.t.bind(n,18811,19)),"~docs/default/version-0-5-12-metadata-prop-9d9.json",18811],"0e046294":[()=>n.e(88443).then(n.bind(n,48051)),"@site/versioned_docs/version-0.6.6/swcli/config.md",48051],"0e37f00c":[()=>n.e(96530).then(n.bind(n,14174)),"@site/versioned_docs/version-0.6.6/concepts/index.md",14174],"116fecd1":[()=>n.e(62615).then(n.bind(n,42064)),"@site/versioned_docs/version-0.6.0/getting-started/runtime.md",42064],"12005b37":[()=>n.e(68860).then(n.t.bind(n,12208,19)),"~docs/default/version-0-6-7-metadata-prop-b93.json",12208],"131ff956":[()=>n.e(42457).then(n.bind(n,67381)),"@site/versioned_docs/version-0.6.7/reference/swcli/server.md",67381],"1329fb4f":[()=>n.e(13231).then(n.bind(n,51326)),"@site/versioned_docs/version-0.5.10/reference/swcli/job.md",51326],"144c8d94":[()=>n.e(20806).then(n.bind(n,20510)),"@site/versioned_docs/version-0.6.5/referenc
e/swcli/model.md",20510],"157205ad":[()=>n.e(81653).then(n.bind(n,68632)),"@site/versioned_docs/version-0.6.6/reference/swcli/utilities.md",68632],"1609ca8c":[()=>n.e(6318).then(n.bind(n,96748)),"@site/versioned_docs/version-0.6.0/reference/swcli/instance.md",96748],"16f35551":[()=>n.e(41527).then(n.bind(n,96063)),"@site/versioned_docs/version-0.6.7/what-is-starwhale.md",96063],17896441:[()=>Promise.all([n.e(40532),n.e(78357),n.e(27918)]).then(n.bind(n,78945)),"@theme/DocItem",78945],"184fd11b":[()=>n.e(16959).then(n.bind(n,66020)),"@site/versioned_docs/version-0.6.5/what-is-starwhale.md",66020],"18a00ee6":[()=>n.e(12754).then(n.bind(n,26222)),"@site/versioned_docs/version-0.6.6/getting-started/index.md",26222],"18c9cfd4":[()=>n.e(85113).then(n.bind(n,94392)),"@site/versioned_docs/version-0.6.4/swcli/index.md",94392],"1928fade":[()=>n.e(3941).then(n.bind(n,67636)),"@site/versioned_docs/version-0.6.6/concepts/versioning.md",67636],19623007:[()=>n.e(67400).then(n.bind(n,79536)),"@site/versioned_docs/version-0.5.12/reference/swcli/dataset.md",79536],"19af0c64":[()=>n.e(21538).then(n.bind(n,6831)),"@site/docs/evaluation/index.md",6831],"19ed1308":[()=>n.e(38321).then(n.bind(n,81762)),"@site/versioned_docs/version-0.6.6/getting-started/server.md",81762],"1a0afac4":[()=>n.e(89368).then(n.bind(n,59934)),"@site/versioned_docs/version-0.6.6/faq/index.md",59934],"1a1c0fb0":[()=>n.e(84008).then(n.bind(n,64135)),"@site/docs/reference/swcli/job.md",64135],"1b262b1d":[()=>n.e(62383).then(n.bind(n,91437)),"@site/versioned_docs/version-0.6.7/cloud/billing/recharge.md",91437],"1b7cc7bc":[()=>n.e(33006).then(n.bind(n,11524)),"@site/versioned_docs/version-0.5.12/reference/swcli/index.md",11524],"1bb22f60":[()=>n.e(28760).then(n.bind(n,63083)),"@site/versioned_docs/version-0.6.6/server/project.md",63083],"1be78505":[()=>Promise.all([n.e(40532),n.e(29514)]).then(n.bind(n,19963)),"@theme/DocPage",19963],"1c091541":[()=>n.e(68271).then(n.t.bind(n,24469,19)),"/home/runner/work/docs/docs/.docusaurus/docusaurus-plugin-content-blog/default/plugin-route-context-module-100.json",24469],"1cd0502b":[()=>n.e(48158).then(n.bind(n,7312)),"@site/versioned_docs/version-0.6.0/getting-started/standalone.md",7312],"1cd68b1e":[()=>n.e(64540).then(n.bind(n,51590)),"@site/versioned_docs/version-0.5.10/swcli/installation.md",51590],"1cda5aa6":[()=>n.e(15281).then(n.bind(n,30907)),"@site/versioned_docs/version-0.5.12/reference/swcli/project.md",30907],"1d06a7a7":[()=>n.e(5).then(n.bind(n,98338)),"@site/versioned_docs/version-0.6.6/concepts/names.md",98338],"1daa9b51":[()=>n.e(99382).then(n.bind(n,41263)),"@site/versioned_docs/version-0.5.12/model/index.md",41263],"1ddcdff5":[()=>n.e(87647).then(n.t.bind(n,89476,19)),"~blog/default/blog-tags-model-evaluaitons-d53.json",89476],"1e5d9f2f":[()=>n.e(27690).then(n.bind(n,58021)),"@site/versioned_docs/version-0.6.7/examples/index.md",58021],"1efab335":[()=>n.e(34941).then(n.bind(n,79516)),"@site/docs/cloud/billing/bills.md",79516],"1f86ec65":[()=>n.e(4380).then(n.bind(n,90185)),"@site/versioned_docs/version-0.6.5/reference/sdk/job.md",90185],"206e8b40":[()=>n.e(57904).then(n.bind(n,2786)),"@site/versioned_docs/version-0.6.0/swcli/installation.md",2786],"207b7cff":[()=>n.e(90790).then(n.bind(n,36996)),"@site/versioned_docs/version-0.6.0/reference/sdk/evaluation.md",36996],"208d09d7":[()=>n.e(18337).then(n.bind(n,75649)),"@site/versioned_docs/version-0.5.10/server/installation/starwhale_env.md",75649],"20dac1bd":[()=>n.e(24567).then(n.bind(n,91441)),"@site/versioned_docs/version-0.6.4/cloud/b
illing/billing.md",91441],"21b82531":[()=>n.e(44868).then(n.bind(n,3404)),"@site/versioned_docs/version-0.6.5/cloud/billing/billing.md",3404],"22ced7ff":[()=>n.e(74355).then(n.bind(n,30306)),"@site/versioned_docs/version-0.6.4/server/installation/helm-charts.md",30306],"235ff073":[()=>n.e(51237).then(n.bind(n,11368)),"@site/versioned_docs/version-0.6.6/server/installation/docker-compose.md",11368],"236d8693":[()=>n.e(13215).then(n.bind(n,17158)),"@site/versioned_docs/version-0.5.10/concepts/roles-permissions.md",17158],"247783bb":[()=>n.e(59334).then(n.t.bind(n,83769,19)),"/home/runner/work/docs/docs/.docusaurus/docusaurus-plugin-content-docs/default/plugin-route-context-module-100.json",83769],"24a48cd0":[()=>n.e(474).then(n.bind(n,48251)),"@site/versioned_docs/version-0.6.0/getting-started/index.md",48251],"25243afb":[()=>n.e(21484).then(n.bind(n,8446)),"@site/versioned_docs/version-0.6.0/runtime/yaml.md",8446],"25350f7b":[()=>n.e(47249).then(n.bind(n,14513)),"@site/versioned_docs/version-0.6.7/reference/swcli/utilities.md",14513],"2545d4b6":[()=>n.e(10904).then(n.t.bind(n,65540,19)),"~blog/default/blog-tags-model-package-3fd.json",65540],"2572f700":[()=>n.e(14193).then(n.bind(n,71418)),"@site/versioned_docs/version-0.5.12/reference/swcli/runtime.md",71418],"26d65ea9":[()=>n.e(22374).then(n.bind(n,20267)),"@site/versioned_docs/version-0.6.6/reference/swcli/runtime.md",20267],"26df6cbd":[()=>n.e(54805).then(n.t.bind(n,63091,19)),"~blog/default/blog-tags-llama-2-489.json",63091],"2729f289":[()=>n.e(30556).then(n.bind(n,57966)),"@site/versioned_docs/version-0.5.12/faq/index.md",57966],"272c7b59":[()=>n.e(48242).then(n.bind(n,34851)),"@site/versioned_docs/version-0.6.0/server/installation/index.md",34851],27414590:[()=>n.e(436).then(n.bind(n,15528)),"@site/versioned_docs/version-0.6.0/server/installation/helm-charts.md",15528],"2751e0ff":[()=>n.e(15767).then(n.bind(n,2393)),"@site/versioned_docs/version-0.6.5/getting-started/index.md",2393],"286cdff1":[()=>n.e(49716).then(n.bind(n,52951)),"@site/docs/concepts/versioning.md",52951],"28bf383b":[()=>n.e(78486).then(n.bind(n,66536)),"@site/versioned_docs/version-0.6.6/evaluation/heterogeneous/node-able.md",66536],"28c853cd":[()=>n.e(61778).then(n.bind(n,51979)),"@site/versioned_docs/version-0.6.6/model/index.md",51979],"2914bf67":[()=>n.e(44060).then(n.bind(n,51657)),"@site/docs/swcli/uri.md",51657],"29713cec":[()=>n.e(99914).then(n.bind(n,2883)),"@site/versioned_docs/version-0.5.12/swcli/index.md",2883],"2a01203c":[()=>n.e(31086).then(n.bind(n,82623)),"@site/versioned_docs/version-0.6.6/reference/swcli/instance.md",82623],"2a3685db":[()=>n.e(16231).then(n.bind(n,40881)),"@site/versioned_docs/version-0.6.7/evaluation/index.md",40881],"2a436d5c":[()=>n.e(71955).then(n.bind(n,52209)),"@site/versioned_docs/version-0.5.10/runtime/yaml.md",52209],"2a7c7842":[()=>n.e(45926).then(n.bind(n,96400)),"@site/versioned_docs/version-0.6.5/reference/sdk/type.md",96400],"2c368d66":[()=>n.e(6251).then(n.bind(n,17520)),"@site/versioned_docs/version-0.6.5/getting-started/server.md",17520],"2d20cfa5":[()=>n.e(36637).then(n.bind(n,18096)),"@site/versioned_docs/version-0.6.7/cloud/index.md",18096],"2d30fa72":[()=>n.e(4792).then(n.bind(n,13909)),"@site/versioned_docs/version-0.6.5/reference/sdk/evaluation.md",13909],"2d40f4be":[()=>n.e(66618).then(n.bind(n,23515)),"@site/versioned_docs/version-0.5.10/dataset/yaml.md",23515],"2d78f039":[()=>n.e(19255).then(n.bind(n,57443)),"@site/docs/getting-started/cloud.md",57443],"2defc614":[()=>n.e(26809).then(n.bind(n,42871)),"@si
te/versioned_docs/version-0.6.4/reference/sdk/evaluation.md",42871],"2e0ef41a":[()=>n.e(12523).then(n.bind(n,95020)),"@site/versioned_docs/version-0.5.12/server/guides/server_admin.md",95020],"2e423130":[()=>n.e(63654).then(n.bind(n,8534)),"@site/versioned_docs/version-0.6.7/getting-started/cloud.md",8534],"2e5465c5":[()=>n.e(61237).then(n.bind(n,19243)),"@site/versioned_docs/version-0.5.10/model/index.md",19243],"2ea1e391":[()=>n.e(86792).then(n.bind(n,69202)),"@site/versioned_docs/version-0.5.12/evaluation/heterogeneous/virtual-node.md",69202],"305f83c8":[()=>n.e(78244).then(n.bind(n,45904)),"@site/versioned_docs/version-0.6.0/evaluation/heterogeneous/virtual-node.md",45904],"30f3b4a9":[()=>n.e(55008).then(n.bind(n,54261)),"@site/versioned_docs/version-0.6.7/server/guides/server_admin.md",54261],"3160d5c7":[()=>n.e(97749).then(n.bind(n,72681)),"@site/versioned_docs/version-0.5.12/reference/sdk/evaluation.md",72681],"31ea793a":[()=>n.e(2882).then(n.bind(n,59123)),"@site/versioned_docs/version-0.6.7/dataset/index.md",59123],32528799:[()=>n.e(93233).then(n.bind(n,11294)),"@site/versioned_docs/version-0.5.12/concepts/index.md",11294],"327c535f":[()=>n.e(44230).then(n.bind(n,63668)),"@site/versioned_docs/version-0.6.0/server/project.md",63668],"32cba7ce":[()=>n.e(47183).then(n.t.bind(n,80130,19)),"~blog/default/blog-tags-llama-2-489-list.json",80130],"3470dd35":[()=>n.e(88444).then(n.bind(n,22113)),"@site/versioned_docs/version-0.6.7/getting-started/standalone.md",22113],"347c37ac":[()=>n.e(89945).then(n.bind(n,96545)),"@site/versioned_docs/version-0.6.0/cloud/billing/billing.md",96545],"3480b943":[()=>n.e(10304).then(n.bind(n,8056)),"@site/docs/reference/swcli/instance.md",8056],"34bcdfb7":[()=>n.e(95051).then(n.bind(n,36511)),"@site/versioned_docs/version-0.6.7/reference/sdk/other.md",36511],"34f595d3":[()=>n.e(44720).then(n.bind(n,21832)),"@site/versioned_docs/version-0.6.4/faq/index.md",21832],"3521e0c7":[()=>n.e(56862).then(n.bind(n,11453)),"@site/versioned_docs/version-0.6.4/what-is-starwhale.md",11453],"357b8294":[()=>n.e(79375).then(n.bind(n,1781)),"@site/versioned_docs/version-0.6.5/concepts/names.md",1781],"35a1304b":[()=>n.e(19221).then(n.bind(n,36852)),"@site/versioned_docs/version-0.6.4/cloud/billing/refund.md",36852],"35b14ade":[()=>n.e(92286).then(n.bind(n,49926)),"@site/versioned_docs/version-0.5.12/reference/sdk/overview.md",49926],"373b159b":[()=>n.e(71913).then(n.bind(n,95028)),"@site/docs/dataset/index.md",95028],"3762e359":[()=>n.e(60074).then(n.bind(n,31534)),"@site/versioned_docs/version-0.6.4/dataset/yaml.md",31534],"377d34c2":[()=>n.e(37813).then(n.bind(n,93759)),"@site/docs/reference/swcli/dataset.md",93759],"3818d7db":[()=>n.e(84608).then(n.bind(n,91767)),"@site/versioned_docs/version-0.6.0/reference/swcli/dataset.md",91767],38311505:[()=>n.e(34758).then(n.bind(n,33551)),"@site/versioned_docs/version-0.6.0/swcli/swignore.md",33551],"3853ad19":[()=>n.e(42748).then(n.bind(n,83508)),"@site/versioned_docs/version-0.6.4/reference/swcli/instance.md",83508],"3909c8ec":[()=>n.e(29455).then(n.bind(n,42022)),"@site/versioned_docs/version-0.6.5/reference/sdk/dataset.md",42022],"39af834a":[()=>n.e(70996).then(n.bind(n,44526)),"@site/docs/community/contribute.md",44526],"3a421d9e":[()=>n.e(27479).then(n.bind(n,40122)),"@site/versioned_docs/version-0.6.6/evaluation/index.md",40122],"3b5b6856":[()=>n.e(29878).then(n.t.bind(n,78900,19)),"~blog/default/blog-tags-model-evaluaitons-d53-list.json",78900],"3b7875ca":[()=>n.e(94446).then(n.bind(n,16484)),"@site/versioned_docs/version-0.5.
12/swcli/uri.md",16484],"3d33d8b4":[()=>n.e(68318).then(n.bind(n,62067)),"@site/versioned_docs/version-0.6.6/cloud/billing/billing.md",62067],"3e07ca9d":[()=>n.e(33217).then(n.bind(n,38070)),"@site/versioned_docs/version-0.6.6/server/installation/k8s-cluster.md",38070],"3f90d064":[()=>n.e(84472).then(n.bind(n,66491)),"@site/versioned_docs/version-0.6.0/server/installation/docker-compose.md",66491],"3fc51e24":[()=>n.e(16302).then(n.bind(n,81815)),"@site/versioned_docs/version-0.6.7/reference/sdk/model.md",81815],"4012ba53":[()=>n.e(54475).then(n.bind(n,85592)),"@site/versioned_docs/version-0.5.10/getting-started/runtime.md",85592],"406e7d86":[()=>n.e(56775).then(n.bind(n,30904)),"@site/versioned_docs/version-0.6.5/reference/sdk/overview.md",30904],"407f63f3":[()=>n.e(76597).then(n.bind(n,96320)),"@site/versioned_docs/version-0.6.5/reference/swcli/utilities.md",96320],"408f57f9":[()=>n.e(81258).then(n.bind(n,3461)),"@site/versioned_docs/version-0.6.6/server/index.md",3461],"40e2e448":[()=>n.e(17854).then(n.bind(n,55447)),"@site/docs/what-is-starwhale.md",55447],"41c3269f":[()=>n.e(78114).then(n.bind(n,17718)),"@site/versioned_docs/version-0.5.12/getting-started/server.md",17718],"42b9ee70":[()=>n.e(56334).then(n.bind(n,23854)),"@site/versioned_docs/version-0.6.6/reference/sdk/type.md",23854],"42d9f35f":[()=>n.e(6435).then(n.bind(n,33922)),"@site/docs/getting-started/standalone.md",33922],"43b1a21e":[()=>n.e(49774).then(n.bind(n,84867)),"@site/versioned_docs/version-0.6.4/reference/swcli/index.md",84867],"4431de91":[()=>n.e(76888).then(n.bind(n,55552)),"@site/versioned_docs/version-0.6.6/dataset/index.md",55552],"44b1cbe2":[()=>n.e(90838).then(n.bind(n,81503)),"@site/docs/swcli/installation.md",81503],"4511f06b":[()=>n.e(93189).then(n.bind(n,6311)),"@site/versioned_docs/version-0.5.10/concepts/names.md",6311],"4525d9ab":[()=>n.e(40702).then(n.bind(n,24559)),"@site/versioned_docs/version-0.6.6/swcli/uri.md",24559],"4528a46e":[()=>n.e(6338).then(n.bind(n,38101)),"@site/versioned_docs/version-0.6.0/reference/sdk/type.md",38101],"4645ad56":[()=>n.e(15182).then(n.bind(n,98815)),"@site/versioned_docs/version-0.6.0/cloud/billing/refund.md",98815],"4696e759":[()=>n.e(8766).then(n.bind(n,34374)),"@site/docs/model/yaml.md",34374],"481b4727":[()=>n.e(73228).then(n.bind(n,58445)),"@site/docs/reference/swcli/index.md",58445],"486179e1":[()=>n.e(76208).then(n.bind(n,13824)),"@site/versioned_docs/version-0.6.4/reference/swcli/utilities.md",13824],"4870e029":[()=>n.e(7987).then(n.bind(n,27550)),"@site/versioned_docs/version-0.6.7/reference/swcli/index.md",27550],"4888691f":[()=>n.e(76705).then(n.bind(n,89747)),"@site/versioned_docs/version-0.6.4/reference/swcli/dataset.md",89747],"49b4d3fe":[()=>n.e(71405).then(n.bind(n,6220)),"@site/docs/runtime/index.md",6220],"49e0930d":[()=>n.e(81974).then(n.bind(n,96834)),"@site/versioned_docs/version-0.6.5/reference/swcli/job.md",96834],"4a27598e":[()=>n.e(9095).then(n.bind(n,32120)),"@site/versioned_docs/version-0.6.6/concepts/project.md",32120],"4b44443b":[()=>n.e(86849).then(n.bind(n,19805)),"@site/versioned_docs/version-0.6.4/runtime/index.md",19805],"4ba13767":[()=>n.e(4956).then(n.bind(n,76540)),"@site/versioned_docs/version-0.6.5/evaluation/heterogeneous/virtual-node.md",76540],"4db21eee":[()=>n.e(6020).then(n.bind(n,10685)),"@site/versioned_docs/version-0.5.12/evaluation/heterogeneous/node-able.md",10685],"4f239cc1":[()=>n.e(55182).then(n.bind(n,62676)),"@site/versioned_docs/version-0.6.0/dataset/index.md",62676],"4f799c62":[()=>n.e(4981).then(n.bind(n,85027)),"
@site/versioned_docs/version-0.6.5/evaluation/heterogeneous/node-able.md",85027],"4f7fe039":[()=>n.e(72150).then(n.bind(n,8932)),"@site/versioned_docs/version-0.6.4/server/index.md",8932],"4f907a97":[()=>n.e(72500).then(n.bind(n,22433)),"@site/versioned_docs/version-0.5.10/server/project.md",22433],50417919:[()=>n.e(7697).then(n.bind(n,27533)),"@site/versioned_docs/version-0.5.12/cloud/billing/billing.md",27533],"51c1bc08":[()=>n.e(26912).then(n.bind(n,50907)),"@site/versioned_docs/version-0.6.4/server/installation/minikube.md",50907],"51cebc0f":[()=>n.e(48220).then(n.bind(n,39799)),"@site/versioned_docs/version-0.6.5/server/installation/docker-compose.md",39799],"51f472b9":[()=>n.e(18472).then(n.bind(n,40474)),"@site/versioned_docs/version-0.6.4/reference/swcli/runtime.md",40474],"521740bf":[()=>n.e(81579).then(n.bind(n,25062)),"@site/versioned_docs/version-0.5.10/reference/swcli/utilities.md",25062],"52d05d9b":[()=>n.e(41133).then(n.bind(n,28047)),"@site/versioned_docs/version-0.6.6/reference/swcli/index.md",28047],"53e20daa":[()=>n.e(29927).then(n.bind(n,94884)),"@site/versioned_docs/version-0.5.10/what-is-starwhale.md",94884],"5431a54b":[()=>n.e(97054).then(n.bind(n,15676)),"@site/versioned_docs/version-0.6.0/server/installation/starwhale_env.md",15676],"54c82979":[()=>n.e(26329).then(n.bind(n,15262)),"@site/docs/getting-started/index.md",15262],"552162b0":[()=>n.e(26966).then(n.bind(n,52888)),"@site/docs/cloud/billing/billing.md",52888],56383101:[()=>n.e(34034).then(n.bind(n,40162)),"@site/versioned_docs/version-0.5.10/cloud/billing/bills.md",40162],"5648656a":[()=>n.e(43288).then(n.bind(n,4058)),"@site/versioned_docs/version-0.5.10/reference/sdk/overview.md",4058],"568d992e":[()=>n.e(29362).then(n.bind(n,30157)),"@site/versioned_docs/version-0.6.7/getting-started/runtime.md",30157],"568f204d":[()=>n.e(97830).then(n.bind(n,60102)),"@site/versioned_docs/version-0.5.12/cloud/billing/refund.md",60102],"56d53d53":[()=>n.e(70232).then(n.bind(n,93578)),"@site/versioned_docs/version-0.6.4/runtime/yaml.md",93578],"57f5c722":[()=>n.e(86235).then(n.bind(n,20549)),"@site/docs/server/installation/starwhale_env.md",20549],"589c66ec":[()=>n.e(3881).then(n.bind(n,48283)),"@site/versioned_docs/version-0.5.12/reference/sdk/type.md",48283],"58bb273c":[()=>n.e(62323).then(n.bind(n,62554)),"@site/versioned_docs/version-0.6.7/swcli/uri.md",62554],"58f10d9f":[()=>n.e(12493).then(n.t.bind(n,99005,19)),"~docs/default/version-0-6-0-metadata-prop-089.json",99005],"5936e3f8":[()=>n.e(84571).then(n.bind(n,61232)),"@site/versioned_docs/version-0.5.10/reference/sdk/type.md",61232],"598582d2":[()=>n.e(1371).then(n.bind(n,58398)),"@site/versioned_docs/version-0.6.7/reference/swcli/dataset.md",58398],"59b46ff7":[()=>n.e(40814).then(n.bind(n,49949)),"@site/versioned_docs/version-0.6.6/reference/swcli/job.md",49949],"5a391425":[()=>n.e(91394).then(n.bind(n,2223)),"@site/versioned_docs/version-0.5.10/swcli/uri.md",2223],"5a4ad223":[()=>n.e(45536).then(n.bind(n,85336)),"@site/docs/dataset/yaml.md",85336],"5b72acc5":[()=>n.e(57208).then(n.bind(n,6037)),"@site/versioned_docs/version-0.5.10/concepts/index.md",6037],"5bb31039":[()=>n.e(85938).then(n.bind(n,99765)),"@site/versioned_docs/version-0.6.4/server/installation/index.md",99765],"5c2ad240":[()=>n.e(75057).then(n.bind(n,35892)),"@site/docs/swcli/index.md",35892],"5c552995":[()=>n.e(54592).then(n.bind(n,5861)),"@site/versioned_docs/version-0.6.7/runtime/index.md",5861],"5d3ff7ab":[()=>n.e(73919).then(n.t.bind(n,60708,19)),"~docs/default/version-0-6-4-metadata-prop-2ba.js
on",60708],"5e73aff3":[()=>n.e(68481).then(n.bind(n,62308)),"@site/versioned_docs/version-0.6.0/reference/sdk/overview.md",62308],"5eb6fda8":[()=>n.e(80503).then(n.bind(n,49970)),"@site/versioned_docs/version-0.5.12/reference/sdk/model.md",49970],"5ee7b1bc":[()=>n.e(69810).then(n.bind(n,11480)),"@site/docs/concepts/glossary.md",11480],"5f0d6fdb":[()=>n.e(34885).then(n.bind(n,69767)),"@site/docs/cloud/billing/refund.md",69767],"5f38f66e":[()=>n.e(71434).then(n.bind(n,11622)),"@site/versioned_docs/version-0.5.12/reference/sdk/dataset.md",11622],"605a1123":[()=>n.e(8619).then(n.bind(n,91174)),"@site/versioned_docs/version-0.6.0/evaluation/index.md",91174],"61356d5d":[()=>n.e(82739).then(n.bind(n,37924)),"@site/docs/reference/swcli/utilities.md",37924],"63f3ccc1":[()=>n.e(85073).then(n.bind(n,50320)),"@site/versioned_docs/version-0.5.12/server/installation/index.md",50320],"648a866a":[()=>n.e(86634).then(n.bind(n,40732)),"@site/versioned_docs/version-0.6.5/concepts/versioning.md",40732],"6507269d":[()=>n.e(27480).then(n.bind(n,40288)),"@site/versioned_docs/version-0.6.7/server/project.md",40288],"65621f19":[()=>n.e(54337).then(n.bind(n,22855)),"@site/versioned_docs/version-0.6.6/cloud/billing/bills.md",22855],"658b7766":[()=>n.e(41908).then(n.bind(n,82511)),"@site/versioned_docs/version-0.6.7/cloud/billing/refund.md",82511],"65b8fb58":[()=>n.e(55018).then(n.bind(n,87603)),"@site/versioned_docs/version-0.6.7/evaluation/heterogeneous/node-able.md",87603],"65c6927d":[()=>n.e(66443).then(n.bind(n,82614)),"@site/versioned_docs/version-0.5.10/server/index.md",82614],"65f2cac7":[()=>n.e(77630).then(n.bind(n,99889)),"@site/versioned_docs/version-0.6.7/dataset/yaml.md",99889],"6763b9d9":[()=>n.e(16336).then(n.bind(n,98747)),"@site/versioned_docs/version-0.5.10/reference/swcli/runtime.md",98747],"6875c492":[()=>Promise.all([n.e(40532),n.e(78357),n.e(46048),n.e(48610)]).then(n.bind(n,41714)),"@theme/BlogTagsPostsPage",41714],"689aaa3d":[()=>n.e(91143).then(n.bind(n,95478)),"@site/versioned_docs/version-0.5.10/server/guides/server_admin.md",95478],"68ba87f2":[()=>n.e(8857).then(n.bind(n,73299)),"@site/versioned_docs/version-0.6.4/model/index.md",73299],"68badd3d":[()=>n.e(21687).then(n.bind(n,50249)),"@site/versioned_docs/version-0.6.5/cloud/billing/recharge.md",50249],"68d52b22":[()=>n.e(87665).then(n.bind(n,87768)),"@site/versioned_docs/version-0.6.5/reference/swcli/index.md",87768],"6ad060df":[()=>n.e(62201).then(n.bind(n,10818)),"@site/versioned_docs/version-0.6.7/server/installation/server-start.md",10818],"6b5d17d2":[()=>n.e(14121).then(n.bind(n,61481)),"@site/versioned_docs/version-0.6.4/swcli/config.md",61481],"6caec8c8":[()=>n.e(36707).then(n.bind(n,70550)),"@site/versioned_docs/version-0.6.7/evaluation/heterogeneous/virtual-node.md",70550],"6cbd7e7d":[()=>n.e(41840).then(n.bind(n,10275)),"@site/versioned_docs/version-0.6.0/cloud/billing/bills.md",10275],"6e1f8ce6":[()=>n.e(96366).then(n.bind(n,55546)),"@site/versioned_docs/version-0.6.0/reference/swcli/job.md",55546],"6edc6741":[()=>n.e(13952).then(n.bind(n,50247)),"@site/blog/2023-07-21-intro.md",50247],"6f13de77":[()=>n.e(8685).then(n.bind(n,19429)),"@site/versioned_docs/version-0.6.0/faq/index.md",19429],"6f90d93e":[()=>n.e(73471).then(n.bind(n,16672)),"@site/versioned_docs/version-0.6.5/swcli/installation.md",16672],"6f99d302":[()=>n.e(34927).then(n.bind(n,90225)),"@site/versioned_docs/version-0.6.4/concepts/project.md",90225],70208709:[()=>n.e(80347).then(n.bind(n,7275)),"@site/versioned_docs/version-0.6.7/getting-started/index.md",7275],709
26518:[()=>n.e(8592).then(n.bind(n,16451)),"@site/docs/reference/swcli/runtime.md",16451],"715b1a07":[()=>n.e(77053).then(n.bind(n,86983)),"@site/versioned_docs/version-0.6.6/swcli/index.md",86983],"71ff360b":[()=>n.e(30168).then(n.bind(n,65530)),"@site/versioned_docs/version-0.5.12/evaluation/index.md",65530],"73c5427a":[()=>n.e(93840).then(n.bind(n,4960)),"@site/versioned_docs/version-0.5.12/reference/swcli/utilities.md",4960],"7454ca30":[()=>n.e(76410).then(n.bind(n,92852)),"@site/versioned_docs/version-0.6.5/dataset/index.md",92852],"74882eab":[()=>n.e(7833).then(n.bind(n,49672)),"@site/versioned_docs/version-0.6.4/reference/swcli/job.md",49672],"74d883b7":[()=>n.e(72828).then(n.bind(n,30237)),"@site/versioned_docs/version-0.5.12/getting-started/index.md",30237],"74da7579":[()=>n.e(9340).then(n.bind(n,97233)),"@site/versioned_docs/version-0.6.0/reference/sdk/model.md",97233],"74e63638":[()=>n.e(50173).then(n.bind(n,2252)),"@site/versioned_docs/version-0.6.7/reference/sdk/type.md",2252],"7578b5f6":[()=>n.e(49009).then(n.bind(n,63358)),"@site/versioned_docs/version-0.6.0/reference/swcli/index.md",63358],"7684512e":[()=>n.e(51435).then(n.bind(n,23646)),"@site/blog/2023-09-11-reproduce-and-compare-evaluations.md?truncated=true",23646],"77daf463":[()=>n.e(54007).then(n.bind(n,43355)),"@site/docs/evaluation/heterogeneous/virtual-node.md",43355],"781f4c86":[()=>n.e(34679).then(n.bind(n,94272)),"@site/versioned_docs/version-0.6.6/server/installation/docker.md",94272],78408446:[()=>n.e(77722).then(n.bind(n,10964)),"@site/versioned_docs/version-0.6.5/swcli/swignore.md",10964],"78886a16":[()=>n.e(18018).then(n.bind(n,61976)),"@site/versioned_docs/version-0.5.12/server/installation/helm-charts.md",61976],"78d62bd9":[()=>n.e(35881).then(n.bind(n,2828)),"@site/docs/reference/sdk/job.md",2828],"79380ea1":[()=>n.e(91823).then(n.bind(n,22062)),"@site/versioned_docs/version-0.6.6/server/guides/server_admin.md",22062],"797023eb":[()=>n.e(66508).then(n.bind(n,65162)),"@site/versioned_docs/version-0.5.10/reference/sdk/dataset.md",65162],"79d44606":[()=>n.e(25751).then(n.bind(n,35054)),"@site/versioned_docs/version-0.6.5/faq/index.md",35054],"7a8da0ce":[()=>n.e(64896).then(n.t.bind(n,5688,19)),"~docs/default/version-0-5-10-metadata-prop-95b.json",5688],"7a93542f":[()=>n.e(57199).then(n.bind(n,4623)),"@site/versioned_docs/version-0.6.0/cloud/billing/voucher.md",4623],"7ba1164b":[()=>n.e(91796).then(n.bind(n,32687)),"@site/versioned_docs/version-0.6.6/concepts/roles-permissions.md",32687],"7beeba1c":[()=>n.e(54351).then(n.bind(n,88472)),"@site/versioned_docs/version-0.6.4/swcli/uri.md",88472],"7c7a0a4e":[()=>n.e(82264).then(n.bind(n,36214)),"@site/versioned_docs/version-0.6.5/reference/swcli/project.md",36214],"7c8cfcaa":[()=>n.e(49205).then(n.bind(n,97541)),"@site/docs/reference/swcli/project.md",97541],"7d188f18":[()=>n.e(89875).then(n.bind(n,62734)),"@site/versioned_docs/version-0.6.0/concepts/project.md",62734],"7d733c18":[()=>n.e(55173).then(n.bind(n,7168)),"@site/versioned_docs/version-0.6.4/evaluation/index.md",7168],"7e15b78c":[()=>n.e(59116).then(n.bind(n,68127)),"@site/versioned_docs/version-0.6.6/reference/swcli/server.md",68127],"7e7ec2d9":[()=>n.e(59321).then(n.bind(n,70247)),"@site/versioned_docs/version-0.6.7/getting-started/server.md",70247],"7eb32d37":[()=>n.e(99487).then(n.bind(n,6103)),"@site/versioned_docs/version-0.5.10/server/installation/minikube.md",6103],"7f26efeb":[()=>n.e(63229).then(n.bind(n,7648)),"@site/versioned_docs/version-0.6.4/getting-started/server.md",7648],80151786:[()=>n.
e(97656).then(n.bind(n,42063)),"@site/versioned_docs/version-0.5.12/runtime/yaml.md",42063],"814f3328":[()=>n.e(52535).then(n.t.bind(n,45641,19)),"~blog/default/blog-post-list-prop-default.json",45641],"8195011d":[()=>n.e(1933).then(n.bind(n,97371)),"@site/versioned_docs/version-0.6.4/evaluation/heterogeneous/virtual-node.md",97371],"81c352c7":[()=>n.e(61014).then(n.bind(n,27009)),"@site/docs/runtime/yaml.md",27009],"830c5ac1":[()=>n.e(1573).then(n.bind(n,15381)),"@site/versioned_docs/version-0.6.7/cloud/billing/bills.md",15381],"835a68ee":[()=>n.e(1173).then(n.bind(n,41412)),"@site/versioned_docs/version-0.6.6/server/installation/index.md",41412],"838b539f":[()=>n.e(79498).then(n.bind(n,57017)),"@site/versioned_docs/version-0.6.6/dataset/yaml.md",57017],"83e43ff1":[()=>n.e(57659).then(n.bind(n,19873)),"@site/docs/server/installation/index.md",19873],"84f9b92a":[()=>n.e(18354).then(n.bind(n,3859)),"@site/versioned_docs/version-0.6.5/evaluation/index.md",3859],"85a3d98f":[()=>n.e(24e3).then(n.bind(n,45911)),"@site/docs/examples/helloworld.md",45911],"87552efa":[()=>n.e(98190).then(n.bind(n,55822)),"@site/versioned_docs/version-0.6.5/server/installation/starwhale_env.md",55822],"8760074f":[()=>n.e(60076).then(n.bind(n,44571)),"@site/versioned_docs/version-0.6.6/runtime/index.md",44571],"8776c192":[()=>n.e(14686).then(n.bind(n,64935)),"@site/versioned_docs/version-0.6.7/community/contribute.md",64935],"877d4050":[()=>n.e(45762).then(n.bind(n,50682)),"@site/docs/server/index.md",50682],"87f36f7b":[()=>n.e(97604).then(n.bind(n,24149)),"@site/versioned_docs/version-0.6.6/cloud/index.md",24149],88015853:[()=>n.e(34934).then(n.bind(n,34323)),"@site/versioned_docs/version-0.5.12/model/yaml.md",34323],"880e6f87":[()=>n.e(14565).then(n.bind(n,68409)),"@site/versioned_docs/version-0.6.5/reference/swcli/runtime.md",68409],"88988c18":[()=>n.e(86088).then(n.bind(n,33221)),"@site/versioned_docs/version-0.5.10/cloud/billing/recharge.md",33221],"899d5fe0":[()=>n.e(1655).then(n.bind(n,38711)),"@site/blog/2023-07-24-run-llama-2-chat-in-five-minutes.md?truncated=true",38711],"8a669a0e":[()=>n.e(94720).then(n.bind(n,65656)),"@site/versioned_docs/version-0.6.7/server/installation/starwhale_env.md",65656],"8b2d4da3":[()=>n.e(20996).then(n.bind(n,1339)),"@site/versioned_docs/version-0.6.0/community/contribute.md",1339],"8ba41740":[()=>n.e(54527).then(n.bind(n,1015)),"@site/versioned_docs/version-0.5.12/server/installation/docker-compose.md",1015],"8d2529f9":[()=>n.e(91349).then(n.bind(n,65259)),"@site/versioned_docs/version-0.6.7/reference/sdk/evaluation.md",65259],"8d25ede1":[()=>n.e(77832).then(n.bind(n,36104)),"@site/versioned_docs/version-0.6.6/swcli/installation.md",36104],"8de92970":[()=>n.e(32818).then(n.bind(n,81031)),"@site/docs/reference/sdk/overview.md",81031],"8e04f48d":[()=>n.e(15620).then(n.bind(n,87396)),"@site/versioned_docs/version-0.5.10/getting-started/index.md",87396],"8e3c9231":[()=>n.e(26909).then(n.bind(n,47617)),"@site/versioned_docs/version-0.5.12/dataset/index.md",47617],"8e7c41b9":[()=>n.e(84100).then(n.bind(n,31677)),"@site/versioned_docs/version-0.6.7/server/installation/docker.md",31677],"8f11fbb5":[()=>n.e(52106).then(n.bind(n,62145)),"@site/versioned_docs/version-0.5.12/concepts/names.md",62145],"8fb4711f":[()=>n.e(66869).then(n.bind(n,44019)),"@site/versioned_docs/version-0.6.4/server/guides/server_admin.md",44019],"90d098bd":[()=>n.e(42161).then(n.bind(n,56378)),"@site/versioned_docs/version-0.5.12/reference/sdk/job.md",56378],"9130a3e1":[()=>n.e(71797).then(n.bind(n,30913)),"@site
/versioned_docs/version-0.6.5/swcli/uri.md",30913],"91603a9d":[()=>n.e(6209).then(n.bind(n,83438)),"@site/docs/server/installation/server-start.md",83438],"91ab3747":[()=>n.e(91013).then(n.bind(n,38646)),"@site/versioned_docs/version-0.6.0/server/installation/minikube.md",38646],"91edb5cf":[()=>n.e(81140).then(n.bind(n,28475)),"@site/versioned_docs/version-0.6.0/cloud/billing/recharge.md",28475],"923434ee":[()=>n.e(1678).then(n.bind(n,33496)),"@site/docs/concepts/names.md",33496],"9296be1a":[()=>n.e(74149).then(n.bind(n,26423)),"@site/versioned_docs/version-0.6.6/server/installation/starwhale_env.md",26423],"93232c32":[()=>n.e(10075).then(n.bind(n,46832)),"@site/versioned_docs/version-0.6.4/getting-started/runtime.md",46832],"935f2afb":[()=>n.e(80053).then(n.t.bind(n,1109,19)),"~docs/default/version-current-metadata-prop-751.json",1109],"93b8e872":[()=>n.e(42579).then(n.bind(n,65442)),"@site/versioned_docs/version-0.6.7/cloud/billing/voucher.md",65442],"93f00860":[()=>n.e(81684).then(n.bind(n,99586)),"@site/versioned_docs/version-0.5.12/dataset/yaml.md",99586],"9547b526":[()=>n.e(40027).then(n.bind(n,46548)),"@site/versioned_docs/version-0.6.6/server/installation/server-start.md",46548],"959b44ee":[()=>n.e(5101).then(n.bind(n,5927)),"@site/versioned_docs/version-0.5.12/swcli/installation.md",5927],"963797ee":[()=>n.e(22669).then(n.bind(n,49098)),"@site/docs/cloud/index.md",49098],"967e2129":[()=>n.e(36906).then(n.bind(n,39953)),"@site/versioned_docs/version-0.6.7/model/index.md",39953],"974006d0":[()=>n.e(57257).then(n.bind(n,14670)),"@site/versioned_docs/version-0.6.6/reference/sdk/overview.md",14670],"978f5c7d":[()=>n.e(49687).then(n.bind(n,99293)),"@site/versioned_docs/version-0.5.10/concepts/project.md",99293],"97affa74":[()=>n.e(78634).then(n.bind(n,72126)),"@site/versioned_docs/version-0.5.12/swcli/config.md",72126],"986a7b24":[()=>n.e(71368).then(n.bind(n,85835)),"@site/versioned_docs/version-0.5.10/swcli/swignore.md",85835],"99977c84":[()=>n.e(7372).then(n.bind(n,14613)),"@site/versioned_docs/version-0.5.12/reference/sdk/other.md",14613],"9b1574cb":[()=>n.e(54011).then(n.bind(n,82714)),"@site/docs/reference/sdk/evaluation.md",82714],"9b79081a":[()=>n.e(63057).then(n.t.bind(n,63291,19)),"~blog/default/blog-tags-intro-fe7-list.json",63291],"9ba654c2":[()=>n.e(16237).then(n.bind(n,37638)),"@site/versioned_docs/version-0.5.12/reference/swcli/instance.md",37638],"9bdeab26":[()=>n.e(60988).then(n.t.bind(n,73693,19)),"~blog/default/blog-tags-intro-fe7.json",73693],"9c0c4186":[()=>n.e(61618).then(n.bind(n,40566)),"@site/versioned_docs/version-0.6.5/server/installation/docker.md",40566],"9cf37abf":[()=>n.e(74537).then(n.bind(n,90055)),"@site/versioned_docs/version-0.5.10/concepts/versioning.md",90055],"9d1c829d":[()=>n.e(8279).then(n.bind(n,15841)),"@site/docs/getting-started/runtime.md",15841],"9dc553d4":[()=>n.e(19033).then(n.bind(n,8156)),"@site/versioned_docs/version-0.6.0/reference/swcli/model.md",8156],"9dfe9d87":[()=>n.e(59349).then(n.bind(n,51614)),"@site/versioned_docs/version-0.6.5/cloud/index.md",51614],"9e4087bc":[()=>n.e(53608).then(n.bind(n,63169)),"@theme/BlogArchivePage",63169],"9f104ddb":[()=>n.e(79404).then(n.bind(n,13943)),"@site/versioned_docs/version-0.6.0/evaluation/heterogeneous/node-able.md",13943],"9f58059d":[()=>n.e(61988).then(n.bind(n,87131)),"@site/versioned_docs/version-0.5.12/server/project.md",87131],a0a891b7:[()=>n.e(72115).then(n.bind(n,75990)),"@site/versioned_docs/version-0.6.4/concepts/names.md",75990],a1612d77:[()=>n.e(26765).then(n.bind(n,88065)),"@site
/versioned_docs/version-0.6.0/server/installation/docker.md",88065],a23fbffc:[()=>n.e(62529).then(n.bind(n,64922)),"@site/versioned_docs/version-0.5.10/server/installation/helm-charts.md",64922],a25d6fd7:[()=>n.e(5208).then(n.bind(n,10283)),"@site/versioned_docs/version-0.5.10/cloud/index.md",10283],a32436d0:[()=>n.e(88461).then(n.bind(n,90452)),"@site/versioned_docs/version-0.6.4/reference/swcli/project.md",90452],a3eb7131:[()=>n.e(73368).then(n.bind(n,94603)),"@site/versioned_docs/version-0.6.0/concepts/index.md",94603],a4bc18a5:[()=>n.e(42118).then(n.bind(n,19738)),"@site/versioned_docs/version-0.6.5/server/project.md",19738],a4c1c6ce:[()=>n.e(84376).then(n.bind(n,37144)),"@site/versioned_docs/version-0.6.6/model/yaml.md",37144],a56f2bca:[()=>n.e(11224).then(n.bind(n,92102)),"@site/versioned_docs/version-0.5.10/reference/swcli/index.md",92102],a5adff03:[()=>n.e(84272).then(n.bind(n,26233)),"@site/versioned_docs/version-0.6.4/getting-started/index.md",26233],a6050b58:[()=>n.e(20518).then(n.bind(n,99325)),"@site/versioned_docs/version-0.6.5/getting-started/cloud.md",99325],a6703bbf:[()=>n.e(89247).then(n.bind(n,23732)),"@site/versioned_docs/version-0.6.0/getting-started/cloud.md",23732],a6aa9e1f:[()=>Promise.all([n.e(40532),n.e(78357),n.e(46048),n.e(93089)]).then(n.bind(n,80046)),"@theme/BlogListPage",80046],a7023ddc:[()=>n.e(11713).then(n.t.bind(n,53457,19)),"~blog/default/blog-tags-tags-4c2.json",53457],a7dd36fc:[()=>n.e(1543).then(n.bind(n,13e3)),"@site/versioned_docs/version-0.6.6/cloud/billing/recharge.md",13e3],a7e77201:[()=>n.e(78724).then(n.bind(n,48887)),"@site/versioned_docs/version-0.6.5/server/installation/minikube.md",48887],a992ad6e:[()=>n.e(17888).then(n.bind(n,9149)),"@site/versioned_docs/version-0.6.5/cloud/billing/voucher.md",9149],aa10845f:[()=>n.e(35995).then(n.bind(n,28591)),"@site/versioned_docs/version-0.6.0/what-is-starwhale.md",28591],aa126475:[()=>n.e(8333).then(n.bind(n,61978)),"@site/versioned_docs/version-0.5.10/reference/sdk/other.md",61978],aaf6642f:[()=>n.e(18254).then(n.bind(n,14813)),"@site/versioned_docs/version-0.6.7/concepts/names.md",14813],ab388152:[()=>n.e(35761).then(n.bind(n,17107)),"@site/versioned_docs/version-0.5.12/reference/swcli/model.md",17107],ac1d7d64:[()=>n.e(27783).then(n.bind(n,28545)),"@site/versioned_docs/version-0.6.5/server/installation/k8s-cluster.md",28545],ac51e66e:[()=>n.e(25870).then(n.bind(n,61829)),"@site/versioned_docs/version-0.5.12/swcli/swignore.md",61829],ac72f4d5:[()=>n.e(18309).then(n.bind(n,65890)),"@site/versioned_docs/version-0.5.12/getting-started/standalone.md",65890],ae57ea02:[()=>n.e(10143).then(n.bind(n,9940)),"@site/versioned_docs/version-0.5.10/reference/swcli/model.md",9940],aefeddaf:[()=>n.e(76398).then(n.bind(n,71302)),"@site/versioned_docs/version-0.6.0/runtime/index.md",71302],af0debe5:[()=>n.e(52995).then(n.bind(n,93321)),"@site/versioned_docs/version-0.5.10/server/installation/index.md",93321],afaa6f85:[()=>n.e(27392).then(n.bind(n,80852)),"@site/versioned_docs/version-0.6.0/model/yaml.md",80852],afc2f83f:[()=>n.e(56736).then(n.bind(n,6269)),"@site/versioned_docs/version-0.5.12/getting-started/runtime.md",6269],b07d8e47:[()=>n.e(27705).then(n.bind(n,99346)),"@site/versioned_docs/version-0.6.7/swcli/swignore.md",99346],b251fb47:[()=>n.e(37983).then(n.bind(n,87824)),"@site/docs/reference/sdk/model.md",87824],b2b675dd:[()=>n.e(90533).then(n.t.bind(n,28017,19)),"~blog/default/blog-c06.json",28017],b2f554cd:[()=>n.e(11477).then(n.t.bind(n,30010,19)),"~blog/default/blog-archive-80c.json",30010],b3c9b7e8:[()=
>n.e(54319).then(n.bind(n,27412)),"@site/docs/server/installation/k8s-cluster.md",27412],b40c3376:[()=>n.e(22329).then(n.bind(n,12229)),"@site/blog/2023-09-11-reproduce-and-compare-evaluations.md",12229],b4161e04:[()=>n.e(17996).then(n.bind(n,46376)),"@site/docs/server/guides/server_admin.md",46376],b4266ab5:[()=>n.e(40070).then(n.bind(n,64778)),"@site/versioned_docs/version-0.5.10/reference/sdk/model.md",64778],b45ce566:[()=>n.e(85964).then(n.bind(n,25376)),"@site/versioned_docs/version-0.6.7/concepts/index.md",25376],b5684a7b:[()=>n.e(60379).then(n.bind(n,9597)),"@site/docs/server/installation/docker-compose.md",9597],b7557c51:[()=>n.e(35451).then(n.bind(n,95502)),"@site/docs/swcli/config.md",95502],b879cbc2:[()=>n.e(42313).then(n.bind(n,81237)),"@site/docs/server/project.md",81237],b8bffbd0:[()=>n.e(87823).then(n.bind(n,47465)),"@site/versioned_docs/version-0.6.6/getting-started/cloud.md",47465],b92391be:[()=>n.e(77279).then(n.bind(n,10282)),"@site/versioned_docs/version-0.6.7/reference/swcli/instance.md",10282],b97a63ce:[()=>n.e(34562).then(n.bind(n,10732)),"@site/versioned_docs/version-0.6.7/runtime/yaml.md",10732],ba73f294:[()=>n.e(70814).then(n.bind(n,94910)),"@site/versioned_docs/version-0.6.5/getting-started/standalone.md",94910],ba836e7d:[()=>n.e(40190).then(n.bind(n,25526)),"@site/versioned_docs/version-0.5.10/dataset/index.md",25526],bae51714:[()=>n.e(70953).then(n.bind(n,2919)),"@site/docs/reference/swcli/model.md",2919],bbf27248:[()=>n.e(86798).then(n.bind(n,12326)),"@site/versioned_docs/version-0.6.5/model/yaml.md",12326],bc1a1531:[()=>n.e(33285).then(n.bind(n,97599)),"@site/versioned_docs/version-0.6.6/reference/swcli/project.md",97599],bc50734c:[()=>n.e(80705).then(n.bind(n,26256)),"@site/versioned_docs/version-0.6.4/reference/sdk/model.md",26256],bc68cc81:[()=>n.e(10084).then(n.t.bind(n,79945,19)),"~docs/default/version-0-6-6-metadata-prop-a6a.json",79945],bca0dfde:[()=>n.e(23859).then(n.bind(n,77345)),"@site/versioned_docs/version-0.5.12/server/installation/minikube.md",77345],bd161c7f:[()=>n.e(65413).then(n.bind(n,11839)),"@site/versioned_docs/version-0.6.7/concepts/project.md",11839],bd61d482:[()=>n.e(42032).then(n.bind(n,62364)),"@site/versioned_docs/version-0.6.5/runtime/yaml.md",62364],bd7d9199:[()=>n.e(11857).then(n.bind(n,6240)),"@site/versioned_docs/version-0.5.12/cloud/billing/recharge.md",6240],bd88de40:[()=>n.e(41650).then(n.bind(n,75330)),"@site/versioned_docs/version-0.6.7/server/installation/k8s-cluster.md",75330],bd9c0894:[()=>n.e(44106).then(n.bind(n,43314)),"@site/versioned_docs/version-0.5.12/concepts/project.md",43314],bde18961:[()=>n.e(92112).then(n.bind(n,43351)),"@site/docs/concepts/roles-permissions.md",43351],be6c2ff2:[()=>n.e(91023).then(n.bind(n,6386)),"@site/versioned_docs/version-0.6.4/getting-started/standalone.md",6386],bed23bc4:[()=>n.e(4856).then(n.bind(n,97605)),"@site/versioned_docs/version-0.6.4/concepts/roles-permissions.md",97605],bf614533:[()=>n.e(51802).then(n.bind(n,5849)),"@site/docs/examples/index.md",5849],c0d46bd9:[()=>n.e(26416).then(n.bind(n,45345)),"@site/versioned_docs/version-0.6.6/community/contribute.md",45345],c0d50cc0:[()=>n.e(85258).then(n.bind(n,97812)),"@site/versioned_docs/version-0.5.10/server/installation/docker.md",97812],c0fee9fd:[()=>n.e(95875).then(n.bind(n,8549)),"@site/versioned_docs/version-0.5.12/concepts/versioning.md",8549],c25e75a4:[()=>n.e(90758).then(n.bind(n,33645)),"@site/versioned_docs/version-0.6.6/getting-started/runtime.md",33645],c2728190:[()=>n.e(5689).then(n.bind(n,18684)),"@site/docs/concept
s/index.md",18684],c3542997:[()=>n.e(21433).then(n.bind(n,54821)),"@site/versioned_docs/version-0.6.4/concepts/versioning.md",54821],c3c8b115:[()=>n.e(14948).then(n.bind(n,38025)),"@site/versioned_docs/version-0.5.10/swcli/index.md",38025],c49571eb:[()=>n.e(3080).then(n.bind(n,80478)),"@site/versioned_docs/version-0.6.4/reference/sdk/other.md",80478],c592492b:[()=>n.e(13918).then(n.bind(n,19274)),"@site/versioned_docs/version-0.5.10/reference/swcli/dataset.md",19274],c6444364:[()=>n.e(67366).then(n.bind(n,15197)),"@site/versioned_docs/version-0.6.5/runtime/index.md",15197],c757b298:[()=>n.e(48884).then(n.bind(n,82137)),"@site/versioned_docs/version-0.5.10/getting-started/server.md",82137],c886740e:[()=>n.e(46230).then(n.bind(n,14176)),"@site/docs/cloud/billing/voucher.md",14176],c995278b:[()=>n.e(85459).then(n.bind(n,85940)),"@site/versioned_docs/version-0.6.7/concepts/roles-permissions.md",85940],ca3e8775:[()=>n.e(65210).then(n.bind(n,63312)),"@site/versioned_docs/version-0.6.0/concepts/names.md",63312],cade0589:[()=>n.e(67687).then(n.bind(n,8037)),"@site/versioned_docs/version-0.6.7/model/yaml.md",8037],cb09c2c8:[()=>n.e(46710).then(n.bind(n,17171)),"@site/versioned_docs/version-0.6.6/reference/sdk/dataset.md",17171],cbad36d9:[()=>n.e(11061).then(n.bind(n,31945)),"@site/versioned_docs/version-0.6.4/model/yaml.md",31945],cc120547:[()=>n.e(99391).then(n.bind(n,82867)),"@site/blog/2023-07-24-run-llama-2-chat-in-five-minutes.md",82867],cc78421f:[()=>n.e(80125).then(n.bind(n,82647)),"@site/versioned_docs/version-0.5.10/cloud/billing/voucher.md",82647],cca86c8f:[()=>n.e(30722).then(n.bind(n,33969)),"@site/versioned_docs/version-0.6.6/getting-started/standalone.md",33969],ccc49370:[()=>Promise.all([n.e(40532),n.e(78357),n.e(46048),n.e(46103)]).then(n.bind(n,65203)),"@theme/BlogPostPage",65203],ccc61ecc:[()=>n.e(42096).then(n.bind(n,13372)),"@site/versioned_docs/version-0.6.6/reference/sdk/evaluation.md",13372],cd9ae399:[()=>n.e(90733).then(n.bind(n,25464)),"@site/versioned_docs/version-0.6.5/server/installation/index.md",25464],cddb67a8:[()=>n.e(99706).then(n.bind(n,20177)),"@site/versioned_docs/version-0.5.12/runtime/index.md",20177],cea36912:[()=>n.e(21342).then(n.bind(n,91263)),"@site/versioned_docs/version-0.6.5/server/guides/server_admin.md",91263],cfd4e1da:[()=>n.e(16881).then(n.bind(n,42462)),"@site/versioned_docs/version-0.6.0/swcli/config.md",42462],d07b4b26:[()=>n.e(5009).then(n.bind(n,25699)),"@site/versioned_docs/version-0.6.5/server/index.md",25699],d15112ec:[()=>n.e(53018).then(n.bind(n,16726)),"@site/versioned_docs/version-0.6.0/model/index.md",16726],d20d7505:[()=>n.e(64732).then(n.bind(n,62421)),"@site/versioned_docs/version-0.6.6/what-is-starwhale.md",62421],d218f8f7:[()=>n.e(64433).then(n.bind(n,1796)),"@site/versioned_docs/version-0.5.12/cloud/index.md",1796],d22055dc:[()=>n.e(25208).then(n.bind(n,7960)),"@site/versioned_docs/version-0.5.12/community/contribute.md",7960],d2453d90:[()=>n.e(75491).then(n.bind(n,48871)),"@site/versioned_docs/version-0.6.4/reference/sdk/overview.md",48871],d29f8d9b:[()=>n.e(14848).then(n.bind(n,76302)),"@site/versioned_docs/version-0.6.4/reference/sdk/job.md",76302],d34d6740:[()=>n.e(74130).then(n.bind(n,78413)),"@site/versioned_docs/version-0.5.12/cloud/billing/voucher.md",78413],d3fd6aa5:[()=>n.e(55032).then(n.bind(n,81010)),"@site/versioned_docs/version-0.5.12/server/installation/docker.md",81010],d42ea169:[()=>n.e(36194).then(n.bind(n,20763)),"@site/versioned_docs/version-0.6.7/swcli/installation.md",20763],d567a5f3:[()=>n.e(9680).then(n.bind
(n,4820)),"@site/versioned_docs/version-0.6.4/swcli/installation.md",4820],d7ebb4fe:[()=>n.e(77052).then(n.bind(n,25325)),"@site/versioned_docs/version-0.6.5/concepts/project.md",25325],d7efef2f:[()=>n.e(1745).then(n.bind(n,48011)),"@site/docs/reference/sdk/other.md",48011],d832a854:[()=>n.e(32366).then(n.bind(n,30657)),"@site/docs/swcli/swignore.md",30657],d83949ba:[()=>n.e(6867).then(n.bind(n,24689)),"@site/versioned_docs/version-0.6.5/swcli/index.md",24689],d83ef4b5:[()=>n.e(64447).then(n.bind(n,60653)),"@site/versioned_docs/version-0.5.12/what-is-starwhale.md",60653],d94449a9:[()=>n.e(63888).then(n.bind(n,42942)),"@site/versioned_docs/version-0.6.6/reference/swcli/model.md",42942],d9592294:[()=>n.e(94714).then(n.bind(n,34947)),"@site/versioned_docs/version-0.6.7/swcli/config.md",34947],d9a42321:[()=>n.e(12238).then(n.bind(n,10541)),"@site/versioned_docs/version-0.6.7/reference/swcli/model.md",10541],d9beab61:[()=>n.e(71171).then(n.bind(n,35063)),"@site/versioned_docs/version-0.6.4/cloud/billing/voucher.md",35063],db984927:[()=>n.e(8140).then(n.bind(n,84706)),"@site/versioned_docs/version-0.6.5/getting-started/runtime.md",84706],dbe33f09:[()=>n.e(75150).then(n.bind(n,15691)),"@site/versioned_docs/version-0.5.10/swcli/config.md",15691],dcfe1bde:[()=>n.e(5387).then(n.bind(n,88507)),"@site/versioned_docs/version-0.6.4/concepts/index.md",88507],ddb3d303:[()=>n.e(4322).then(n.bind(n,45865)),"@site/versioned_docs/version-0.5.12/cloud/billing/bills.md",45865],df36ecd4:[()=>n.e(77567).then(n.bind(n,9642)),"@site/versioned_docs/version-0.6.4/cloud/billing/recharge.md",9642],df9f2416:[()=>n.e(10808).then(n.bind(n,59039)),"@site/versioned_docs/version-0.6.6/swcli/swignore.md",59039],e0131296:[()=>n.e(80013).then(n.bind(n,28919)),"@site/versioned_docs/version-0.6.7/concepts/versioning.md",28919],e0551365:[()=>n.e(26030).then(n.bind(n,45717)),"@site/versioned_docs/version-0.6.5/reference/sdk/model.md",45717],e1105187:[()=>n.e(73154).then(n.bind(n,38670)),"@site/versioned_docs/version-0.6.0/server/guides/server_admin.md",38670],e14c639a:[()=>n.e(27796).then(n.bind(n,44844)),"@site/versioned_docs/version-0.6.4/reference/sdk/dataset.md",44844],e195e0f8:[()=>n.e(56586).then(n.bind(n,95280)),"@site/versioned_docs/version-0.6.7/reference/sdk/dataset.md",95280],e1a8dac5:[()=>n.e(64978).then(n.bind(n,31633)),"@site/versioned_docs/version-0.6.7/server/index.md",31633],e2cfa70e:[()=>n.e(29324).then(n.bind(n,44515)),"@site/versioned_docs/version-0.6.4/getting-started/cloud.md",44515],e36a0948:[()=>n.e(74535).then(n.bind(n,78057)),"@site/versioned_docs/version-0.6.4/swcli/swignore.md",78057],e44ab7b1:[()=>n.e(29546).then(n.bind(n,73438)),"@site/versioned_docs/version-0.6.4/community/contribute.md",73438],e4718587:[()=>n.e(41084).then(n.bind(n,59602)),"@site/versioned_docs/version-0.6.5/cloud/billing/refund.md",59602],e4b75637:[()=>n.e(51690).then(n.bind(n,45845)),"@site/versioned_docs/version-0.6.0/swcli/index.md",45845],e4f6b8e1:[()=>n.e(72308).then(n.bind(n,17092)),"@site/versioned_docs/version-0.5.10/faq/index.md",17092],e53d3ff9:[()=>n.e(92165).then(n.bind(n,13764)),"@site/versioned_docs/version-0.6.0/concepts/versioning.md",13764],e668d28a:[()=>n.e(27514).then(n.bind(n,43233)),"@site/versioned_docs/version-0.6.7/swcli/index.md",43233],e6b210f1:[()=>n.e(50137).then(n.t.bind(n,55659,19)),"~blog/default/blog-tags-model-package-3fd-list.json",55659],e7c33aac:[()=>n.e(97220).then(n.bind(n,61290)),"@site/versioned_docs/version-0.6.0/reference/sdk/dataset.md",61290],e8d59815:[()=>n.e(27197).then(n.bind(n,53347)),"
@site/versioned_docs/version-0.6.4/reference/swcli/model.md",53347],e8f0f629:[()=>n.e(31412).then(n.bind(n,55682)),"@site/versioned_docs/version-0.6.5/reference/swcli/instance.md",55682],e9fbe6ff:[()=>n.e(8620).then(n.bind(n,15958)),"@site/docs/getting-started/server.md",15958],ea74b58f:[()=>n.e(22814).then(n.bind(n,7330)),"@site/versioned_docs/version-0.6.7/cloud/billing/billing.md",7330],ead21b0a:[()=>n.e(97968).then(n.bind(n,98356)),"@site/docs/reference/sdk/type.md",98356],eb0f784e:[()=>n.e(44128).then(n.bind(n,42333)),"@site/versioned_docs/version-0.6.7/examples/helloworld.md",42333],eb575f18:[()=>n.e(39015).then(n.bind(n,71670)),"@site/versioned_docs/version-0.5.12/server/index.md",71670],ec8a462b:[()=>n.e(36612).then(n.bind(n,54733)),"@site/versioned_docs/version-0.5.10/reference/swcli/instance.md",54733],ed22d60a:[()=>n.e(43992).then(n.bind(n,18120)),"@site/versioned_docs/version-0.6.6/evaluation/heterogeneous/virtual-node.md",18120],eda95bf2:[()=>n.e(90409).then(n.bind(n,82218)),"@site/versioned_docs/version-0.6.5/community/contribute.md",82218],ede1f75e:[()=>n.e(72133).then(n.bind(n,30189)),"@site/versioned_docs/version-0.6.7/reference/swcli/runtime.md",30189],ee3e6435:[()=>n.e(46510).then(n.bind(n,89399)),"@site/versioned_docs/version-0.6.7/server/installation/minikube.md",89399],eeb12725:[()=>n.e(81120).then(n.bind(n,49041)),"@site/docs/evaluation/heterogeneous/node-able.md",49041],ef1be1e1:[()=>n.e(41985).then(n.bind(n,36236)),"@site/versioned_docs/version-0.5.10/runtime/index.md",36236],f00cf2de:[()=>n.e(29538).then(n.bind(n,2539)),"@site/versioned_docs/version-0.6.5/dataset/yaml.md",2539],f012d72b:[()=>n.e(74528).then(n.bind(n,41406)),"@site/versioned_docs/version-0.5.12/server/installation/starwhale_env.md",41406],f051bb65:[()=>n.e(36425).then(n.bind(n,16284)),"@site/versioned_docs/version-0.6.0/cloud/index.md",16284],f05c4e6f:[()=>n.e(6727).then(n.bind(n,4708)),"@site/versioned_docs/version-0.6.5/cloud/billing/bills.md",4708],f13cb81f:[()=>n.e(34286).then(n.bind(n,50644)),"@site/versioned_docs/version-0.6.6/cloud/billing/refund.md",50644],f1e36233:[()=>n.e(43727).then(n.bind(n,55210)),"@site/versioned_docs/version-0.5.10/evaluation/heterogeneous/node-able.md",55210],f2534a3f:[()=>n.e(9322).then(n.bind(n,36680)),"@site/docs/server/installation/docker.md",36680],f2cc7669:[()=>n.e(67346).then(n.bind(n,8698)),"@site/versioned_docs/version-0.5.10/model/yaml.md",8698],f35e473a:[()=>n.e(39130).then(n.bind(n,35631)),"@site/versioned_docs/version-0.6.0/dataset/yaml.md",35631],f3650d5d:[()=>n.e(64198).then(n.bind(n,85078)),"@site/versioned_docs/version-0.6.0/server/index.md",85078],f3a90ba2:[()=>n.e(95619).then(n.bind(n,32074)),"@site/versioned_docs/version-0.6.7/server/installation/index.md",32074],f3f1a75b:[()=>n.e(98963).then(n.bind(n,19681)),"@site/versioned_docs/version-0.5.12/getting-started/cloud.md",19681],f46b7e10:[()=>n.e(1271).then(n.bind(n,21281)),"@site/versioned_docs/version-0.6.7/server/installation/docker-compose.md",21281],f5853a90:[()=>n.e(48730).then(n.bind(n,17195)),"@site/versioned_docs/version-0.6.5/concepts/index.md",17195],f5b6cb08:[()=>n.e(33492).then(n.bind(n,8183)),"@site/versioned_docs/version-0.5.10/getting-started/standalone.md",8183],f6305a2a:[()=>n.e(66392).then(n.bind(n,30678)),"@site/docs/model/index.md",30678],f6ac3114:[()=>n.e(81689).then(n.bind(n,12595)),"@site/versioned_docs/version-0.6.4/dataset/index.md",12595],f6ae8fd7:[()=>n.e(69191).then(n.bind(n,96122)),"@site/versioned_docs/version-0.6.6/reference/swcli/dataset.md",96122],f6fec203:[()=>n.e
(43667).then(n.bind(n,65537)),"@site/versioned_docs/version-0.6.0/reference/swcli/project.md",65537],f74f4518:[()=>n.e(47362).then(n.bind(n,48718)),"@site/docs/reference/swcli/server.md",48718],f7950235:[()=>n.e(80846).then(n.bind(n,95672)),"@site/versioned_docs/version-0.6.0/reference/swcli/utilities.md",95672],f7da2c73:[()=>n.e(97843).then(n.bind(n,28011)),"@site/versioned_docs/version-0.6.5/swcli/config.md",28011],f8292b17:[()=>n.e(32748).then(n.bind(n,41464)),"@site/versioned_docs/version-0.6.4/server/installation/docker.md",41464],f8ddaa0f:[()=>n.e(38731).then(n.bind(n,571)),"@site/versioned_docs/version-0.6.5/reference/swcli/dataset.md",571],f972728b:[()=>n.e(60489).then(n.bind(n,14322)),"@site/versioned_docs/version-0.5.10/reference/swcli/project.md",14322],fa364872:[()=>n.e(2263).then(n.bind(n,24664)),"@site/versioned_docs/version-0.6.6/server/installation/minikube.md",24664],fa377e30:[()=>n.e(87181).then(n.bind(n,60509)),"@site/docs/concepts/project.md",60509],fa7c6226:[()=>n.e(58419).then(n.bind(n,16499)),"@site/versioned_docs/version-0.6.4/cloud/index.md",16499],faca8360:[()=>n.e(5210).then(n.bind(n,45756)),"@site/versioned_docs/version-0.6.7/faq/index.md",45756],fae84cd4:[()=>n.e(44858).then(n.bind(n,20393)),"@site/versioned_docs/version-0.6.6/reference/sdk/job.md",20393],fb1f8cbb:[()=>n.e(55416).then(n.bind(n,40705)),"@site/versioned_docs/version-0.6.0/getting-started/server.md",40705],fbb0f078:[()=>n.e(5739).then(n.bind(n,9043)),"@site/versioned_docs/version-0.6.7/reference/swcli/project.md",9043],fbde1876:[()=>n.e(71947).then(n.bind(n,14528)),"@site/versioned_docs/version-0.5.10/evaluation/heterogeneous/virtual-node.md",14528],fbf0a0a7:[()=>n.e(67490).then(n.bind(n,3925)),"@site/versioned_docs/version-0.6.4/server/project.md",3925],fcfb8e31:[()=>n.e(54457).then(n.bind(n,54748)),"@site/versioned_docs/version-0.6.0/reference/sdk/other.md",54748],fde3620e:[()=>n.e(50948).then(n.bind(n,32534)),"@site/versioned_docs/version-0.6.7/reference/sdk/overview.md",32534],fe1659de:[()=>n.e(79824).then(n.bind(n,98099)),"@site/versioned_docs/version-0.5.12/concepts/roles-permissions.md",98099],fe1b78e1:[()=>n.e(91722).then(n.bind(n,97724)),"@site/versioned_docs/version-0.5.10/community/contribute.md",97724],fe6343fd:[()=>n.e(11002).then(n.bind(n,29334)),"@site/docs/faq/index.md",29334],febe53b7:[()=>n.e(59874).then(n.bind(n,31671)),"@site/versioned_docs/version-0.6.4/reference/sdk/type.md",31671],feec69fc:[()=>n.e(89160).then(n.bind(n,39504)),"@site/versioned_docs/version-0.6.4/cloud/billing/bills.md",39504],fef1429b:[()=>n.e(32621).then(n.t.bind(n,50011,19)),"~docs/default/version-0-6-5-metadata-prop-033.json",50011],ff74d3da:[()=>n.e(71695).then(n.bind(n,56368)),"@site/blog/2023-07-21-intro.md?truncated=true",56368],ffe4100e:[()=>n.e(65133).then(n.bind(n,61340)),"@site/docs/reference/sdk/dataset.md",61340],fffebc2f:[()=>n.e(49485).then(n.bind(n,53220)),"@site/versioned_docs/version-0.6.6/reference/sdk/model.md",53220],ffff3183:[()=>n.e(1435).then(n.bind(n,87338)),"@site/versioned_docs/version-0.6.4/server/installation/docker-compose.md",87338]};function l(e){let{error:t,retry:n,pastDelay:i}=e;return t?r.createElement("div",{style:{textAlign:"center",color:"#fff",backgroundColor:"#fa383e",borderColor:"#fa383e",borderStyle:"solid",borderRadius:"0.25rem",borderWidth:"1px",boxSizing:"border-box",display:"block",padding:"1rem",flex:"0 0 
50%",marginLeft:"25%",marginRight:"25%",marginTop:"5rem",maxWidth:"50%",width:"100%"}},r.createElement("p",null,String(t)),r.createElement("div",null,r.createElement("button",{type:"button",onClick:n},"Retry"))):i?r.createElement("div",{style:{display:"flex",justifyContent:"center",alignItems:"center",height:"100vh"}},r.createElement("svg",{id:"loader",style:{width:128,height:110,position:"absolute",top:"calc(100vh - 64%)"},viewBox:"0 0 45 45",xmlns:"http://www.w3.org/2000/svg",stroke:"#61dafb"},r.createElement("g",{fill:"none",fillRule:"evenodd",transform:"translate(1 1)",strokeWidth:"2"},r.createElement("circle",{cx:"22",cy:"22",r:"6",strokeOpacity:"0"},r.createElement("animate",{attributeName:"r",begin:"1.5s",dur:"3s",values:"6;22",calcMode:"linear",repeatCount:"indefinite"}),r.createElement("animate",{attributeName:"stroke-opacity",begin:"1.5s",dur:"3s",values:"1;0",calcMode:"linear",repeatCount:"indefinite"}),r.createElement("animate",{attributeName:"stroke-width",begin:"1.5s",dur:"3s",values:"2;0",calcMode:"linear",repeatCount:"indefinite"})),r.createElement("circle",{cx:"22",cy:"22",r:"6",strokeOpacity:"0"},r.createElement("animate",{attributeName:"r",begin:"3s",dur:"3s",values:"6;22",calcMode:"linear",repeatCount:"indefinite"}),r.createElement("animate",{attributeName:"stroke-opacity",begin:"3s",dur:"3s",values:"1;0",calcMode:"linear",repeatCount:"indefinite"}),r.createElement("animate",{attributeName:"stroke-width",begin:"3s",dur:"3s",values:"2;0",calcMode:"linear",repeatCount:"indefinite"})),r.createElement("circle",{cx:"22",cy:"22",r:"8"},r.createElement("animate",{attributeName:"r",begin:"0s",dur:"1.5s",values:"6;1;2;3;4;5;6",calcMode:"linear",repeatCount:"indefinite"}))))):null}var d=n(99670),u=n(30226);function p(e,t){if("*"===e)return o()({loading:l,loader:()=>n.e(4972).then(n.bind(n,4972)),modules:["@theme/NotFound"],webpack:()=>[4972],render(e,t){const n=e.default;return r.createElement(u.z,{value:{plugin:{name:"native",id:"default"}}},r.createElement(n,t))}});const a=s[`${e}-${t}`],p={},m=[],f=[],b=(0,d.Z)(a);return Object.entries(b).forEach((e=>{let[t,n]=e;const r=c[n];r&&(p[t]=r[0],m.push(r[1]),f.push(r[2]))})),o().Map({loading:l,loader:p,modules:m,webpack:()=>f,render(t,n){const o=JSON.parse(JSON.stringify(a));Object.entries(t).forEach((t=>{let[n,r]=t;const i=r.default;if(!i)throw new Error(`The page component at ${e} doesn't have a default export. This makes it impossible to render anything. 
Consider default-exporting a React component.`);"object"!=typeof i&&"function"!=typeof i||Object.keys(r).filter((e=>"default"!==e)).forEach((e=>{i[e]=r[e]}));let a=o;const s=n.split(".");s.slice(0,-1).forEach((e=>{a=a[e]})),a[s[s.length-1]]=i}));const s=o.__comp;delete o.__comp;const c=o.__context;return delete o.__context,r.createElement(u.z,{value:c},r.createElement(s,(0,i.Z)({},o,n)))}})}const m=[{path:"/blog",component:p("/blog","bff"),exact:!0},{path:"/blog/archive",component:p("/blog/archive","d5c"),exact:!0},{path:"/blog/intro-starwhale",component:p("/blog/intro-starwhale","15f"),exact:!0},{path:"/blog/reproduce-and-compare-evals",component:p("/blog/reproduce-and-compare-evals","f1a"),exact:!0},{path:"/blog/run-llama2-chat-in-five-minutes",component:p("/blog/run-llama2-chat-in-five-minutes","0b0"),exact:!0},{path:"/blog/tags",component:p("/blog/tags","e00"),exact:!0},{path:"/blog/tags/intro",component:p("/blog/tags/intro","3bb"),exact:!0},{path:"/blog/tags/llama-2",component:p("/blog/tags/llama-2","0be"),exact:!0},{path:"/blog/tags/model-evaluaitons",component:p("/blog/tags/model-evaluaitons","7d9"),exact:!0},{path:"/blog/tags/model-package",component:p("/blog/tags/model-package","1b6"),exact:!0},{path:"/0.5.10",component:p("/0.5.10","0b7"),routes:[{path:"/0.5.10/",component:p("/0.5.10/","3b1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/cloud/",component:p("/0.5.10/cloud/","d88"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/cloud/billing/",component:p("/0.5.10/cloud/billing/","fcf"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/cloud/billing/bills",component:p("/0.5.10/cloud/billing/bills","a15"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/cloud/billing/recharge",component:p("/0.5.10/cloud/billing/recharge","ff2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/cloud/billing/refund",component:p("/0.5.10/cloud/billing/refund","860"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/cloud/billing/voucher",component:p("/0.5.10/cloud/billing/voucher","7d2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/community/contribute",component:p("/0.5.10/community/contribute","dc0"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/concepts/",component:p("/0.5.10/concepts/","7cf"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/concepts/names",component:p("/0.5.10/concepts/names","ab7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/concepts/project",component:p("/0.5.10/concepts/project","f83"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/concepts/roles-permissions",component:p("/0.5.10/concepts/roles-permissions","bbd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/concepts/versioning",component:p("/0.5.10/concepts/versioning","ddd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/dataset/",component:p("/0.5.10/dataset/","c13"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/dataset/yaml",component:p("/0.5.10/dataset/yaml","8c6"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/evaluation/",component:p("/0.5.10/evaluation/","9e8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/evaluation/heterogeneous/node-able",component:p("/0.5.10/evaluation/heterogeneous/node-able","e0c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/evaluation/heterogeneous/virtual-node",component:p("/0.5.10/evaluation/heterogeneous/virtual-node","673"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/faq/",component:p("/0.5.10/faq/","e96"),exact:!0},{path:"/0.5.10/getting-started/",component:p("/0.5.10/getting-started/","ba9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/getting-started/cloud",component:p("/0.5.10/gettin
g-started/cloud","bfd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/getting-started/runtime",component:p("/0.5.10/getting-started/runtime","32e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/getting-started/server",component:p("/0.5.10/getting-started/server","80e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/getting-started/standalone",component:p("/0.5.10/getting-started/standalone","cf7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/model/",component:p("/0.5.10/model/","3e9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/model/yaml",component:p("/0.5.10/model/yaml","70f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/sdk/dataset",component:p("/0.5.10/reference/sdk/dataset","025"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/sdk/evaluation",component:p("/0.5.10/reference/sdk/evaluation","dda"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/sdk/model",component:p("/0.5.10/reference/sdk/model","e4a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/sdk/other",component:p("/0.5.10/reference/sdk/other","69f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/sdk/overview",component:p("/0.5.10/reference/sdk/overview","b67"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/sdk/type",component:p("/0.5.10/reference/sdk/type","049"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/",component:p("/0.5.10/reference/swcli/","40c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/dataset",component:p("/0.5.10/reference/swcli/dataset","cf9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/instance",component:p("/0.5.10/reference/swcli/instance","500"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/job",component:p("/0.5.10/reference/swcli/job","214"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/model",component:p("/0.5.10/reference/swcli/model","dae"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/project",component:p("/0.5.10/reference/swcli/project","22f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/runtime",component:p("/0.5.10/reference/swcli/runtime","7f7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/reference/swcli/utilities",component:p("/0.5.10/reference/swcli/utilities","869"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/runtime/",component:p("/0.5.10/runtime/","3cd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/runtime/yaml",component:p("/0.5.10/runtime/yaml","16d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/",component:p("/0.5.10/server/","f12"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/guides/server_admin",component:p("/0.5.10/server/guides/server_admin","7bb"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/installation/",component:p("/0.5.10/server/installation/","c21"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/installation/docker",component:p("/0.5.10/server/installation/docker","fb9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/installation/helm-charts",component:p("/0.5.10/server/installation/helm-charts","292"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/installation/minikube",component:p("/0.5.10/server/installation/minikube","f72"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/installation/starwhale_env",component:p("/0.5.10/server/installation/starwhale_env","e67"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/server/project",component:p("/0.5.10/server/project","0a9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/swcli/",compone
nt:p("/0.5.10/swcli/","1c3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/swcli/config",component:p("/0.5.10/swcli/config","c5f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/swcli/installation",component:p("/0.5.10/swcli/installation","bcb"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/swcli/swignore",component:p("/0.5.10/swcli/swignore","f17"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.10/swcli/uri",component:p("/0.5.10/swcli/uri","117"),exact:!0,sidebar:"mainSidebar"}]},{path:"/0.5.12",component:p("/0.5.12","6f1"),routes:[{path:"/0.5.12/",component:p("/0.5.12/","8e8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/cloud/",component:p("/0.5.12/cloud/","f4e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/cloud/billing/",component:p("/0.5.12/cloud/billing/","ffc"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/cloud/billing/bills",component:p("/0.5.12/cloud/billing/bills","781"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/cloud/billing/recharge",component:p("/0.5.12/cloud/billing/recharge","b35"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/cloud/billing/refund",component:p("/0.5.12/cloud/billing/refund","277"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/cloud/billing/voucher",component:p("/0.5.12/cloud/billing/voucher","1e9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/community/contribute",component:p("/0.5.12/community/contribute","8e3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/concepts/",component:p("/0.5.12/concepts/","f22"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/concepts/names",component:p("/0.5.12/concepts/names","5c2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/concepts/project",component:p("/0.5.12/concepts/project","6ce"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/concepts/roles-permissions",component:p("/0.5.12/concepts/roles-permissions","8f1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/concepts/versioning",component:p("/0.5.12/concepts/versioning","dcb"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/dataset/",component:p("/0.5.12/dataset/","610"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/dataset/yaml",component:p("/0.5.12/dataset/yaml","4b1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/evaluation/",component:p("/0.5.12/evaluation/","a84"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/evaluation/heterogeneous/node-able",component:p("/0.5.12/evaluation/heterogeneous/node-able","2fc"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/evaluation/heterogeneous/virtual-node",component:p("/0.5.12/evaluation/heterogeneous/virtual-node","b17"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/faq/",component:p("/0.5.12/faq/","5ca"),exact:!0},{path:"/0.5.12/getting-started/",component:p("/0.5.12/getting-started/","4cb"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/getting-started/cloud",component:p("/0.5.12/getting-started/cloud","0ff"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/getting-started/runtime",component:p("/0.5.12/getting-started/runtime","0bd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/getting-started/server",component:p("/0.5.12/getting-started/server","e48"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/getting-started/standalone",component:p("/0.5.12/getting-started/standalone","2d7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/model/",component:p("/0.5.12/model/","a13"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/model/yaml",component:p("/0.5.12/model/yaml","6b5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/sdk/dataset",component:p("/0.5.12/reference/sdk/dataset","4b9"),exact:!0,sidebar:"mainSi
debar"},{path:"/0.5.12/reference/sdk/evaluation",component:p("/0.5.12/reference/sdk/evaluation","e5e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/sdk/job",component:p("/0.5.12/reference/sdk/job","95b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/sdk/model",component:p("/0.5.12/reference/sdk/model","b14"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/sdk/other",component:p("/0.5.12/reference/sdk/other","071"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/sdk/overview",component:p("/0.5.12/reference/sdk/overview","16f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/sdk/type",component:p("/0.5.12/reference/sdk/type","606"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/",component:p("/0.5.12/reference/swcli/","d90"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/dataset",component:p("/0.5.12/reference/swcli/dataset","fca"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/instance",component:p("/0.5.12/reference/swcli/instance","25d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/job",component:p("/0.5.12/reference/swcli/job","403"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/model",component:p("/0.5.12/reference/swcli/model","eed"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/project",component:p("/0.5.12/reference/swcli/project","41e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/runtime",component:p("/0.5.12/reference/swcli/runtime","5ca"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/reference/swcli/utilities",component:p("/0.5.12/reference/swcli/utilities","207"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/runtime/",component:p("/0.5.12/runtime/","432"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/runtime/yaml",component:p("/0.5.12/runtime/yaml","1c5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/",component:p("/0.5.12/server/","645"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/guides/server_admin",component:p("/0.5.12/server/guides/server_admin","7e1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/installation/",component:p("/0.5.12/server/installation/","06e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/installation/docker",component:p("/0.5.12/server/installation/docker","3a7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/installation/docker-compose",component:p("/0.5.12/server/installation/docker-compose","430"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/installation/helm-charts",component:p("/0.5.12/server/installation/helm-charts","c17"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/installation/minikube",component:p("/0.5.12/server/installation/minikube","e8a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/installation/starwhale_env",component:p("/0.5.12/server/installation/starwhale_env","75d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/server/project",component:p("/0.5.12/server/project","8c1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/swcli/",component:p("/0.5.12/swcli/","1e4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/swcli/config",component:p("/0.5.12/swcli/config","f64"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/swcli/installation",component:p("/0.5.12/swcli/installation","954"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/swcli/swignore",component:p("/0.5.12/swcli/swignore","45f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.5.12/swcli/uri",component:p("/0.5.12/swcli/uri","d3e"),exact:!0,sidebar:"mainSideb
ar"}]},{path:"/0.6.0",component:p("/0.6.0","ee0"),routes:[{path:"/0.6.0/",component:p("/0.6.0/","d13"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/cloud/",component:p("/0.6.0/cloud/","02b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/cloud/billing/",component:p("/0.6.0/cloud/billing/","432"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/cloud/billing/bills",component:p("/0.6.0/cloud/billing/bills","d45"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/cloud/billing/recharge",component:p("/0.6.0/cloud/billing/recharge","12c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/cloud/billing/refund",component:p("/0.6.0/cloud/billing/refund","cc5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/cloud/billing/voucher",component:p("/0.6.0/cloud/billing/voucher","75f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/community/contribute",component:p("/0.6.0/community/contribute","770"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/concepts/",component:p("/0.6.0/concepts/","dbe"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/concepts/names",component:p("/0.6.0/concepts/names","475"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/concepts/project",component:p("/0.6.0/concepts/project","2df"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/concepts/roles-permissions",component:p("/0.6.0/concepts/roles-permissions","d06"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/concepts/versioning",component:p("/0.6.0/concepts/versioning","c3b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/dataset/",component:p("/0.6.0/dataset/","8a6"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/dataset/yaml",component:p("/0.6.0/dataset/yaml","801"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/evaluation/",component:p("/0.6.0/evaluation/","26f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/evaluation/heterogeneous/node-able",component:p("/0.6.0/evaluation/heterogeneous/node-able","1a9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/evaluation/heterogeneous/virtual-node",component:p("/0.6.0/evaluation/heterogeneous/virtual-node","0ad"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/faq/",component:p("/0.6.0/faq/","9c3"),exact:!0},{path:"/0.6.0/getting-started/",component:p("/0.6.0/getting-started/","41f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/getting-started/cloud",component:p("/0.6.0/getting-started/cloud","ee7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/getting-started/runtime",component:p("/0.6.0/getting-started/runtime","361"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/getting-started/server",component:p("/0.6.0/getting-started/server","165"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/getting-started/standalone",component:p("/0.6.0/getting-started/standalone","c33"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/model/",component:p("/0.6.0/model/","5c2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/model/yaml",component:p("/0.6.0/model/yaml","941"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/dataset",component:p("/0.6.0/reference/sdk/dataset","e57"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/evaluation",component:p("/0.6.0/reference/sdk/evaluation","833"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/job",component:p("/0.6.0/reference/sdk/job","f28"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/model",component:p("/0.6.0/reference/sdk/model","f1a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/other",component:p("/0.6.0/reference/sdk/other","838"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/overview",component:p("/0.6.0/re
ference/sdk/overview","f50"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/sdk/type",component:p("/0.6.0/reference/sdk/type","35f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/",component:p("/0.6.0/reference/swcli/","3fb"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/dataset",component:p("/0.6.0/reference/swcli/dataset","2d4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/instance",component:p("/0.6.0/reference/swcli/instance","4b5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/job",component:p("/0.6.0/reference/swcli/job","d8a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/model",component:p("/0.6.0/reference/swcli/model","2fa"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/project",component:p("/0.6.0/reference/swcli/project","2bd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/runtime",component:p("/0.6.0/reference/swcli/runtime","038"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/reference/swcli/utilities",component:p("/0.6.0/reference/swcli/utilities","772"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/runtime/",component:p("/0.6.0/runtime/","121"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/runtime/yaml",component:p("/0.6.0/runtime/yaml","ad2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/",component:p("/0.6.0/server/","ce5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/guides/server_admin",component:p("/0.6.0/server/guides/server_admin","985"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/installation/",component:p("/0.6.0/server/installation/","5bb"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/installation/docker",component:p("/0.6.0/server/installation/docker","d2e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/installation/docker-compose",component:p("/0.6.0/server/installation/docker-compose","27b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/installation/helm-charts",component:p("/0.6.0/server/installation/helm-charts","293"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/installation/minikube",component:p("/0.6.0/server/installation/minikube","c13"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/installation/starwhale_env",component:p("/0.6.0/server/installation/starwhale_env","20d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/server/project",component:p("/0.6.0/server/project","78c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/swcli/",component:p("/0.6.0/swcli/","784"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/swcli/config",component:p("/0.6.0/swcli/config","4bd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/swcli/installation",component:p("/0.6.0/swcli/installation","d86"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/swcli/swignore",component:p("/0.6.0/swcli/swignore","5aa"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.0/swcli/uri",component:p("/0.6.0/swcli/uri","ff5"),exact:!0,sidebar:"mainSidebar"}]},{path:"/0.6.4",component:p("/0.6.4","00c"),routes:[{path:"/0.6.4/",component:p("/0.6.4/","b7f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/cloud/",component:p("/0.6.4/cloud/","612"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/cloud/billing/",component:p("/0.6.4/cloud/billing/","0b5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/cloud/billing/bills",component:p("/0.6.4/cloud/billing/bills","823"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/cloud/billing/recharge",component:p("/0.6.4/cloud/billing/recharge","70a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/cloud/billing/r
efund",component:p("/0.6.4/cloud/billing/refund","d79"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/cloud/billing/voucher",component:p("/0.6.4/cloud/billing/voucher","259"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/community/contribute",component:p("/0.6.4/community/contribute","911"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/concepts/",component:p("/0.6.4/concepts/","cba"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/concepts/names",component:p("/0.6.4/concepts/names","996"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/concepts/project",component:p("/0.6.4/concepts/project","eef"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/concepts/roles-permissions",component:p("/0.6.4/concepts/roles-permissions","d1b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/concepts/versioning",component:p("/0.6.4/concepts/versioning","e5c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/dataset/",component:p("/0.6.4/dataset/","ba1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/dataset/yaml",component:p("/0.6.4/dataset/yaml","3a8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/evaluation/",component:p("/0.6.4/evaluation/","f30"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/evaluation/heterogeneous/node-able",component:p("/0.6.4/evaluation/heterogeneous/node-able","893"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/evaluation/heterogeneous/virtual-node",component:p("/0.6.4/evaluation/heterogeneous/virtual-node","17a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/faq/",component:p("/0.6.4/faq/","91b"),exact:!0},{path:"/0.6.4/getting-started/",component:p("/0.6.4/getting-started/","cd4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/getting-started/cloud",component:p("/0.6.4/getting-started/cloud","ded"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/getting-started/runtime",component:p("/0.6.4/getting-started/runtime","c18"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/getting-started/server",component:p("/0.6.4/getting-started/server","c49"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/getting-started/standalone",component:p("/0.6.4/getting-started/standalone","c0a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/model/",component:p("/0.6.4/model/","c1b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/model/yaml",component:p("/0.6.4/model/yaml","1c4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/dataset",component:p("/0.6.4/reference/sdk/dataset","6f8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/evaluation",component:p("/0.6.4/reference/sdk/evaluation","e50"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/job",component:p("/0.6.4/reference/sdk/job","de1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/model",component:p("/0.6.4/reference/sdk/model","e7e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/other",component:p("/0.6.4/reference/sdk/other","ca4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/overview",component:p("/0.6.4/reference/sdk/overview","25f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/sdk/type",component:p("/0.6.4/reference/sdk/type","482"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/",component:p("/0.6.4/reference/swcli/","fb9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/dataset",component:p("/0.6.4/reference/swcli/dataset","a50"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/instance",component:p("/0.6.4/reference/swcli/instance","b29"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/job",component:p("/0.6.4/referenc
e/swcli/job","7b9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/model",component:p("/0.6.4/reference/swcli/model","ac9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/project",component:p("/0.6.4/reference/swcli/project","4f4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/runtime",component:p("/0.6.4/reference/swcli/runtime","1a2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/reference/swcli/utilities",component:p("/0.6.4/reference/swcli/utilities","9b3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/runtime/",component:p("/0.6.4/runtime/","2f1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/runtime/yaml",component:p("/0.6.4/runtime/yaml","0a8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/",component:p("/0.6.4/server/","155"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/guides/server_admin",component:p("/0.6.4/server/guides/server_admin","0f4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/installation/",component:p("/0.6.4/server/installation/","597"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/installation/docker",component:p("/0.6.4/server/installation/docker","8b4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/installation/docker-compose",component:p("/0.6.4/server/installation/docker-compose","cd2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/installation/helm-charts",component:p("/0.6.4/server/installation/helm-charts","32b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/installation/minikube",component:p("/0.6.4/server/installation/minikube","462"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/installation/starwhale_env",component:p("/0.6.4/server/installation/starwhale_env","a11"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/server/project",component:p("/0.6.4/server/project","00f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/swcli/",component:p("/0.6.4/swcli/","5cf"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/swcli/config",component:p("/0.6.4/swcli/config","e47"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/swcli/installation",component:p("/0.6.4/swcli/installation","241"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/swcli/swignore",component:p("/0.6.4/swcli/swignore","1d2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.4/swcli/uri",component:p("/0.6.4/swcli/uri","42a"),exact:!0,sidebar:"mainSidebar"}]},{path:"/0.6.5",component:p("/0.6.5","2bc"),routes:[{path:"/0.6.5/",component:p("/0.6.5/","3e7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/cloud/",component:p("/0.6.5/cloud/","369"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/cloud/billing/",component:p("/0.6.5/cloud/billing/","f91"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/cloud/billing/bills",component:p("/0.6.5/cloud/billing/bills","14d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/cloud/billing/recharge",component:p("/0.6.5/cloud/billing/recharge","e93"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/cloud/billing/refund",component:p("/0.6.5/cloud/billing/refund","2e0"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/cloud/billing/voucher",component:p("/0.6.5/cloud/billing/voucher","2c6"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/community/contribute",component:p("/0.6.5/community/contribute","1a8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/concepts/",component:p("/0.6.5/concepts/","938"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/concepts/names",component:p("/0.6.5/concepts/names","4ea"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/concepts/project",component:p("/0.6.5/concepts/project",
"b71"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/concepts/roles-permissions",component:p("/0.6.5/concepts/roles-permissions","2f8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/concepts/versioning",component:p("/0.6.5/concepts/versioning","2dc"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/dataset/",component:p("/0.6.5/dataset/","693"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/dataset/yaml",component:p("/0.6.5/dataset/yaml","62f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/evaluation/",component:p("/0.6.5/evaluation/","08d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/evaluation/heterogeneous/node-able",component:p("/0.6.5/evaluation/heterogeneous/node-able","8d3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/evaluation/heterogeneous/virtual-node",component:p("/0.6.5/evaluation/heterogeneous/virtual-node","2f9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/faq/",component:p("/0.6.5/faq/","578"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/getting-started/",component:p("/0.6.5/getting-started/","ecd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/getting-started/cloud",component:p("/0.6.5/getting-started/cloud","06d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/getting-started/runtime",component:p("/0.6.5/getting-started/runtime","5fc"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/getting-started/server",component:p("/0.6.5/getting-started/server","fa9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/getting-started/standalone",component:p("/0.6.5/getting-started/standalone","bdc"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/model/",component:p("/0.6.5/model/","434"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/model/yaml",component:p("/0.6.5/model/yaml","8c0"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/dataset",component:p("/0.6.5/reference/sdk/dataset","473"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/evaluation",component:p("/0.6.5/reference/sdk/evaluation","986"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/job",component:p("/0.6.5/reference/sdk/job","1e1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/model",component:p("/0.6.5/reference/sdk/model","14a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/other",component:p("/0.6.5/reference/sdk/other","825"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/overview",component:p("/0.6.5/reference/sdk/overview","675"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/sdk/type",component:p("/0.6.5/reference/sdk/type","1c1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/",component:p("/0.6.5/reference/swcli/","b85"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/dataset",component:p("/0.6.5/reference/swcli/dataset","ed6"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/instance",component:p("/0.6.5/reference/swcli/instance","292"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/job",component:p("/0.6.5/reference/swcli/job","bb3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/model",component:p("/0.6.5/reference/swcli/model","35a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/project",component:p("/0.6.5/reference/swcli/project","9f3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/runtime",component:p("/0.6.5/reference/swcli/runtime","f93"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/reference/swcli/utilities",component:p("/0.6.5/reference/swcli/utilities","d65"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/runtime/",comp
onent:p("/0.6.5/runtime/","e1a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/runtime/yaml",component:p("/0.6.5/runtime/yaml","012"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/",component:p("/0.6.5/server/","985"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/guides/server_admin",component:p("/0.6.5/server/guides/server_admin","af9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/installation/",component:p("/0.6.5/server/installation/","d72"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/installation/docker",component:p("/0.6.5/server/installation/docker","b15"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/installation/docker-compose",component:p("/0.6.5/server/installation/docker-compose","79c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/installation/k8s-cluster",component:p("/0.6.5/server/installation/k8s-cluster","c8d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/installation/minikube",component:p("/0.6.5/server/installation/minikube","e3d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/installation/starwhale_env",component:p("/0.6.5/server/installation/starwhale_env","0d0"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/server/project",component:p("/0.6.5/server/project","7a3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/swcli/",component:p("/0.6.5/swcli/","cb6"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/swcli/config",component:p("/0.6.5/swcli/config","b92"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/swcli/installation",component:p("/0.6.5/swcli/installation","9b3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/swcli/swignore",component:p("/0.6.5/swcli/swignore","84c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.5/swcli/uri",component:p("/0.6.5/swcli/uri","d2f"),exact:!0,sidebar:"mainSidebar"}]},{path:"/0.6.6",component:p("/0.6.6","ce5"),routes:[{path:"/0.6.6/",component:p("/0.6.6/","fcd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/cloud/",component:p("/0.6.6/cloud/","a2c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/cloud/billing/",component:p("/0.6.6/cloud/billing/","b10"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/cloud/billing/bills",component:p("/0.6.6/cloud/billing/bills","7e1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/cloud/billing/recharge",component:p("/0.6.6/cloud/billing/recharge","c7c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/cloud/billing/refund",component:p("/0.6.6/cloud/billing/refund","0bc"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/cloud/billing/voucher",component:p("/0.6.6/cloud/billing/voucher","f84"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/community/contribute",component:p("/0.6.6/community/contribute","568"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/concepts/",component:p("/0.6.6/concepts/","80f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/concepts/names",component:p("/0.6.6/concepts/names","0da"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/concepts/project",component:p("/0.6.6/concepts/project","bd7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/concepts/roles-permissions",component:p("/0.6.6/concepts/roles-permissions","383"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/concepts/versioning",component:p("/0.6.6/concepts/versioning","69f"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/dataset/",component:p("/0.6.6/dataset/","1e7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/dataset/yaml",component:p("/0.6.6/dataset/yaml","b9c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/evaluation/",component:p("/0.6.6/evaluation/","0d7"),exact:!0,sidebar:"mainSideba
r"},{path:"/0.6.6/evaluation/heterogeneous/node-able",component:p("/0.6.6/evaluation/heterogeneous/node-able","dcf"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/evaluation/heterogeneous/virtual-node",component:p("/0.6.6/evaluation/heterogeneous/virtual-node","d1d"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/faq/",component:p("/0.6.6/faq/","be4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/getting-started/",component:p("/0.6.6/getting-started/","870"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/getting-started/cloud",component:p("/0.6.6/getting-started/cloud","110"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/getting-started/runtime",component:p("/0.6.6/getting-started/runtime","49c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/getting-started/server",component:p("/0.6.6/getting-started/server","014"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/getting-started/standalone",component:p("/0.6.6/getting-started/standalone","402"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/model/",component:p("/0.6.6/model/","344"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/model/yaml",component:p("/0.6.6/model/yaml","901"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/dataset",component:p("/0.6.6/reference/sdk/dataset","724"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/evaluation",component:p("/0.6.6/reference/sdk/evaluation","217"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/job",component:p("/0.6.6/reference/sdk/job","6ca"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/model",component:p("/0.6.6/reference/sdk/model","d58"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/other",component:p("/0.6.6/reference/sdk/other","3e5"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/overview",component:p("/0.6.6/reference/sdk/overview","54b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/sdk/type",component:p("/0.6.6/reference/sdk/type","471"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/",component:p("/0.6.6/reference/swcli/","28b"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/dataset",component:p("/0.6.6/reference/swcli/dataset","72c"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/instance",component:p("/0.6.6/reference/swcli/instance","f4a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/job",component:p("/0.6.6/reference/swcli/job","5e8"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/model",component:p("/0.6.6/reference/swcli/model","503"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/project",component:p("/0.6.6/reference/swcli/project","6b9"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/runtime",component:p("/0.6.6/reference/swcli/runtime","967"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/server",component:p("/0.6.6/reference/swcli/server","926"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/reference/swcli/utilities",component:p("/0.6.6/reference/swcli/utilities","e2a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/runtime/",component:p("/0.6.6/runtime/","f1e"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/runtime/yaml",component:p("/0.6.6/runtime/yaml","963"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/",component:p("/0.6.6/server/","ec7"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/guides/server_admin",component:p("/0.6.6/server/guides/server_admin","f9a"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/",component:p("/0.6.6/server/in
stallation/","412"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/docker",component:p("/0.6.6/server/installation/docker","126"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/docker-compose",component:p("/0.6.6/server/installation/docker-compose","463"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/k8s-cluster",component:p("/0.6.6/server/installation/k8s-cluster","899"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/minikube",component:p("/0.6.6/server/installation/minikube","eaa"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/server-start",component:p("/0.6.6/server/installation/server-start","988"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/installation/starwhale_env",component:p("/0.6.6/server/installation/starwhale_env","bd3"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/server/project",component:p("/0.6.6/server/project","2b4"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/swcli/",component:p("/0.6.6/swcli/","7a1"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/swcli/config",component:p("/0.6.6/swcli/config","8a2"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/swcli/installation",component:p("/0.6.6/swcli/installation","780"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/swcli/swignore",component:p("/0.6.6/swcli/swignore","0dd"),exact:!0,sidebar:"mainSidebar"},{path:"/0.6.6/swcli/uri",component:p("/0.6.6/swcli/uri","d6c"),exact:!0,sidebar:"mainSidebar"}]},{path:"/next",component:p("/next","d4b"),routes:[{path:"/next/",component:p("/next/","c01"),exact:!0,sidebar:"mainSidebar"},{path:"/next/cloud/",component:p("/next/cloud/","155"),exact:!0,sidebar:"mainSidebar"},{path:"/next/cloud/billing/",component:p("/next/cloud/billing/","1c0"),exact:!0,sidebar:"mainSidebar"},{path:"/next/cloud/billing/bills",component:p("/next/cloud/billing/bills","e9a"),exact:!0,sidebar:"mainSidebar"},{path:"/next/cloud/billing/recharge",component:p("/next/cloud/billing/recharge","6bf"),exact:!0,sidebar:"mainSidebar"},{path:"/next/cloud/billing/refund",component:p("/next/cloud/billing/refund","b0c"),exact:!0,sidebar:"mainSidebar"},{path:"/next/cloud/billing/voucher",component:p("/next/cloud/billing/voucher","bbc"),exact:!0,sidebar:"mainSidebar"},{path:"/next/community/contribute",component:p("/next/community/contribute","1c1"),exact:!0,sidebar:"mainSidebar"},{path:"/next/concepts/",component:p("/next/concepts/","42d"),exact:!0,sidebar:"mainSidebar"},{path:"/next/concepts/glossary",component:p("/next/concepts/glossary","f68"),exact:!0,sidebar:"mainSidebar"},{path:"/next/concepts/names",component:p("/next/concepts/names","a88"),exact:!0,sidebar:"mainSidebar"},{path:"/next/concepts/project",component:p("/next/concepts/project","6fd"),exact:!0,sidebar:"mainSidebar"},{path:"/next/concepts/roles-permissions",component:p("/next/concepts/roles-permissions","cbc"),exact:!0,sidebar:"mainSidebar"},{path:"/next/concepts/versioning",component:p("/next/concepts/versioning","313"),exact:!0,sidebar:"mainSidebar"},{path:"/next/dataset/",component:p("/next/dataset/","6c1"),exact:!0,sidebar:"mainSidebar"},{path:"/next/dataset/yaml",component:p("/next/dataset/yaml","e71"),exact:!0,sidebar:"mainSidebar"},{path:"/next/evaluation/",component:p("/next/evaluation/","073"),exact:!0,sidebar:"mainSidebar"},{path:"/next/evaluation/heterogeneous/node-able",component:p("/next/evaluation/heterogeneous/node-able","86f"),exact:!0,sidebar:"mainSidebar"},{path:"/next/evaluation/heterogeneous/virtual-node",component:p("/next/evaluation/hetero
geneous/virtual-node","1b7"),exact:!0,sidebar:"mainSidebar"},{path:"/next/examples/",component:p("/next/examples/","206"),exact:!0,sidebar:"mainSidebar"},{path:"/next/examples/helloworld",component:p("/next/examples/helloworld","88f"),exact:!0,sidebar:"mainSidebar"},{path:"/next/faq/",component:p("/next/faq/","bae"),exact:!0,sidebar:"mainSidebar"},{path:"/next/getting-started/",component:p("/next/getting-started/","1b8"),exact:!0,sidebar:"mainSidebar"},{path:"/next/getting-started/cloud",component:p("/next/getting-started/cloud","b29"),exact:!0,sidebar:"mainSidebar"},{path:"/next/getting-started/runtime",component:p("/next/getting-started/runtime","fe8"),exact:!0},{path:"/next/getting-started/server",component:p("/next/getting-started/server","043"),exact:!0,sidebar:"mainSidebar"},{path:"/next/getting-started/standalone",component:p("/next/getting-started/standalone","197"),exact:!0,sidebar:"mainSidebar"},{path:"/next/model/",component:p("/next/model/","fb9"),exact:!0,sidebar:"mainSidebar"},{path:"/next/model/yaml",component:p("/next/model/yaml","61e"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/dataset",component:p("/next/reference/sdk/dataset","1e9"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/evaluation",component:p("/next/reference/sdk/evaluation","115"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/job",component:p("/next/reference/sdk/job","ee7"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/model",component:p("/next/reference/sdk/model","a57"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/other",component:p("/next/reference/sdk/other","b39"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/overview",component:p("/next/reference/sdk/overview","f9b"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/sdk/type",component:p("/next/reference/sdk/type","c6b"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/",component:p("/next/reference/swcli/","332"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/dataset",component:p("/next/reference/swcli/dataset","399"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/instance",component:p("/next/reference/swcli/instance","b13"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/job",component:p("/next/reference/swcli/job","003"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/model",component:p("/next/reference/swcli/model","2a5"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/project",component:p("/next/reference/swcli/project","36b"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/runtime",component:p("/next/reference/swcli/runtime","f5e"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/server",component:p("/next/reference/swcli/server","db0"),exact:!0,sidebar:"mainSidebar"},{path:"/next/reference/swcli/utilities",component:p("/next/reference/swcli/utilities","f9a"),exact:!0,sidebar:"mainSidebar"},{path:"/next/runtime/",component:p("/next/runtime/","a63"),exact:!0,sidebar:"mainSidebar"},{path:"/next/runtime/yaml",component:p("/next/runtime/yaml","ce4"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/",component:p("/next/server/","40e"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/guides/server_admin",component:p("/next/server/guides/server_admin","8cb"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/",component:p("/next/server/installation/","24f"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/docker",component:p("/next/s
erver/installation/docker","f33"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/docker-compose",component:p("/next/server/installation/docker-compose","40a"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/k8s-cluster",component:p("/next/server/installation/k8s-cluster","4b6"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/minikube",component:p("/next/server/installation/minikube","8ec"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/server-start",component:p("/next/server/installation/server-start","b3b"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/installation/starwhale_env",component:p("/next/server/installation/starwhale_env","915"),exact:!0,sidebar:"mainSidebar"},{path:"/next/server/project",component:p("/next/server/project","8e8"),exact:!0,sidebar:"mainSidebar"},{path:"/next/swcli/",component:p("/next/swcli/","5ca"),exact:!0,sidebar:"mainSidebar"},{path:"/next/swcli/config",component:p("/next/swcli/config","831"),exact:!0,sidebar:"mainSidebar"},{path:"/next/swcli/installation",component:p("/next/swcli/installation","c5b"),exact:!0,sidebar:"mainSidebar"},{path:"/next/swcli/swignore",component:p("/next/swcli/swignore","b01"),exact:!0,sidebar:"mainSidebar"},{path:"/next/swcli/uri",component:p("/next/swcli/uri","77c"),exact:!0,sidebar:"mainSidebar"}]},{path:"/",component:p("/","6c0"),routes:[{path:"/",component:p("/","75e"),exact:!0,sidebar:"mainSidebar"},{path:"/cloud/",component:p("/cloud/","f0d"),exact:!0,sidebar:"mainSidebar"},{path:"/cloud/billing/",component:p("/cloud/billing/","7d3"),exact:!0,sidebar:"mainSidebar"},{path:"/cloud/billing/bills",component:p("/cloud/billing/bills","b11"),exact:!0,sidebar:"mainSidebar"},{path:"/cloud/billing/recharge",component:p("/cloud/billing/recharge","05c"),exact:!0,sidebar:"mainSidebar"},{path:"/cloud/billing/refund",component:p("/cloud/billing/refund","f3a"),exact:!0,sidebar:"mainSidebar"},{path:"/cloud/billing/voucher",component:p("/cloud/billing/voucher","de5"),exact:!0,sidebar:"mainSidebar"},{path:"/community/contribute",component:p("/community/contribute","238"),exact:!0,sidebar:"mainSidebar"},{path:"/concepts/",component:p("/concepts/","cb0"),exact:!0,sidebar:"mainSidebar"},{path:"/concepts/names",component:p("/concepts/names","a68"),exact:!0,sidebar:"mainSidebar"},{path:"/concepts/project",component:p("/concepts/project","01d"),exact:!0,sidebar:"mainSidebar"},{path:"/concepts/roles-permissions",component:p("/concepts/roles-permissions","0bd"),exact:!0,sidebar:"mainSidebar"},{path:"/concepts/versioning",component:p("/concepts/versioning","0ac"),exact:!0,sidebar:"mainSidebar"},{path:"/dataset/",component:p("/dataset/","085"),exact:!0,sidebar:"mainSidebar"},{path:"/dataset/yaml",component:p("/dataset/yaml","d87"),exact:!0,sidebar:"mainSidebar"},{path:"/evaluation/",component:p("/evaluation/","833"),exact:!0,sidebar:"mainSidebar"},{path:"/evaluation/heterogeneous/node-able",component:p("/evaluation/heterogeneous/node-able","419"),exact:!0,sidebar:"mainSidebar"},{path:"/evaluation/heterogeneous/virtual-node",component:p("/evaluation/heterogeneous/virtual-node","b80"),exact:!0,sidebar:"mainSidebar"},{path:"/examples/",component:p("/examples/","8a6"),exact:!0,sidebar:"mainSidebar"},{path:"/examples/helloworld",component:p("/examples/helloworld","19a"),exact:!0,sidebar:"mainSidebar"},{path:"/faq/",component:p("/faq/","5d2"),exact:!0,sidebar:"mainSidebar"},{path:"/getting-started/",component:p("/getting-started/","653"),exact:!0,sidebar:"mainSidebar"},{path:"/getting-s
tarted/cloud",component:p("/getting-started/cloud","8d1"),exact:!0,sidebar:"mainSidebar"},{path:"/getting-started/runtime",component:p("/getting-started/runtime","804"),exact:!0},{path:"/getting-started/server",component:p("/getting-started/server","081"),exact:!0,sidebar:"mainSidebar"},{path:"/getting-started/standalone",component:p("/getting-started/standalone","18a"),exact:!0,sidebar:"mainSidebar"},{path:"/model/",component:p("/model/","65b"),exact:!0,sidebar:"mainSidebar"},{path:"/model/yaml",component:p("/model/yaml","ff0"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/dataset",component:p("/reference/sdk/dataset","739"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/evaluation",component:p("/reference/sdk/evaluation","cdd"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/job",component:p("/reference/sdk/job","bdc"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/model",component:p("/reference/sdk/model","fae"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/other",component:p("/reference/sdk/other","b9b"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/overview",component:p("/reference/sdk/overview","a4a"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/sdk/type",component:p("/reference/sdk/type","2aa"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/",component:p("/reference/swcli/","567"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/dataset",component:p("/reference/swcli/dataset","c7c"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/instance",component:p("/reference/swcli/instance","825"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/job",component:p("/reference/swcli/job","baf"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/model",component:p("/reference/swcli/model","011"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/project",component:p("/reference/swcli/project","cc9"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/runtime",component:p("/reference/swcli/runtime","3e9"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/server",component:p("/reference/swcli/server","ac0"),exact:!0,sidebar:"mainSidebar"},{path:"/reference/swcli/utilities",component:p("/reference/swcli/utilities","b34"),exact:!0,sidebar:"mainSidebar"},{path:"/runtime/",component:p("/runtime/","1a6"),exact:!0,sidebar:"mainSidebar"},{path:"/runtime/yaml",component:p("/runtime/yaml","f28"),exact:!0,sidebar:"mainSidebar"},{path:"/server/",component:p("/server/","6e1"),exact:!0,sidebar:"mainSidebar"},{path:"/server/guides/server_admin",component:p("/server/guides/server_admin","c85"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/",component:p("/server/installation/","ed7"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/docker",component:p("/server/installation/docker","941"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/docker-compose",component:p("/server/installation/docker-compose","5c3"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/k8s-cluster",component:p("/server/installation/k8s-cluster","49a"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/minikube",component:p("/server/installation/minikube","49f"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/server-start",component:p("/server/installation/server-start","9e9"),exact:!0,sidebar:"mainSidebar"},{path:"/server/installation/starwhale_env",component:p("/server/installation/starwhale_env","abc"),exact:!0,sidebar:"mainSidebar"},{path:"/server/project",component:p("/server/pr
oject","22e"),exact:!0,sidebar:"mainSidebar"},{path:"/swcli/",component:p("/swcli/","812"),exact:!0,sidebar:"mainSidebar"},{path:"/swcli/config",component:p("/swcli/config","656"),exact:!0,sidebar:"mainSidebar"},{path:"/swcli/installation",component:p("/swcli/installation","5df"),exact:!0,sidebar:"mainSidebar"},{path:"/swcli/swignore",component:p("/swcli/swignore","177"),exact:!0,sidebar:"mainSidebar"},{path:"/swcli/uri",component:p("/swcli/uri","58b"),exact:!0,sidebar:"mainSidebar"}]},{path:"*",component:p("*")}]},98934:(e,t,n)=>{"use strict";n.d(t,{_:()=>i,t:()=>a});var r=n(67294);const i=r.createContext(!1);function a(e){let{children:t}=e;const[n,a]=(0,r.useState)(!1);return(0,r.useEffect)((()=>{a(!0)}),[]),r.createElement(i.Provider,{value:n},t)}},49383:(e,t,n)=>{"use strict";var r=n(67294),i=n(73935),a=n(73727),o=n(70405),s=n(10412);const c=[n(56657),n(32497),n(57021),n(18320),n(93878),n(98601)];var l=n(723),d=n(16550),u=n(18790);function p(e){let{children:t}=e;return r.createElement(r.Fragment,null,t)}var m=n(83117),f=n(35742),b=n(52263),h=n(44996),g=n(86668),v=n(1944),y=n(94711),w=n(19727),_=n(43320),S=n(90197);function x(){const{i18n:{defaultLocale:e,localeConfigs:t}}=(0,b.Z)(),n=(0,y.l)();return r.createElement(f.Z,null,Object.entries(t).map((e=>{let[t,{htmlLang:i}]=e;return r.createElement("link",{key:t,rel:"alternate",href:n.createUrl({locale:t,fullyQualified:!0}),hrefLang:i})})),r.createElement("link",{rel:"alternate",href:n.createUrl({locale:e,fullyQualified:!0}),hrefLang:"x-default"}))}function k(e){let{permalink:t}=e;const{siteConfig:{url:n}}=(0,b.Z)(),i=function(){const{siteConfig:{url:e}}=(0,b.Z)(),{pathname:t}=(0,d.TH)();return e+(0,h.Z)(t)}(),a=t?`${n}${t}`:i;return r.createElement(f.Z,null,r.createElement("meta",{property:"og:url",content:a}),r.createElement("link",{rel:"canonical",href:a}))}function E(){const{i18n:{currentLocale:e}}=(0,b.Z)(),{metadata:t,image:n}=(0,g.L)();return r.createElement(r.Fragment,null,r.createElement(f.Z,null,r.createElement("meta",{name:"twitter:card",content:"summary_large_image"}),r.createElement("body",{className:w.h})),n&&r.createElement(v.d,{image:n}),r.createElement(k,null),r.createElement(x,null),r.createElement(S.Z,{tag:_.HX,locale:e}),r.createElement(f.Z,null,t.map(((e,t)=>r.createElement("meta",(0,m.Z)({key:t},e))))))}const T=new Map;function C(e){if(T.has(e.pathname))return{...e,pathname:T.get(e.pathname)};if((0,u.f)(l.Z,e.pathname).some((e=>{let{route:t}=e;return!0===t.exact})))return T.set(e.pathname,e.pathname),e;const t=e.pathname.trim().replace(/(?:\/index)?\.html$/,"")||"/";return T.set(e.pathname,t),{...e,pathname:t}}var A=n(98934),P=n(58940);function N(e){for(var t=arguments.length,n=new Array(t>1?t-1:0),r=1;r{var r;const i=(null==(r=t.default)?void 0:r[e])??t[e];return null==i?void 0:i(...n)}));return()=>i.forEach((e=>null==e?void 0:e()))}const O=function(e){let{children:t,location:n,previousLocation:i}=e;return(0,r.useLayoutEffect)((()=>{i!==n&&(!function(e){let{location:t,previousLocation:n}=e;if(!n)return;const r=t.pathname===n.pathname,i=t.hash===n.hash,a=t.search===n.search;if(r&&i&&!a)return;const{hash:o}=t;if(o){const e=decodeURIComponent(o.substring(1)),t=document.getElementById(e);null==t||t.scrollIntoView()}else window.scrollTo(0,0)}({location:n,previousLocation:i}),N("onRouteDidUpdate",{previousLocation:i,location:n}))}),[i,n]),t};function L(e){const t=Array.from(new Set([e,decodeURI(e)])).map((e=>(0,u.f)(l.Z,e))).flat();return Promise.all(t.map((e=>null==e.route.component.preload?void 
0:e.route.component.preload())))}class I extends r.Component{constructor(e){super(e),this.previousLocation=void 0,this.routeUpdateCleanupCb=void 0,this.previousLocation=null,this.routeUpdateCleanupCb=s.Z.canUseDOM?N("onRouteUpdate",{previousLocation:null,location:this.props.location}):()=>{},this.state={nextRouteHasLoaded:!0}}shouldComponentUpdate(e,t){if(e.location===this.props.location)return t.nextRouteHasLoaded;const n=e.location;return this.previousLocation=this.props.location,this.setState({nextRouteHasLoaded:!1}),this.routeUpdateCleanupCb=N("onRouteUpdate",{previousLocation:this.previousLocation,location:n}),L(n.pathname).then((()=>{this.routeUpdateCleanupCb(),this.setState({nextRouteHasLoaded:!0})})).catch((e=>{console.warn(e),window.location.reload()})),!1}render(){const{children:e,location:t}=this.props;return r.createElement(O,{previousLocation:this.previousLocation,location:t},r.createElement(d.AW,{location:t,render:()=>e}))}}const R=I,j="__docusaurus-base-url-issue-banner-container",M="__docusaurus-base-url-issue-banner-suggestion-container",D="__DOCUSAURUS_INSERT_BASEURL_BANNER";function F(e){return`\nwindow['${D}'] = true;\n\ndocument.addEventListener('DOMContentLoaded', maybeInsertBanner);\n\nfunction maybeInsertBanner() {\n var shouldInsert = window['${D}'];\n shouldInsert && insertBanner();\n}\n\nfunction insertBanner() {\n var bannerContainer = document.getElementById('${j}');\n if (!bannerContainer) {\n return;\n }\n var bannerHtml = ${JSON.stringify(function(e){return`\n
    \n

    Your Docusaurus site did not load properly.

    \n

    A very common reason is a wrong site baseUrl configuration.

    \n

    Current configured baseUrl = ${e} ${"/"===e?" (default value)":""}

    \n

    We suggest trying baseUrl =

    \n
    \n`}(e)).replace(/{window[D]=!1}),[]),r.createElement(r.Fragment,null,!s.Z.canUseDOM&&r.createElement(f.Z,null,r.createElement("script",null,F(e))),r.createElement("div",{id:j}))}function B(){const{siteConfig:{baseUrl:e,baseUrlIssueBanner:t}}=(0,b.Z)(),{pathname:n}=(0,d.TH)();return t&&n===e?r.createElement(z,null):null}function $(){const{siteConfig:{favicon:e,title:t,noIndex:n},i18n:{currentLocale:i,localeConfigs:a}}=(0,b.Z)(),o=(0,h.Z)(e),{htmlLang:s,direction:c}=a[i];return r.createElement(f.Z,null,r.createElement("html",{lang:s,dir:c}),r.createElement("title",null,t),r.createElement("meta",{property:"og:title",content:t}),r.createElement("meta",{name:"viewport",content:"width=device-width, initial-scale=1.0"}),n&&r.createElement("meta",{name:"robots",content:"noindex, nofollow"}),e&&r.createElement("link",{rel:"icon",href:o}))}var U=n(44763);function H(){const e=(0,u.H)(l.Z),t=(0,d.TH)();return r.createElement(U.Z,null,r.createElement(P.M,null,r.createElement(A.t,null,r.createElement(p,null,r.createElement($,null),r.createElement(E,null),r.createElement(B,null),r.createElement(R,{location:C(t)},e)))))}var Z=n(16887);const q=function(e){try{return document.createElement("link").relList.supports(e)}catch{return!1}}("prefetch")?function(e){return new Promise(((t,n)=>{var r;if("undefined"==typeof document)return void n();const i=document.createElement("link");i.setAttribute("rel","prefetch"),i.setAttribute("href",e),i.onload=()=>t(),i.onerror=()=>n();const a=document.getElementsByTagName("head")[0]??(null==(r=document.getElementsByName("script")[0])?void 0:r.parentNode);null==a||a.appendChild(i)}))}:function(e){return new Promise(((t,n)=>{const r=new XMLHttpRequest;r.open("GET",e,!0),r.withCredentials=!0,r.onload=()=>{200===r.status?t():n()},r.send(null)}))};var V=n(99670);const W=new Set,G=new Set,Y=()=>{var e,t;return(null==(e=navigator.connection)?void 0:e.effectiveType.includes("2g"))||(null==(t=navigator.connection)?void 0:t.saveData)},K={prefetch(e){if(!(e=>!Y()&&!G.has(e)&&!W.has(e))(e))return!1;W.add(e);const t=(0,u.f)(l.Z,e).flatMap((e=>{return t=e.route.path,Object.entries(Z).filter((e=>{let[n]=e;return n.replace(/-[^-]+$/,"")===t})).flatMap((e=>{let[,t]=e;return Object.values((0,V.Z)(t))}));var t}));return Promise.all(t.map((e=>{const t=n.gca(e);return t&&!t.includes("undefined")?q(t).catch((()=>{})):Promise.resolve()})))},preload:e=>!!(e=>!Y()&&!G.has(e))(e)&&(G.add(e),L(e))},Q=Object.freeze(K);if(s.Z.canUseDOM){window.docusaurus=Q;const e=i.hydrate;L(window.location.pathname).then((()=>{e(r.createElement(o.B6,null,r.createElement(a.VK,null,r.createElement(H,null))),document.getElementById("__docusaurus"))}))}},58940:(e,t,n)=>{"use strict";n.d(t,{_:()=>d,M:()=>u});var r=n(67294),i=n(36809);const 
a=JSON.parse('{"docusaurus-plugin-google-gtag":{"default":{"trackingID":["none"],"anonymizeIP":true,"id":"default"}},"docusaurus-plugin-content-docs":{"default":{"path":"/","versions":[{"name":"current","label":"WIP","isLast":false,"path":"/next","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/next/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/next/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/next/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/next/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/next/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/next/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/next/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/glossary","path":"/next/concepts/glossary","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/next/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/next/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/next/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/next/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/next/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/next/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/next/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path":"/next/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/next/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/next/evaluation/","sidebar":"mainSidebar"},{"id":"examples/helloworld","path":"/next/examples/helloworld","sidebar":"mainSidebar"},{"id":"examples/index","path":"/next/examples/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/next/faq/","sidebar":"mainSidebar"},{"id":"getting-started/cloud","path":"/next/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/next/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/next/getting-started/runtime"},{"id":"getting-started/server","path":"/next/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/next/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/next/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/next/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/next/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/next/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/next/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/next/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/next/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/next/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/next/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/next/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/next/reference/swcli/","sidebar":"mainSidebar"},{"id":"reference/swcli/instance","path":"/next/reference/swcli/instanc
e","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/next/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/next/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/next/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/next/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/server","path":"/next/reference/swcli/server","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/next/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/next/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/next/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/next/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/next/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/next/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/next/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/next/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/k8s-cluster","path":"/next/server/installation/k8s-cluster","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/next/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/server-start","path":"/next/server/installation/server-start","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/next/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/next/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/next/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/next/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/next/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/next/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/next/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/next/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/next/","label":"what-is-starwhale"}}}},{"name":"0.6.7","label":"0.6.7","isLast":true,"path":"/","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path":"/evaluation/he
terogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/evaluation/","sidebar":"mainSidebar"},{"id":"examples/helloworld","path":"/examples/helloworld","sidebar":"mainSidebar"},{"id":"examples/index","path":"/examples/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/faq/","sidebar":"mainSidebar"},{"id":"getting-started/cloud","path":"/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/getting-started/runtime"},{"id":"getting-started/server","path":"/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/reference/swcli/","sidebar":"mainSidebar"},{"id":"reference/swcli/instance","path":"/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/server","path":"/reference/swcli/server","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/k8s-cluster","path":"/server/installation/k8s-cluster","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/server-start","path":"/server/installation/server-start","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/swcli/config","sidebar":"mainSideb
ar"},{"id":"swcli/index","path":"/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/","label":"what-is-starwhale"}}}},{"name":"0.6.6","label":"0.6.6","isLast":false,"path":"/0.6.6","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/0.6.6/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/0.6.6/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/0.6.6/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/0.6.6/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/0.6.6/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/0.6.6/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/0.6.6/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/0.6.6/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/0.6.6/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/0.6.6/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/0.6.6/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/0.6.6/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/0.6.6/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/0.6.6/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path":"/0.6.6/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/0.6.6/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/0.6.6/evaluation/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/0.6.6/faq/","sidebar":"mainSidebar"},{"id":"getting-started/cloud","path":"/0.6.6/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/0.6.6/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/0.6.6/getting-started/runtime","sidebar":"mainSidebar"},{"id":"getting-started/server","path":"/0.6.6/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/0.6.6/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/0.6.6/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/0.6.6/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/0.6.6/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/0.6.6/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/0.6.6/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/0.6.6/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/0.6.6/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/0.6.6/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/0.6.6/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/0.6.6/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/0.6.6/reference/swcli/","sidebar":"mainSidebar"},{"
id":"reference/swcli/instance","path":"/0.6.6/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/0.6.6/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/0.6.6/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/0.6.6/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/0.6.6/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/server","path":"/0.6.6/reference/swcli/server","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/0.6.6/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/0.6.6/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/0.6.6/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/0.6.6/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/0.6.6/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/0.6.6/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/0.6.6/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/0.6.6/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/k8s-cluster","path":"/0.6.6/server/installation/k8s-cluster","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/0.6.6/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/server-start","path":"/0.6.6/server/installation/server-start","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/0.6.6/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/0.6.6/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/0.6.6/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/0.6.6/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/0.6.6/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/0.6.6/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/0.6.6/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/0.6.6/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/0.6.6/","label":"what-is-starwhale"}}}},{"name":"0.6.5","label":"0.6.5","isLast":false,"path":"/0.6.5","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/0.6.5/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/0.6.5/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/0.6.5/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/0.6.5/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/0.6.5/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/0.6.5/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/0.6.5/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/0.6.5/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/0.6.5/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/0.6.5/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/0.6.5/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/0.6.5/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path"
:"/0.6.5/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/0.6.5/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path":"/0.6.5/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/0.6.5/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/0.6.5/evaluation/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/0.6.5/faq/","sidebar":"mainSidebar"},{"id":"getting-started/cloud","path":"/0.6.5/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/0.6.5/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/0.6.5/getting-started/runtime","sidebar":"mainSidebar"},{"id":"getting-started/server","path":"/0.6.5/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/0.6.5/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/0.6.5/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/0.6.5/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/0.6.5/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/0.6.5/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/0.6.5/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/0.6.5/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/0.6.5/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/0.6.5/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/0.6.5/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/0.6.5/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/0.6.5/reference/swcli/","sidebar":"mainSidebar"},{"id":"reference/swcli/instance","path":"/0.6.5/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/0.6.5/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/0.6.5/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/0.6.5/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/0.6.5/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/0.6.5/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/0.6.5/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/0.6.5/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/0.6.5/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/0.6.5/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/0.6.5/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/0.6.5/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/0.6.5/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/k8s-cluster","path":"/0.6.5/server/installation/k8s-cluster","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/0.6.5/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/0.6.5/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/0.6.5/server/project","sidebar
":"mainSidebar"},{"id":"swcli/config","path":"/0.6.5/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/0.6.5/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/0.6.5/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/0.6.5/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/0.6.5/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/0.6.5/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/0.6.5/","label":"what-is-starwhale"}}}},{"name":"0.6.4","label":"0.6.4","isLast":false,"path":"/0.6.4","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/0.6.4/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/0.6.4/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/0.6.4/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/0.6.4/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/0.6.4/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/0.6.4/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/0.6.4/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/0.6.4/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/0.6.4/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/0.6.4/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/0.6.4/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/0.6.4/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/0.6.4/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/0.6.4/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path":"/0.6.4/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/0.6.4/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/0.6.4/evaluation/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/0.6.4/faq/"},{"id":"getting-started/cloud","path":"/0.6.4/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/0.6.4/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/0.6.4/getting-started/runtime","sidebar":"mainSidebar"},{"id":"getting-started/server","path":"/0.6.4/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/0.6.4/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/0.6.4/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/0.6.4/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/0.6.4/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/0.6.4/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/0.6.4/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/0.6.4/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/0.6.4/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/0.6.4/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/0.6.4/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/0.6.4/reference/swcli/dataset","sidebar":"mainS
idebar"},{"id":"reference/swcli/index","path":"/0.6.4/reference/swcli/","sidebar":"mainSidebar"},{"id":"reference/swcli/instance","path":"/0.6.4/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/0.6.4/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/0.6.4/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/0.6.4/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/0.6.4/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/0.6.4/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/0.6.4/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/0.6.4/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/0.6.4/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/0.6.4/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/0.6.4/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/0.6.4/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/helm-charts","path":"/0.6.4/server/installation/helm-charts","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/0.6.4/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/0.6.4/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/0.6.4/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/0.6.4/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/0.6.4/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/0.6.4/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/0.6.4/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/0.6.4/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/0.6.4/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/0.6.4/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/0.6.4/","label":"what-is-starwhale"}}}},{"name":"0.6.0","label":"0.6.0","isLast":false,"path":"/0.6.0","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/0.6.0/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/0.6.0/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/0.6.0/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/0.6.0/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/0.6.0/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/0.6.0/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/0.6.0/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/0.6.0/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/0.6.0/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/0.6.0/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/0.6.0/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/0.6.0/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/0.6.0/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/0.6.0/dataset/yaml","sidebar":"mainSi
debar"},{"id":"evaluation/heterogeneous/node-able","path":"/0.6.0/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/0.6.0/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/0.6.0/evaluation/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/0.6.0/faq/"},{"id":"getting-started/cloud","path":"/0.6.0/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/0.6.0/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/0.6.0/getting-started/runtime","sidebar":"mainSidebar"},{"id":"getting-started/server","path":"/0.6.0/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/0.6.0/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/0.6.0/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/0.6.0/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/0.6.0/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/0.6.0/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/0.6.0/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/0.6.0/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/0.6.0/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/0.6.0/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/0.6.0/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/0.6.0/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/0.6.0/reference/swcli/","sidebar":"mainSidebar"},{"id":"reference/swcli/instance","path":"/0.6.0/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/0.6.0/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/0.6.0/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/0.6.0/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/0.6.0/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/0.6.0/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/0.6.0/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/0.6.0/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/0.6.0/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/0.6.0/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/0.6.0/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/0.6.0/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/helm-charts","path":"/0.6.0/server/installation/helm-charts","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/0.6.0/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/0.6.0/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/0.6.0/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/0.6.0/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/0.6.0/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/0.6.0/swcli/",
"sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/0.6.0/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/0.6.0/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/0.6.0/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/0.6.0/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/0.6.0/","label":"what-is-starwhale"}}}},{"name":"0.5.12","label":"0.5.12","isLast":false,"path":"/0.5.12","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/0.5.12/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/0.5.12/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/0.5.12/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/0.5.12/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/0.5.12/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/0.5.12/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/0.5.12/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/0.5.12/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/0.5.12/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/0.5.12/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/0.5.12/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/0.5.12/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/0.5.12/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/0.5.12/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path":"/0.5.12/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/0.5.12/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/0.5.12/evaluation/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/0.5.12/faq/"},{"id":"getting-started/cloud","path":"/0.5.12/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/0.5.12/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/0.5.12/getting-started/runtime","sidebar":"mainSidebar"},{"id":"getting-started/server","path":"/0.5.12/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/0.5.12/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/0.5.12/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/0.5.12/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/0.5.12/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/0.5.12/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/job","path":"/0.5.12/reference/sdk/job","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/0.5.12/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/0.5.12/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/0.5.12/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/0.5.12/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/0.5.12/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/0.5.12/reference/swcli/","sidebar":"mainSidebar"},{
"id":"reference/swcli/instance","path":"/0.5.12/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/0.5.12/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/0.5.12/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/0.5.12/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/0.5.12/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/0.5.12/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/0.5.12/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/0.5.12/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/0.5.12/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/0.5.12/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/0.5.12/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/docker-compose","path":"/0.5.12/server/installation/docker-compose","sidebar":"mainSidebar"},{"id":"server/installation/helm-charts","path":"/0.5.12/server/installation/helm-charts","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/0.5.12/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/0.5.12/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/0.5.12/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/0.5.12/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/0.5.12/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/0.5.12/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/0.5.12/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/0.5.12/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/0.5.12/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/0.5.12/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/0.5.12/","label":"what-is-starwhale"}}}},{"name":"0.5.10","label":"0.5.10","isLast":false,"path":"/0.5.10","mainDocId":"what-is-starwhale","docs":[{"id":"cloud/billing/billing","path":"/0.5.10/cloud/billing/","sidebar":"mainSidebar"},{"id":"cloud/billing/bills","path":"/0.5.10/cloud/billing/bills","sidebar":"mainSidebar"},{"id":"cloud/billing/recharge","path":"/0.5.10/cloud/billing/recharge","sidebar":"mainSidebar"},{"id":"cloud/billing/refund","path":"/0.5.10/cloud/billing/refund","sidebar":"mainSidebar"},{"id":"cloud/billing/voucher","path":"/0.5.10/cloud/billing/voucher","sidebar":"mainSidebar"},{"id":"cloud/index","path":"/0.5.10/cloud/","sidebar":"mainSidebar"},{"id":"community/contribute","path":"/0.5.10/community/contribute","sidebar":"mainSidebar"},{"id":"concepts/index","path":"/0.5.10/concepts/","sidebar":"mainSidebar"},{"id":"concepts/names","path":"/0.5.10/concepts/names","sidebar":"mainSidebar"},{"id":"concepts/project","path":"/0.5.10/concepts/project","sidebar":"mainSidebar"},{"id":"concepts/roles-permissions","path":"/0.5.10/concepts/roles-permissions","sidebar":"mainSidebar"},{"id":"concepts/versioning","path":"/0.5.10/concepts/versioning","sidebar":"mainSidebar"},{"id":"dataset/index","path":"/0.5.10/dataset/","sidebar":"mainSidebar"},{"id":"dataset/yaml","path":"/0.5.10/dataset/yaml","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/node-able","path"
:"/0.5.10/evaluation/heterogeneous/node-able","sidebar":"mainSidebar"},{"id":"evaluation/heterogeneous/virtual-node","path":"/0.5.10/evaluation/heterogeneous/virtual-node","sidebar":"mainSidebar"},{"id":"evaluation/index","path":"/0.5.10/evaluation/","sidebar":"mainSidebar"},{"id":"faq/index","path":"/0.5.10/faq/"},{"id":"getting-started/cloud","path":"/0.5.10/getting-started/cloud","sidebar":"mainSidebar"},{"id":"getting-started/index","path":"/0.5.10/getting-started/","sidebar":"mainSidebar"},{"id":"getting-started/runtime","path":"/0.5.10/getting-started/runtime","sidebar":"mainSidebar"},{"id":"getting-started/server","path":"/0.5.10/getting-started/server","sidebar":"mainSidebar"},{"id":"getting-started/standalone","path":"/0.5.10/getting-started/standalone","sidebar":"mainSidebar"},{"id":"model/index","path":"/0.5.10/model/","sidebar":"mainSidebar"},{"id":"model/yaml","path":"/0.5.10/model/yaml","sidebar":"mainSidebar"},{"id":"reference/sdk/dataset","path":"/0.5.10/reference/sdk/dataset","sidebar":"mainSidebar"},{"id":"reference/sdk/evaluation","path":"/0.5.10/reference/sdk/evaluation","sidebar":"mainSidebar"},{"id":"reference/sdk/model","path":"/0.5.10/reference/sdk/model","sidebar":"mainSidebar"},{"id":"reference/sdk/other","path":"/0.5.10/reference/sdk/other","sidebar":"mainSidebar"},{"id":"reference/sdk/overview","path":"/0.5.10/reference/sdk/overview","sidebar":"mainSidebar"},{"id":"reference/sdk/type","path":"/0.5.10/reference/sdk/type","sidebar":"mainSidebar"},{"id":"reference/swcli/dataset","path":"/0.5.10/reference/swcli/dataset","sidebar":"mainSidebar"},{"id":"reference/swcli/index","path":"/0.5.10/reference/swcli/","sidebar":"mainSidebar"},{"id":"reference/swcli/instance","path":"/0.5.10/reference/swcli/instance","sidebar":"mainSidebar"},{"id":"reference/swcli/job","path":"/0.5.10/reference/swcli/job","sidebar":"mainSidebar"},{"id":"reference/swcli/model","path":"/0.5.10/reference/swcli/model","sidebar":"mainSidebar"},{"id":"reference/swcli/project","path":"/0.5.10/reference/swcli/project","sidebar":"mainSidebar"},{"id":"reference/swcli/runtime","path":"/0.5.10/reference/swcli/runtime","sidebar":"mainSidebar"},{"id":"reference/swcli/utilities","path":"/0.5.10/reference/swcli/utilities","sidebar":"mainSidebar"},{"id":"runtime/index","path":"/0.5.10/runtime/","sidebar":"mainSidebar"},{"id":"runtime/yaml","path":"/0.5.10/runtime/yaml","sidebar":"mainSidebar"},{"id":"server/guides/server_admin","path":"/0.5.10/server/guides/server_admin","sidebar":"mainSidebar"},{"id":"server/index","path":"/0.5.10/server/","sidebar":"mainSidebar"},{"id":"server/installation/docker","path":"/0.5.10/server/installation/docker","sidebar":"mainSidebar"},{"id":"server/installation/helm-charts","path":"/0.5.10/server/installation/helm-charts","sidebar":"mainSidebar"},{"id":"server/installation/index","path":"/0.5.10/server/installation/","sidebar":"mainSidebar"},{"id":"server/installation/minikube","path":"/0.5.10/server/installation/minikube","sidebar":"mainSidebar"},{"id":"server/installation/starwhale_env","path":"/0.5.10/server/installation/starwhale_env","sidebar":"mainSidebar"},{"id":"server/project","path":"/0.5.10/server/project","sidebar":"mainSidebar"},{"id":"swcli/config","path":"/0.5.10/swcli/config","sidebar":"mainSidebar"},{"id":"swcli/index","path":"/0.5.10/swcli/","sidebar":"mainSidebar"},{"id":"swcli/installation","path":"/0.5.10/swcli/installation","sidebar":"mainSidebar"},{"id":"swcli/swignore","path":"/0.5.10/swcli/swignore","sidebar":"mainSidebar"},{"id":"swcli/uri","path":"/0.5.
10/swcli/uri","sidebar":"mainSidebar"},{"id":"what-is-starwhale","path":"/0.5.10/","sidebar":"mainSidebar"}],"draftIds":[],"sidebars":{"mainSidebar":{"link":{"path":"/0.5.10/","label":"what-is-starwhale"}}}}],"breadcrumbs":true}}}'),o=JSON.parse('{"defaultLocale":"en","locales":["en","zh"],"path":"i18n","currentLocale":"en","localeConfigs":{"en":{"label":"English","direction":"ltr","htmlLang":"en-US","calendar":"gregory","path":"en"},"zh":{"label":"\u7b80\u4f53\u4e2d\u6587","direction":"ltr","htmlLang":"zh-CN","calendar":"gregory","path":"zh"}}}');var s=n(57529);const c=JSON.parse('{"docusaurusVersion":"2.4.1","siteVersion":"0.1.0","pluginVersions":{"docusaurus-plugin-content-docs":{"type":"package","name":"@docusaurus/plugin-content-docs","version":"2.4.1"},"docusaurus-plugin-content-blog":{"type":"package","name":"@docusaurus/plugin-content-blog","version":"2.4.1"},"docusaurus-plugin-content-pages":{"type":"package","name":"@docusaurus/plugin-content-pages","version":"2.4.1"},"docusaurus-plugin-google-gtag":{"type":"package","name":"@docusaurus/plugin-google-gtag","version":"2.4.1"},"docusaurus-plugin-sitemap":{"type":"package","name":"@docusaurus/plugin-sitemap","version":"2.4.1"},"docusaurus-theme-classic":{"type":"package","name":"@docusaurus/theme-classic","version":"2.4.1"},"docusaurus-plugin-sass":{"type":"package","name":"docusaurus-plugin-sass","version":"0.2.2"},"docusaurus-plugin-image-zoom":{"type":"project"}}}'),l={siteConfig:i.Z,siteMetadata:c,globalData:a,i18n:o,codeTranslations:s},d=r.createContext(l);function u(e){let{children:t}=e;return r.createElement(d.Provider,{value:l},t)}},44763:(e,t,n)=>{"use strict";n.d(t,{Z:()=>p});var r=n(67294),i=n(10412),a=n(35742),o=n(18780),s=n(78284);function c(e){let{error:t,tryAgain:n}=e;return r.createElement("div",{style:{display:"flex",flexDirection:"column",justifyContent:"center",alignItems:"flex-start",minHeight:"100vh",width:"100%",maxWidth:"80ch",fontSize:"20px",margin:"0 auto",padding:"1rem"}},r.createElement("h1",{style:{fontSize:"3rem"}},"This page crashed"),r.createElement("button",{type:"button",onClick:n,style:{margin:"1rem 0",fontSize:"2rem",cursor:"pointer",borderRadius:20,padding:"1rem"}},"Try again"),r.createElement(l,{error:t}))}function l(e){let{error:t}=e;const n=(0,o.getErrorCausalChain)(t).map((e=>e.message)).join("\n\nCause:\n");return r.createElement("p",{style:{whiteSpace:"pre-wrap"}},n)}function d(e){let{error:t,tryAgain:n}=e;return r.createElement(p,{fallback:()=>r.createElement(c,{error:t,tryAgain:n})},r.createElement(a.Z,null,r.createElement("title",null,"Page Error")),r.createElement(s.Z,null,r.createElement(c,{error:t,tryAgain:n})))}const u=e=>r.createElement(d,e);class p extends r.Component{constructor(e){super(e),this.state={error:null}}componentDidCatch(e){i.Z.canUseDOM&&this.setState({error:e})}render(){const{children:e}=this.props,{error:t}=this.state;if(t){const e={error:t,tryAgain:()=>this.setState({error:null})};return(this.props.fallback??u)(e)}return e??null}}},10412:(e,t,n)=>{"use strict";n.d(t,{Z:()=>i});const r="undefined"!=typeof window&&"document"in window&&"createElement"in window.document,i={canUseDOM:r,canUseEventListeners:r&&("addEventListener"in window||"attachEvent"in window),canUseIntersectionObserver:r&&"IntersectionObserver"in window,canUseViewport:r&&"screen"in window}},35742:(e,t,n)=>{"use strict";n.d(t,{Z:()=>a});var r=n(67294),i=n(70405);function a(e){return r.createElement(i.ql,e)}},39960:(e,t,n)=>{"use strict";n.d(t,{Z:()=>m});var 
r=n(83117),i=n(67294),a=n(73727),o=n(18780),s=n(52263),c=n(13919),l=n(10412);const d=i.createContext({collectLink:()=>{}});var u=n(44996);function p(e,t){var n;let{isNavLink:p,to:m,href:f,activeClassName:b,isActive:h,"data-noBrokenLinkCheck":g,autoAddBaseUrl:v=!0,...y}=e;const{siteConfig:{trailingSlash:w,baseUrl:_}}=(0,s.Z)(),{withBaseUrl:S}=(0,u.C)(),x=(0,i.useContext)(d),k=(0,i.useRef)(null);(0,i.useImperativeHandle)(t,(()=>k.current));const E=m||f;const T=(0,c.Z)(E),C=null==E?void 0:E.replace("pathname://","");let A=void 0!==C?(P=C,v&&(e=>e.startsWith("/"))(P)?S(P):P):void 0;var P;A&&T&&(A=(0,o.applyTrailingSlash)(A,{trailingSlash:w,baseUrl:_}));const N=(0,i.useRef)(!1),O=p?a.OL:a.rU,L=l.Z.canUseIntersectionObserver,I=(0,i.useRef)(),R=()=>{N.current||null==A||(window.docusaurus.preload(A),N.current=!0)};(0,i.useEffect)((()=>(!L&&T&&null!=A&&window.docusaurus.prefetch(A),()=>{L&&I.current&&I.current.disconnect()})),[I,A,L,T]);const j=(null==(n=A)?void 0:n.startsWith("#"))??!1,M=!A||!T||j;return M||g||x.collectLink(A),M?i.createElement("a",(0,r.Z)({ref:k,href:A},E&&!T&&{target:"_blank",rel:"noopener noreferrer"},y)):i.createElement(O,(0,r.Z)({},y,{onMouseEnter:R,onTouchStart:R,innerRef:e=>{k.current=e,L&&e&&T&&(I.current=new window.IntersectionObserver((t=>{t.forEach((t=>{e===t.target&&(t.isIntersecting||t.intersectionRatio>0)&&(I.current.unobserve(e),I.current.disconnect(),null!=A&&window.docusaurus.prefetch(A))}))})),I.current.observe(e))},to:A},p&&{isActive:h,activeClassName:b}))}const m=i.forwardRef(p)},11875:(e,t,n)=>{"use strict";n.d(t,{Z:()=>r});const r=()=>null},95999:(e,t,n)=>{"use strict";n.d(t,{Z:()=>c,I:()=>s});var r=n(67294);function i(e,t){const n=e.split(/(\{\w+\})/).map(((e,n)=>{if(n%2==1){const n=null==t?void 0:t[e.slice(1,-1)];if(void 0!==n)return n}return e}));return n.some((e=>(0,r.isValidElement)(e)))?n.map(((e,t)=>(0,r.isValidElement)(e)?r.cloneElement(e,{key:t}):e)).filter((e=>""!==e)):n.join("")}var a=n(57529);function o(e){let{id:t,message:n}=e;if(void 0===t&&void 0===n)throw new Error("Docusaurus translation declarations must have at least a translation id or a default translation message");return a[t??n]??n??t}function s(e,t){let{message:n,id:r}=e;return i(o({message:n,id:r}),t)}function c(e){let{children:t,id:n,values:a}=e;if(t&&"string"!=typeof t)throw console.warn("Illegal children",t),new Error("The Docusaurus component only accept simple string values");const s=o({message:t,id:n});return r.createElement(r.Fragment,null,i(s,a))}},29935:(e,t,n)=>{"use strict";n.d(t,{m:()=>r});const r="default"},13919:(e,t,n)=>{"use strict";function r(e){return/^(?:\w*:|\/\/)/.test(e)}function i(e){return void 0!==e&&!r(e)}n.d(t,{Z:()=>i,b:()=>r})},44996:(e,t,n)=>{"use strict";n.d(t,{C:()=>o,Z:()=>s});var r=n(67294),i=n(52263),a=n(13919);function o(){const{siteConfig:{baseUrl:e,url:t}}=(0,i.Z)(),n=(0,r.useCallback)(((n,r)=>function(e,t,n,r){let{forcePrependBaseUrl:i=!1,absolute:o=!1}=void 0===r?{}:r;if(!n||n.startsWith("#")||(0,a.b)(n))return n;if(i)return t+n.replace(/^\//,"");if(n===t.replace(/\/$/,""))return t;const s=n.startsWith(t)?n:t+n.replace(/^\//,"");return o?e+s:s}(t,e,n,r)),[t,e]);return{withBaseUrl:n}}function s(e,t){void 0===t&&(t={});const{withBaseUrl:n}=o();return n(e,t)}},52263:(e,t,n)=>{"use strict";n.d(t,{Z:()=>a});var r=n(67294),i=n(58940);function a(){return(0,r.useContext)(i._)}},72389:(e,t,n)=>{"use strict";n.d(t,{Z:()=>a});var r=n(67294),i=n(98934);function a(){return(0,r.useContext)(i._)}},99670:(e,t,n)=>{"use strict";n.d(t,{Z:()=>r});function 
r(e){const t={};return function e(n,r){Object.entries(n).forEach((n=>{let[i,a]=n;const o=r?`${r}.${i}`:i;var s;"object"==typeof(s=a)&&s&&Object.keys(s).length>0?e(a,o):t[o]=a}))}(e),t}},30226:(e,t,n)=>{"use strict";n.d(t,{_:()=>i,z:()=>a});var r=n(67294);const i=r.createContext(null);function a(e){let{children:t,value:n}=e;const a=r.useContext(i),o=(0,r.useMemo)((()=>function(e){let{parent:t,value:n}=e;if(!t){if(!n)throw new Error("Unexpected: no Docusaurus route context found");if(!("plugin"in n))throw new Error("Unexpected: Docusaurus topmost route context has no `plugin` attribute");return n}const r={...t.data,...null==n?void 0:n.data};return{plugin:t.plugin,data:r}}({parent:a,value:n})),[a,n]);return r.createElement(i.Provider,{value:o},t)}},80143:(e,t,n)=>{"use strict";n.d(t,{Iw:()=>b,gA:()=>p,_r:()=>d,Jo:()=>h,zh:()=>u,yW:()=>f,gB:()=>m});var r=n(16550),i=n(52263),a=n(29935);function o(e,t){void 0===t&&(t={});const n=function(){const{globalData:e}=(0,i.Z)();return e}()[e];if(!n&&t.failfast)throw new Error(`Docusaurus plugin global data not found for "${e}" plugin.`);return n}const s=e=>e.versions.find((e=>e.isLast));function c(e,t){const n=function(e,t){const n=s(e);return[...e.versions.filter((e=>e!==n)),n].find((e=>!!(0,r.LX)(t,{path:e.path,exact:!1,strict:!1})))}(e,t),i=null==n?void 0:n.docs.find((e=>!!(0,r.LX)(t,{path:e.path,exact:!0,strict:!1})));return{activeVersion:n,activeDoc:i,alternateDocVersions:i?function(t){const n={};return e.versions.forEach((e=>{e.docs.forEach((r=>{r.id===t&&(n[e.name]=r)}))})),n}(i.id):{}}}const l={},d=()=>o("docusaurus-plugin-content-docs")??l,u=e=>function(e,t,n){void 0===t&&(t=a.m),void 0===n&&(n={});const r=o(e),i=null==r?void 0:r[t];if(!i&&n.failfast)throw new Error(`Docusaurus plugin global data not found for "${e}" plugin with id "${t}".`);return i}("docusaurus-plugin-content-docs",e,{failfast:!0});function p(e){void 0===e&&(e={});const t=d(),{pathname:n}=(0,r.TH)();return function(e,t,n){void 0===n&&(n={});const i=Object.entries(e).sort(((e,t)=>t[1].path.localeCompare(e[1].path))).find((e=>{let[,n]=e;return!!(0,r.LX)(t,{path:n.path,exact:!1,strict:!1})})),a=i?{pluginId:i[0],pluginData:i[1]}:void 0;if(!a&&n.failfast)throw new Error(`Can't find active docs plugin for "${t}" pathname, while it was expected to be found. Maybe you tried to use a docs feature that can only be used on a docs-related page? 