
Commit 2caddc3

Updated most of the docs by correcting different spelling mistakes and grammatical mistakes; also updated a link in Contributing.md that was previously showing an error.

Signed-off-by: Sai Suraj <[email protected]>
Sai-Suraj-27 committed Feb 12, 2023
1 parent 2fffae8 commit 2caddc3
Showing 7 changed files with 123 additions and 119 deletions.
CONTRIBUTING.md: 4 changes (2 additions, 2 deletions)

@@ -1,7 +1,7 @@
# Contributing Guidelines

-Welcome to Ianvs. We are excited about the prospect of you joining our [community](https://github.com/kubeedge/community)! The KubeEdge community abides by the CNCF [code of conduct](CODE-OF-CONDUCT.md). Here is an excerpt:
+Welcome to Ianvs. We are excited about the prospect of you joining our [community](https://github.com/kubeedge/community)! The KubeEdge community abides by the CNCF [code of conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md). Here is an excerpt:

_As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities._

To learn more about contributing to the [Ianvs code repo](README.md), check out the [contributing guide](docs/guides). For example, [How to contribute algorithms] and [How to contribute test environments].
README.md: 42 changes (25 additions, 17 deletions)

@@ -1,37 +1,42 @@
# Ianvs

[![CI](https://github.com/kubeedge/ianvs/workflows/CI/badge.svg?branch=main)](https://github.com/sedna/ianvs/actions)
[![LICENSE SCAN](https://app.fossa.com/api/projects/custom%2B32178%2Fgithub.com%2Fkubeedge%2Fianvs.svg?type=shield)](https://app.fossa.com/projects/custom%2B32178%2Fgithub.com%2Fkubeedge%2Fianvs?ref=badge_shield)
[![LICENSE](https://img.shields.io/github/license/kubeedge-sedna/ianvs.svg)](/LICENSE)

Ianvs is a distributed synergy AI benchmarking project incubated in KubeEdge SIG AI. Ianvs aims to test the performance of distributed synergy AI solutions following recognized standards, in order to facilitate more efficient and effective development. More specifically, Ianvs prepares not only test cases with datasets and corresponding algorithms, but also benchmarking tools including simulation and hyper-parameter searching. Ianvs also reveals best practices for developers and end users with presentation tools including leaderboards and test reports.

## Scope

The distributed synergy AI benchmarking project Ianvs aims to test the performance of distributed synergy AI solutions following recognized standards, in order to facilitate more efficient and effective development.

The scope of Ianvs includes
-- Providing end-to-end benchmark toolkits across devices, edge nodes and cloud nodes based on typical distributed-synergy AI paradigms and applications.
-- Tools to manage test environment. For example, it would be necessary to support the CRUD (Create, Read, Update and Delete) actions in test environments. Elements of such test environments include algorithm-wise and system-wise configuration.
+- Providing end-to-end benchmark toolkits across devices, edge nodes, and cloud nodes based on typical distributed-synergy AI paradigms and applications.
+- Tools to manage test environment. For example, it would be necessary to support the CRUD (Create, Read, Update, and Delete) actions in test environments. Elements of such test environments include algorithm-wise and system-wise configuration.
- Tools to control test cases. Typical examples include paradigm templates, simulation tools, and hyper-parameter-based assistant tools.
- Tools to manage benchmark presentation, e.g., leaderboard and test report generation.
- Cooperation with other organizations or communities, e.g., in KubeEdge SIG AI, to establish comprehensive benchmarks and develop related applications, which can include but are not limited to
- Dataset collection, re-organization, and publication
- Formalized specifications, e.g., standards
- Holding competitions or coding events, e.g., open source promotion plan
- Maintaining solution leaderboards or certifications for commercial usage
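
An editorial aside, not part of the commit: the CRUD actions over test environments mentioned in the scope list above can be pictured with a minimal sketch. Everything below is hypothetical for illustration; `TestEnv`, `TestEnvManager`, and the field names are assumptions, not the Ianvs API.

```python
# Hypothetical sketch of CRUD over test environments; TestEnvManager and
# all field names are illustrative assumptions, not the Ianvs API.
from dataclasses import dataclass, field


@dataclass
class TestEnv:
    name: str
    algorithm_config: dict = field(default_factory=dict)  # algorithm-wise settings
    system_config: dict = field(default_factory=dict)     # system-wise settings


class TestEnvManager:
    """Supports Create, Read, Update, and Delete of test environments."""

    def __init__(self) -> None:
        self._envs: dict[str, TestEnv] = {}

    def create(self, env: TestEnv) -> None:
        if env.name in self._envs:
            raise ValueError(f"test environment {env.name!r} already exists")
        self._envs[env.name] = env

    def read(self, name: str) -> TestEnv:
        return self._envs[name]

    def update(self, name: str, **changes) -> None:
        env = self._envs[name]
        for key, value in changes.items():
            setattr(env, key, value)

    def delete(self, name: str) -> None:
        del self._envs[name]


manager = TestEnvManager()
manager.create(TestEnv("pcb-aoi", algorithm_config={"metric": "f1_score"}))
manager.update("pcb-aoi", system_config={"nodes": ["edge", "cloud"]})
```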


## Architecture

The architecture and related concepts are shown in the figure below. Ianvs is designed to run **within a single node**. Critical components include

- Test Environment Manager: the CRUD of test environments serving for global usage
- Test Case Controller: control the runtime behavior of test cases like instance generation and vanishment
- Generation Assistant: assist users to generate test cases based on certain rules or constraints, e.g., the range of parameters
- Simulation Controller: control the simulation process of edge-cloud synergy AI, including the instance generation and vanishment of simulation containers
- Story Manager: the output management and presentation of the test case, e.g., leaderboards


![](docs/guides/images/ianvs_arch.png)

More details on Ianvs components:

1. Test-Environment Manager supports the CRUD of test environments, which basically includes
- Algorithm-wise configuration
- Public datasets
@@ -41,12 +46,12 @@ More details on Ianvs components:
- System-wise configuration
- Overall architecture
- System constraints or budgets
- End-to-end cross-node
- Per node
1. Test-case Controller, which includes but is not limited to the following components
- Templates of common distributed-synergy-AI paradigms, which can help the developer to prepare their test case without too much effort. Such paradigms include edge-cloud synergy joint inference, incremental learning, federated learning, and lifelong learning.
- Simulation tools. Develop simulated test environments for test cases
- Other tools to assist test-case generation. For instance, prepare test cases based on a given range of hyper-parameters.
1. Story Manager, which includes but is not limited to the following components
- Leaderboard generation
- Test report generation
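
Another editorial aside, to make the Test-case Controller concrete: a paradigm template plus a hyper-parameter range can be expanded into individual test cases. This is a minimal sketch under assumed names (`TestCase`, `generate_test_cases`); it is not the actual controller code.

```python
# Illustrative sketch: expanding a hyper-parameter search space into test
# cases for one paradigm. Names are assumptions, not Ianvs internals.
import itertools
from dataclasses import dataclass


@dataclass
class TestCase:
    paradigm: str          # e.g., "incrementallearning"
    hyperparameters: dict  # one concrete assignment per test case


def generate_test_cases(paradigm: str, search_space: dict) -> list:
    """Build one test case per point in the Cartesian hyper-parameter grid."""
    keys = list(search_space)
    return [
        TestCase(paradigm, dict(zip(keys, values)))
        for values in itertools.product(*(search_space[k] for k in keys))
    ]


cases = generate_test_cases(
    "incrementallearning",
    {"learning_rate": [0.1, 0.01], "momentum": [0.5, 0.9]},
)
print(len(cases))  # 4 cases, one per hyper-parameter combination
```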
@@ -58,27 +63,29 @@ More details on Ianvs components:

Documentation is located on [readthedocs.io](https://ianvs.readthedocs.io/). The documents include the quick start, guides, dataset descriptions, algorithms, user interfaces, stories, and roadmap.


### Installation

Follow the [Ianvs installation document](docs/guides/how-to-install-ianvs.md) to install Ianvs.

### Examples

Scenario PCB-AoI: [Industrial Defect Detection on the PCB-AoI Dataset](/docs/proposals/scenarios/industrial-defect-detection/pcb-aoi.md).
Example PCB-AoI-1: [Testing single task learning in industrial defect detection](/docs/proposals/test-reports/testing-single-task-learning-in-industrial-defect-detection-with-pcb-aoi.md).
Example PCB-AoI-2: [Testing incremental learning in industrial defect detection](/docs/proposals/test-reports/testing-incremental-learning-in-industrial-defect-detection-with-pcb-aoi.md).


## Roadmap

-* [2022 H2 Roadmap](docs/roadmap.md)
+- [2022 H2 Roadmap](docs/roadmap.md)

## Meeting

Routine Community Meeting for KubeEdge SIG AI runs weekly:

- Europe Time: **Thursdays at 16:30-17:30 Beijing Time**.
([Convert to your timezone.](https://www.thetimezoneconverter.com/?t=16%3A30&tz=GMT%2B8&))

Resources:

- [Meeting notes and agenda](https://docs.google.com/document/d/12n3kGUWTkAH4q2Wv5iCVGPTA_KRWav_eakbFrF9iAww/edit)
- [Meeting recordings](https://www.youtube.com/playlist?list=PLQtlO1kVWGXkRGkjSrLGEPJODoPb8s5FM)
- [Meeting link](https://zoom.us/j/4167237304)
@@ -91,6 +98,7 @@ If you need support, start with the [troubleshooting guide](./docs/troubleshooti
-->

If you have questions, feel free to reach out to us in the following ways:

- [slack channel](https://app.slack.com/client/TDZ5TGXQW/C01EG84REVB/details)

## Contributing
docs/guides/how-to-contribute-algorithms.md: 13 changes (5 additions, 8 deletions)

@@ -1,19 +1,16 @@
# How to contribute an algorithm to Ianvs

-Ianvs serves as testing tools for test objects, e.g., algorithms. Ianvs does NOT include code directly on test object. Algorithms serve as typical test objects in Ianvs and detailed algorithms are thus NOT included in this Ianvs python file. As for the details of example test objects, e.g., algorithms, please refer to third party packages in Ianvs example. For example, AI workflow and interface please refer to sedna and module implementation please refer to third party package like FPN_TensorFlow and Sedna IBT algorithm.
+Ianvs serves as testing tools for test objects, e.g., algorithms. Ianvs does NOT include code directly on the test object. Algorithms serve as typical test objects in Ianvs and detailed algorithms are thus NOT included in this Ianvs python file. As for the details of example test objects, e.g., algorithms, please refer to third party packages in the Ianvs example. For example, for AI workflow and interface please refer to sedna and for module implementation please refer to third party packages like FPN_TensorFlow and Sedna IBT algorithm.

For algorithm contributors, you can:
-1. Release a repo independent of ianvs, but interface should still follow the SIG AI algorithm interface to launch ianvs.
-Here are two examples show how to development algorithm for testing in Ianvs.
+1. Release a repo independent of ianvs, but the interface should still follow the SIG AI algorithm interface to launch ianvs. Here are two examples showing how to develop an algorithm for testing in Ianvs.
* [incremental-learning]
* [single-task-learning]
-2. Integrated the targeted algorithm into sedna so that ianvs can use directly. in this case, you can connect sedna owners for help.
-Also, if new algorithm has already bee integrated to Sedna, it can be used in Ianvs directly.
+2. Integrate the targeted algorithm into sedna so that ianvs can use it directly. In this case, you can connect with sedna owners for help.
+Also, if a new algorithm has already been integrated into Sedna, it can be used in Ianvs directly.

[Sedna Lib]: https://github.com/kubeedge/sedna/tree/main/lib
[incremental-learning]: ../proposals/algorithms/incremental-learning/basicIL-fpn.md
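
As a hedged illustration of option 1 above: Sedna-based Ianvs examples wrap the algorithm in an estimator-style object with train/predict/save/load methods. The sketch below assumes that contract; the class name, the `.x`/`.y` data attributes, and the method signatures are assumptions for illustration, not the exact SIG AI interface.

```python
# A minimal estimator-style test object, assuming a train/predict/save/load
# contract similar to Sedna-based Ianvs examples. The class name, signatures,
# and the .x/.y data attributes are illustrative assumptions.
import pickle


class MajorityClassEstimator:
    """A trivial 'algorithm' that always predicts the majority training label."""

    def __init__(self, **hyperparameters):
        self.hyperparameters = hyperparameters
        self.majority_label = None

    def train(self, train_data, **kwargs):
        # train_data is assumed to expose labels via .y
        labels = list(train_data.y)
        self.majority_label = max(set(labels), key=labels.count)

    def predict(self, data, **kwargs):
        # data is assumed to expose samples via .x
        return [self.majority_label for _ in data.x]

    def save(self, model_path):
        with open(model_path, "wb") as f:
            pickle.dump(self.majority_label, f)

    def load(self, model_path):
        with open(model_path, "rb") as f:
            self.majority_label = pickle.load(f)
```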
docs/guides/how-to-contribute-leaderboards-or-test-reports.md: 40 changes (20 additions, 20 deletions)

@@ -4,6 +4,7 @@ This document helps you to contribute stories, i.e., test reports or leaderboard
If you follow this guide and find a problem, please submit an issue to update this file.

## Test Reports

Everyone is welcome to submit and share their own test report with the community.

### 1. Setup and Testing
@@ -24,38 +25,37 @@ Clone the `Ianvs` repo:
```
git clone http://github.com/kubeedge/ianvs.git
```

-Please follow [Ianvs setup] to install Ianvs, and then run your own algorithm to output test reports.
+Please follow the [Ianvs setup] to install Ianvs, and then run your own algorithm to output test reports.
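
For orientation only, not part of the commit: once Ianvs is installed, a benchmarking job is typically launched from a YAML job description via the `ianvs` command documented in the setup guide. A sketch, assuming a prepared `benchmarkingjob.yaml` (a placeholder name here):

```python
# Launch a benchmarking job from Python; the YAML path is a placeholder,
# and the `ianvs -f` invocation follows the Ianvs setup guide.
import subprocess

subprocess.run(["ianvs", "-f", "benchmarkingjob.yaml"], check=True)
```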

### 2. Declare your grades

You may want to compare your testing results with those on the [leaderboard].

-Test reports are welcome after benchmarking. It can be submitted to [there](https://github.com/kubeedge/ianvs/tree/main/docs/proposals/test-reports) for further review.
+Test reports are welcome after benchmarking. They can be submitted [here](https://github.com/kubeedge/ianvs/tree/main/docs/proposals/test-reports) for further review.

-## Leaderboards
-Leaderboards, i.e., rankings of test object, are public to everyone to visit. Examples are as [leaderboard].
+## Leaderboards
+
+Leaderboards, i.e., rankings of the test object, are public for everyone to visit. Example: [leaderboard].

-Except [Ianvs Owners](https://github.com/kubeedge/ianvs/blob/main/OWNERS), there are mainly two roles for a leaderboard publication:
+Except for [Ianvs Owners](https://github.com/kubeedge/ianvs/blob/main/OWNERS), there are mainly two roles for a leaderboard publication:

-1. Developer: submit the test object for benchmarking, including but not limitted to materials like algorithm, test case following Ianvs settings and interfaces.
-2. Maintainer: testing materials provided from developers and release the updated leaderboard to public.
+1. Developer: submit the test object for benchmarking, including but not limited to materials like algorithm, test case following Ianvs settings, and interfaces.
+2. Maintainer: testing materials provided by developers and releasing the updated leaderboard to the public.

-For potenial developers,
+For potential developers,
+
- Develop your algorithm with ianvs and choose the algorithm to submit.
-- Make sure the submitted test object runs properly under the latest version of Ianvs before submission. Maintainers are not reponsible to debug for the submitted objects.
-- Do NOT need to submit the new leaderboard. Maintainers are responsible to make test environment consistent for all test objects under the same leaderboard and execute the test object to generate new leaderboard.
+- Make sure the submitted test object runs properly under the latest version of Ianvs before submission. Maintainers are not responsible for debugging the submitted objects.
+- There is no need to submit a new leaderboard. Maintainers are responsible for keeping the test environment consistent for all test objects under the same leaderboard and for executing the test object to generate a new leaderboard.
- If the test object is ready, you are welcome to contact [Ianvs Owners](https://github.com/kubeedge/ianvs/blob/main/OWNERS). Ianvs owners will connect you and maintainers, in order to receive your test object. Note that when developers submit the test object, developers give maintainers the right to test them.

For potential maintainers,

-- To maintain the consistence of test environments and test objects, the [leaderboard] submssion is at present calling for acknowledged organizations to apply in charge. Please contact
-- Maintainers should be responsible for the result summitted.
+- To maintain the consistency of test environments and test objects, the [leaderboard] submission is at present calling for acknowledged organizations to apply in charge. Please contact
+- Maintainers should be responsible for the result submitted.
- Maintainers should update the leaderboard monthly.
- Maintainers are NOT allowed to use the test object for purposes outside of Ianvs benchmarking without formal authorization from developers.
- Besides submitted objects, maintainers are encouraged to test objects released in KubeEdge SIG AI or other classic solutions released in public.

[git]: https://git-scm.com/
[framework]: /docs/proposals/architecture.md#architecture
Diffs for the remaining three changed files are not shown.