Merge pull request #36552 from github/repo-sync
Repo sync
docs-bot authored Feb 27, 2025
2 parents 79e4c6e + e4e9c77 commit b05ab37
Showing 10 changed files with 54 additions and 29 deletions.
Original file line number Diff line number Diff line change
@@ -24,7 +24,20 @@ Changing the model that's used by {% data variables.product.prodname_copilot_cha
## AI models for {% data variables.product.prodname_copilot_chat_short %}

{% data reusables.copilot.copilot-chat-models-list %}
The following models are currently available in the immersive mode of {% data variables.product.prodname_copilot_chat_short %}:

* {% data reusables.copilot.model-description-gpt-4o %}
* {% data reusables.copilot.model-description-claude-sonnet-37 %}
* {% data reusables.copilot.model-description-claude-sonnet-35 %}
* {% data reusables.copilot.model-description-gemini-flash %}
* {% data reusables.copilot.model-description-o1 %}
* {% data reusables.copilot.model-description-o3-mini %}

For more information about these models, see:

* **OpenAI's GPT-4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} models**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
* **Google's {% data variables.copilot.copilot_gemini_flash %} model**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).

### Limitations of AI models for {% data variables.product.prodname_copilot_chat_short %}

@@ -53,7 +66,20 @@ These instructions are for {% data variables.product.prodname_copilot_short %} o
## AI models for {% data variables.product.prodname_copilot_chat_short %}

{% data reusables.copilot.copilot-chat-models-list %}
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:

* {% data reusables.copilot.model-description-gpt-4o %}
* {% data reusables.copilot.model-description-claude-sonnet-37 %}
* {% data reusables.copilot.model-description-claude-sonnet-35 %}
* {% data reusables.copilot.model-description-gemini-flash %}
* {% data reusables.copilot.model-description-o1 %}
* {% data reusables.copilot.model-description-o3-mini %}

For more information about these models, see:

* **OpenAI's GPT-4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} models**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).
* **Google's {% data variables.copilot.copilot_gemini_flash %} model**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-gemini-flash-in-github-copilot).

## Changing your AI model

@@ -74,7 +100,18 @@ These instructions are for {% data variables.product.prodname_vscode_shortname %
## AI models for {% data variables.product.prodname_copilot_chat_short %}

{% data reusables.copilot.copilot-chat-models-list-visual-studio %}
The following models are currently available through multi-model {% data variables.product.prodname_copilot_chat_short %}:

* {% data reusables.copilot.model-description-gpt-4o %}
* {% data reusables.copilot.model-description-claude-sonnet-37 %}
* {% data reusables.copilot.model-description-claude-sonnet-35 %}
* {% data reusables.copilot.model-description-o1 %}
* {% data reusables.copilot.model-description-o3-mini %}

For more information about these models, see:

* **OpenAI's GPT-4o, o1, and o3-mini models**: [Models](https://platform.openai.com/docs/models/models) in the OpenAI Platform documentation.
* **Anthropic's {% data variables.copilot.copilot_claude_sonnet %} models**: [AUTOTITLE](/copilot/using-github-copilot/ai-models/using-claude-sonnet-in-github-copilot).

## Changing the AI model for {% data variables.product.prodname_copilot_chat_short %}

@@ -37,14 +37,20 @@ This guide walks you through completing the first phase, migrating repositories.

### How soon do we need to complete the migration?

{% data reusables.enterprise-migration-tool.timeline-intro %}
Determine your timeline, which will largely dictate your approach. The first step for determining your timeline is to get an inventory of what you need to migrate.

* Number of repositories
* Number of pull requests

If you're migrating from Azure DevOps, we recommend the `inventory-report` command in the {% data variables.product.prodname_ado2gh_cli %}. The `inventory-report` command connects to the Azure DevOps API, then builds a simple CSV with the fields suggested above. To install the {% data variables.product.prodname_ado2gh_cli %} and authenticate, follow steps 1 to 3 in [AUTOTITLE](/migrations/using-github-enterprise-importer/migrating-from-azure-devops-to-github-enterprise-cloud/migrating-repositories-from-azure-devops-to-github-enterprise-cloud).
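Once `inventory-report` has produced its CSV, a short script can total the two numbers the checklist above asks for. This is a minimal sketch: the column name `pull-request-count` is an assumption about the report format, so check the actual CSV header before relying on it.

```python
import csv

def summarize_inventory(path: str) -> tuple[int, int]:
    """Return (repository count, total pull requests) from an inventory CSV.

    Assumes one row per repository and a 'pull-request-count' column;
    both are assumptions about the report format, not documented fields.
    """
    repos = 0
    prs = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            repos += 1
            prs += int(row["pull-request-count"])
    return repos, prs
```

These two totals feed directly into the timeline planning described in the next step.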

Migration timing is largely based on the number of pull requests in a repository. If you want to migrate 1,000 repositories, and each repository has 100 pull requests on average, and only 50 users have contributed to the repositories, your migration will likely be very quick. If you want to migrate only 100 repositories, but the repositories each have 75,000 pull requests on average, and 5,000 users, the migration will take much longer and require much more planning and testing.
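The arithmetic behind the two scenarios above can be sketched in a few lines. The repository and pull request counts come from this guide; the ratio is purely illustrative of how much larger the second migration is.

```python
def estimated_prs(repos: int, avg_prs_per_repo: int) -> int:
    """Total pull requests to migrate across all repositories."""
    return repos * avg_prs_per_repo

# Scenario A: many repositories, few pull requests each.
quick = estimated_prs(1_000, 100)    # 100,000 pull requests
# Scenario B: few repositories, many pull requests each.
slow = estimated_prs(100, 75_000)    # 7,500,000 pull requests

print(f"Scenario B migrates {slow // quick}x the pull requests of scenario A")
```

Even with a tenth of the repositories, scenario B moves 75 times the pull request data, which is why it needs far more planning and testing.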

{% data reusables.enterprise-migration-tool.timeline-tasks %}
After you take inventory of the repositories you need to migrate, you can weigh your inventory data against your desired timeline. If your organization can withstand a higher degree of change, you might be able to migrate all your repositories at once and complete your migration in a few days. However, some teams may not be able to migrate at the same time. In that case, you might want to batch and stagger your migrations to fit those teams' timelines, extending your migration effort.

1. Determine how many repositories and pull requests you need to migrate.
1. To understand when teams can be ready to migrate, interview stakeholders.
1. Fully review the rest of this guide, then decide on a migration timeline.
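The batch-and-stagger approach in the steps above can be sketched as a simple scheduling helper. The team names, repository names, and batch size here are all hypothetical; they stand in for the inventory and stakeholder input gathered in steps 1 and 2.

```python
from itertools import islice

def plan_batches(repos_by_team: dict[str, list[str]], batch_size: int) -> dict[str, list[list[str]]]:
    """Split each team's repositories into migration waves of at most batch_size."""
    schedule = {}
    for team, repos in repos_by_team.items():
        it = iter(repos)
        waves = []
        while wave := list(islice(it, batch_size)):
            waves.append(wave)
        schedule[team] = waves
    return schedule

# Hypothetical inventory: repositories grouped by the team that owns them.
inventory = {
    "platform": ["api", "web", "infra"],
    "mobile": ["ios", "android"],
}

schedule = plan_batches(inventory, 2)
# "platform" migrates in two waves; "mobile" in one.
```

Each wave can then be assigned a date that fits the owning team's timeline, which is what stretches a staggered migration beyond an all-at-once one.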

### Do we understand what will be migrated?

10 changes: 0 additions & 10 deletions data/reusables/copilot/copilot-chat-models-list-visual-studio.md

This file was deleted.

14 changes: 0 additions & 14 deletions data/reusables/copilot/copilot-chat-models-list.md

This file was deleted.

@@ -0,0 +1 @@
**{% data variables.copilot.copilot_claude_sonnet_35 %}:** This model excels at coding tasks across the entire software development lifecycle, from initial design to bug fixes, maintenance to optimizations. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/61e7d27f8c8f5919/original/Claude-3-Model-Card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
@@ -0,0 +1 @@
**{% data variables.copilot.copilot_claude_sonnet_37 %}:** This model, like its predecessor, excels across the software development lifecycle, from initial design to bug fixes, maintenance to optimizations. It also has thinking capabilities, enabled by selecting the thinking version of the model, that can be particularly useful in agentic scenarios. Learn more about the [model's capabilities](https://www.anthropic.com/claude/sonnet) or read the [model card](https://assets.anthropic.com/m/785e231869ea8b3b/original/claude-3-7-sonnet-system-card.pdf). {% data variables.product.prodname_copilot %} uses {% data variables.copilot.copilot_claude_sonnet %} hosted on Amazon Web Services.
1 change: 1 addition & 0 deletions data/reusables/copilot/model-description-gemini-flash.md
@@ -0,0 +1 @@
**{% data variables.copilot.copilot_gemini_flash %}:** This model has strong coding, math, and reasoning capabilities that make it well suited to assist with software development. {% data reusables.copilot.gemini-model-info %}
1 change: 1 addition & 0 deletions data/reusables/copilot/model-description-gpt-4o.md
@@ -0,0 +1 @@
**GPT-4o:** This is the default {% data variables.product.prodname_copilot_chat_short %} model. It is a versatile, multimodal model that excels in both text and image processing and is designed to provide fast, reliable responses. It also has superior performance in non-English languages. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/gpt-4o) and review the [model card](https://openai.com/index/gpt-4o-system-card/). GPT-4o is hosted on Azure.
1 change: 1 addition & 0 deletions data/reusables/copilot/model-description-o1.md
@@ -0,0 +1 @@
**o1:** This model is focused on advanced reasoning and solving complex problems, in particular in math and science. It responds more slowly than the GPT-4o model. You can make 10 requests to this model per day. Learn more about the [model's capabilities](https://platform.openai.com/docs/models/o1) and review the [model card](https://openai.com/index/openai-o1-system-card/). o1 is hosted on Azure.
1 change: 1 addition & 0 deletions data/reusables/copilot/model-description-o3-mini.md
@@ -0,0 +1 @@
**o3-mini:** This model is the next generation of reasoning models, following from o1 and o1-mini. The o3-mini model outperforms o1 on coding benchmarks with response times that are comparable to o1-mini, providing improved quality at nearly the same latency. It is best suited for code generation and small context operations. You can make 50 requests to this model every 12 hours. Learn more about the [model's capabilities](https://platform.openai.com/docs/models#o3-mini) and review the [model card](https://openai.com/index/o3-mini-system-card/). o3-mini is hosted on Azure.
