
Setting Up the Development Environment for Azure OpenAI

Azure AI Foundry is a service that allows you to deploy and manage AI models in the cloud. With it you can do things like create a project, deploy a model, interact with the model from your code, and more.

Note

If you want to use Azure AI Foundry models for your JavaScript AI apps in this course, follow the steps in this guide.

👉 If you want to use GitHub Models instead, this is the guide for you.

Create the Azure AI Foundry resources

To use Azure AI Foundry models, you need to take the following steps:

  1. Create a hub and project in the Azure AI Foundry portal.
  2. Deploy a model to your project.
  3. Add the Azure AI client library, your API key, and other credentials to your code.

-1- Create a Hub and Project in Azure AI Foundry

  1. Go to the Azure AI Foundry Portal.

  2. Sign in with your Azure account.

  3. Select All hubs + projects from the left-hand menu and then select + New hub from the dropdown. (Note: You may have to click + New project first to see the + New hub option).

  4. A new window will open. Fill in the details for your hub:

    • Give your hub a name (e.g., "MyAIHub").
    • Choose a region closest to you.
    • Select the appropriate subscription and resource group.
    • You can leave the rest of the settings as they are.
    • Click Next.
    • Review the details and click Create.
  5. Once your hub is created, the portal will open its details page. Click the Create Project button.

    • Give your project a name (e.g., "GenAIJavaScript") or accept the default.
    • Click Create.

🎉 Done! You’ve just created your first project in Azure AI Foundry.

Before you can interact with the model, you need to deploy it to your project, so let's do that next.

-2- Deploy a Language Model in Azure AI Foundry

Now, let’s deploy a gpt-4o-mini model to your project:

  1. In the Azure AI Foundry portal, navigate to your project (it should automatically open after creating it).
  2. Click on Models and Endpoints from the left-hand menu and then the Deploy Model button.
  3. Select Deploy base model from the dropdown.
  4. Search for gpt-4o-mini in the model catalog.
  5. Select the model and click the Confirm button.
  6. Specify a deployment name (e.g., "gpt-4o-mini"). You can leave the rest of the options as they are.
  7. Click Deploy and wait for the model to be provisioned.
  8. Once deployed, make a note of the values for Model Name, Target URI, and API Key from the model details page, as you will use them later in your sample code.

🎉 Done! You’ve deployed your first Large Language Model in Azure AI Foundry.

Note

The endpoint is similar to https://<your hub name>.openai.azure.com/openai/deployments/gpt-4o-mini/chat/completions?api-version=2024-08-01-preview. The endpoint value you need is only the base URL: https://<your hub name>.openai.azure.com/.
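
You'll use this endpoint, together with the API key, in the next section. One common approach, and the one the sample code below assumes, is to keep both values in a .env file that the dotenv package loads at startup. As a sketch, using the variable names read by the sample code, it might look like this:

    AZURE_OPENAI_ENDPOINT=https://<your hub name>.openai.azure.com/
    AZURE_OPENAI_API_KEY=<your API key>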

Adding Azure to your code

Now let’s update the code to use the newly deployed model. Here's the plan:

  • Install the needed libraries: @azure/openai and dotenv.
  • Update the code to use the values you noted above for the model name, endpoint, and API key.
  • Run the app.

A fully assembled version of app.js is shown after these steps for reference.
  1. Open the terminal and switch to the project directory (provided you're in the project root directory):

    cd app
  2. Run the following commands to add the required libraries:

    npm install @azure/openai dotenv
  3. At the top of app.js add the following imports:

    import { OpenAIClient, AzureKeyCredential } from "@azure/openai";
    import * as dotenv from "dotenv";
  4. Add the following code to load the environment variables:

    dotenv.config();
  5. Create new variables to hold the endpoint and API key:

    const endpoint = process.env.AZURE_OPENAI_ENDPOINT || '';
    const azureApiKey = process.env.AZURE_OPENAI_API_KEY || '';
  6. Create a client to interact with the model:

    const client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));
    const deploymentName = '<replace with your deployment name>';

    The code above creates a new client using the endpoint and API key you noted earlier and assigns the deployment name to a variable.

  7. Create a prompt and chat messages:

    const promptText = `Tell me about yourself.`;
    
    const chatMessages = [
        {
            role: 'system',
            content: "You're Ada Lovelace, a mathematician and writer. You're considered the first computer programmer. You only know about the time you lived in, so you might not know about modern technology.",
        },
        {
            role: 'user',
            content: promptText
        },
    ];

    Now you have a prompt and chat messages to send to the model. Let's send the messages and get the response:

    const completionResponse = await client.getChatCompletions(deploymentName, chatMessages, {
        maxTokens: 150,
        temperature: 0.1,
    });

    console.log("Ada says: ");
    console.log(completionResponse.choices[0].message?.content);
  8. Run the app:

    npm start
  9. You should see output similar to the following:

    Ada says: I'm Ada Lovelace, a mathematician and writer. I'm considered the first computer programmer. I only know about the time I lived in, so I might not know about modern technology. 
    

🙋 Need help?: Something not working? Open an issue and we'll help you out.
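
For reference, here is a minimal sketch of the fully assembled app.js, put together from the snippets in the steps above. It assumes your project runs as an ES module (so that import and top-level await work), that your .env file defines AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY, and that you substitute your own deployment name.

    // app.js, assembled from the snippets in the steps above.
    import { OpenAIClient, AzureKeyCredential } from "@azure/openai";
    import * as dotenv from "dotenv";

    // Load AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY from the .env file.
    dotenv.config();

    const endpoint = process.env.AZURE_OPENAI_ENDPOINT || '';
    const azureApiKey = process.env.AZURE_OPENAI_API_KEY || '';

    // Create a client for your Azure OpenAI endpoint and reference your deployment by name.
    const client = new OpenAIClient(endpoint, new AzureKeyCredential(azureApiKey));
    const deploymentName = '<replace with your deployment name>';

    // A system message sets up the Ada Lovelace persona; the user message carries the prompt.
    const promptText = `Tell me about yourself.`;
    const chatMessages = [
        {
            role: 'system',
            content: "You're Ada Lovelace, a mathematician and writer. You're considered the first computer programmer. You only know about the time you lived in, so you might not know about modern technology.",
        },
        {
            role: 'user',
            content: promptText,
        },
    ];

    // Send the messages to the deployed model and print the reply.
    const completionResponse = await client.getChatCompletions(deploymentName, chatMessages, {
        maxTokens: 150,
        temperature: 0.1,
    });

    console.log("Ada says: ");
    console.log(completionResponse.choices[0].message?.content);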

Summary

In this lesson, you learned how to create a hub and project in Azure AI Foundry, deploy a model to your project, and interact with the model in your code.

Additional Resources