All prerequisites for this sample are required, so that you can run it both locally and with Azure OpenAI.
- Clone the sample repo on your machine: https://github.com/Azure-Samples/langchainjs-quickstart-demo
- Open the code in VS Code
- Create the Azure resources needed for the sample, following the instructions in the README file, and fill in the `.env` file with the required information.
- Open the `local.js` file in the editor (a sketch of what such a local prototype typically looks like follows this list).
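
To make the step-by-step walkthrough of `local.js` easier to rehearse, here is a rough, representative sketch of a fully local LangChain.js RAG prototype. It is not the repo's actual code: it assumes the `@langchain/ollama` integrations, an in-memory vector store, and a couple of hard-coded documents standing in for whatever the sample really ingests.

```js
// Representative sketch only, not the sample's actual local.js.
// Assumes Ollama is running locally and @langchain/ollama + langchain are installed.
import { ChatOllama, OllamaEmbeddings } from "@langchain/ollama";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";

// Local chat model and embeddings (model names are placeholders).
const model = new ChatOllama({ model: "llama3" });
const embeddings = new OllamaEmbeddings({ model: "nomic-embed-text" });

// Stand-in documents; the real demo loads and splits its own data.
const documents = [
  new Document({ pageContent: "LangChain.js composes LLM apps from models, vector stores and prompts." }),
  new Document({ pageContent: "A RAG workflow retrieves relevant chunks and feeds them to the model." }),
];

// Embed everything into an in-memory vector store.
const vectorStore = await MemoryVectorStore.fromDocuments(documents, embeddings);

// Retrieve context for a question and ask the model to answer from it.
const question = "What does a RAG workflow do?";
const hits = await vectorStore.similaritySearch(question, 2);
const context = hits.map((doc) => doc.pageContent).join("\n");

const response = await model.invoke([
  ["system", `Answer using only the following context:\n${context}`],
  ["human", question],
]);
console.log(response.content);
```

The Azure migration in the demo only touches the imports, the model/embeddings initialization, and where the vectors are stored; the retrieval and answering logic stays the same.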
- Open the code from this repo in VS Code: https://github.com/Azure-Samples/langchainjs-quickstart-demo
- Open `local.js` and explain the code step by step.
- Run the demo using the play button.
- Update the prototype to use Azure (see the migration sketches after this list):
  - Replace the model/db imports (use the `imp` snippet).
  - Replace the model init section (use Copilot, or the `newai` snippet for the AI Search part).
  - Explain that the models are defined in the `.env` file.
  - Replace the embeddings part:
    - Use the `sea` snippet to first check if documents are already indexed.
    - Use the `add` snippet (or use Copilot) to complete the embedding part.
- Run the demo again using the play button.
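
The following sketch shows the kind of change the `imp` and `newai` snippets make when switching the model/db setup to Azure. The exact snippet contents are in the repo, and the environment variable names mentioned in the comments are the ones the LangChain.js Azure integrations typically read, so double-check them against your `.env`.

```js
// Sketch of the Azure variant of the imports and model init (the `imp`/`newai` snippets).
// Credentials and deployments come from .env, e.g. AZURE_OPENAI_API_KEY,
// AZURE_OPENAI_API_INSTANCE_NAME, the chat/embeddings deployment names, and the
// Azure AI Search endpoint and key (exact variable names may differ in the sample).
import { AzureChatOpenAI, AzureOpenAIEmbeddings } from "@langchain/openai";
import {
  AzureAISearchVectorStore,
  AzureAISearchQueryType,
} from "@langchain/community/vectorstores/azure_aisearch";

// Azure OpenAI replaces the local Ollama model and embeddings.
const model = new AzureChatOpenAI();
const embeddings = new AzureOpenAIEmbeddings();

// Azure AI Search replaces the local vector store.
const vectorStore = new AzureAISearchVectorStore(embeddings, {
  search: { type: AzureAISearchQueryType.Similarity },
});
```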
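For the "replace the embeddings part" step, the idea behind the `sea` and `add` snippets is to avoid re-indexing on every run. The real snippets are in the repo; continuing the sketch above, a simple way to approximate the check is to probe the index before adding documents:

```js
// Only embed and upload documents when the Azure AI Search index has nothing in it yet
// (an approximation of the `sea` + `add` snippets; the sample's actual check may differ).
const probe = await vectorStore.similaritySearch("test", 1);
if (probe.length === 0) {
  await vectorStore.addDocuments(documents);
}
```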
We did not have to change any of the workflow logic; as you've seen, the migration is quite straightforward. Now I can take this code and wrap it in a serverless function to deploy it to Azure.
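
That last step could look roughly like the sketch below, assuming the Azure Functions Node.js v4 programming model and a hypothetical `askQuestion()` module that wraps the chain from the demo.

```js
// Minimal sketch of exposing the RAG workflow as an HTTP-triggered Azure Function
// (Node.js v4 programming model). `askQuestion` is a hypothetical wrapper around
// the chain shown earlier, not part of the sample walkthrough above.
import { app } from "@azure/functions";
import { askQuestion } from "./lib/chain.js"; // hypothetical module

app.http("ask", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: async (request) => {
    const { question } = await request.json();
    const answer = await askQuestion(question);
    return { jsonBody: { answer } };
  },
});
```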