vertex quickstart (#563)
* vertex quickstart

* pin versions to b4 openai

* clear outputs

* colab name
joshreini1 authored Nov 16, 2023
1 parent ddb2841 commit 66a17c6
Showing 3 changed files with 288 additions and 2 deletions.
@@ -9,7 +9,7 @@
"\n",
"Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems. Through our LiteLLM integration, you are able to easily run feedback functions with Anthropic's Claude and Claude Instant.\n",
"\n",
-"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/models/anthropic.ipynb)"
+"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/models/anthropic_quickstart.ipynb)"
]
},
{
@@ -18,7 +18,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# ! pip install anthropic trulens_eval langchain"
+"# ! pip install anthropic trulens_eval==0.17.0 langchain==0.0.323"
]
},
{
@@ -0,0 +1,277 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Vertex\n",
"\n",
    "In this quickstart you will learn how to run evaluation functions using models from Google Vertex AI, such as PaLM 2.\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/expositional/models/google_vertex_quickstart.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#! pip install google-cloud-aiplatform==1.36.3 litellm==0.14.1 trulens_eval==0.17.0 langchain==0.0.323"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Authentication"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from google.cloud import aiplatform"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"aiplatform.init(\n",
" project = \"...\",\n",
" location=\"us-central1\"\n",
")"
]
},
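The project id in the `aiplatform.init` call above is elided. One common pattern, shown here as an illustrative sketch rather than part of the notebook (the variable names are assumptions), is to resolve the project and region from environment variables so credentials never live in the notebook source:

```python
import os

def vertex_settings():
    # Hypothetical helper: read Vertex AI settings from the environment,
    # falling back to the region used in the cell above.
    project = os.environ.get("GOOGLE_CLOUD_PROJECT")  # e.g. set by gcloud or CI
    location = os.environ.get("GOOGLE_CLOUD_REGION", "us-central1")
    if project is None:
        raise RuntimeError("Set GOOGLE_CLOUD_PROJECT before calling aiplatform.init")
    return project, location
```

The tuple this returns can then be unpacked into `aiplatform.init(project=..., location=...)`.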
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import from LangChain and TruLens"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import JSON\n",
"\n",
"# Imports main tools:\n",
"from trulens_eval import TruChain, Feedback, Tru, LiteLLM\n",
"tru = Tru()\n",
"tru.reset_database()\n",
"\n",
"\n",
"# Imports from langchain to build app. You may need to install langchain first\n",
"# with the following:\n",
"# ! pip install langchain>=0.0.170\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import VertexAI\n",
"from langchain.prompts.chat import ChatPromptTemplate, PromptTemplate\n",
"from langchain.prompts.chat import HumanMessagePromptTemplate"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Simple LLM Application\n",
"\n",
    "This example uses the LangChain framework and a Google Vertex AI LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"full_prompt = HumanMessagePromptTemplate(\n",
" prompt=PromptTemplate(\n",
" template=\n",
" \"Provide a helpful response with relevant background information for the following: {prompt}\",\n",
" input_variables=[\"prompt\"],\n",
" )\n",
")\n",
"\n",
"chat_prompt_template = ChatPromptTemplate.from_messages([full_prompt])\n",
"\n",
"llm = VertexAI()\n",
"\n",
"chain = LLMChain(llm=llm, prompt=chat_prompt_template, verbose=True)"
]
},
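The `PromptTemplate` in the cell above has a single input variable, `{prompt}`. For intuition, the substitution it performs is equivalent to plain Python string formatting; this is a sketch of the behavior, not the LangChain implementation:

```python
# The same template string used in the cell above.
TEMPLATE = ("Provide a helpful response with relevant background "
            "information for the following: {prompt}")

def render(prompt: str) -> str:
    # PromptTemplate fills {prompt} much like str.format does here.
    return TEMPLATE.format(prompt=prompt)
```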
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Send your first request"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"prompt_input = 'What is a good name for a store that sells colorful socks?'"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm_response = chain(prompt_input)\n",
"\n",
"display(llm_response)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialize Feedback Function(s)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initialize LiteLLM-based feedback function collection class:\n",
"litellm = LiteLLM(model_engine=\"chat-bison\")\n",
"\n",
"# Define a relevance function using LiteLLM\n",
"relevance = Feedback(litellm.relevance_with_cot_reasons).on_input_output()\n",
"# By default this will check relevance on the main app input and main app\n",
"# output."
]
},
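Conceptually, a feedback function maps an (input, output) pair to a score in [0, 1], and the `_with_cot_reasons` variants also return a chain-of-thought explanation. A toy stand-in with no LLM involved, purely to illustrate the shape of the callable (the real `relevance_with_cot_reasons` asks the chat-bison model to judge):

```python
def toy_relevance(question: str, answer: str) -> float:
    """Toy relevance score: fraction of question words echoed in the answer.

    Illustrative only -- real LiteLLM feedback functions delegate the
    judgment to an LLM rather than using word overlap.
    """
    q_words = set(question.lower().split())
    a_words = set(answer.lower().split())
    if not q_words:
        return 0.0
    return len(q_words & a_words) / len(q_words)
```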
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instrument chain for logging with TruLens"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tru_recorder = TruChain(chain,\n",
" app_id='Chain1_ChatApplication',\n",
" feedbacks=[relevance])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with tru_recorder as recording:\n",
" llm_response = chain(prompt_input)\n",
"\n",
"display(llm_response)"
]
},
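The `with tru_recorder as recording:` pattern wraps the chain call so that inputs and outputs are captured as a record while the app runs normally. A minimal sketch of that instrumentation idea (not the `TruChain` implementation; here the call goes through the recorder itself):

```python
class ToyRecorder:
    """Minimal stand-in for context-manager based call recording."""

    def __init__(self, app):
        self.app = app
        self.records = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False  # do not swallow exceptions raised inside the block

    def __call__(self, prompt):
        # Invoke the wrapped app and log the (input, output) pair.
        result = self.app(prompt)
        self.records.append({"input": prompt, "output": result})
        return result
```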
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tru.get_records_and_feedback(app_ids=[])[0]"
]
},
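In the call above, passing `app_ids=[]` selects records from every app. The filtering semantics can be sketched on toy data (the record fields and helper name here are illustrative, not the TruLens API):

```python
def filter_records(records, app_ids):
    # An empty app_ids list means "all apps", mirroring the cell above.
    if not app_ids:
        return list(records)
    return [r for r in records if r["app_id"] in app_ids]
```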
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Explore in a Dashboard"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tru.run_dashboard() # open a local streamlit app to explore\n",
"\n",
"# tru.stop_dashboard() # stop if needed"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
    "Alternatively, you can run `trulens-eval` from the command line in the same folder to start the dashboard."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Or view results directly in your notebook"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tru.get_records_and_feedback(app_ids=[])[0] # pass an empty list of app_ids to get all"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
},
"vscode": {
"interpreter": {
"hash": "d5737f6101ac92451320b0e41890107145710b89f85909f3780d702e7818f973"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -14,6 +14,15 @@
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/truera/trulens/blob/main/trulens_eval/examples/expositional/models/litellm_quickstart.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#! pip install openai==0.28.1 litellm==0.14.1 trulens_eval==0.17.0 langchain==0.0.323"
]
},
{
"attachments": {},
"cell_type": "markdown",
