correct spelling mistakes #40

Merged · 5 commits · Aug 19, 2024
2 changes: 1 addition & 1 deletion Common_use_cases.ipynb
@@ -189,7 +189,7 @@
"* Token to indicate the beginning of dialogue turn: `<start_of_turn>`\n",
"* Token to indicate the end of dialogue turn: `<end_of_turn>`\n",
"\n",
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding promping instruction-tuned models."
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding prompting instruction-tuned models."
]
},
{
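For reference, the dialogue-turn tokens this notebook text describes compose into a prompt like the sketch below (plain Python string building; the helper name format_gemma_prompt is illustrative and not part of the notebooks):

def format_gemma_prompt(user_message: str) -> str:
    """Wrap a single user turn in Gemma's instruction-tuned chat format."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# The model is expected to complete the open model turn and finish with <end_of_turn>.
prompt = format_gemma_prompt("Summarize the Gemma formatting rules in one sentence.")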
2 changes: 1 addition & 1 deletion Gemma/Advanced_Prompting_Techniques.ipynb
@@ -181,7 +181,7 @@
"* Token to indicate the beginning of dialogue turn: `<start_of_turn>`\n",
"* Token to indicate the end of dialogue turn: `<end_of_turn>`\n",
"\n",
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding promping instruction-tuned models."
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding prompting instruction-tuned models."
]
},
{
2 changes: 1 addition & 1 deletion Gemma/Prompt_chaining.ipynb
@@ -189,7 +189,7 @@
"\n",
"Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.\n",
"\n",
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding promping instruction-tuned models."
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding prompting instruction-tuned models."
]
},
{
4 changes: 2 additions & 2 deletions Gemma/Using_Gemma_with_LangChain.ipynb
@@ -162,7 +162,7 @@
"* Token to indicate the beginning of dialogue turn: `<start_of_turn>`\n",
"* Token to indicate the end of dialogue turn: `<end_of_turn>`\n",
"\n",
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding promping instruction-tuned models."
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding prompting instruction-tuned models."
]
},
{
@@ -621,7 +621,7 @@
"# Create an actual chain\n",
"\n",
"rag_chain = (\n",
" # First you need retrieve documnets that are relevant to the\n",
" # First you need retrieve documents that are relevant to the\n",
" # given query\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" # The output is passed the prompt and fills fields like `{question}`\n",
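The hunk above ends mid-chain; for context, an LCEL chain of this shape typically continues roughly as follows (a sketch, assuming retriever, format_docs, prompt, and llm are defined earlier in the notebook, as is usual in LangChain RAG examples):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

rag_chain = (
    # Retrieve documents relevant to the query and format them into a context string
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    # Fill prompt template fields such as `{context}` and `{question}`
    | prompt
    # Generate an answer with the model
    | llm
    # Convert the model output into a plain string
    | StrOutputParser()
)

# Usage: answer = rag_chain.invoke("What tokens delimit a Gemma dialogue turn?")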