From a036811950e4a80ac0856ccd3505eb19e10da9ed Mon Sep 17 00:00:00 2001
From: HamidShojanazeri
Date: Wed, 24 Apr 2024 09:59:33 -0700
Subject: [PATCH 01/35] adding chatbot-e2e

---
 .../end2end-recipes/chatbot/README.md         | 212 ++++++++++++++++++
 .../chatbot/data_pipelines/REAME.md           |  32 +++
 .../chatbot/data_pipelines/config.py          |  13 ++
 .../chatbot/data_pipelines/config.yaml        |  30 +++
 .../chatbot/data_pipelines/doc_processor.py   |  47 ++++
 .../generate_question_answers.py              | 103 +++++++++
 .../chatbot/data_pipelines/generator_utils.py | 121 ++++++++++
 .../chatbot/eval-loss-3runs.png               | Bin 0 -> 52645 bytes
 .../end2end-recipes/chatbot/poor-test-1.png   | Bin 0 -> 405225 bytes
 .../end2end-recipes/chatbot/poor-test-2.png   | Bin 0 -> 439497 bytes
 .../chatbot/train-loss-3runs.png              | Bin 0 -> 51915 bytes
 11 files changed, 558 insertions(+)
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/README.md
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/eval-loss-3runs.png
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/poor-test-1.png
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/poor-test-2.png
 create mode 100644 recipes/use_cases/end2end-recipes/chatbot/train-loss-3runs.png

diff --git a/recipes/use_cases/end2end-recipes/chatbot/README.md b/recipes/use_cases/end2end-recipes/chatbot/README.md
new file mode 100644
index 000000000..de992d311
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/README.md
@@ -0,0 +1,212 @@
+## Introduction
+
+Large language models (LLMs) have emerged as groundbreaking tools, capable of understanding and generating human-like text. These models power many of today's advanced chatbots, providing more natural and engaging user experiences. But how do we create these intelligent systems?
+
+Here, we aim to build an FAQ model for Llama that is able to answer questions about Llama, by fine-tuning Llama 2 7B chat on existing official Llama documents.
+
+### Fine-tuning Process
+
+Fine-tuning an LLM, here Llama 2, involves several key steps: data collection, preprocessing, fine-tuning, and evaluation.
+
+### LLM Generated Datasets
+
+As chatbots are usually domain specific and built on public or proprietary data, one common approach, inspired by the [self-instruct paper](https://arxiv.org/abs/2212.10560), is to use an LLM to help build the dataset from our data. For example, to build an FAQ model, we can use a Llama model to process our documents and help us build question and answer pairs (we will showcase this here). Keep in mind that most proprietary LLMs have a clause in their license that forbids using the output generated by the model to train another LLM. Here, we will use Llama to fine-tune another Llama model.
+
+Similarly, we will use the same LLM to evaluate the quality of the generated dataset and, finally, to evaluate the outputs of the fine-tuned model.
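+
+As an illustration of the evaluation idea, here is a minimal sketch of using an LLM as a judge to score generated Q&A pairs. The `chat_complete` helper and the grading prompt are hypothetical placeholders, not part of this recipe's pipeline; swap in whichever chat-completion client you use.
+
+```python
+import json
+
+JUDGE_PROMPT = (
+    "You are a strict grader. Given a question and an answer about Llama, "
+    "rate the pair's factual accuracy and usefulness from 1 to 5. "
+    'Reply with a single JSON object like {"score": 4}.'
+)
+
+def chat_complete(system: str, user: str) -> str:
+    # Placeholder: call your chat-completion API of choice here.
+    raise NotImplementedError
+
+def judge_pair(question: str, answer: str) -> int:
+    """Ask the judge model to score one generated Q&A pair."""
+    reply = chat_complete(JUDGE_PROMPT, f"Question: {question}\nAnswer: {answer}")
+    return json.loads(reply)["score"]
+
+def filter_pairs(pairs: list[dict]) -> list[dict]:
+    """Keep only the pairs the judge scores 4 or higher."""
+    return [p for p in pairs if judge_pair(p["question"], p["answer"]) >= 4]
+```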
+
+Given this context, here we want to highlight some of the best practices that need to be in place for data collection and preprocessing in general.
+
+### **Data Collection & Preprocessing:**
+
+Gathering a diverse and comprehensive dataset is crucial. This dataset should include a wide range of topics and conversational styles to ensure the model can handle various subjects. Recent [research](https://arxiv.org/pdf/2305.11206.pdf) shows that the quality of data is far more important than its quantity. Here are some high-level thoughts on data collection and preprocessing along with best practices:
+
+**NOTE**: Data collection and processing is very use-case specific; we can only share general best practices here, and the details will be nuanced for each use case.
+
+- Source Identification: Identify the sources your FAQs are coming from. This could include websites, customer service transcripts, emails, forums, and product manuals. Prioritize sources that reflect the real questions your users are asking.
+
+- Diversity and Coverage: Ensure your data covers a wide range of topics relevant to your domain. It's crucial to include variations in how questions are phrased to make your model robust to different wording.
+
+- Volume: The amount of data needed depends on the complexity of the task and the variability of the language in your domain. Generally, more data leads to a better-performing model, but aim for high-quality, relevant data.
+
+Here, we are going to use the [self-instruct](https://arxiv.org/abs/2212.10560) idea and a Llama model to build our dataset; for details please check this [doc](./data_pipelines/REAME.md).
+
+**Things to keep in mind**
+
+- **Pretraining Data as the Foundation**: Pretraining data is crucial for developing foundational models, influencing both their strengths and potential weaknesses. Fine-tuning data refines specific model capabilities and, through instruction fine-tuning or alignment training, enhances general usability and safety.
+
+- **Quality Over Quantity**: More data doesn't necessarily mean better results. It's vital to select data carefully and perform manual inspections to ensure it aligns with your project's aims.
+
+- **Considerations for Dataset Selection**: Selecting a dataset requires considering various factors, including language and dialect coverage, topics, tasks, diversity, quality, and representation.
+
+- **Impact of Implicit Dataset Modifications**: Most datasets undergo implicit changes during selection, filtering, and formatting. These preprocessing steps can significantly affect model performance, so they should not be overlooked.
+
+- **Fine-tuning Data's Dual-Edged Sword**: Fine-tuning can improve or impair model capabilities. Make sure you know the nature of your data so that you can make an informed selection.
+
+- **Navigating Dataset Limitations**: The perfect dataset for a specific task may not exist. Be mindful of the limitations when choosing from available resources, and understand the potential impact on your project.
+
+#### **Best Practices for Fine-Tuning Data Preparation**
+
+- **Enhancing Understanding with Analysis Tools**: Utilizing tools for searching and analyzing data is crucial for developers to gain a deeper insight into their datasets. This understanding is key to predicting model behavior, a critical yet often overlooked phase in model development.
+
+- **The Impact of Data Cleaning and Filtering**: Data cleaning and filtering significantly influence model characteristics, yet there is no universal solution that fits every scenario. Our guidance includes filtering recommendations tailored to the specific applications and communities your model aims to serve.
+
+- **Data Mixing from Multiple Sources**: When training models with data from various sources or domains, the proportion of data from each domain (data mixing) can greatly affect downstream performance. It's a common strategy to prioritize "high-quality" data domains—those with content written by humans and subjected to an editing process, like Wikipedia and books. However, data mixing is an evolving field of research, with best practices still under development.
+
+- **Benefits of Removing Duplicate Data**: Eliminating duplicated data from your dataset can lessen unwanted memorization and enhance training efficiency.
+
+- **The Importance of Dataset Decontamination**: It's crucial to meticulously decontaminate training datasets by excluding data from evaluation benchmarks. This ensures the model's capabilities are accurately assessed.
+
+**Data Exploration and Analysis**
+
+- Gaining Insights through Dataset Exploration: Leveraging search and analysis tools to explore training datasets enables us to cultivate a refined understanding of the data's contents, which in turn influences the models. Direct interaction with the data often reveals complexities that are challenging to convey or that might not be captured in the documentation.
+
+- Understanding Data Complexity: Data, especially text, encompasses a wide array of characteristics such as length distribution, topics, tones, formats, licensing, and diction. These elements are crucial for understanding the dataset but are not easily summarized without thorough examination.
+
+- Utilizing Available Tools: We encourage you to take advantage of the numerous tools at your disposal for searching and analyzing your training datasets, facilitating a deeper comprehension and more informed model development.
+
+**Tools**
+
+- [wimbd](https://github.com/allenai/wimbd) for data analysis.
+- TBD
+
+**Data Cleaning**
+
+Purpose of Filtering and Cleaning: The process of filtering and cleaning is essential for eliminating unnecessary data from your dataset. This not only boosts the efficiency of model training but also ensures the data exhibits preferred characteristics such as high informational value, coverage of target languages, low levels of toxicity, and minimal presence of personally identifiable information.
+
+Considering Trade-offs: We recommend carefully weighing the potential trade-offs of certain filters; they may impact the diversity of your data, for example by [removing text about minority individuals](https://arxiv.org/abs/2104.08758).
+
+**Tools**
+- [OpenRefine](https://github.com/OpenRefine/OpenRefine?tab=readme-ov-file) (formerly Google Refine): A standalone open-source desktop application for data cleanup and transformation to other formats. It's particularly good for working with messy data, including data format transformations and cleaning.
+
+- [FUN-Langid](https://github.com/google-research/url-nlp/tree/main/fun-langid): A simple, character 4-gram LangID classifier recognizing up to 1633 languages.
+
+- Dask: Similar to Pandas, Dask is designed for parallel computing and works efficiently with large datasets. It can be used for data cleaning, transformations, and more, leveraging multiple CPUs or distributed systems.
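+
+To make the trade-offs concrete, below is a minimal, illustrative cleaning pass using simple heuristics: strip HTML tags, drop very short documents, and drop documents with a high non-ASCII ratio as a crude language proxy. The thresholds and helper names here are hypothetical; a real pipeline would use a proper language-ID model such as the classifiers above.
+
+```python
+import re
+
+HTML_TAG = re.compile(r"<[^>]+>")
+
+def clean_document(text: str) -> str:
+    """Strip HTML tags and collapse whitespace."""
+    text = HTML_TAG.sub(" ", text)
+    return re.sub(r"\s+", " ", text).strip()
+
+def keep_document(text: str, min_chars: int = 200, max_non_ascii: float = 0.2) -> bool:
+    """Heuristic filter: drop very short docs and docs that look non-English."""
+    if len(text) < min_chars:
+        return False
+    non_ascii_ratio = sum(1 for ch in text if ord(ch) > 127) / len(text)
+    return non_ascii_ratio <= max_non_ascii
+
+def clean_corpus(docs: list[str]) -> list[str]:
+    """Clean every document, then keep only the ones that pass the filter."""
+    cleaned = (clean_document(d) for d in docs)
+    return [d for d in cleaned if keep_document(d)]
+```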
+
+**Data Deduplication**
+
+- **Importance of Data Deduplication**: Data deduplication is an important preprocessing step that eliminates duplicate documents, or duplicated segments within a document, from the dataset. This process helps minimize the model's chance of memorizing unwanted information, including generic text, copyrighted content, and personally identifiable details.
+
+- **Benefits of Removing Duplicates**: Aside from mitigating the risk of undesirable memorization, deduplication enhances training efficiency by decreasing the overall size of the dataset. This streamlined dataset contributes to a more effective and resource-efficient model training process.
+
+- **Assessing the Impact of Duplicates**: You need to carefully evaluate the influence of duplicated data on your specific model use case. Memorization may be beneficial for models designed for closed-book question answering and, similarly, for chatbots.
+
+**Tools**
+
+- [thefuzz](https://github.com/seatgeek/thefuzz): Uses Levenshtein distance to calculate the differences between sequences in a simple-to-use package.
+- [recordlinkage](https://github.com/J535D165/recordlinkage): A modular record-linkage toolkit to link records within or between data sources.
+
+**Data Decontamination**
+
+The process involves eliminating evaluation data from the training dataset. This crucial preprocessing step maintains the accuracy of model evaluation, guaranteeing that performance metrics are trustworthy and not skewed.
+
+**Tools**
+- TBD
+
+### **Llama FAQ Use-Case**
+
+1. **Data Collection**
+Here, we are going to use the self-instruct idea and a Llama model to build our dataset; for details please check this [doc](./data_pipelines/REAME.md).
+
+2. **Data Formatting**
+
+For an FAQ model, you need to format your data in a way that's conducive to learning question-answer relationships. A common format is the question-answer (QA) pair:
+
+Question-Answer Pairing: Organize your data into pairs where each question is directly followed by its answer. This simple structure is highly effective for training models to understand and generate responses. For example:
+
+```json
+"question": "What is Llama 2?",
+"answer": "Llama 2 is a collection of pretrained and fine-tuned large language models ranging from 7 billion to 70 billion parameters, optimized for dialogue use cases."
+```
+
+3. **Preprocessing:** This step involves cleaning the data and preparing it for training. It might include removing irrelevant information, correcting errors, and splitting the data into training and evaluation sets.
+
+4. **Fine-Tuning:** Given that we have a selected pretrained model, in this case Llama 2 7B chat, fine-tuning with more specific data can improve its performance on particular tasks, such as answering questions about Llama.
+
+#### Building the Dataset
+
+During the self-instruct process of generating Q&A pairs from documents, we realized that, with our system prompt being
+
+```
+You are a language model skilled in creating quiz questions.
+You will be provided with a document,
+read it and generate question and answer pairs
+that are most likely to be asked by a user of Llama who is just getting started,
+please make sure you follow these rules,
+1. Generate only {total_questions} question answer pairs.
+2. Generate in {language}.
+3. The questions can be answered based *solely* on the given passage.
+4. Avoid asking questions with similar meaning.
+5. Make the answer as concise as possible, it should be at most 60 words.
+6. Provide relevant links from the document to support the answer.
+7. Never use any abbreviation.
+8. Return the result in json format with the template:
+  [
+    {{
+      "question": "your question A.",
+      "answer": "your answer to question A."
+    }},
+    {{
+      "question": "your question B.",
+      "answer": "your answer to question B."
+    }}
+  ]
+```
+
+the model tends to leave out the bigger picture in its questions. For example, below are Q&A pairs generated from reading the Code Llama paper. This happens partly because, given the context window size of the model, we have to divide the document into smaller chunks, so the model uses `described in the passage` or `according to the passage?` in the question instead of linking it back to Code Llama.
+
+```json
+{
+    "question": "What is the purpose of the transformation described in the passage?",
+    "answer": "The transformation is used to create documents with a prefix, middle part, and suffix for infilling training."
+},
+{
+    "question": "What is the focus of research in transformer-based language modeling, according to the passage?",
+    "answer": "The focus of research is on effective handling of long sequences, specifically extrapolation and reducing the quadratic complexity of attention passes."
+},
+```
+
+#### Data Insights
+
+We generated a dataset of almost 650 Q&A pairs from some of the open-source documents about Llama 2, including the getting-started guide from the Llama website, its FAQ, the Llama 2, Purple Llama, and Code Llama papers, and the Llama-Recipes documentation.
+
+We ran some fine-tuning experiments on a single GPU using quantization, with different LoRA configs (all linear layers versus query and key projections only) and different numbers of epochs. Although the train and eval losses decrease, especially when applying LoRA to all linear layers and training for 6 epochs, the results are still far from acceptable in real tests.
+
+Here is how the losses of the three runs look.
+
+![Eval Loss](./eval-loss-3runs.png)
+![Train Loss](./train-loss-3runs.png)
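+
+As a concrete illustration of the two LoRA configurations compared above, here is a minimal sketch using the `peft` library. The rank, alpha, and dropout values shown are placeholders, not the exact hyperparameters used in these runs.
+
+```python
+from peft import LoraConfig
+
+# Variant 1: adapt only the query and key projections.
+lora_qk_only = LoraConfig(
+    r=8,
+    lora_alpha=32,
+    lora_dropout=0.05,
+    target_modules=["q_proj", "k_proj"],
+    task_type="CAUSAL_LM",
+)
+
+# Variant 2: adapt all linear layers in each Llama block.
+lora_all_linear = LoraConfig(
+    r=8,
+    lora_alpha=32,
+    lora_dropout=0.05,
+    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                    "gate_proj", "up_proj", "down_proj"],
+    task_type="CAUSAL_LM",
+)
+```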
+
+##### Low-Quality Dataset
+
+Below are some examples of real tests on the fine-tuned model, with very poor results. The fine-tuned model does not show any promising results with this dataset. Looking at the dataset, we observed that the amount of data (Q&A pairs) for each concept, such as PyTorch FSDP and Llama-Recipes, is very limited, almost one pair per concept. This shows a lack of relevant training data. Recent research showed that having 2-3 examples from each taxonomy can yield promising results.
+
+![Poor Test Results example 1](./poor-test-1.png)
+![Poor Test Results example 2](./poor-test-2.png)
+
+Next, we are looking into augmenting our datasets. One way to do so is to use our Llama 70B model to read our question-answer pairs and come up with two paraphrased versions of each pair to augment our data.
+
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md
new file mode 100644
index 000000000..efdd22231
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md
@@ -0,0 +1,32 @@
+## Data Preprocessing Steps
+
+### Step 1: Prepare related documents
+
+Download all your desired docs in PDF, text, or markdown format to the "data" folder.
+
+In this case we have an example of the [Llama 2 Getting Started guide](https://llama.meta.com/get-started/) and other Llama-related documents, such as the Llama 2, Purple Llama, and Code Llama papers, along with the Llama FAQ. Ideally, we would have searched out all Llama documents across the web and followed the procedure below on them, but that would be very costly for the purposes of a tutorial, so we stick to our limited documents here.
+
+### Step 2: Prepare data (Q&A pairs)
+
+The idea here is to use Llama 70B, via the OctoAI APIs, to create question and answer (Q&A) pair datasets from these documents. This API could be replaced by an API from any other provider, or alternatively by an on-prem solution such as [TGI](../../../examples/hf_text_generation_inference/) or [vLLM](../../../examples/vllm/). Here we use the prompt in [./config.yaml](./config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. This is only one way to handle it; it is a popular method, but any other preprocessing routine that helps us produce the Q&A pairs works as well.
+
+**NOTE:** The data generated by these APIs/models needs to be vetted to ensure its quality.
+
+```bash
+export OCTOAI_API_TOKEN="OCTOAI_API_TOKEN"
+python generate_question_answers.py
+```
+
+**NOTE:** You need to be aware of the RPM (requests per minute), TPM (tokens per minute), and TPD (tokens per day) limits on your account when using any model API provider. In our case we had to process one document at a time, then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document, anywhere between 50-100. This is experimental and totally depends on your documents, the wealth of information in them, and how you prefer to handle the questions, short or longer answers, etc.
+
+### Step 3: Prepare the dataset for fine-tuning the Llama 2 Chat model
+
+Here, since we want to fine-tune a chatbot model, it is preferred to start with the Llama 2 Chat model, which is already instruction fine-tuned to serve as an assistant, and further fine-tune it on our Llama-related data. A minimal sketch of this formatting step follows below.
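+
+The sketch below illustrates one way to render the generated Q&A pairs into the Llama 2 chat prompt format before training. The system prompt here is a hypothetical placeholder, and the custom dataset module used in the next step may format samples differently.
+
+```python
+import json
+
+# Llama 2 chat template pieces.
+B_INST, E_INST = "[INST]", "[/INST]"
+B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"
+
+SYSTEM = "You are a helpful assistant that answers questions about Llama."
+
+def to_chat_sample(pair: dict) -> str:
+    """Render one Q&A pair as a single-turn Llama 2 chat training example."""
+    prompt = f"{B_INST} {B_SYS}{SYSTEM}{E_SYS}{pair['question'].strip()} {E_INST}"
+    return f"{prompt} {pair['answer'].strip()}"
+
+with open("dataset.json") as f:
+    pairs = json.load(f)
+
+samples = [to_chat_sample(p) for p in pairs]
+```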
+
+### Step 4: Run the training
+
+```bash
+torchrun --nnodes 1 --nproc_per_node 1 examples/finetuning.py --use_peft --peft_method lora --quantization --model_name meta-llama/Llama-2-7b-chat-hf --output_dir ./peft-7b-quantized --num_epochs 1 --batch_size 1 --dataset "custom_dataset" --custom_dataset.file "examples/llama_dataset.py" --run_validation False --custom_dataset.data_path './dataset.json'
+```
\ No newline at end of file
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
new file mode 100644
index 000000000..5f558008a
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
@@ -0,0 +1,13 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
+
+import yaml
+import os
+
+def load_config(config_path: str = "./config.yaml"):
+    # Read the YAML configuration file
+    with open(config_path, "r") as file:
+        config = yaml.safe_load(file)
+    # Set the API key from the environment variable
+    config["api_key"] = os.environ["OCTOAI_API_TOKEN"]
+    return config
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
new file mode 100644
index 000000000..7eeeb97dd
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
@@ -0,0 +1,30 @@
+question_prompt_template: >
+  You are a language model skilled in creating quiz questions.
+  You will be provided with a document,
+  read it and generate question and answer pairs
+  that are most likely to be asked by a user of Llama who is just getting started,
+  please make sure you follow these rules,
+  1. Generate only {total_questions} question answer pairs.
+  2. Generate in {language}.
+  3. The questions can be answered based *solely* on the given passage.
+  4. Avoid asking questions with similar meaning.
+  5. Make the answer as concise as possible, it should be at most 60 words.
+  6. Provide relevant links from the document to support the answer.
+  7. Never use any abbreviation.
+  8. Return the result in json format with the template:
+    [
+      {{
+        "question": "your question A.",
+        "answer": "your answer to question A."
+      }},
+      {{
+        "question": "your question B.",
+        "answer": "your answer to question B."
+      }}
+    ]
+
+data_dir: "./data"
+
+language: "English"
+
+total_questions: 2
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py
new file mode 100644
index 000000000..2fade43f6
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py
@@ -0,0 +1,47 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
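+
+# Utilities for splitting long source documents into token-bounded chunks so
+# that each chunk, plus the expected Q&A output, fits within the generation
+# model's context window.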
+
+# Assuming result_average_token is a constant, use UPPER_CASE for its name to follow Python conventions
+AVERAGE_TOKENS_PER_RESULT = 100
+
+# Context window sizes of the supported generation models.
+MODEL_TOKEN_LIMITS = {
+    "llama-2-70b-chat-fp16": 4096,
+    "llama-2-13b-chat-fp16": 4096,
+    "llama-2-13b-chat-turbo": 4096,
+}
+
+def get_token_limit_for_model(model: str) -> int:
+    """Returns the token limit for a given model, falling back to 4096."""
+    return MODEL_TOKEN_LIMITS.get(model, 4096)
+
+
+def calculate_num_tokens_for_message(encoded_text) -> int:
+    """Calculates the number of tokens used by a message."""
+    # Added 3 to account for priming with assistant's reply, as per original comment
+    return len(encoded_text) + 3
+
+
+def split_text_into_chunks(context: dict, text: str, tokenizer) -> list[str]:
+    """Splits a long text into substrings based on token length constraints, adjusted for question generation."""
+    # Adjusted approach to calculate max tokens available for text chunks
+    encoded_text = tokenizer(text, return_tensors="pt", padding=True)["input_ids"]
+    encoded_text = encoded_text.squeeze()
+    model_token_limit = get_token_limit_for_model(context["model"])
+
+    tokens_for_questions = calculate_num_tokens_for_message(encoded_text)
+    estimated_tokens_per_question = AVERAGE_TOKENS_PER_RESULT
+    estimated_total_question_tokens = estimated_tokens_per_question * context["total_questions"]
+    # Ensure there's a reasonable minimum chunk size
+    max_tokens_for_text = max(model_token_limit - tokens_for_questions - estimated_total_question_tokens, model_token_limit // 10)
+
+    chunks, current_chunk = [], []
+    print(f"Splitting text into chunks of {max_tokens_for_text} tokens, encoded_text {len(encoded_text)}", flush=True)
+    for token in encoded_text:
+        if len(current_chunk) >= max_tokens_for_text:
+            chunks.append(tokenizer.decode(current_chunk).strip())
+            # Start the next chunk with the current token so it is not dropped.
+            current_chunk = [token]
+        else:
+            current_chunk.append(token)
+
+    if current_chunk:
+        chunks.append(tokenizer.decode(current_chunk).strip())
+
+    print(f"Number of chunks in the processed text: {len(chunks)}", flush=True)
+
+    return chunks
\ No newline at end of file
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py
new file mode 100644
index 000000000..161fd8642
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py
@@ -0,0 +1,103 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
+
+import argparse
+import asyncio
+import json
+import logging
+from abc import ABC, abstractmethod
+from functools import partial
+from itertools import chain
+
+import aiofiles  # Ensure aiofiles is installed for async file operations
+from config import load_config
+from generator_utils import generate_question_batches, parse_qa_to_json
+from octoai.client import Client
+
+# Configure logging to include the timestamp, log level, and message
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+
+# Manage rate limits with throttling
+rate_limit_threshold = 2000
+allowed_concurrent_requests = int(rate_limit_threshold * 0.75)
+request_limiter = asyncio.Semaphore(allowed_concurrent_requests)
+
+class ChatService(ABC):
+    @abstractmethod
+    async def execute_chat_request_async(self, api_context: dict, chat_request):
+        pass
+
+# Please implement your own chat service class here.
+# The class should inherit from the ChatService class and implement the execute_chat_request_async method.
+class OctoAIChatService(ChatService):
+    async def execute_chat_request_async(self, api_context: dict, chat_request):
+        async with request_limiter:
+            try:
+                event_loop = asyncio.get_running_loop()
+                client = Client(api_context['api_key'])
+                api_chat_call = partial(
+                    client.chat.completions.create,
+                    model=api_context['model'],
+                    messages=chat_request,
+                    temperature=0.0
+                )
+                response = await event_loop.run_in_executor(None, api_chat_call)
+                assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "")
+                assistant_response_json = parse_qa_to_json(assistant_response)
+
+                return assistant_response_json
+            except Exception as error:
+                logging.error(f"Error during chat request execution: {error}")
+                # Return an empty JSON list so callers can still json.loads the result.
+                return "[]"
+
+async def main(context):
+    chat_service = OctoAIChatService()
+    try:
+        logging.info("Starting to generate question/answer pairs.")
+        data = await generate_question_batches(chat_service, context)
+        if not data:
+            logging.warning("No data generated. Please check the input context or model configuration.")
+            return
+        flattened_list = list(chain.from_iterable(data))
+        logging.info(f"Successfully generated {len(flattened_list)} question/answer pairs.")
+        # Use asynchronous file operation for writing to the file
+        async with aiofiles.open("data.json", "w") as output_file:
+            await output_file.write(json.dumps(flattened_list, indent=4))
+        logging.info("Data successfully written to 'data.json'. Process completed.")
+    except Exception as e:
+        logging.error(f"An unexpected error occurred during the process: {e}")
+
+def parse_arguments():
+    # Define command line arguments for the script
+    parser = argparse.ArgumentParser(
+        description="Generate question/answer pairs from documentation."
+    )
+    parser.add_argument(
+        "-t", "--total_questions",
+        type=int,
+        default=10,
+        help="Specify the number of question/answer pairs to generate."
+    )
+    parser.add_argument(
+        "-m", "--model",
+        choices=["llama-2-70b-chat-fp16", "llama-2-13b-chat-fp16"],
+        default="llama-2-70b-chat-fp16",
+        help="Select the model to use for generation."
+    )
+    parser.add_argument(
+        "-c", "--config_path",
+        default="config.yaml",
+        help="Set the configuration file path that has the system prompt along with the language, dataset path, and number of questions."
+    )
+    return parser.parse_args()
+
+if __name__ == "__main__":
+    logging.info("Initializing the process and loading configuration...")
+    args = parse_arguments()
+
+    context = load_config(args.config_path)
+    context["total_questions"] = args.total_questions
+    context["model"] = args.model
+
+    logging.info(f"Configuration loaded. Generating {args.total_questions} question/answer pairs using model '{args.model}'.")
+    asyncio.run(main(context))
\ No newline at end of file
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py
new file mode 100644
index 000000000..01c628036
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py
@@ -0,0 +1,121 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
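+
+# Helpers for reading source documents, chunking them, and converting model
+# replies into JSON Q&A records.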
+
+import asyncio
+import json
+import logging
+import os
+import re
+
+import magic
+from PyPDF2 import PdfReader
+from transformers import AutoTokenizer
+
+from doc_processor import split_text_into_chunks
+
+# Initialize logging
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+
+
+def read_text_file(file_path):
+    try:
+        with open(file_path, 'r') as f:
+            return f.read().strip() + ' '
+    except Exception as e:
+        logging.error(f"Error reading text file {file_path}: {e}")
+        return ''
+
+def read_pdf_file(file_path):
+    try:
+        with open(file_path, 'rb') as f:
+            pdf_reader = PdfReader(f)
+            num_pages = len(pdf_reader.pages)
+            file_text = [pdf_reader.pages[page_num].extract_text().strip() + ' ' for page_num in range(num_pages)]
+            return ''.join(file_text)
+    except Exception as e:
+        logging.error(f"Error reading PDF file {file_path}: {e}")
+        return ''
+
+def read_json_file(file_path):
+    try:
+        with open(file_path, 'r') as f:
+            data = json.load(f)
+        # Assuming each item in the list has a 'question' and 'answer' key
+        # Concatenating question and answer pairs with a space in between and accumulating them into a single string
+        file_text = ' '.join([item['question'].strip() + ' ' + item['answer'].strip() + ' ' for item in data])
+        return file_text
+    except Exception as e:
+        logging.error(f"Error reading JSON file {file_path}: {e}")
+        return ''
+
+
+def process_file(file_path):
+    file_type = magic.from_file(file_path, mime=True)
+    if file_type == 'application/json':
+        return read_json_file(file_path)
+    elif file_type in ['text/plain', 'text/markdown']:
+        return read_text_file(file_path)
+    elif file_type == 'application/pdf':
+        return read_pdf_file(file_path)
+    else:
+        logging.warning(f"Unsupported file type {file_type} for file {file_path}")
+        return ''
+
+def read_file_content(context):
+    file_strings = []
+
+    for root, _, files in os.walk(context['data_dir']):
+        for file in files:
+            file_path = os.path.join(root, file)
+            file_text = process_file(file_path)
+            if file_text:
+                file_strings.append(file_text)
+
+    return ' '.join(file_strings)
+
+
+def parse_qa_to_json(response_string):
+    # Adjusted regex to capture question-answer pairs more flexibly
+    # This pattern accounts for optional numbering and different question/answer lead-ins
+    pattern = re.compile(
+        r"\d*\.\s*Question:\s*(.*?)\nAnswer:\s*(.*?)(?=\n\d*\.\s*Question:|\Z)",
+        re.DOTALL
+    )
+
+    # Find all matches in the response string
+    matches = pattern.findall(response_string)
+
+    # Convert matches to a structured format
+    qa_list = [{"question": match[0].strip(), "answer": match[1].strip()} for match in matches]
+
+    # Convert the list to a JSON string
+    return json.dumps(qa_list, indent=4)
+
+
+async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, total_questions: int) -> dict:
+    prompt_for_system = api_context['question_prompt_template'].format(total_questions=total_questions, language=api_context["language"])
+    chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': document_content}]
+    return json.loads(await chat_service.execute_chat_request_async(api_context, chat_request_payload))
+
+async def generate_question_batches(chat_service, api_context: dict):
+    document_text = read_file_content(api_context)
+    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="</s>", padding_side="right")
+    document_batches = split_text_into_chunks(api_context, document_text, tokenizer)
+
+    total_questions = api_context["total_questions"]
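+    # Spread the question budget evenly across the chunks: every batch gets the
+    # base share, and the first (total % batches) batches get one extra
+    # question, e.g. 10 questions over 4 chunks -> 3, 3, 2, 2.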
+    batches_count = len(document_batches)
+    base_questions_per_batch = total_questions // batches_count
+    extra_questions = total_questions % batches_count
+
+    print(f"Questions per batch: {base_questions_per_batch} (+1 for the first {extra_questions} batches), Total questions: {total_questions}, Batches: {batches_count}")
+    generation_tasks = []
+    for batch_index, batch_content in enumerate(document_batches):
+        print(f"len of batch_content: {len(batch_content)}, batch_index: {batch_index}")
+        # Distribute extra questions across the first few batches
+        questions_in_current_batch = base_questions_per_batch + (1 if batch_index < extra_questions else 0)
+        print(f"Batch {batch_index + 1} - {questions_in_current_batch} questions ********")
+        generation_tasks.append(prepare_and_send_request(chat_service, api_context, batch_content, questions_in_current_batch))
+
+    question_generation_results = await asyncio.gather(*generation_tasks)
+
+    return question_generation_results
diff --git a/recipes/use_cases/end2end-recipes/chatbot/eval-loss-3runs.png b/recipes/use_cases/end2end-recipes/chatbot/eval-loss-3runs.png
new file mode 100644
index 0000000000000000000000000000000000000000..6f65e5d32668eeb4eaff9603c8fcde0f1e982776
GIT binary patch
literal 52645
[binary PNG data omitted]
zNIcTC{I*MUC_Gis!e)H^D1EQDIv%S#zLJQkI;pj<#Q6$9O*uRJl2|Ao$Pz)O*1cKy zN*Eg`H_%bF|5YV$vzw2IyXc=Or3NA5n0KAAI4F>H?72CKlz#`eXwRxJ@ERJ@WW%wl)m1ZEa zHSUZjo_)u2dUV*N1IR=Mbj;8GdipX{DsPx`zTGnntUX{zo zjTN^^cg@k-%p^L^H`VLVJB2Ln5|0|SuF_;yqP#Z5a?RvSZPE0t$M^s2NZ*``-6{fC zIODmxMAvin=D2KHDjl!Q^(u0O$%i*84(6d`lQB`=JDEs8mw4^8xhv&GcM)5m^Oe@Fi+sc&}uIYOeKBjSzx7{mU zDQY`~lU*_6b?Ljz*c~Yi3=2+>fNlKfD9^^}Pf!0Y@&oejLfFBkzW1#`9fyw_2n~8F zx$2Ex{n-VH!vH@1p({-Uqh%;!A;+Z%KBuACH>GOAWls$a!(WgFO+N+#(CO^qJ#q{BbHMEG~|1#B${;aaU<#l@`m$UVdez`oX#c z?FY!-Y)vy@GVDGg>Ez=sd7+2FrHrK4w|jF&YYr9r)}L45lc_Hzbqkw4h>m@0SNmv2 zA@6TDKY9wD@o2oZS^G9rA)~z}fve1@Xekzyk(9gmISA5`fS|fwKt|Wv z%-M|ZBaKpz1VZSuw0rZlCmXQA|E?-Z)2jnk<_ z6)KI{i&Al#SnV}!bvhJ170LP|;+L@HXf}N>6Td+9{v3BtrkVSmIWUsp=jSI$b-w*& z@6Y*qma3TN#&eix!=Qj=3MW*Yh&|}Q!|L(( zmNd3nv3gXe#}cC{z`iTLqoD#Gqo!b8`$x>xScE)#6PM(Q@pfBaZl zmT~?hVMBEGy10k*!A8cQw13)E;%UB>itQi7N(B*7QP=Dv#;dkkrAWnqv7mO|+sHWV z$G3@H0T6%fwFLA9-pwi#<-QlnBV!d&F-e`B4?=PiJg}h=a&(Xm;!!00;95rA6i4~k zh33HtmT-MGHa6u|z?8rR^#>Z>3xiC;(%S&*Hxxeo-;q2h^Pr$00BgBknnqrDb0cXS zX1)o7WkyDnEGOJ-%(HB@$F<@vve8HQckkVdm=lm<3Dnotwt+n`>oTo&-$F{0Aj5L; z2uW!ofifpZ+?m%vQof8kHFr!aJWZK6C@`>n3;%p=D02fOx;^~IDQXmqMB758z=RDm}FEpT3*4(9Hke+5{vJAL!Bvu{XX zNF&Tr_5g`QNttroy>y#3l@)Rw6XEGIe-%Ezyq?@A;D^rIwH%2L|5&^;R{+EEZL0X^ zln1aVmJ`taB~RnmS1YuuD%m+ulXW_BSas?jNTfnw@8FPY5>ibsPCL&VqNEhHiaQjg zPZI&a`;E-(QHn2KnC?$bDXu>&cupar|ENzu6MSchz{z20$%oPR2)KoD`c}^a&CuO! z=X^KKf+>UvyMJY+o9kMOO!}@8s>irDmFN@wFHz>7A7{PTyl7f`i%lSUO3|WnzP;js z8yd+fJc0&h|9PdG_trQQ>vJaUKmG-42a>9>qz?50+rZ^#yS-_UXfcz{cuw_iOPA1G<0yM@oZs`=!iNfxJZ%&|mEPqIUmV}Ze3VQ503#+Q zZWps(?0%-POzwVpc_m!XcFJJ;RwS7%qwNprV(F^<>&BAFIaj78T9(=%52i_J#py92 zM83gi7oGVh+=a95yGalm0p$Rp`cYEA(`|FfcwiJx&%9KtFd-=HBXd*=_uHctJ;A1) zL|lAhV`JNCgbyTBSp~|gG_5j%B3?QEomTLFlH3H`Wv6qTmnRzG+p9BV)g`XUDkJ2c z-04L{xZtLCs$vqm50t-Oi=EDQ7n+A~l;UWd5Ch7=G*;(JzUD_(Pk3En_i)dON+Gc1 zi%Y~1>j9>N~Y}4FgE2cr*Gtf_iC*=6@J(H?bD}nnIYWIcS62fcd_o7P} zF=%t{Dj|}M4YdHVxy`3EH;4P|c-5XDBjVzy&X_*;mF&2idjJZ!O0w z$8!d1kv4~D6N}J!m=pJ3y$5#r`$iBOdElfc*y*^V`mq@BoH*uw%{#l&k zQ$JzRNk*loGxy!~ z0>P;KlZOw<_zJVHy}X|HRJWn3dR*=Lk=Xsuj{by4HSdzahG}h8aBw@w#0?_%Fnc&Q zVJ*0GV(ca^tN^QqoCZnR?VOYQ6W;?{T3R2}$a*{x>mvblyL+zVf)>YHw*o9+G$|KV z#tGW!0Gj|`aGc3GLHND~y%UGNh(8gpZa)UACpvhY+``_y>YI>6SyQFk0RP_eQ`7#c z?6^Neo(e;2Y`ty0zg?UaRR)lsgL{0wOH+tQvo_T3%n;2ka}y!t zDc%@u;maSS_ukTa724160~%2Lx$FW_?$Ni!<(nBMlkBUQJC(a^5Ww zXZM!7A$WKp*oip9lZ0CMy31kDt_dI~bGKCRYk7mI9BfVjo zuRYdvQsP+hG%Jgv7i7mk`QJ-K=6#`>U@vh#etyl#fU?-FG?-^l?pErDM_MBi^II%G zX%*=Ly{E+2x}7{BSdQ!6HG%t;&Jz1#0$+Dz_Bp9b!Q9u^w#j;x!D zQ&nt}P_x25m-Vr=b)KtgOIzDVM85t_Q;4T}Rnb~Se=y^W#*6OKXZ6eoaZN)nN_JK_ z+u<75p_qAv2xcc$|BLr}-0RdlYkfM^{e#r4sfvZpb2)~M-gtl_TpKn;7^6lMdT1jZ zF_VY8rs-kyP~Lyrq z>bh7RtZwvm#qQ26A0F_-`?E7X!ZMybQ3@qz8;CY};kGoUOGnvNVnMkkzZ}b2aqCz>|!qc`=tZ$pX3uLTNadzm9_E?B$>>JB^&lP4+i#WY4 z`!u+~Su)S&(o4mg0P^*WE_~~nbPx1vCim8I>b}3bii$h9dDmfG9uc%=-otzklkK>Y z%14DEFdr@=pC^8@!avXRSkj-l+6Jh8CooXyr+4_ioJ|CzxS6>|1}Gjke}J;WjAimW zTbT>r$W|}qmn|+|jjh1L%&ANGfAvPj@tQ$dOD43@-xeY{K6vJ1q}~rm!!2?$c_6B2 zXX@Rmkc1j95_1PP&-;2cQg?P9OUBh8s#6X(cI9(2Qzp|=JQEv)?^l*JaGF(i(g%fv zhld+_w!qEJ%l&MiT(BXtCmVOat;lX9jw0N zC`i27pkVZr)LhP=lK1|2o!|0}*{x(!NgiWZttljzCqQ&-ted&BN5Q%#U)pW?sz}-9 zezVFY4$=VtGQZf(6?q@x_Lvyfg;!e&-xaK`gK^Q1i+ETQAdHtiOvND^u-2}wV&U^t zBP5+a7iWUFoJvO<1CeBl?apT9j-cQNhBtC884b*8Am8UAmi_*RELO>s;${xqR`*OY zh$cJjP0lW^dt($Q2VqWyiw+c2p`|KTTkknXw`A_#-F25Or?_%uNs%r=b4s=G>|&Iu zr%hqaO#$BP1vD;tEdBkZWDtguWr*Ce7+P3FxiR@+fsc>V#k8(4@s-+?^p;}lZj;>Q z>a9;^u~a&2&n79*o)??H#ZEi0J!NoTK)HW|FNBw!69#l>Km6%(=(1c{l4t(#&IrP_ zgVVRuBg#qp=Ej0uc^C7*>SmkC1D{^bOjQ6L(N*a;dA_=Z6)XVPMWg*P 
zx?X8cHuChc*d`Q1JvP;IBgHrJsEgVY+|7KLZeN=1`#I4geRmY)erxs=8;Fv+rmmLl zq7S1^8w9BLk)ULdJ+cs@4eqq4!JktL=dw-JcK0`@>IP`Nk-Oi8fd|wvz4I`xV{MSe|ZMX`?S2v4W?&*n5 zQEYRH8EQ3W%It{}%TG^WrI1hSqA)wQRR?EttL5R8Dm*!+-~3oSJMU%u9M7Fn$S7Eg zNov!|Ej2X}x2zgw*(M$J=NsvAACk4GOe}K~6LrjzIhePBQLpK!B+=bnu$dIWw5u4L z9kHGi0aY5l9n)JHqp$iL?(UY4rn3#vd|)c*Wuo2ponP!ROt;z}cb>F>iCW~p8oBI< z^<_W5(t4~ARUqRAM|r3BCG^v>JQb9b(%^kvY9D+LwUt=AppF=*b=o~3@~{rDDyJB&X0|MbjV}z=3}ihyufsZ!7EWl zP)0y~2LM$mmVl=3cJVKDZtsV!=h|@G)_3tE%^+o1dbzi~I@Ry%y8kFmKa7%?T{5hY zXSwF^1Kd@`-Me?9>4&n_mWtuGRJY~D%eh2y-x8j}6>&ZDo=mcouzcQG=ZczIRo!?^ zYxQb-XW?3oil#UjYWM+5wRiA!A#<;>p7hSTaDIZ`x9%)YnH&~MGpA3Hikh6C)w60O z{q9>;I@bWWH{;6mn%$q1%@PMQTgFqh0~6t+6Ykq%tz%X6g<+-R44vYT68I;T9vE?H zHA`u_A`%JCwZ~I|Da!7OI#lHVnXwM8XMMLFbVKn&aLXmL2Rc={#rwuWd%TX-r}gWo zo&U*rHhhSOQCwxqhgzRa7V82T2F;`g9UZx)slu>9E-$-~c)Laeo+RthY6sp>ewRjA z2J&UyXml4l?!Ipmth25zN>>3DpmyqZl@x()KAaC6hs!96Gc!>*S2Uy6G(TKq9Op4} z%^{cM!mF5jTg&>-&3}NyMXVbOzu$5C304F5qv3Hqw+)e>o67FDyKULq+QI&&xGdjwI#~S zT5OM=d>^?gJD>S5*}yj}?0FAxOs4HCEeETz-Zx4z%63g5)wfo%mcc<8@cQaYk}Xp) zpvprD&a9gELPLRCU8X91vdl_$s?9FP{dHHa(?6OO$tW5-^p26YI-Exj<8QS69i8}X z>yvGqlAcNEP;6lZX29V8?A6C2PAFH2XHV#3s9q89?H@Gvbl4PI$XBKg^*AS zAW9RZo7CS#piT^-xIExEfUVbF@_Oag{_)fU!y){W-DV)M;meJfY6ogNZZW2%snxp2 zY5fCw8S>#S%S28s-*@t1=67U*&}Uu`2rr7Ic2Sut6F{_M+*la%brN^s%AX_ zF)Fr2?s02Gr84<>yxL=R7K?!=r}^Prk{G4XgInL4mvOI{e8o5EN_^%BZs>*X%>M}G zK(FO!@_=I(K(QyrGHM_Zw%Yg{Arx4)z`1eI&BZ4Sr&LC#c3j-tH|@_-?N}E_yLpnx z)gZya9jZVX_$+@OKqDjq>at3RQpn^Z9oZ}%c(EwiA-oE3`&)7{d|`ml;f}p3{w4GE ziQvr0XEbx&N$grI)J|G@JW@%&{Jra4f6yTvg8k(6W(L)`iIj~Pj$Bt4a5hf>^uwar zBH{4nAlJG09zg7Z1B6Cr;TeYpp4}+^4l$5=41Fq7pqAs9 zfK2kfyhfD#WXcuxXw&NDy@x-mmY-Fud>v+v2IL@V$F^#r`A?LnOUF4}E0ZtJ8>kST zMlpQv>Tay3=mVuN+|F`yNi{c7Q#D2<)lac)LA~I#I~I_*5ySYy>ym9AkgmK3uHmXFv9+}s%6eQ9HN|6$cwwTp%^8DT{QLD z^lMte&Jh}wsQ?PS^Be8%Eld>A>SEYf2EMMezW()l1IFIuJRQ>`wf>QTs^qUJ`t3P% zjalfo*7MUS78PD?Qr5$e#RV_7ASd{4&B0)ZR4;0xS_2!M`I^>k)T?^;4CJLsF_T73 z#EPQXcR$3Bq0r2OdDJ(*+r zarD)FIDpk&4?PcdG?YSeF$oGR=i3j^Ha3yC>bc-&qvQ<^ThBtJHg>8)RNSGR%pp;Q+alRWl{!}^8%Yf{KpvfGe#c)Pp__P zeKzPgNn?#Yq9U`a@c7s1vH=fN^!M1kUv!Y|V^F1`>>2$2_}4X=KnhzeD(>qEhB^+= zL)J?Umd9zIW^D09PUYC;6T1H~>{$YjA%@3EHc53*Zf@K^m;S3d{x`=ol@3U@|2hCb z6`Rc|5O6{kD*#BkvF{7Njy3!J^-wj?CQZi*))Q*{FRXy%OBZiCUU&ks$8q21<1JX|Nb3So+PVdt0 zXwcyx#;eMOGAb*D3mJF?RKre;J`l%JbPp*QvM2D)pT##|VqyY>4OdeLZIsn$Sqn(* zpZT`uRVw6td~R)JBu{a@&*UxZrBpQJcwd1{?s*zR9y&M_Bgd;Wz(F!7wG&czu!I=t zlIjMN?t&V`t)HK8qZ3S}r2Joq$gl~9=flo^Gew)6q>FL!pPza0`WmJ^iXYT{a=Bv; Ydr-O;IXUwyaln7_GAhz}_YD004<6sNLjV8( literal 0 HcmV?d00001 diff --git a/recipes/use_cases/end2end-recipes/chatbot/poor-test-1.png b/recipes/use_cases/end2end-recipes/chatbot/poor-test-1.png new file mode 100644 index 0000000000000000000000000000000000000000..f04aa5f3e8500aecc30e03a97a4c78580419dd3c GIT binary patch literal 405225 zcmeEvcT`i`)-OdwLBT>15l~SPDG`wl!3HQu@4X2~mtI0ZY$ymQRceslA+&%%0!WeG zdzBVih$KL0N#5q1bH97Pd(L@7#~b7Qb#O4US$nU&_L_6eIe+uF=8o^wR1{9qou{Lq zpg8&H;R6i{3VH+u1=liW4%n_wTDcx_|$onya(5t%DT>#lv^ey0m(l ztt`og8kfp?GG0v`In7I%{;G%SAy)L>nTt{9sP8tJurM3Z-$*sPuPeut|9s@MREPyO zgoeh-n13W%j5lQ!iL1p!-LL^<2|P>~-6cT%eD8RnUSfzH#Rr}n9~$2rxf!9U!Itjy zoI>UfP1o~|g@q9P-rmy`V>7$h?e(LZZ#)OpGq<`)WV?93yI@9&E9Yn3{lOr{S!#+I zZE1Kq1x4z6#3zl5G3F!BD~?<{wsD!QOKau^n~Ls?j4NzMPqeXs;)8aT(W7e=cCnYb zug_cEJ6R5XnV`BL@-kYCHu#35JTSwGJ4`C<~*Hi)eVyn-j+qGUmHouHq_7^cd46j`L@bR{1Fva>8?%t(ti5K7Kt?qW0w~ZG=;)KYV_Vi?8Fuk(L`oB^-_x9pcz_#`SWv zx9MFSzwA`79$z?LH&+);HUBC2nWG*@c3V%sbo)+E^^(c0ITaPH*3f1&xTHZmLMIsPaFj*Z3T6^Oin!j@f5cZgisfsT_F6^%UjE#Zxz* zGgY|XJpP2*9O7&Vs?pdyLM`~=`QwWeVT#tz9z@hA90_+xId*XTBgH&pDE$Wl-JMJG 
zC&_{bptB%9Iq%*(fdjcOJ*dl|$Up)StSRGdU}xB9qB^o%re|7`LZ$z$QihL0z` z8bA9Qe?`_x;;8#QqM4OIUqC*M(@BxZ9~2X`x%aJ%SjCQnRr0<6+CVgWBKEB$+NV9Y zVdv4lv+&M$r|ZP)T})A_3L9A?Bx#LKlTOEnayqMAt7+dvpxIMvpOay8r2*MsuH>%) z+l?&d`|#3{M{ClaN9>QBSQ8(5Rc@|e<^9w{=3`M|{Zly3&ow3N2iL|?Y<_;kZdZAi zOqVGWl;%ZFiPvp2)ibJ*$AnAi&hhWaow_N0Il2GYn~>5FK8g}-$n4hEHF?tH9rtmh z8=g#2hu*2HtNV^M8gZf6_N{9n_jq;19GB&|L6Ioc7^z6dd_hp^)QsUh4zTUfx+8iN zhT>eTGYmC>vd5^*FVZ!>WV%Ud4Wk*ZI`WuOxXwuJgkrsG@%eme_Ij3L>b0Z$ceur= zR9*Q6v3?evpH_-G|z5m*%gyaxUL-;%5BFIWK}c-lV3j#V#UW(0ls&S*y?A zpGr$|c0S-qlUt&4=QRk_NvrNTx_QO-bZwYtDyYZ*gxQTxg4S;&EFQlPl{3%LJ9@3& z+We}Xgih$7dFKzXX_F+}WQ|*;aao+&>lL3klWLvM&J#ZtsQ`oPl z(@&3Zzo4!Usk?OL?5*RP$C_TbHi$OdZLr($_ftjB5mX$~2;H3ikbT!-bvYq94tiX_J?Cn*)A45_PX zV%4X!v$H?tdFM7}AJbmU*2-Q}m&n4YEu`hfZz%;Eloa1BKbOT3OkKIi&+U*{nISd01Uk`c&TC z-r^xM`)4B7wpw-<`YLmJiyjSM$YINf%c^?dmbK{9=_C*lQ?B!Y#ne7^%f1I3dya{%}+h~U|HOd(mR8^1uwA~YDCi~lyPearb)KTzvE^px@-US=Hb0%kv z&jv&~L~37adF=amITM+AuU|W}MHwdSevV&WFzrQe+mf*GEqam2ct=rtv3&<8%VaZf zcg@rRl4kIQ?MHKXTWV8kci({P`-Kzj2JKQhOnjNzF@xvxN_DUWj`@-KtOjBRUgJyS z&m4js>>Sre`y7Z4JmWg!CZ*hCLKxSzp>g911Gfs;`w z{C4<_@Px;X*0@SaLNF*{nFkmi`mqreq*srf;RR{ z2b&c4w!;M(CyR-$O0NydjrLjWwUKKN&l;Q=Jlk@0=n7Jm?(q$!mB*YPrGlf1%wJm=LVv{f;TYbPJ&;HpdUP!^Q)T6p)Z37zq{!N9MadI9$73DZ z6>u$RG3KXVxpqZ!3N~4nzuuc}Xlr29Jd?Es%azc+UG5l9ENkjdW~N~!igLiZ5~gl_ zb!eVoW3(x;c1nKTp$HqiHEyx}V$|5}-s0LPrKAL50>P&X-?mtr1iOv$3EX%gLC;A( zgHv*Z8^F_d7VlwSv$K8=c@;9o8g5YIxq&$2-}+hu-G}P~DS0ai9!TPbgH35;toa>H z7m*HMU1wc;aqN5tI&0Va`T1@@YFs`~c1Ye-5xNLfQXg=CQhqKpgI(6~jMJBLi6?IN%RM}HS!QDz z1XK~%oMiUd@rZcGH{*&3=%nD-R28?Uf_+* zUj`*HILlgirsJ}^daYWrTUaizi<`LDkYDXQz_xZoh8te1n2?F~TQRVIY`?$!qaWRM z1Sa7TtY z76F#omtLS>R%+6%F6x|aKuxlLV?TGw?W)mk5yesV_>Yuzs~CzN4T_YnL^h9n3o?(` z1Gr5z_YYXHMfi0wFN&2JGUPGnpt|aKU2?&*eVVPo7Rpn~)EB4DrliTAEU&MP_cP@c zzC1ze5KM|sj%6T)@@^-}&J*!$-?WVDyU_uZ^ooAgqW7P20r^CXmEI$3Wo3#R!0|B( z8p<;i)W8uX@Q;G>JO#~f#}pKblO7WnH7@cSx->R(sWBT}gUbxbn_oTIp_dH>NP;8WAm)ym4r?WwbSndKEU&~W_O zLwz?23YM#fzm$(OuKb{&pc=K+(sS2Sek^I}>?mmd#M#11(A)9Z;e9Biy(NJ|M=N*p zi{6e7PHvLkGM9h5LJ~MWYzAGv_}eA!_A;0Cl+`ZYcXqY9C@OeU@aAPO-NlO+rCpy` zOKLoj|D!wbm(1m-?(WYdK_D+LFF`M1L1$MRkdTCg1nA~1(5+hnz!d^+K2GlD-U3c; z+`r%CU-x-n`^??e+3Dipea$VLJ=|q3Up^e@e|~;{PAhNQe~;wk_Q$k<34#vK zfP@5Zg8r(TyRG%#bUQrrd$-@__4{zrhc}Z{v-P%e(0gF(2#6YZG_de(5mD*i#`)8! 
ze?RH(UA5h;T<<$O0zKWq|5nx?o&WRXe|P+CO#Oe4dFz(&?f)F}KTiEe*TYFjYFN2B zJ9r!lQQOJZ9ehg~^jFRQ-An)9-M~V(fw6w;^q;N&-9zs$d;DkXfA>&zwFLxWe)wo$ zp+83W&$d6Vmj)dw|34PP?7+sbvuwe1oKr*m6cl$T9zD3L6IAO zG)eJ;OItGa#l{UuCiD5;JdJ9Y8SbBdon z$WFP_PM18MbNtu30b|@zx$~_0-LH)G`-3sFo|-V(xYYC0kN7|HIW^(L@avD+L~{h8 z>vzok7liW1h({0!)W0gvi)t>E^%!O@Dvtko?4LfU$1MK5;{J74Eu%ZnAWcDcXn#@h z7w-V-llVt{|FsPNQQzM!!GEmpFD&dI>-#$o@{jZUi&*?m?EAa;{r@@krHH)PqXD1W zP(Q)7Vcx5lS5RV6<`i0dJ*;C?rOqfB^X5U2{}n23@WJ&rP&ER?g0wzq*-8o_J}m6B z`nxIoN#B8Spo_PvDd+ar1H@HiRvS{wO?=irFO6lm_1=m*Vl77L=HtnJnz#!;HbJOM zb7^Bkv(_dKn|j}t^juRbdwlJ`Let+_8sL`HG*hfPjUm5-l{;?7A1B|+sEY4n_~|?B z^wM|q@*$>O_l-9e4Y5OP=~l69rwkpws3-Zc9l=q}wWz-BMx9IAw;xTF+gu&y`O+=s zjP%a*S4ozZ5UkmRZEOusM|mVyE!rgz@7;-A>MM71PpGy(q38Ne@8+s=NdEM%fzv>) zkh%*T6aG8uJWL!x806&RXAt!`nCD{k{izTsEK=5~3NtOV*Iu+@=Ds#^DP|4j&A(qt z32!N^eh|KnAI#~$z~)>le5I>uhD&%rYh|@ahitW0MvQ^4+~Dg{Gi12-e-Y=OPu-L{ z*3ny!DWy&UcaW$|-;-{I$$a0eB#?quF%AZu#Ee+ePU58dDr&YUi1XaIW37If-C)Vo z+zs_ld+~)l#De03l8xAxgifeVDyZwxFJaLXi6hpi?8(K-8;+-T?#|RJB)=^eoA^>+ zw>@UY>wWOeQZ*RkIV#f}#-Us7T`{IgxjVjIqL6MM{TK1-|FU*Fa;RR6qX$0rV~^4l zRxfXlxS{TI)8vYY)Z$iV#J4)=<9&DdWdw~^?}vjWl)9In{2DZlR|0UV+ao)hiZhjK zZL70OsH8rAsN!8*H;K?>ZJU=Uf0SW~YZv7Yg~Yl&^hmM{lM=~%9H&w1{POP*$-f}* z|C&@?;^iJ+mE~;zGf&^wefW%x%}=IkCXCJz;1ea(mW5S|!Hlwd;+)XgxXRgFc12D+ z8g(GRSxv5ZI$q(}D%q+2Y>~I6nI*wVD=X*t*=xea6llVJEZeI(o>ZWl?tdv&n zBWXh_)U(vteoX|{+?0HXZuys=`8s)i5G=`O*b7mfNk_dn%I-`1VFlomLkUf@Ap&Q3 zd!Xf}g-i+@{%R9`m<*2S_kAPM# z?inh@@Ls%=vP)x7_O$mP)b8U?%kAI@P5uNpgcdqrW7_Bj*!lf83?U%Yo+OxT;r=*D zQiR3$$!ERG^c5@po>E%~;c>d-U6#G)4qd z3-|=u$V~*Qjpx)8j>1`HM_wR|yuqpZn zc{(Rik-eG93F7Foxw$%VNfN#;t*TudS4=$@j+)N-5n<;*w%E0DySvVl=;|4IaM0C5 z2x^KTvbJM&)=Ka1Z@VnhWHntgT#sN68sqRmBx|qQ4<~&jF2=WOYim~tfreto+$!~w z8fsJ(!fs3yOs$V)D5bL=Sy8g}IB0?Sica}L9;U~#A$MlmpY~^ZY%cZ6csUEACb}(> zo!o|pAFTT;K?NT#o5bC=)iBo?a!UB*Jmq6I54|~?R*4N%D6#pvU*)1qG~TvRYB{=E z7+pPVr&U-kyfN-P_l^hFIhnxafAGj}Z`r15RT<>GxjxC~kL>6>IW=7#-+_totw?X! 
z6^@M&SVggFqWYjlQswr;f?ql(%RN1vPy-Snov&j{Z2C$^g_qO9Wkb75Z0r$E#+G#v zO7rXeo1+DR8#l%RUvvzR6}SQ<-yp@Lr?^OW{e-O@Hcc&h^?dJ;Cwy4ehQ$cdD?Lc# zdUesCCkg^H>5e%$=NfT`Dczm-I@q78#I3MfeuI1s=8U7dPT%jMz;#{Lz&5XThOHYQDB|9%p|Gw%?Y+ z$@heE$tDShSc;@s6OF=AuJd@xis>9Qbz9Dz9I{>dFk;nGFqe-939Q^MYa9_7S$drP zp?PcAzJL=k;jf!C)PF9VT?{Pgv7(pdATMm)Y=qpz*}LJ3^rqKqcH4Y)b3oG8jX_*Z z2J8;TpR{YCBz>Nk3uLDB0AEqEb2UDBWlwo>X#{@1hSWd+Y0i)D^qaRdYIaM*raPgT z`wOIMrw+#~v-L<7S&&oBLf3hZJ{4afOS?gU6AGUmHxxFGd%6zp9Q{pkQxr}{9Z=2#2t)I&EDJA*C4&^R{C4RoN;%UrQ4H7wKu ziz&EFG-7e^f8w--+gpjbp_HeZ#XJ)(y`5h2qD8ZKkKyI^`&F`!xswQCVBMGiK1a*x zs+FUuh0eOzuUxU`XyiJTa~Rom(6M9Qa|K?rsY*q`5NtrDR&?sFZN_3~#R#v;!MLeq zf9NouLDo%Sno9EY#{j{OT(4@!@ru#h?hpk@RZNsl4J=8+KnSI^cbk2`NlL&dx3YIv z*iA5tl~FOH8Vku@o7rxGC6Y?yFxxu&3yS5XuKS277q1JR^BZ#Q*4B$WviUb*{#D<- zs<}=%qlC-mmlcTycd7{1$nTr3ihI?hOsUx#VRABLDOi>?-*0U%Dbw8tl1Ow} zzv{AKr5ioCp1jX-o|dcFW(_f1Xf>!B4JJK#vi1diSB@HOFBSn=w?d^9|F}oAV;$xB zGgkS#miUW3CC6LD(@zAJ&Egd89hk^o4Yr{S4&7qpA8D%1&g4qdLthUY%d~-(%97Yb~=Y{t^}R1DL3+0i-uS4cJn1+XRoQepCrg!5uNl^ zOQRmghi2f@pYQK;`SQaC^I?N>zeZAMg z>Qr`a?u}Ig65gznxgSsKSiEk~TdssYaN4rTpc>bt?Yls-UK?4nil|*V>7eQnYvB5B zQ#~oSBzhuTYNJDbGGUO`h8tnxw2IH?^tgL?Ftb)+eDVilH|vB3j8h z?t;u^<2`WRVk9qRezy$SHNR%D*=5KlR@6OMa)I|LVLM<_lSdZT`$ap?d#=+9S1r8h zvnau86WFaxe^7*N{3^Rs{M2Q7ik_*nNXr^ZSq}KWVTZ~1QMW3&5Eh}QXvdIkFCbb* zGk-eR`s6%2N=E9Hg({2bqf|)kMKwAfnaAm+aItA*&aF`M+ytE)SimO4npvo?(1aMjaamWg-bu3UtPTEMq|)D zu^8~B0O2$#8798@d}ROm*SymjJ=;^&WI_Kb94M81CPY4H8{=kWyS2erguJ98jhR^a z@|xY&10BQ(y(OyWRnV{|@0jK0jOtv3cNydq(-1yDCtjgERxbXSrEs@5~qDxkvsz==R@ zH=r!FeDy^Tw)*NA!d^EkAI@ewu|4Lf#t+}QEab6LcJ82gycdDpG^PKvgF(Q_^#KKl z(X(XF2l=S3B8%jF(?c+~{{x$q$iaIZsi)|+#HDM(3bE?WkPyJswqPwnrX9{rmYVXO zlfac$ZVXAyt*kon`mm`=Zy6nUW;q7OWJMno0eMlX)oussLP>x_43X}Rugk?zhIaq& zmt+vmmAhSj%&?zN3F=M&_y~5lZjV9rZl)~_&MOGXwZFcx4*(FyRL!g~4+h)8T&uP1 zpd`zcGFuniGJ23#l?_|7ZxZ=(yq^xIYviZHU-IL{pko#Vj_DyOZwry%#aHeE01kON zr|v9w2ZK@HIRURQCT~;z(DPe9tux#1wF(zp?+EI7Z)xiWzucabi7|DGv*>(aAV7?< z2Zd-4ja@VC1Bs%qe;l(>E za2RlmnCRJt4(bS=_k;O=pXqIZ6}kj4aq6yb2ad`VPn!f}yR2`nU4P6dgL_g7N;C2p zt@^kZr^xop4UM(0N-@^GzsZoyqC zXG8eRG~BX9j&2r3u2-;J6i53<)(?uVii-&I5|Yr?oGXPKZv;#B<4o0C7+{RuQq&% zUu`y2EILIBP#{&wn?b%;^#>EsiT-|0(Yp-?2cC320k}BwqLvpJRdanNTUCzgWAC_&oMNK=< zRXAl@l^orn5au90Q-i!SyApX&499_D@{+V`$*ox@VR>I%--XWa1L4&q^|r-$fvAA~ zP$G>2mzQ0B<=&gY09iv9jGpI$hs8@}ueY^~2TC;nY2vGjP^h3HNRKKV7v}C6euRYV$hmBmS2gc^%7C?L5}-4$5xS%$x5YuQoyoLR?LY$ zRp|0CLNz^lG9)8hmQ9~uD%ofKbt;PP4zpx2}Sl}J7Lo4LB6SQGaVai`CyY|vz^yFf>!(i@|q^-xcMvivg-ZbG2TG7X?bD>jg@}`RT!dfcy z@!d5;M-$<hi~bie^>3FExX65JVyY&2t3uSaFTIldxn-^jZQ4OH z2?N0BfFY{*8G!Sc*G0!Y5x!Id?2?aS1&tMT;v_xnT1zZwu(h(%>yVj`-Wv4@-x{{k+m2T#N2;1mx2-Q6Y0WenLnjCDx>X5r`fs*gBXF1Z z54eBr^lCmtKDZx|VPyMOGjs%PXTuAoM(F%^+Ggtn2el!(8R4;VeW6&yv~Ou@Jw{aq z@5eGvlw&#ylHJ?Ulmpp@bc)`PaFR96^uFm$7lB%D9=Gi28XvO7uL?~!O0bgBRQ%hP zhO@kS58m9Wj)IT*B189Hj6=59J{Aql*Z}-y@KHn-HHzf>xZ_*9;K^h^7HuFrxpbF( zOjAUO_v8&9lx7dB-2Zus_zF;PAne0AamYTuwNy!LwTZ*WtL7D8-Fi-MXjEuGGWxi~ zWtbH>AsVgyFxLikxE8rdDMq)%EUA3>ttr`VYed>`q2_+jc4<0ashwa^%`EPtj)-dB zyI~bZ=}o;u^pji4<+ro3f@ug<7BxaVLo{!99~ES#rriv$cSdS*%VI9<6Z$QaeHO!} z9m-5R^y#u?cYcsVHs*M=(9BNh@K2-5{U2e`(sJSJO}aNgo_C;m9a#`u3s4HO_GPu5 zeQTHXY6zs$loUdJm0kVkm&36r0l@7$B2``vFC@r>ik|bcEwtF@^<>(MgEm zhw0O>=U40SC()R&W|p1tF!&4w=scUi;mO}{`o1TIF0W$~qc0I_SmD5S$YUkCk5zdT zuCDKM<})~v0cKNU z^`#wi6rPWzoB^)%)62!b1fcnC`g*1bb=)V9R%G2-#2_Z6=~Im*0nSilI8w7qTvo8^ zAUEEmLFfU@@pG$ zlOt6XhDJaWwjFdoUbl2EkJE3jp0zb$R)M_QMwFY>7 zgTOI@)v-qJs!gY-TmXbBU+wAE`OVh4hiemU<+CYgw<4qL?O3)_>iQdP>&fZ^Q_Ec!5v| z*wyacGjys@PjuVckZrBoK7#8>Q)Tg326)Q}AAB_Y`<{JwS>5($EMs_q*I2<5Gl+o@ 
zDZafs+^0l!84{@2wZKtgDsnyM5TYAxf0bL*Fi?g98>q}u`6^Qai0Q%fx@{tXWH~`7 zT1_KR$*h)1l-}~){8aR$|ByUsEopf1F%2AkNX}SmJM$ncc!_rRayA!qO=EAwr{VUu zc{wIXi{tkfv+aBJ|FrS?cWL>}EPA~v=(YDVhxkAkX zV<<7C=drLfU^`T#gVZ8lUD*_Yx!@hiVfPuvq!tsrlKpggyH}mR)IRlZGcvET3XaKO z-c-uyO+y7Yu^Lxux%eHleoM8~!ab*B=Km1toZ~)NF4mNiRMD9{Iz`QAbp`fs#PN3m z`NPrP&U!wxy*P02xq#`RF@HvM{VM&Pg53D?fX`Ql-w$B&u|%>N-m3Hi6gUhCmf$2#zxxX+6oXjNik zcPv)vJs1OO79o!?`4d$PV6$$?-O~Hb2cvD6RibaF_m7V}!Jg~V3GWzCJ!DsXG%lvw0-HRB<9LcFxrH0m68sF%(zC*OZ1M?9a zGO!$s@aQWC<3=>Ewln_?4ErmgEYRzPRZ!*)6)3fk!#Rmoy=mhJXI%gY=>}{K4$HM8 zzm4sEO{Q%Q;Pftgx*L9})Nx$&Du}xv%J{~l&oC4|D*959c{l;4!wGDq%MP%Q_}7o} zM4#v@Kx|0lKT}C;2aL)^eSy_v&ot?v3`Yi-t%25}q>V|h$XVd6&*82xe<(Y+qU&zy zP0Juh#C4wvH^EANrz&D$>cVmyeq46l6xS31?2)#BVoWQR;4;4Mpm^A=qS}p$b6caV z9&P<0*T*q6kQ40SRF%=SPrX@QMZQ=vkr90BC~CvAw=gSKwqneD__`=p2bk*0bF64RMoczdU;HALbVqR@kELs^W))KHJWQB3WOZF?196v z?OO##1d)BF^?Y-#au+-%TCw~fZMuP)U3ik!ohdv?I}*F4&}vV83K#6dyi3C>u}1#N z{AFg-dI3qMatJ3k1t3|d+1mb4x%kFV=_qE%DQ%lHjy)0T=rvmVM9n#O`p^J3Yt3C6 zcx_r<9Y0(8Wv!lsJtkG0q24T>JUX<5F(3Mk;(k&XuGwtJ+QY!XkUL$cgFq z9EN36gTs>-Rtsz8ox)fO)V_Bd?klmFWSh38Yr}3vz8z`@f6D(oNto&0;ZH9+*KvsH!YFH!Ij`t;_p%3#G;?=TfN;}N`?wm zj1)TE@x%Da4iRAjwOYNu6GLtm>mWz3wGs0-FXX3vZ5}!x+Br{F9D2AzB0EzvCh3rZ zELrUWUQQjTsv(Ld#4=AH>%Z8?Hcdj2K9*BHUj}{RuGG52hmIP``7&&pq}X&!mMfJx zsN-7%7UT0A-Pb30CB0UnAZ(U1eM<6(s_GbzDYk-vTcT#+g_?OjVI=`OPr?5vIO;8LRcwLVPv98Db25m=IkQ&N?^X}aNoVjuO zPe27R(%j315YDGJi=pyku(RPwnrPe#%7_=y;O~3-vz7rs=#)xMDlreYs^+CkZ(#RV z=Cj_%n)r(lbjyk#TGPFmCNfGX&@q9^seC_vjgN}fc_O=9iPAwG=}VXQ7r$P+_H5-{ z%k2@gF)c>zRwY!vOSYN=(u1rXW}g68Z|Mg7bdp&YuAZ`3(kR=^msoEfk81IQZya46 z7;gj2r$9YfM-RM(Z5jB+=Veo6MH^%25tff2Nuq)e!TlpB89sjdc*bek&Kx(1N!9AL zN{?FIFHM>3gN7#6&$2KT`ALKGM5Ys{VNy_&t&Qoxyy$s1GlN#r5;ucW$0d&yLF7_{ z%U5kyxnaSi)u_QPJ*y1n%Y)7iWA1ZXv!5YOPHX1Od8h-|_Qt^d^`UfpmMPVTpPm>+ zAlqR9`J3-t0VvJda+83$0?7s2*8q_QE>rP`vMF$SnjKb6!1r?b26Y|7KvLV;_b=~8 zC1|KY=pu-#pJ}5YZ+)*GSp-(_;39i$Wh%C`KX*Cwk!|@FeHS=KDCo8xVYa<6IIl24 zq<$h`2ea*>w4MJ{QEe+G!I8;p!q19-q)7GUaMX?T!6jgu`Mo?%CANG#MSOK&t_`f9 z&k;ux&CsFaEW4QkJ@1iua>$sEvm;rR_mk=6pU-Qo9|P4gfzt<*XugpIWZa89%^shY zFpf+c7Kcz0F7F%=7w>iv5&+#OPe58cgr40YOIp2VTg|VP7xb2*;mp#`W!3ktZ%$7iR;=v8TI8c?VE3lL zRBt3IWx8Q*bv)T7xoVX!xuD&eP-R3{gg%h50Cskm1x)sJ^8gne${xum@5f{V^Rn&T zw!VTt2r0~V^Cp}j*bvTiCrF-*`u31=FMPtw{z5>9`Bg_cM&-L0go&4aq6-e*`x(05 zka`rcZ$+5Gh?$(`{ZNgJ((#r>Z=Rhken|+85ind#wy|A1A8EbmqtjMB0bT=mde?5u z8-*O4^)pD)UMaci))zY9FNzY*~Yt#v)3BuTQze- zvQ(1Jn+C{SllR+kNtZA*_1%U8)rp)7_y^~Jh|qx-Sy$Kki*W~r2)9*y0j91#Y27q9=4$43-X*u z{6yGgO`BV`frYEe68YH)p#EFi@H^XnE2zd)c$*^MeDZ?ZtjDtOU!kbKBo;pzAEsA@ z+-asqj%m>+x;F2+v@Q&5VbNlJMQ0BP?;z~uXLhj!@teusD^-4HtGdhR2uY(w$J1F= z&MRUW_b)OKS6w#R@?M7M)XqkfS`>)r2pC-Z1qcJwMwLCcf-8+*zjVh;^c0}P?{Yw5 zJM6hn>xa9<~UY#%Q}|Y`LF&={Y63n3QFhCWtQWR$wh4 zPtbz^9zR9`Se)X7hW21)Il#4~s$xy#+MoQwcE^!p1E8KSB#R6{-Je9LunbUH&vRwE z`aD3n&pxKyUf+YBFPcsowbYlF3ZOhVnk#_aajcA2I=y*&MryzC$fCVdG|cL#V6-8#|14r!UVCuh9H91{ZtCzxz9NP~HzTm+;l{O~ltc9N z)-RQ5_B;d>5_&{4i6StL`v<^s@a27<#+E?y%bRnp1`CXB15IQOiNa<8a%xmxN5}%w zuiSoP7!orWI||@f7hor9sCoqgZQVI4jH(Y@T;xekY0y%21TAd5kwa z26Q^eYp5S_34f%?T1O0Ks1TvHVSaV~^dr1t7e`s>Uihi2^0xUce9Y5P*@h zZeZhX_7fI8q_z*Moh*0WhR2>rhg9xT&JC&Ue%(87yO6nI@|L?bdLKSmK^yDvjsKbj>W&zXT10Rw661)nzx=!n`0Ew$f0=b)Ft9j^c1|t* zidFqSt7A@Lzy9*xKkEBCW&ESQzu>$6vA)0aApcn3UxdJaoaf&Kfd7Zsr#thp`d{A# z@J}54vy=MA692KpKd*|vZTWw&#DA>s&rpn==)o6#5 zIJ#@Pawj;!62k^Xz}q|3onHuo9a)(tXM0q0Er2>A_`{;1-dwZFW-I*qve>nzeQr&k z2Pn=0ivn1AsLEEy*UYUHpYV81^j+k%v1HkMIfxIZC^r8(syuGv#J=6cz>Y=7z|t7p zYV%XI6;LFx*Vcxj6Zz>^DeqX(EPk8WKI?pe)sjaggr+oQ)QzyNpz_iyhr`;nCtU|U z_1Q7)68ZWopHxF_4%Sw4Hqh$AHKNQ 
zDWd{C;PU}Xn58cZ>-WMM@giXdJ~qw5Ykd%&W?^ULV9byMt5ne>atb})Ax>8HS4%i; zCXeDyfb+%wQf$vMWNL;B8xsN1^gbYfn+Rt_bLXh0(y1V^WKeu^&fybcbkwQ5SHN27IO0v7kcA|fD z1&(TWW?Ll2DY84*nMXMU0Zxn<#}4IK8Px>DJ{_)DH_8Ur9i^nYN;7p%=;}4)IF_Vw z#M{@4An%vy^QY;oZ35pQxoyzMRrWu-qQ7jh&ZtwpRY}Zn5Shr$`Dm#Ag~?OGLy^O9 z`M@HurD;#gsn()ps7uNkt+Mfrd`%9!YtW1fQDyY*8cv9m!4J>iA%QN_Dj@`LK^rE& zW`2`-PXU@VsU@4fz-lj0EEE1c^3K^K$)xYYr~Ru2HX-@PJH?-fo_@L{GZdu z*q*^@3T# zYB|WjQDvLo_pahvff96!JH3KCismd2NE5|NiX<^z;jgR^*KJOaJRg0F> z1)O_cw&H$2Cs8tEZ09n)AXgksENGxq+ucsLhVgZaN-Ikp=dQN4pMZWJykmYtvD?e#BY z>9s-X^yby*f#ldce#|vp% z4(MuH7CKxyFu})&!R)*&MzV7g^(9T=Q6{br`3ke{Y=v2M3=cGGzT{bmH?TBYX3l8; zem>6F5VsUruq~^K#vQGIhvoQ7D!8sqm0N8lW?9`HKrDWuJ#10ogNS|7;A|N0KWa!|8`ReZ#Tldo2R;{6sJikqX(77Hx)*aW}RozgLfIG2&vpURO zCWnifHFsa>+-r=OM>Q|Olw`CVKehW*akFjFGiQS>Sq9R#1NsEzLhzDyTE%pU5^0ut zlWOlm2e1g)#54Ge{oHSa(c!tQW}ea7;X=od31RrQ^z~i`zXT#`E^-YLaBrI&vPlkC z!La+bMcLW<`j@cBLuoy}CAUM_H#nm9A{M*n*LLJe$U*7$IiQT#ZdbB%Yx2iQ$?|U` zuD%^e7IHiK^xx44|F!1fRBrCiAgek8mkU^+t+i)oNBOfMeNjv=cAy53xC%IVkt6RV zQ;@#~bBnR8=r#Y?qSG!5)Mw{bg0AVA5*cz&uohcOZYKhqxv5%ac2nY>l z(C^vO4(i|#lK~5@?_s@!zE>|pfjA_0p~5liT&0=uBDRELXqA&FO4?dO45C%p9Ssq0 zw_=^ytMmK8W8_m&F;qBCzuoGGfo%ze(z=4*s+;f9NG%-U%v#wRRH}EenE-=L&fltP zzCkFbCK_0sMj$Roc&)FsUXDw5AMc%XrEFaIgqCrmCH+txf6qsQHufgGGNQ2hLh>7P zn`3})1iLMGxxrorslKx;WJ?$);fFXZ%PBox7BUiBM!d28yj2v$5CYa$byVx0zY$FN-E44?!Yp{#~Dp*jd-cMOb_M zH)PJb)_>;}?(8IwS7~lZ^d&mHx3Wxj9^`2iEevq55S2ydZSboO!`AmrL52jR3J95w zUNY3LbM2`uuTH>hog6CKHf0r7NC0w7yNL;+-2d@dM=OJa&4s@8RBH*rxQS_@w#x zAfDy`ll)!dO3ZnT1=_4Amu0NI1g;t76}47%n4~ zyNR3kG|d&KCxaY^dF^D=@E(1_lI8n-J=hjL7!0}ZHC}u_U;hV{8d5e#g(P*$RMt5$ z>=7u__=C*-sm?`Da5SDHiUf*E=2ERy$M1e@M!&^KHKo?B`1@p#br0r9?>yFluSS@% zk7STT+2f$NpWF@(jJv8Pw&H7@WP7bFL5DDjpAZsx@dlsebDMrjnJ+o>d0&*qjHLl{ zg=ZV>QnVpYfopusL5uwD4!F7R3>C;x`rEz$>MyI_W=kHn%+x1ZE9zE$j6OMxoXFsoK8hRQ*+ zcq&-wF$yW9ujWCTuq_|A;H6>6l6(!~8Ym=8Wog5)G}8lJT^xkQ)yP?5{=r)9m#A7g z7rW~;<ei4@55Yc}&O9M1!WPJ)Eu{lo?Ca;-==s1rlOgn2v}H7e)f#1u)%j z*U4)pl~HDT?oP7owS}xGaQ`l2hV~-p97(_)Td+KU-eYWF)A& z($UXpVedeR?ffOL}JRzNADq97gVz4s6zvbRW$ zC{;?3UIe6u9uTR4M0yDjBnG4g2oOlU@jd6f_kQ2WeuwXWcieIQV2qIRTV>8Q=QE$V z)?5dSZ>wj+7vo2`bqW~pH*JwGosieP7qbJ9k;ACpm#qwMQR<8?z$0bpmlLL|jN*xd zJIpv#b?WPwf*s3`hI_OAgYW%P>9L|+(}_R1Nh_j|OCM(A-am8>cv;LIVGkd#a;YEQ z5C@n$bR@j>ljXOF(H&c#u6^}ab#?8Ttxfk`Qd#)Za!9iKx#^R}`@qTTC!mAM*Fq&>c{Kl#aY%7%11jES-XLrZ^G->1oZ4D zZaC6L8&(^M3kgnb@en0C@Cqoa%=_qsf5q8f;1qLp0Q{(4(WoT%wA45zjLvi-0^6la z`ak=?)3Y<3dQ~}Z<;E0qUE?}7@NBTyrd<(VvC5bk+a)%YZCtBps5qXiX#4b52UqSZ z@youiGox5twP@0H7x;M=fqXviiLqWE)#q5DbxGw+Q^Rr+^vE%pXz z#~G1c%DQ#=^0|EFr|R!4!)JaoidmZ*QsylVS!9-98_2_jQ;O=AzvYE;I`1Lq&Q5}p zqCCf2ZaO7iujyw>dr5K(NT_?7ichwlaEJL3cHTPW=bcJ>ld?2V50=tQKdH)b4&Mc*=#s4 zF?*;v@T`~B6`0IQxFRP;dwky2Wqhj?t31kqMfC9;x}&{+*!N?gy%Iw=+`JRx_j=Vs z_3q7T6}a~uCS+e}~uzc8nZFjquHUqzi0zpJznu+^^f6nVw^ZDexr+BXTD*+cO6i`AZDO(u9O z-SmKQ_L96di2U8fWpC$r9q0L%c2jF~nLQCfEUJC4b1yp+_uz(v2ORVDFV!0dc?X&C zj2Nn|{VdKWoZQ%ms-K1XWVP)~E_WTA1tO7E>Q6!SUJ%?QPU{*(A*^qtto009%_C`B#KkEW{2C;Ko}Z6|T9z@kR`&yc5r7>0&6 zR6Dm{ zY;Fx7s>ra_V*X^1X_Iv1f{uE^Fk0KLhBxkj^0x5&p~))xd9jg1&w)eld9MiHvL?%} z=zZ{aHb}$zJ2H1olcF9KSs-Z|(wy;abE;GGoi@pC^F>UMTgbXUB_ zGs$({M=oyz9zb4i)gmwULxWv@Y7Y@0+V~DRXdx z(Yf8t&FY-LX}5ct0%`9|wZ?b0!5aQsEyBM5f*i>yxn2_308=1^GF3+v2s z(NS0H;S(ZhRs+Q}zmQYH36CIrw;IN*m4{wZ+@z+3Chpk7WKlHJ60S`okn@@L{Nnn- z*SF^y?}vCQt_leDNffK=3NwpHw|{Z4@pD&rQD~R3^)j;3vZe5uH`N;*8LV>gg-zb) z+rN#uNupCNYqmg3QMYr$cGRVYRf)@eN>gh)y32 z9F)c2FjaFp+adfs{mGZmc?}cY5Dn4C>WWN|UB_Bj4eHX12~^y=51d+LvXfcE`#Yl% zfsxtZd<(YXc;Fe>j%gr&7UEy%Xs#Y*o}xHrUj9Z>)By?pI5j5Pm^buT?<|KV!Wr%&m`jq@WgR 
zfW^}C1yFn|X?O`Ir<}F*DSNGIEI|CUA_pb!@nKk5fNf|_Xku6WiOJZtpW^uIoResP(ZPIP_!*5C_~&PV34w*xsd37+rFjpSbZvX&MhiI_zrf~>1&OO`bp!3 zXMa}T@eWbUY0k7hR^LPJ{U$aQnfLhsTg<=-E0^!t+_tZR#o^-nqYTq6xv$Hl>Kxmz z{+z)fZi6%veH&SfdgQnvi%f+hzMUX$+*);* zcRw`W?= z+$~qIKsge<#(?1_kyP^|x;h^{lFUh}Pg6<`;)Qo3)(O=s0m4%_s+3Rplhu^nQjZJ$ zyXF-p;ht!~+OY6*;ex3TI6@nhDvaJWnKh=hos4yjM&y1ze9XbcpBUnZSH#ZS^n~e=8%{NGoYS%hU=!=AohP6e_FNvU^EaBh0Ac{L&)WdYT;MOFQMY z1AXCZPKR{27G0RM8ZTpt)c%fZ?tj#_by4&}@tkBo!KX68o$_!&M&#k*(nb0aRxh_Oe!R!?vwHVY)9~>$|J>HywF)3t?Tn-SIa$3UA3e&?n7%^!G9>^V&eV;W zhxV26n#;Xw42q$&Om%s7nu0VrwM6;F_kQ;Vf>IfLVy^0aZ?@1OPu@Q#tp$tcG6eyi zdM3L8=EmuW^~~%h`np>KiOT&!s%Eh}(1Z{k-`+3tQdI(MD(>8T;f@vBx+QmM zpMTHp>3BKQ-Cy)h_d)d?O<(8|aZ5-4s(iQ3Y;UfZX+_>8LhElYyo7e0Mxn0gLW{QJ z^C@8P-g#8%B?~4VJnQw#tEkh!*EP=MAh`5|`frG`hZ#|BEKpOvhIhVgpOgTnRLg@N z^A;dAcU7D!`j$o%5nfm3G#JeI4}{B4r+UPw-EIuev#Ptf9Y+|WYUhC` zr)gjATkt*14*oJvj?Z1V!4O15^m3EDXX@!=rms;Kd1y7hTBMphDMhyv}w)u zR6*LHpM|a-yG91iSSpM1$HuKECIUt4Ro4d!y6R4Q@lAyrU8HGZtKxog7iZqpFMJTd zJMh#eZ1$+~ToCIr9B>e?CjX(r#KdNx-MZx5P03cK6v~LT)Lv7kK;~I(l-M!VR{^e} zyIUi}E$M(m5w~5!j+AJHJVI-#KVYfSw>AD-T?oR5HDsHkrP?K@<@g+Abr^Yg!~F}d zM#IR9#c2~yJ3k=85$S1kd=YwuJe3zHw12m>BenVD=Eqx&$twp{_+RSB9IEa^JH8MC zQJ8xo0QysOS@?ZX`;bwQ=(yoji`MIqRg*HeL$hN_Nh3mFTMYPUmD1{d715tFBo*ti zBN&m@?1^>0hYyKIM@C6a6$i}~GFNST4gku9Dp9%8D5@+ffjcxMmDG>%kUCT}=2vDR z3YQx9FJPt;uWmU;Uh%<2YtD33uLXUMFTGWg{dUky$qx#)H{xG$CWA6tTW;xvIwpH5 zUs;+-iVVT@kq6~3wq#n<*LW+?>)F|(hCxr_qkQLfXMORik+YqQFufpeUybiKF7TXI z1k)9PF>`*Lt%wub$T&ypW)#vo15LFg6LaTe?k(T0OwU@7Ilf!)kX1$(CP58JKhzpN zyA4j4X>FQgH19wMQ&!_gop$H=yz*6VFi({Nz4>_tP6X1WO{Ylnz3SN@31p+KT@$T) zw+n#RnNkcFe+3LDr+P^Vmh54FwxssUYhRRqbuV@+RsKB!U9s_{pOVwvn?RXD3>qN& zZ+%;0G^tN2zfgy6rPgUFjY0baQ#rYH%a!I48P%?!kk|8j9V@MuIqM*sZc;JrlZec^ zo2`s|K~+sq0e8hybNf395Ma$M{ux*EX`j&D!hsyx@2E5Z--nT(jovlhk zV|;#TG5#&BDSKfhj(=7hiy)5*WR-CFUysaUoyiJD#1rX`25L*WV|B-Xp3J_h@>x2k z3A;faci{(LOQ)-Sb4`2XKdO(z+L@&$je{#oExOv&6&j;ZD2p@W`j%~SG`#?U)+;+y zPf#Olb@d;vs|#gSN+piD6SVBML9AkC{X2gJ4TyMS9cC$*!7BGf20uBq`p)8dgzoMj zzYn^FhyrwMZrPwmV43gsP+r|>avpz#qD)c8rbnNn`omdvKz+XZ-P5PQu(K0gcT>Mw zKqY2hQr1rv@gVxr7la&~U>+Fv*LAJoZ~}P+5VSFHJQDc882IhWR2Hpea$q1V+oR-B zD>;eB!9Q8~&CDwm6Hf5=wTH=wzB_a>eelCVJSPs#Bixn2lWP%xrR;2ADh<6t$K!TB zgwMG@c)atoUPb#}&KYABU9dn;Mtt0w!}IS-HPk|`1e@_95v79t=(Rd)R`C>n5=3bD zU4YPtWE)Z*0^}?>RWOrYqtMf43+7xOf~KA-nB%4D-=`AgOStS|9^osTtu@v^;Nhbq z?Xa;&p)h|c|MiV%ARHbk#VIv}(=!R==Q;iXO*cK+?@_xE7?9pb?vH4S*HwJp_^uD} za5gd11bEh`H~VeO#ePQwJqlS-d9+^V%YlxSvie|o%ei@0IB-~M716AkKn(2|sP#JK zFWNW!U8C4>(yF21%z+sK{}*gW8tdsvR(%Y)-APX-1e6tOp}=Tl2ZinhSCGKnenc5I zG%3(RHO1&3RED{dCfs*3g%b!5h+c#`-d#`tCG+O~Sn}{f<~p5jz0L}SF3!SQ0*$9- zaftp$gB6x&x5b(%QboE4*+f~?tYM*Ww`-weu6+onM9BcL;#TB({Ab$Ny|1L~8!}y( z#gvr*>P%iy!Fe%-S`TAft~W6^d)3zut1w9!17ltoc%3Pk1Bu7dkZpl`HW|w`8XL<| zYpUv6O8Z`x)oPah4rFs1V9?i6fA{cS+wqWxgmcgidC- z#-A&TPM!$p;oWoDf3UJ+TGA2M+i#Tu@tdF0Bs{m2EPx@O1**AMZu~OHnc`pY{+P2> zpd~q6oNrIl^i$L$;wO;5lQUfDk`oXZqOHc*yt@4Tuw;>u;k#^f7%B7F(Rx^t6Dv_B z-4-i#t9^{(HWB3Xr4lW);(ufywPUslZ8KF6tERrQL#~usx&@_vI99`?OlOk{&7ijv z;u-isfaH1t2{A(eFW1ddMLe~aakF3i^u>S<#+B`RD{|w8#G!oroGN#iX(?5D$ph^K zUAmP%q*_nMu1*v&!0z7+pfuX zP(icxOKn0E4k4na9A>|cmmtY&ZI)!f#SJmS?JGWv4;BhEoZnXcUkH$wqguiC<}^Wh zZw3*C4P5=%6r5>9_LXwS3HKq&nEMd#iSgnS1@{A5wNTV!Nm{R`CSA>knzHilUqYnw-ruz@ua2_Z z7JaS}Gp_vV5Re$;He6aayOBt!AGj&9s=fF1sTB?`DLgcSvpzUjF|#%%BmBp@&7h)? 
z`6O{txHQaWA1u0oeLZPtzY9CVBdnj*;vm1$UDYFWw27=F~Ue!E8=Hr~7wd)8*=#>gXsqy^(sM)RUS*RU5JPTnBas z3*d(iG_Ii(wgZe)Wa;)#R86UI;67zpQzL4)Z9%w)WA>y=$LAdiU$a83&KEa(b6Hdm za)f&F1c4R-{0DMe&+pmEG=lFVldqzqI{IxUNJLNPFyP7hS>KSFCCkRS#jun^CY+~JmTYW^f@`Bk#GR>541ehV38;!^aV^(Lo)ihz6 zMK64cwy^xou%}g0)Vo-A7a3W~smooF}z9KY1RlJ%wK& zsAC;FxY`#g9PtH-EKwiXhN#x~2!2;gQX5O4jPe*4?ngiWHrf5di9)IA4t3AJa{4`$P7QBTqvK6d zOoA+rnCM{hGy;)*ttn5CcfPDYSSg=M`brzl1ZZ&hN_@Hg39BUex!e@v+8ia{+DS_= zv)O-w@VX0Xvh|3`=`17dAj93XD9eS<=^gcT&ZZ`zv(W4Jcz4bpYI?R;M|=Ls1Uf{2 z{VC}l-!KkXBx$2kPo9&NF?3?O2MocXr5YbMzXL88*s(6!eH1^xsSFKuiK*Ax3_DWC z)Zo;F@AVZdcj7D_-gJ8el))X)vOSQieyCVK1sXo$WhK)$fCVfz0`J)1OE772lV9T| zx23if&s?fLs-yYcyWbN_wg|FDcQBrvzBMqUyn>aDi0WZ&2;7?3aK&=+x<++@SkPUv zYB2S1RvlQc{I^K`&#J1zKf>h10Dx$I^qD=M@|iSPa~PNH!*HS78W+z%1TQ}<^)lSw zD;(IV`koM$>0bT9(6?24)uJa0n~L{kI0}^10#rDkd1+9V&=Zc)?Z?k z*XC-^EG?Cjg9>>FIHDJo=&;VpL@#f?nWKNa%~BA=(^Tee#fb`-N6U)KtYDfQrC(QRn=1#VIV=Bq4>1wM zc~mUzu&KmF7FZe$HB@(Z<-IG?@zp8pz2rs2jHgZlJU)k7M9|+!^ikum;gss!5_ZoR zBUI6y#1s{s>WSeuE^?g27fC0&1R*We9tq}KqP=zmdqi$QJshk4y3qo70nVE#LP*o$0pf-wp=040ht2UtZFhdzcZ1LQSr=Q+eWwb+ zRr7+MN+hUrw81xv`>s|*vF_}ZiSmQS)_(@b&*`4-XiOYZ7rhXDhMchi*?LnpQ4m~q z$G#25DYMW43wBXHHZT?tds1o3kY$VcGB%8M=fY0-Nc*%zZH)QxlvE1v z)UVy}vh~3ote;#Rrnb!_Gu8uYVqFi*U^7=M+fGXCLiBdN+IYlA=>o&2*TkO#(8
E??r)<`V;3Df@G43MV6r44}C>XHFB%Wm5? z6X+5zHt`1McwdX%THT$vKMI-h8F|#Q{H=+0(K3mv5T;4HvfCq!MD<5`#{A0oS$ocZ zMj$i^&+q!>L7;+N3u4jO!7W9Irgl3yElcF*%Ra>pWu-^m1?K73T zUT{25BgE+C;(}fz^rac`YVt2_$1<)?3#PkB1k>=i_2V2c&8F(drx2OR$4DQJ^%-tR z@_YO0qO1Q0w{n>itYB@MO*mdTp6+ z7}Vl#48Pdr?5FYgL5iK&tooGzTTjm&=5ZbU^_sz_lGFT4e<<<=9l3#&apFrL<+QCO zi(PxAIbd18Ouf@#!x-~W^!-3FS?vxH?0EAq+y4O(Up8I_&mQi4Bpo9?2xY!WZ$-~O zJ>3)1e|%(Q2?s=k^VJbMeKdZ6SWDS7)8=klI|m!^)2vLfmn!S|*-`gx54WAFi->NZfQy`8A z5SLmpoyG{oycGAo$+7tI&2OsmB0A4){X4VO{hcH#>fdK3oG9a2jO*srw>*Z@Da0cW z5SO%)WszV{vm4DpR8&BenU|?2GlSvg5~HMbdZP>m&rbQUrfaEsMXA3tgsHs5`Ah$a zWjP|2n4xUby!UTZrRfx^k`c9Rr8<9u z!%`@@7~DB46mt>V3!Jl#1~%Y3Hr58X9S!b+Hzw&XpEFicJ)YLKlzk`8`g$+y1w^#a z%hlhIBvxrZQixnDjxLMW?yJgcJI8_DT zSgWH}^StyG-7%Acnl>~@IXz5RRVz@DU0}#3LAnFQBgt6@7RzhXY2x1@urOIcT^k>W zkTwNufgChDwaYN}SfvP8LY#m){FUFYRp3<^jS3f5|MF@%j!z8B8bApyEeq8eEAt(cbs*!^6Qy4@7oMZWL2>_WU>?)Fz}Ow zw%eV5-ARZM8U(~wdFT|I9GOt+ueMMMxDt@-c@U;~#Ku^mC=is z@FFAFFQ^`u#jn4n{1H&EJ{pQdo6buJC~~;qNV(}+4+0=L*eccAv`#(cLfqV!EICHR zB}=UvIGc4C;r6a@e?6s4!E5nzCcTMyk;E7;L&Kst#d84R$D|$x<$TyAP8#-k6RnUGwOs_=;hPheT`*%v#J!gO<*iA#%5t(aBz4nWJ2G3;eBy%+Hblo z>vmTzO^@J+?;=t>Hg=Vh>J`nM^me~Kn6C3+0E9q;Df=SApbT^IVxX>uejVn%NS7fw z^F33@%T1u4nC~0A)g3|KjVK0MVM?D*!^Y%>}Mz4K$C^@!ys^BCc6lktD|3YfKaQ`9KX#qflNmwn_KsXP< zX)dbAK_JdCrf>3x7qu}7H5AC}Tx4K|;ZNF{LEcH)9*$oQd?0|cJ9zC1>{M^rxEdSfCxmRfFIe3amqm~zK<33Z& zOXF9S-1Tvd{s!-664;!k7xvH^ZWTPWh%x^HaSoqbqm-7Mj#JfKT z>0X*uiW9)oRc$w4kT9{}rzZlg%%aG8GY4k3u^Yz@VE-ju%RP`zg|j>-6LwtV$n<)Ba`ag$j}`Q2P** z&|p)Om>w~4Fm=F%&uslhZR$_vJN~o92!38x0dsMYf`6lg{C~9I>=Y5b9_F18|L--( zBIwL+v#Kh&?YuSqw<;8|_JibbozGRgw8UHR>{%_PL|A+c&n*XR2 zv$_%B{gWd4r)oK$WK|I=A2Kxi$<3z!WG*A>`B@?vpIx2*-wWvHHp6_mz)}zMJb3M! ze^(|RKWcB6H&1l@B**qo>yo2*ge4Mfkjtfi_WvheOq;Q^z&xmuwSnTFPy4ACnr>Vy zk;I1&>;BtXP{-1O`VA}2pWLHoP5|9gM`XJP(pOZq?F^Ise6|M8yx+E)Gx@A?1K!nAqhcC+TdI55CN-%?1q zFg_#v9qxr9dlYBWa$G;{#&2Di&->{r?`3#!`yu9mWsz*pRks)a>Rz%%IpC5&3kvY)Xq`UC-l5l)wSSK0GtxQb3%f-%zTS;7ZzR`dvDITcLz~)e4EAzE zYfX(5XFa#87`Ui{pDhy1**&Tz-rH(tt0($ui5Xno4_0s`m@Uxx7;r?}m)24ciM6u{r@us( zk8tnlvR6I)Bb)8~1Ua#JPg*pA%%iOuhW;Il=U7(P!-xvALu`X{n-j<53glL+MpNWQwch;=k!pU+ zronDp=G@HOB{!eenCMPjvLz zYnh!{j@yYiOW0pZam)9I|0XJ(T;S*wZ1yaf-;1}K=Q(2DVzU1FZvuw!0pZU!V_m|2%FUG~YH` z91lmB!_-9CJ$2eVXSW-36fuGBjd#HpGGaRp0#?V1c7>Gk(I>M?%G~T$uALuW$@uTU z?8iG1RlnbmR#1D_JM5{i(!{6!bP z#OHf4i22)r-HE_i%E`Rp+&4D2PdQrG1X`Lk2hxkzdbHVE8%-P3udG{?RSh!KE*bQ9 zY&_JKdP_~}GgnTH+ak$;t)|!)YcR94`H%pIt}H&+woOwi9uC9K@A=vQtLrK)8r3|$uU$j8q$?B_u>Q^;S}%uQQ)7R> zLY#F3m%nwRYXHO4=%W5gTACv|ZVQ))S(v-uvCc9vL`8fryBQM@&$s$*;=ugesUKrB-~Tr5&vs_rU$#HJ0D?CN_6+%!r5q-0 zHQEwfBi3amA}_F4!LQtR-n_jVc;=pAQ_m~Xo$G|jtxd!X#VzF+s~M!M$RW0!dv{Ji zk2R)i4FI)k#}-Y>&EBf1H^)q%`zPntmdk)_e`98HvHMR+!|ca#ygg5S1V?9^bZ+{} zD2D@mQ|j-qwGGC60#dA2E;j&<5%5l(FVVG*>rD$`=^57QW$X=l`~I0LJumG(sC37n zw(wg8U;JtbTS6eBoSYWh7PhY3;Ag4KJ#R3%%mM$qc}<{)PsL!SoQy%be27MNPy+kf z)p)au-7oFpRCHAxXHE_)q|DGE8Z@lmK*3*Iq^&Q+1ol^fL5_BR+COOE^28j+#tx`k zT3WiyHl5Tm9LSVw^@64S%|(3pKXQa666Ej#>KkY);Pq3j6Sppah+SW~*-mnIYx}q; zNbG#aI>EHw6B1-Y9nDO38qQDft!KX!PYXD4$Y~4Pbv^8_!ux1{S@qUCj9+Sf8pbxY z-QK`AIgMZP5StO53e5ey^pbUT$V!q=P_C*CTXj7BUFJr>$Wxy<5tE)b;0afp*O;AB z_*@%(t(PtC{PLlFTo~j+qLhXyV4Vbs>1t9hiF76 z^;ld;x{U8@uVPQ#YI;TZerOO@zy6~y7qYzY;thcB9CdC+-r#)H-zqZXSXNt6SUSi8OM#N~U8e1fYnDEC_zBRnsPWYgJS6xLH!?gn0f1m%nKqbBx zzuRC=^B)UiD;jsX2pK?v#&s5B|CBhz%T}A%%IH5^JfW8}7>QLr#oeXLetb8wk9K!X zZU?J$Z-t#w4`kbkxR>qUVKRsiZ>qC@j0bzEvGwl%(PaBw7X2;e@({}&%KnQnMHI5Q zdSwI0qinmy-njfnM+970G^vD)E13PL*~;TTI`XBK#qSpOefSq&`J0a^w4`kU$J{*@%zj@BlhH9@f2;&rxx7CU45UtQB2VUfBSmuoow>y4l8bli^# zO60Et{F{U1I_>;}tsk*BW)E2YMMokMSrY^t&aMAx20v0S?*@MTJAt%2ANLOk 
zh#xQX_?x6xcSC!=++16V=NwP?4#}T&k$vIvNMWIJ8-s9I6cyA4eb(glHPWRL5NLpb z>>WQmE)vTP4IC<1ewpF9K2pm9TLmIPlNI|@c8IKzJ**eDC_#{YXKsWa<(D;m7ruo; z4J%H}CZ$}r%ob-^?$N6*%^rw*JtBUOLuc$x8Hk98;LrCavX60ukF(ZV7rKp*qCT7z zgv+>mUs&T7BOcnS2-yFUtKp6Oa48b&g~V_8qsoJBjtD=SS(tn--E?UZ>?I?lFqS(%SglL4`^;QiCkrT9>UghGtJ{_=}4JH~|GxtK4 zY24w7a|h!SMe7J`p<(M_mJdB1kuPQy{P0~iUtJ4*$4Dv2Poj+xb53f+3COIoZ7#~* zep9=y#H}4BrgWE`^0Ht&Yl%tkYG>3ETzUY%6&&Y#w8R1&VO=q`n-NhX5Hje2@T^uU zt-iO7)JD$CzHH!2IodyX@c#L}#Na^s2=?0vZvnQmFJLkM0#1{se_KM9KAG{v9(7?t z$hzG81A2)M2!+VGlyR@g*FEVbg-|98Q;39;Ghz^bjvEU^ME`XVUn&PwH|DFBwrtCF zwiaHQ4p`Nhf93Il?UJ%H*Yw|-+<>RJ&ywJNwh12aFJgJniXqDU%BzQ4~GTgWXKTwX}au#O2E*q-Y*o7&rGV zBt9uK;c-!}LT3fdW_Y&JE6$Fv_B0&sY*k$iH8C-%DcdVW+AvynQk2_Dm53A}&k6&w z@>Z{eUd)=t>bRmOd7(=$_QR4-*zMh7+(au`MxVSipg~XO>PGI?tB>;}sL(EZe-fDPDlR}Z$I%~BNXZj>W=;%RBkO!F#B|zYA}e$d2I2}^ zHwWFVaU9fVQX&+FD`lI-E%!(C9tj51KP>?y0K1cU;PR2R)U3dnw%|IqBmJ`d5vvZ- zzijh_?GG%hQgPJrTK}nS{|)l>;-gt&{_}0iJIXspx~&5SPcDO(=)Y@ugX`8(txc#g zsX$4O=D`;$YqpeC98lJBtEX)EYZUNW)^`8-0wSC_;(u@sT5^Hv2IH2r;Ada^l`jP@ho73Xq0n6FWyfW6%|q z&_TW7HtL$GIT;SziO#)$@a0yEEm;Igem}kKyp$LvW2U%cav3Y{5nD6GHE(-2|adf%A z%KwU4jU4qNpjOy6z`Qv723fwOyq!Ka~JIoukD?*}U3?Xu}0T_r&#!=ER z2uWPmnzd$Lh7B#rt+*~E!ypTObsi?GPq^ z$%bce4na5b;AFL!AN+&@h(rKJtUtWU@d>oohbeHYY%ZBlOEkTFjF?Jj-v-eM(>qy2 z>!;lAuoy70(Y%tUG&hi`{P3>ifVv*~6%_GonptE;2Lj)g-21uNyFP&#qaGa+d zoTrIrk>ZKuU)^D9%5(m1YAo~6ZD09lS_>MsDFMG`RbYP79oE0)iZ!H@O?=@NCg#d7 zC}ke6jut!&>lU*-%hu`FX0$(1z0o4w131{a$}!1#jduVTu#%Lk+cI^tyB$?+zq~!& z;A(H=^_W$VsV*7uCy?cfSfgpYZz>BcvqKQ}eOnTD;C{nz1eW5}#?Ew5#2DXAmpeGc zD$w^6Lt1A%p>}O@*}?%V7R20K`Az%hG`|24A#i1N+*tk=TTh{M`gJ2DE@*)OZK&po zOK=no|9)#ZL=yhM{xb9;&uiv<6i5#-7iglkye~PCAbzZlN#(YdvWmK(0E4c)9agxn zQZKNNB(MzBc{IaV&W0K9hcOgGo4S|YS|n(oznVr_Ic+}S7 zkoPNjR{cOn<&wXlEB?^*rV0=Xdv98$oJY>5yKH1`99k7A&^NE;D>$!3|>7J10Ri1js&MTzDCJD5g(y=TC}$x5)P+RhNCv6IL6 zj%5%owQZlU(Y`hL;KTsgBmuBkyz}O~M%lyymo$i4wlu!jO7$q>k64zIuu`BZ%l?Ds zQ=~7Xrk?BBYbAxhv7(lqpI9-G6!&KAj98hXcKD#D8@4JD1%0M$d+FLk{X$~_ z#OpF)^Y3l3RkVa=R!QLo`g8i;eHI=Xz6qh!7Qb7>WMo*8S5otzluevX$4MNFI_Mxf`K+$*l352D3>_FvFM32y2W#tA4 zc{cF65t0exWp?f4#`C4Lmm>&mLA}`Fo=wO2<+`|43m`PPQ zZ2&W2wRPdvET>3=b)MyUQ;hZg%v*|dEQl+~)F&5`=+oN9T2;Lbq6AqJOSqDIf3A@= zalHush}&H@Rm69bL9h?qMOi;!{p%!}$#i~@qNVBwhF%(Am0@ztat)+sc(%1R_1rpT25K?jDmf{-=P4t1dC9_5{^|HjMC|?q^?8Cal5EAR+QFxrKr93V0 z=J~+-ZQJS4c%fJmcz4TD(o%t| zcB*TuZxdx@!q+OynDc8HDM}5pRT)0<(bc{!WSdVc)oOEw&&||hMM@Mh`n6SlC>2GV zPI>}p(-|jevo3wSx;!tuA5IvXDC*r|Z02@dt*)3zmCw5$z{oTZ+_JOM@pOWeFuvN4 zm5%Mka?;z#R?QPD8njJ`Iu|~681XbK+oqt@dLW;m{wDg)9;rh&j-2Aqy|YaCmUf_F z!)zVFTyLs%Hne~7cYIASEBcIDQp&Hp9Jq7x&4yQ(yi~}M=O%hBp*!VX;RckM^T>zx z*O5d9=kU~0SW&NcIL#m!pU<4^FoAN2;R=*th7^wtk zUD{|KA5WLF3!P?xc&*DjEF*n4hIlxcf|p&rZq-toIZ$bvWt~w0FTQSGZExB+zefpJ z0Ovj5b8WqsA^*u_Nl`(h=bR}a4=!dp>?b|we%q3*Bx@tvTSEL?qkM=E?vZTpPTbWA z{!xQFt{HJe_9QuRnsd> z$#Rn|Qcj+%;w6u~5VJ~ziI;FNvCh2nAVZxr>jkRA4Xky}2AY^)s-B#x!7P?`Y(0Si z&mlg=p~FP)EITi}k1GG7-^_Mr&|E|0#`O>)W( zwoqICQ?jV6qVYEf6?{IwcQl;&iF4#H5cNsuRIK6s8mWja@Hc&BVlc{0XruC+lQHvY z)$dL~TtI4f3Xg1Peqb<3Y6|MAfEd2B^F-s@X@|L)><@R$OH>2(*kXk8%iiCA<1A!S zzhYwOIB$G;*QAT0&pk+|XET0ddVi&&c`d2%ewLYa!M@dH0hb2vBq!sZLaNZ=jkPsV z2~k4fqy}3%VP!Ak!ekxWEsz~JM|RpOVxPmpw7WM zE$riJ=BT4*K{SX_P{a(vc`Uswe~Jvh-Fe{1mYvJVI#NIDc*{c?JR}i2!Xw*KjO&b5 zP7nWz!j;NES)4+-LC@!%%dlpmlaBZ~nRLEnGRpdUwVYxfQvm7ET zDNdp&qX9_Ty95>5{ItLRoo+I91DZ3@vVSIG-C;9O@fH0Z466T7x%y8ymCjB5Fj1;- z3Y8w#KLFUu(QFt^jfL(upM`#Zb!P8J9@(bQ&rmH_`?Sk7B-V|V%t;g;a6mG>~kT+`^a?VZy(<)$J;I&ZKR$dOf2In_vun zus3x9X3ui-@+4X*1leN6T|4-`MH7oE8ljvN8fN*x5m>mdj~erZ^;whzADSB;^r(Cz zC3Alo5&n8%DS}h%e&RjGc;+o;rx;b=thG=*mca)L*N>T4G}g1f8gr)S4D_=p!qd`6 
zUuxUuDm&FV?W_|azytc;PIrykTgHSdW|8yYOI4)5uD*G7r#kJ2<^K_dkFB&Uo|9dB zJDN{X8s!cT^Qp3mD`%)85AE2&X%-&Bt@Vjb(ISD96-rB=;8u^;)T>){v=6iihsA&j zblL(g-$At((7hrdh=MXE&bvLaPVT8f-I0;=W5tB5G)U7y{Z1K<*2Xo?I#Gy$(+B;h z2+-1Utg|f(@sd-cTSnH41L;{-2h~fv?;y-iQNv9J>Alc{Rg@9!&Uh-ws z1}h9ChV;0onqyjg$N8Bwx>$uitJS?(C(KUudAJ2g9dK|DCY5lN|Ff9af@J@eA?s8D zqt8&bkj_4X637>fU$su@2|-%g17KULcNx5(`2LcjmYbZNw{F5OqZePk__z$XTQZYf7@H?vQt)kd;FSl&!5%XZvj?gx3@GX(eZ`Mpdzfb1!G zZDfi!92_Qs&ELsELMsV!SD1;=gJ%B$J~4BloC;Rmx4k*)hH^pc-c0#K=t!Y~fl~{; zcISKKUgaRGXk3pbGsB9xZ56RD*KU<^@2y&YpUn2u!zo;TA7I_NmLy3SSgNFDEEV*k zt*jSdC!JbICtawzmouu-pQ8aiH=lj_(>Jga;|lKoHGVM}RSK64`C3D+VqHbA z9UB|s1tGZ&?_oVg+(Lf;Lv^DMW(cKrn-qB|P{8VOa9OLCn}FKyi3{M;7oJHE;j4l|Gt(ZdNo>wO0n(chE^d$A`^Z5ui{Ihm#PP|gFY z^KV4s%M8;LdM`_}-+yPsVeUo?o4+|_Ao8OBHt(xL8}z*?zbj9uqtOG3bcf1iqmYp} zLDd`xyfu*M<%2@Q+!f`5acg2%w5iXP0|P)(@>0w#{E(_SSm6@KeA(1qhN;Qlfa#`o z+JPUq)u%QcA1#OP{Ui$NyxKJhxo;u=H2Nat>xLA^R~`+U%{y1(r%y+0ZteXq;@&c> zt*+e`ZYf2Jl@={l3KT2FU5XYjZpDjRad&rG+}+&?E+w>q;2Io?1%d^F!%3gL&pzLG zo$mL&_OJ6Z*Gg8_nseSZ#<=I4{L{ns7hrCD6oqi{YVFnEWi@~O$$zqW|4;PEfwI*T z3P^F4+f!tI<_B?2&5lAFP3N>Z;87p``@`m01unZS^CQU|TC3CFvR3^en2NOye_eyf zCcpOHkXXR8uIt%qfc;$q}Aj)M}R5RnH;2%%wQ!Ujs1pcpSM|2ft(V}eRkO21rvXL%zx%Yj2u}RI}9kfe*70>_}5F8 z+c4Uet^cUkyD~qJ@h|qn|zF7_ZGPda0F*) z{-BlA4-Wm`g<5-yu({jUSlh$ofSwhl^1$n**u8`m`_z#QNGe~eY7 zPfcYla1L9G4Amq5-f`&)_b;OTUOvM2ek=ar*!D-O=8~zBE$MXfS})QgO$MgmHtr0{ zhaXQimitf^(Dg(LaGKfwYWucfVyL*8=yuj_RNq<6K z9f&v_vgj=F4s<1Jar>=+x=LYxv)gAHy_Z+xMX0TKV5>%JDWU?I6qe!IM;e&StW_Kr7sq7>nKsu{bAN{A;^tPo#jN@JH<2^Y zXrC*xH|6DfKM^j6KG=h(x5Z<7jfv28y5;g9sn&?7$%3-mqyXw->gBiDlgl*A%dvTr zICApg0T>_Md2iuhv+YrSi(K#P?Tj6O`&dqnyZPA2f;X7#g3przi_9X)$opWPow4RJ zScPJG#oiT ze`k;%AzPwVmALt&>F0m=L?{mQw|h`OCv0=OLz}Um51K4dk_x_dBZ&GkDF23s;CpYQ z#aNlzgG5OTybjCBWQ)m09nb`XR`NF| ztqUKs|NA-rzi%M<1;t~>3af2#bwHW%X<&fhN_lsRkgQ}4N9wX7N}z|o-|jJcy~+jw z-N%Fg4l@M{weOr3V-qu#b47Zic+T^eCpgHVPzmmJrJ;r&?KL(uka89kB8dvg>!@(9 zchc7&nR+ZIJ(+s69jG)P$_*Z6$sIcG6aYK*t!G0~(#0n71&Q~(eHD%D?!CgUly=*hD)jiCx55}Gn$Y5ba&dMr0K*< z;PgZLJa?d@{0@}%4h0}?*IXY`dD#^kzHa(MOEN%}uaAc6&TL~8;RS83Jb z{=ohr(>>_Ykx`}pGGFx^PFxG}dCm~U)*SGGPdD(14ZfvNJ6y0N^!s zAX23KsWP{n&@a9(iB0djm#ZD|W1Htr-#B|7BA*?D=jh6mFAy~7=7#h}AH=#C!c!Y$ zwk~#zZ(5cM0$)cHiu8)rQPA6GrQxE$b-OTEy#-0bVEehV-2~ zF59L38~V`1jt@tgo4{Dsm*H9KAh{ zbr3q2kv=92#JF*M-I=nO>a;MoklMs&e@WkKN720)F29o@_9dG)-*x_l?E~M=&DmPt z6Q*0wzPIdn0Q;-n(1kHq*q01S_woy?U~yL)&3a5h zn-vpCw`qJVxxL9?lx+EmC-Ptmuf)kxMTsNx;RAd$TqeKU%abzq)?Ai>%K`SL(-RpT zS|>7u4JKTK1$r(GnzNcsQczQu%#gjWn=>1@*50$dHOtCnauix8wyj<=Y zPT6$Os?lV0Qkz7hPgbE{*x-N}J~i**R?zPN{u(7J{$hAR(CLiA!mq+aW$%t3yR}mD z0)D>I70xl(=4gVg_-g#jw6a>C_0d7C=HMses@wV4f8{10)8pr0tRYFF)Z9v?8Do}9tO(Y>7Ht3KbP0X@O6mM@?;8y%JzSo2H%LF04T-h*%jL%sdi(TRyM-#@S zIFM9{SH4K(^QZA+CmTbN@9oT2TMv#@T7TLW==##C>5XPJCd}H+%XvEOykPk3bJSp+ zKQoIo`+Oq1_MfL`2auGNkLmqp3*v$Sr3d02KAKljKREPe2Gm-A3OLm0c6=zj^c2Hm z(&?=0#@!M3`rms2SjZb;O3sQrwf06(;=}uif<6J5VMG4*tBQi|N5%45H3B0v=y(*x zt6DX#3;1SoFALwn^yG}aIO|BM`eA_?{-3u9HHH)$SwAmKpaWxa=^^jC@i`9Hnc8fJ z#hOcpM`9n<_8_bi0=$>$UC!n)D`x7R!dBY9oBS~PCi5iGu$rPBQ5;Rorq+DKJZ_Qu z8?FbjYz*uYRI5s3w_5+cT6UC2a3*^%ltkx@*0V*B^BfCrOxa_j&#ay<;J)u@%r0Gf z`Q)R6)SwKnOXm$EY(E%N8prDvf550JJkWx4Nf9NU;ynr-IH0H4ax z*?Qu`D^a5QL$usz9HS!4aLYvwucc-^2}|4Fsv#Yf*_O>)e56@1ecILKu@e)|kfrzE zN7xCt-xm=17&h2GwId{3@lO{GMFwKBvezEj_UgzZ7at>#0n2F=7#0y>{ykEJ*L3n@ z#cCVQ%w_j$>=yL4$Nu`=x^J`Eln0}H4ll9_Wa8f`d0Hyhn+_JtivyᵺO&>4QB zzQ#hv;KTM}?Y3oZ+IR^epG$3EAG`24*B3GuxpQ>)#>5OaPTa-xzvc}8<;DoIABALO zcRk_eo>|LgTB#QCyA3(6;kI9zC+LEH_6z=F6TGVfl?TarXouCe8hqVhB7sPYQJDd4 zNr-Mf+DW577o5(sKYz6IoadLs{>mI(bg1ICABejm_K8Y$w;z;#oj#qM*8$YfHWham 
za-ub8zY3^K!~*-cme^|x8Sh9t%==FZ5=gy&UBsxl;HoOi~80A~Dav(ME zvzbDNle1ehp3MsHi+xQvrPW!->Nkkn0b}SAn?bum<2GwPJ&w)ZZ}4$l=7m546^nVh z=9T1(;}l^Nul>qopJEO~MY|vFuw7&lyxO^E3f^cCeuSDM>i_QPqQ>Otb=2v_HW{t7 z3v{BmnpcHZ;Z>z0Q~!$`Y_by6%R2z8pWZJUJ<)FX_g>nQiKl2}VMfydW(LU}?Pw+( zVi4PZ7bgM(yhNiK$}aLj;Ld)-Ezx0#Hhv6hxuDsKMS(RTIX z*<5$93!(=e65o-NF|1WHnNK^kyTG~Mc0nbSuxI#^%wd@uk5VMbic<6hPuTPM8Abnc z+hLfX-|34xXK2&g3(oMjKdFX#sf1kAvuf~Nys9OrvC)fMw-5ZAc53fbhg+raYqNSi zj_xH~E{c281S9iUO2=*)7FOE`-n@k(SHPnC?%KL(!(reWZpvEsQGkVOJyUDXb%f~f zmlXK@9eP8~=rtY${SKSVUfB0AN5q#Y7q6+d)Dg^);fC>ug3m*;C4Pp}I1I(l+H9yj9(@oUAs)B4v7dEjz^R+} zKdw_4(74>6b7;dVxC6I@(Q-8hfd|6lT}ZTu+;{pi!}78!xg<_6N|eKsa`tvG1vzt| zglXtiBcai$UltAFd3Hh9BQF=`#@~$snUz5YN1h;!t}Y7l8IQo7gSdLe{;28>`<1x6 zoY}{)kVLuTD$`r1S^F9g1*f@!)`zB^O1C!Ooj9Kw_ArlzoM}=FdFt6p^Xn1{*wHYF zT}kN4#!qAmOB`Bw9mhRMYbR;0L`V0@#$d6Zuprd-0n%Z(IFLA)=*Wq`g)nD9xUlr6 zg}ZV93z)TPJwL}TDs6LGV5avBu#Xzuy74eTO4s+veu)VCFLjV<3ESZ%VfI{TWO_7E z9?ST48dq&^B=~u+bP+ev3}ZGPyc?nQv+8rHsJmnN^;e2d?D(=Nw zaXWokwNhQ-UOhS)^YgF@ApEHMLXzdJ(buG~h4Lf{zly|i=B1>1CpEu+o1nizK?E6i zJYe#|tqrL<^+HvSbYXW_A{);?*^!YQjZv&LqcihP9~~-w$6ulp^+dY^__hY;8Mnq& zvvNn*8nj00Ho?RlN)$dYr-Hwc;S9y(GKETi+!Qmw?Ju16?tN_@gKOl!|H=qm8nH$UL7b&z%93Mw1=e z(H?j___IFIsy|7GCu23Qxk(Akq+Tj*c17`YUBlBTm)#1r>tUs%POk9Qz#pceOGl!d z&7WHm*&r4nmL9V=rRuLj#F=xdHur04XY#EIrP?mFS=1ceKtU*WxzRON->Q8V`oy8| zO%pkbr}IwHEZ9E}Z*|Om)82K;VH_ia-6H=C(qo!^&x4ANk%U7g5Vp;xYfi{*m!Xa< zIP91-3NRM1%OI#eo||JL8?xyMN5q4{O|VwW$pCB3>Y`|i>$?_9wB-)}++LHZ6gvTh z{Gcnko9XhE6mgDS)7|i0hxZ~asuz%n$Pm#qKU1tiTyC= z?&#Gy>H8(~XIPP`Q8xbPMYUyZD>0Zgz_$+9Jqx1#`h3|+*&-4HGKxt-CKXjArI40e zmH1%La;N=DyZ%|CvD0R>6Z)_Kcg1)-jy3c|@v8hJ&nNx+biKYanUGHm$?*2RNRQv3 zct708K`ynH9#zX>*AsK>7u1#9wQfDwn;?f_N-TgVQ#LGyM5c#D&zSQtzo(hdRn3N# zGlv15m%%$uoW!VBUechNGD2!~ALDo}@+P{&muwvkkDNBKzth^^>qnfvzX~jjwMcZE zWF-5rL2Xkm_YlU|<6+)$K_M>hug^Fo?mrJO3;o3ZU38}jKgFTc>LZ^KYS^)w1V zKDIy{Dl%~hnsOB!Sl5o$0fKuwV&mMZ|+T|cEl@5a-Iic!e?XOtfG9bNO7Qx8Ui}!jK!fsN(b{z zj^`#(#Jn}6`ismyhRQJ~LwjBBjpJ2}h0TI2hjDsL->o-SmLXHA1!JZxH&QX|+wCtl zofnBj`Kt8Q$!PSeXE{gv!(YrUKrmv|x{zBw3=_Zm-0)eHM%3aYJ9$vgos&u#s91SB z0x%nf6yxp-*_vEZI~bETdnHGm0(hrR{_&??XS?i*SpxF+^qg?ut$MT5)04%DhAF?8 z<7F?Lg7eH4Xg$Wh9XC;5kNJ#4xm4nHU6k~*_Ogd53)Vb-?3DQ`cadMYnSO_>+212? 
zt7A%S-c)k=xT=k132s)D!ty~a$hzc|_X*I=td6o*C=>@H?FlIPDyrShvbb|gTH-dW z-TC6(Oa+31+u^hF4dZagFHWo7GR#xAsfxiQBqXp^T${t;W&!gPw{5duDtMyL%M+; zI8T3+L#DTSyj3uAiv9NoP@2aq_C^I#jhW|$C8qD13?}>J^?{u z?*U8h&Bf#NsK3NH@KrUbTEJh&mkgU{<8jG}@X^HM!HX{W@Etal0k0XbC9a!4fG>FLe{g%(5DaS*i9E zW^KLfr#_| zry`4@W-qk2#M<39rJmS+^>oM$)?*EmMqfuDv8(;t)#02OdSB(622#>7f%}8l4cE)v zRW8@t_o& zZgfw93Ap_BfG}`nDv=2PG!Kpmi{h_-cR+Eha`1N_=1uH-)27uhMi$AF(BW63%idP7BlxntgFjfU23if;hs|Rc zw}ZIJ1kPR-MGH(@O@EihgUL%hTmVMogc+C&;T0e#Z2m}*LY&NEEu7tFF>JMnI#Wl} zY)DrIvxS*=23-*cRAGfeNQHeQ~cHIiRufaDBzT`KGu2-BW^y#I2fu;WKqWr82!MiHfjDZykPWE8}O z-=h!4DPnr&M?Op3lV8PC@7XA=x_1q)$l=%8tl)(OI*GRdtoD^~rGRP32ySHT<}4&` zR24~p1|)U2{i@-62*6)!;YE5bcUX>cXRTOn3vUI>`uAV9}dIxkpJVO5~IywBQ@Rlz48QUmTcT2WvM4n>)e(Fe%Re*&c6X> zbTHD6#FO>Ac)4pmTCoaOnx#{&orQxIxjMTAbrN;I+mT&|7+ zWewfkE{_oXA!$9Yi=f12Ebx@iO#|&m@K+u#K%ZFf?$6wt@f=Ytc^G0G1Ml7V``z+D zG2v4Ro-ziz#Y^()Yu6flqpG|_OyrR$zIfP8>`Z+Kj^e1PTyYFo?!cneetWSle)q%L zH>bjYC=5GKP{3U`%x3R0Q1nZeO0Fv1J;O~D*5;n;78FPKYaXa|>x<5(5*%h?+1T5% z&5PA!=kzKl`u#5LJyW7@ezK8wBh~&dkxq}pH~AC%!Ozh0>&2ypW%Gw7>7wNVJGpzn ziMlZw0&3>-!#$(TwCSx)Y#IE^}+u>X$NclgoafIh&{# z-@jhHX`QV1)jQKhmV;-Pj5PG=NX$+jO--*Ws!0|@n)ykfCV3#idcE(CAt=+yFHK^x zYj0Sf!T52l;kKgVT4*f!u%Dq_NK=a!M9YVQFe_H&e)mC+x@Jn5^|v%u4EFeF?X!@= ze8zb$66a(iL8TS#mak;lLQRU+>Gq#m+n+^&$q~BA8f4@2xP5MCYFY0Bu8g2<`8+|# zOe{6{DPeUL9DLP}E}*qW_d+0fLz-T+$C(8i>QRw}=!8wS2>guE3 zC|QvB8cQ%g+6DAF@!j06@!iO8R#*;VPZE<5is*CZ{NnMRc!WQV)~LP7fw4KtYrwvD ztguLonNI@F2`L4T^lS`9%l}G5k7Z38Dp7v*E|(@|F_wIG_MDvE{Cf*CHrE+*pIgk_ zT9X-!&FMgfDqwG($Feb`!whjDv{kk8HnY`~7FC+&xRz^1>#`(i2|t|wCCWFW7}wNt z0L2x6V2qq=m!ACc= z5;BcT9GZ9AIzwjJhnNME{qhSrhpzeQyQ<=U)fx+O<6^lts3ntK)V^bv&ctll8`mFA zUF-UakOY+6n%;46@VD4P#HxykR9ls*!DYB^&FLtVqqCUX?VXbKY_4QEae4w9^4K9F z6XC+7xv(jWXm86yO?oxp8Hns%&0$<-;M=NF(|TT0f^8SqNkZVrD$!WNe%!Ku)ubuD zQXj^zUMkc08Ew>y>*h*b>2M>#Ss01wv8^#-k*d_sY5Ik$~@s?Su{w_tW^mP@Un=5Fm+t5d|oL0apRm%!O)*36QA38d~kFUjjUIC+8)-{ zqdP7Rd-&Yfqi(TXLN!E8=2oI*;wig4kQg_1JvkzDKJZ+;6Um6U-R$* zgcoI7P_>hDutTj=Vz5KKQ*E$A<2;9Z+)V3gje=u^ZZrFoHfdnsVj}QeY2}$PD;t!A zbTv1cw%f1>oN*&H0}&OVCWOotQ{7IWE5eW03yH-sW?UxSXv#`9BR7|jdA2yU^bO6? 
z!>E4U2{8&Dgz5?_7R+;x1ct}&#WMl1RQ8|flfWwn^XFP$JNsy1UHU7TTP;k)&+)}y z2*i^%67v;#9v`c~dd!#UVLe}eIY1~B4wnFxcS&17upWr1*4#5BFGNk0M)G*$aM_#Z zzRhZ)C3@8P3gilTSe>$HzQF_gO&)??H_dllwuz~bmS{cXgRVlMGHnp@v+Ppk+yz0s z6~TRHaSX#$c?=D{AA@PTH78nBCrt|CKb7W;&)OJy2D$GV{6Fsw&WgIvUmgj5{Wv7` zKrZZ85&O_}b74qLc`KJ(X}uXvJ|lu*`k>JF@Z*=7fNuzQ?nT12?mlAKRp`vYo6{92 z1ap|NH@}S2zk${vLH1dLIZ5wAgG)8vF~85{m%hS#Y)u4Ej}?U3n>B1U{YL{ER)#08 z3Xhm8-K4*D?md`4uo#q_Mo(Fck~L#DV8!;gXMZv@J3iQ9R#&}Vx8|nAWG$VfvdRR$ z!W%~rzcb+I(uZjD5M+jDg=yHu)}?ifbjN?R6GTj(KkrZG+4B&@Q_2$1EA>3h^|zaR zwg*fVsU|7R+4%98U%A4K(XhDYmeOGx#wb`tQ%B#-Yw;lXu<>(Z6EXUoGe;{qp8qKQ zJfug%=Z2qvw5%zsxz*a{Gc1+?o+O8U#b4RlYFbEIJEOdO5Y0IEfEY`-%9_70)X;PQ z`ZXd;uOspOPJ8sE;S~=KEo_`~4%Ra@=0Yfus-O{0(e>a6Ao|t!zH8YlVn&@`z}2jq zgj8fo!;?(3L!7GyiK`@yg7;1TVp|nw!a(3 zq~nTl==zYAt$l71`{lM7#=cP4BmlQ|e1?E}o#r@80gbyOc_m*BN17_~SD6 zz%!Yyq^ImRVGNt~wRXrHksp_^_jAF00rbXsc8PKghr+rvq&;JyFZK`H{MXXzKhR(5 z3W7R+-inpn3LEuJ0*(Djh@QS8?pJI@LP0kcChcC88&=RKYD0VdBy8fKIQ%)>uwsXJ z+_TwCMjYaTPEV1^3o!XyqbxQh)o%~2n_NNv4~xl9&BPZD6i4xbn)!1D-B(r?3vZ&f zsUhb!T!nPPb#l7y9Rp-)}r=YmxBw4MIIR?`n7>oApsgxLxz6 zrn8t{NX_}UrU$Xd<$vW`+#hB^Vx&->Hp4VK#e zu`lY~+GX${Ay&rwVLMxPJFKfq9n&I1Wo~<(*ju`A@x&Ku59YHQ{w@*VXp^ zbI4wC-%L+U#f$t<*!76}S@O!s0}c)VR7EVv~Q;g=Q%%h-Ew zj*=WwVxCqwq?CXk!P(W+C9zm?s*Vd=vykr=qME`xx3OmRXYz!50(`EboKm#>M}m1U z+uKfxmEHHoRZHKYwpeawg|_Q^xma(iV|T~>?w>4l){8nK~=YB6Z-_V#apvq&O5K@}@2ak1i-`tjxCU*no!INJi}SqnFzE=Pyj zB|txi{R9>(RsC+!R@fxjl*WQkdZorI_Zm$YbqI~s-Y~Z!lViJlajFe5r*PKUW4pEY z__94?} z;p{tpa8JY+eMUmBtFg?Jg%~PJ?`wCsYP+rH?^Se9EVal64a>h>Gjyvc5( zV%6NNXob~V9Qg@TCB40-OE-;V;@IvIBXJ=0Zg+FnK-6VW!G^cM?IQ{UQ-31%3JdRI zA<^R-_rf;bhXRyX5|#Y##u_p=xM5D(fKZJqKEwZ3|MCy^A*M~(E2x$qJyb$+OF4rk zQ`A{+ZOYtQS9aUH&%0aei!N)aK`c{EMU!9uiFgwX$rl8u(#(dIquj}M>*R18LZN?tz zmPDQ-OJ|gcWWikKjq>`_=B6y}6I;!NriD$fjcDvSJ9}UYjb<~lIl|sbi+B-;;K1Wo zkkN`e>Hu{3#Yy66l>cC~-v{yG@=i95nhg9yHi=d2zU!)VPU!n??QZ;_xoiXa@p5l} zU4Z$MnTmUgmm|4zHhrG%HxZ3a=miQthoOJ~!4UXpM2pwf%b^{#``WIISH$AE2%s3B z-yN`;g9%kJG4h_|d!IC#*>`%`1NFC`LJ2ZZ80TcD%Jjl*R+jq6^;(XW2w8sKt9~Lw zdGJ8ceCn|uAcBsUlq;C7-b)Bn`$&@^V`z37b~H|Hbn35H9S4P8lUldz^xCm12(#1X zhs9E21|{3;j~G!{)dQ*@)vvHw*Fj97T~#46NV6{<$m;nzQGVQs`{Ge_BM#rMM7jyU zvL`>n3VmP$CS>_$eG)34i~BsqUV=oFU90@60*T6}^l!eyHbw0PB2T5_4c#{3r#f2j zPKZNp2Zm}KJcl(`vpf#Ztzc;QnaFH8756QBPSbx(Z4?Sa;sGL{--P6M#|oig^`yfF zamZBq@t6vL!*HT09lpi)sce^5y>|9Oy?$vfX~T8sJV_L~Q$?YWQ{2O34JeVkH+;#) zGWVj+R6kKTb5okN(sSVz+*haewITE7KFQtne(m0uP##A+t&~N!<@Y?=$%dhvwYfw; zndic_EsVfgRX=wi-ZPL`h{2hE-yzf!;i6woNJwNjWWiZI-h@o|U(A&0{&KKcA7}BV zhM#_6LAbGilGuff8`(FIbmr#~;qc$UPeMoY_+%dm!5bHLlW%~!;c=9mb16)+QSELEw>%a+}-uff#)tlxWg^H!(c?2 z{G%hh!i^heBK!Du-yEBN!?pht>^k2(T5nP@Ew#)yXMT^M`d-zaW`Bq=|08Ovj6$=R zCXS}mG0{AM?{v95;mFz483X$1ASz>r?fD5(pQ(R*h{f#_Z^N#q}au-iM*c!r2%{4%<>G;EdeRz%Um%0Ub zN(7Jg>(F%a*-E8QEHsu z)24Vn=3LR?-z<1j%QKC}#oP-qN1&RJO@hPk^N&o5BbOC6Pm?RJ-FAOshV&%FOky0c zC~Qb#?iFSoO(EGE%c)tr=^71FQWw}Y_!MjX;)`r>NHFzh=k*ObadKp4oq86&_PM3% z0|Q2H-(oilq4Q=gYwd6f$~PubAZjJMT=Jmdonf(s0T?Vc?-Rnq-JCsFvAXok z-m1rQzr~}50%;tBuOzNk>7-*4q@JL#;|WxT*7{yAVwn~57}I5iS(L~QKt>y!7nffa z&WPK*+cpzg936=!9&1kZWRkb89LJ`ojvy-Dh#jKH3q6AD5h4;37^gE^Gmx~;;z~LQbaiE zHKQYPPgbZ%b7#f5J%!YIOv%yJ%Q^xNLK`p){S3x+R9-FBuNNr%GU0MQd<{Sx%~ci> zy)d|5lq03w@6t+Sm~!-`Xmt@MBC@b^AW#yZ`La22n2sLIzgI(8A}GY85g%TMAN<(*g4l?stjGRc(3s?(_ueKk#|0g zRF~ddVZS+~XZGWLO?acl@FL=U^GrB`bDh;xdG_$|6c2Vap#u=oPvw^%a&e!v=x>05 zPwTTWCAm>%8TiQz9f6L=aztiMp3^Dh!SangFL%F<&})up z3v|poZdl$hTshlxB-8nIdhm>n&l>#E$MIXxfbsqDtrN+D_K%7o+)Qd6DWdJH0rrW< z!api94xR=nVhn}C2?RsKuA$c^e+~g2UlsS6D z$Vk0H>>sypWh(T;y!F&fej;hl*>d5i(kw+jg_YqJXKyvOn+RnxkY`&On7}h68I6c~ 
zSv<+XDjBlwWs2kh5+{J>Ig&`AjJ%KRcOX!iUg0d+f<6MdSFUa;qXWOkJqxF(+joW${0+i=)^vl(6<^jHPc6Y`=_($>|t6#{o=vcm*gsD zz)|XXO3I9>g{v)|<$6qvNRs~w1b7(eILMnNoZ9fY;WDoxlzqN#S?a_@X7NmSoT41r zt=a}oJVmteVW-sec!PBY zEe2EJ-9$@6_ManY&=9=5&Q{-@6LP**ajcQ%%a;%fq-NX6OYcL3 z@w_E9#@DWCB+HN6W`g&8`t?VedT$%lfaPnX!tT)%319DL1UjlU?d(ATEmZ8mt;>6V zvDF16DCh$5f%V5Su{;ldC&9AFkD-S_1;!afZ1~)5?@~u{!kf2O<2F9)9!7DjlxJMk zvUuG``)V&o`9mhRQJG3|l>>JWg8Zo5s%NR#y-fEYbi84HK>p3=FK*qJkNw5b{KrAT z!3R##PxpALpVMS$`gqrkWzYmQ^-bRbMPoz~Qr^4@4u7^X-hi~^%h*LnQG}vAXN{Z+ zxcQynV-9&_58u+Ftp#F1%5J^!nD)lw`BPlY#tE?chO&eGNt*epc6c`{Vk^petk+5k zIg1&XiDy>RswZ0L02A9(zKy7_5gPXhV8XEc$#9SxfQUO11f#|rNX{p|K*F1dBliYs z9tObRfcX5TI1Sas?5rNUmfdc-{x;6d@ip(nET$9Ggeoc0&f8*1d(%-~A|bdvwqSlg z|0o%clC8j9Pxo$+7`TL@c=c~M>kALwIl8GD*LlHz{H@+qE3_6%rk z>v1xNLv+Y%4KVf6T8K##Jiz)~s@&S}H^i#z>l_7=e>jGC#G08IXdtg2itBP@UAGPF z-3vtYZj9>mhC`k;U8F)0-)yD&QH|^DUEYrh^Zm<|^M5K!Wmzz>h7yJ186P)5(mIBU zutlln5av;wiKcLZz2x_U^%Zc?W0Z1DE0wPTSxU$5Se}^J8xOlp2pz>~;7vw7-rneP z&^yHG-gs2aCs<}e3R z-D}=eHfLRuXud+JP*|12dG{wuCsM{*!&)8P3)Hs1QIMPQ=eVy~4ma8N_}Cfi4IeU~ zHrn0b*K~bS_;dzPDmrQmqM+Vexq!XkE{AML!(@%^M;mkE_;!lr zd~w4Z#SxR@E)D6D=ZGmwEzQ^3ggyHwSg-bCStp~&l`*Pxd+(!p<=z=mT+YvZS4kiK zA~4&Uu=P0`Kaho}3#^kWHPv9?RC@!6ZacqoAF{?K5{e#kvLbejS?Ze7dN*nyV34WZ zWNTwfC;z7%Fbw*7t0P$QHvGOiohoXGjV4kQ`m*ni;!Zk8gcKR(slGUpV zf={I0BT`Klvd(qY?*`QvtWp&*U+@&uatFS@)%;|)uQ6AGz^ktW^!V1+n#I-~TmQ-s zC1mql-^^LOc(D-^`eJDX_|yC;N*vhnoH@g-6!VBfckV!9vT)$Tuf)d`o3xHyZvSv| z3R6!QKz>1&h8DKYn&<~ycD5FW*;+SCZ=@!+6QmWm(++F_u`#vi_*WVuTThQ}qQ-&; z-K!bq+O9B0TKs*a!q7I@;xTAMQSxBLMPOXN{QVu$OabfX2Mc z+?LGc(>CTxzwASfo~S(alfu?vlKgkA)&`|9``5udWYm? zIm=Fzi|)}y0?c&8Bnt~XT;2c|i3imV257GvePSKoUjgP~zJYH-JlLpp`p)x-+u( zZW^ssg%7MFhI?FD-_AwR(RSiPVEuB%>7R=tFy0FWb-DK*EP~Expm##8`{$ydQ{{11P)P03+{Su-bY2QsE>O298~+V+#pqVMF~86Dcf!>`#@OW z+4S7Sf9PJa-DBg+RpGBXs&YQ6V+kamOG7LgTKi@-Hc9YCx=y70y;)ht(xzXh86;h= zeKO(ZxiO*~y0B{RZLJ)Fee))d>J+EyTq5#5tDKw2loYq@*b-IoX-$hapN^t)!06KK zNd8}xFvs6t0p>zagkXRmBRtjY=8O3X3nI4!FZ(dlUy)itM5gRC%8<5BaFxnJ@eV+r zw+;p*fOe;Qm7JLWPM`7joOeq5Fy*C7n_U{dq5Amo7SC=VPOu!g*S{@y1Lk?nQ9dWJ;u9S!kQfIsM@16BH@;TbeIAiE*&WFx=?=-;EX#Qi^E|h}5z@3# zZDZaPNx*tDemzk!KMuL7DOg(?#jHU7-sRkJK#-3cLc`UAnADn9qMEF}XAd6tr#DEX z`}x%g$)X(^Bkyx#zi=6@t@fv!%qE71+P$?#P}{Yo2Ep^c(_ zkeU&99}s+j+G}lhOFF;kK68%RF?3>hrh%SODrAi@Rw@c&9yH&e+1(EQgr;A^M%hkR zJQjXW-(;J4?mZFs36xNvEZXHElv7oL&WoP+#2+Ys_+}F8Oq>x&OLbUVO%f}SHu$zZ z%|D!`uVg<3%^cYNJUtVKoKMkXDlBH@&-0Rhh`iUQk6n~LjDD+c1 zJf+53p{lVI(wihwz8ChlV8{9s_CwvfL}avmN$hx)oAcmwrPIdfj4h;M*#4YS=+r+! 
z$iWIqr-5GpPx|cFuMW7*hr9ARF8=LaLQ`)yIU5aArKOlx3U3cH4F!-SWzn97;U9E6 zy6w$9ZgXnQvG*{ss@38PA~*yDxZ$x**%nLRsGlx#YlTM^-(Y`k%I+8?F=T=T;^J(p zE&66F=SEo!B&}t9eQksH7V|t@_bV?DL#Lwc1>H~fRz$d|Yr4FWW=WAY_s5QJJE7SB zKhmx}p6UJn=Mt5qQsl0yN+{QmTSD$iDR+g*{cgF8q|y!Ln!CuQ% zG0e6y3^V(^o$u*`8G z{{6IX?DLfZvL07$J%j>^WIg&CBUz{BXxI(9Q>cL4xH@Vf_M$^WvwSMhw^3%RMowfNm(N`}}@tq1^Fo$9v zEL$f(yl-`=5|%VncOgRQ?p(fG3gM;h@Us(^aER009<`7ek;-|FM9BwFRT<-ivgM@Q zJc5`ApHp5-D*V&%d(Pb=wuULE!-rCyt2~QMsj(aobB|qEP}$8Za1c0LoI#k{Ra}Mm zWl(aqv%!uOTvK@v^`NEDxp;C_X{O(GKr!#YR-=K}_c&)K=lkEdF!+qL=@3H!eFFo( z;?Rfo8m$BFW!@7Idbw($Q#o?;paoos!H1`9rDC`(9qg9y05~MGoN} ztQhMDwX1a0onQC4-GveEwir!TZgYaJ2B&cw&m0tpmDfw^5ON$SH(q_+>6(XgjQEoO zp+4pD92u#IQIW^2QxCUIrg4wDzEzeH)J;87YOUkATyaPp0kOXiyR$@Eo8LR@WK4#0;QW-FE2ZM~r z{PK}d9Duja&T*W7@vVp2K75ANJ`p=JBPk*@;=qmIcaS|7>U@K$hd1V2p_uzW7rWURdPo9pUus_sbZ zMdTx1QxljE%A_0kOC>j)&fJ+$ zDWW~d7Wqxz^{@Bk?%#R2y3E1hbR#i2dG}?-T4tnxe1R0eYsh*An4ClbI3L>-_oaz# zJ?A@-JV>73dZqpNFTGh09PfDU^WtAdB+UBC?16NX0j^x3^D_wpm+JI?eN=>bC0#R^ zM657TjlO^3e-z!+F91~%)Hkg zFTlj`b?L0OVF{A>pGRyQBus6*JLGi!Sf5d^-gW;`+YS$7Y`P>LU$7QS7Cj=1j&DOX zp6a@Vzu1`%nycU~=Hxc<{>xf#4E2&A01AtDR8O7%-M<5N@u&jxg#>Z^imH7wqv#aW}VZKkrUVTv0dgx!LXz|A&x;09DaryLv zhp*hq0zGk0XXB7qfjz{cq|}JQESAL#aZB{e&;>!)sQ?vx5b1w;Iws#X_U=u5+QDA| zHh-PTtdT8!uP+!LIB;CoVpVwPjZFVzjKCguuJ>%`G5!IHy=A;x{5<9Y%S=_^LD7q# z%9{V@W&Mva`mkpVS-h80>)ZFo4HC~0UeKD5YlfNyI=BHt>6JQ55QuJw)#DbD^~;Ng zUUpir*vHS_bS*;+d}3Q z{rIK?ZorL0!OV!qw^2MTq!3vf6(F>8Wx|w-0(jH!^Xtby$~hr zzJFX+(VbJw{M)U|wdG!3P%-N1S$a0M^cWd@xVL3{G2gpmpT&Tg(LIpfz8XZ5#f19# z?R|Oa<;7*r_+5Xz<^OX))RO_m!zOQSPVgs3^dFbx?B1@i8>sQ(f4Pdkk5hex_x@2v zWW%oiv1|YB*54RNJQUmnZ8ScIweO$!zyA8Sf6~At3cw9aUb_`p|F|yy;>P^*$XT@j zsbDkODLU~#O11y|^)E&ie7j>)cq@CuDX%U6d-Ux0)wQ+ntu4`{ePyHg<(>eyT0~Um zDWj_5by~O8{f+xOx>wg%`dku%n45J@Z4w#E?y^t_DeGqepezVE@mYT$R z|90{H&-328b)OK7{PgVCrz3)J-7^4#erIW?-zq-?9TKGk2CqkcRvEk~ zY;9ptb4ETeWjkU~a>ZSDiDgsnnLF39*b(?>!hTI15TNtk=_A9{P6U)B)UU5AyijeW zX>*-9Gqlf2#^IbvWNej3#N%qb2A{(EH4vo;_Q|sc9Sp8dUY$FUsj1Rc=c&rxy!}^; zaP5Tp)Y3^+#qIwNO<-c(CiRsTJ2}5{<uDT%#g)7qix`t6ztb=HHukCQGhcrXQu)1lJwKCd^b4 z6b`j~>ZT^20xH>fpNmNox!Q}%(n6+MgDZVAGL9XEo++A9j~*ppJnHHW2nL*Qz9|aB zon^K=`0GAvJr8aM>Dx~5|FH#3q?tMDtsKfMMTVBV6sqVkd&AmtjE&+_?`_|Hyb4Ao z$p(^}jBoRgZB17?O+3rxQSUp)I#hZsXM5G_Oh3Y~VgW4CKPUTf;8L6KWO--(QzIcv z+gNx~(&F>@LV9y^bM-`2R8#=r^VMV1XLNLE(Q4A~sCRoYmE#=%8GSG6)Smm7>f#a8q1yJ#(BgK}UiE_>(rCpjgpnjIl(DtW+9_XpB1A)aHkPjf+lL(k*xp+Qo zgtIV@`$Ore5A`QbB`>-Tyxzsrd@+NqyTVs2&~IEN_U$S2pvBX2uOn@`0*$;n7~TiZzumCtyA_Y!~6aP*m%euX74IWga=;N7(?{_U{8P(kCp+=97`%>dt za&#B!_3IPdgEj}!lT7lpgeZye@w@ql!g1~n4i42@&+nKz1D)$KTHr3YF%_Wqjib*C zu(?BZapUCWO>?IN>phQI!i8@22>zzlRL5LLrc$h%1%Ev>KdyiYiFMt2CG&Cq!E4!%8DNxn+?s=Iny7)=toBhslK)mg2Pfw5U;?v~hV~jNfK(5krOd2UQSfGCH z_^q=(d2Xy)DS&5>WBV{JQQ|J}W-he|-HtYlxzbrO5OHkSNF-o+1${U+woLAs;cXWR zT1}iHs8c{2fqQ07E7B+7xn;BUa@kzF_UH^%9UEGx_I3J-wg9|aA<31QFh*75B(u+s;1GatA z&J)<5ef!VHpG>WJ`@$)a&h&}0(1n;)`5Om-{FkhZdBX9_?P5ocXlLi-G%cF!`_bzy zzQq>eqF%brDB_fdflk>Yi}h009 zUrn3osU6Z##Tvl^5)A^U0?X$*F!x$G0Nu>;3=m`C5QLC2>sYo<;GO9SbZ^ z^MM59O>*HvhZO51=En}2o{Q0H+l<L0a4RB<#Kz4m9nyDIu|W4=Afrez+dg!DhVL(H7&5+ms)t%T zO3Hgq>aDjWnuv6rF2plws{i!I85^)`1k+>L>Yt3&qX_&34Mi+XGdp?+U+UCKSFPJV^iYvRU|~^;Hhe zR`^CX>Fs63*Ngk-SLd8bAsTyO`>wxhGH%W1sGoO!RY{FOU@I2Q`Rue=Uh#&0i)s&D z;as`)O-dEVjVu;UzZv7FR17PY5D9T8YY&~|q^Jl5wCQff{#;hv_UKOrS+Ub z(7E%tJ*nzL2)dTagD?-pDPXkW!OL&*)5#dufTzjA{YzW$b!W79IQREB1r~gyJ`Ov_ z6S@*bgALGy1FfKNa-n4X`FmVKZc~d0%BKlaEk^K|L{P0}iO5kWs*>gc^>^Xgcb9=> z@lug_%<+?(byPe2@#F17Z|o!EUb^P5us|-04Qi{sdGiKR*R$?+f3-OXbgQ@}ianyJ zIMjZn& zT{8vF7xLZI!TQ%Lm!vxH%!SC-&ktWy4Ide;lN&_pWImcC7~@*@wikvI6>tG?J_J;t 
z$kncj4Sz(DFO{}H+O+`q-Q6o7vG>JdPG$7*V`TL}*ju~jXI8(+s5ZL_aEizofh>#o z?2N5&>KVg~o3o0H4@XKvKXqM@_F3li!OF4t6j5!n)Oct1$sH_blP9Pm!M2F3E`Lt) z{rLLIm(Wg4&Pq)z+10c5pxWBPqmb2g0^|+V;ykr&hkJ94589=AY~0mnGp{Rq%7>NO z^($?UorF<~qT$7W2L_*r@3z_jm|7=bf>S`TRV{tER zma0?2&d6?l#dUIRjVM-4IvgZYw-_BfVko;QG?&u#G`c@{M@3_X3ZI8^un_3SWyIFW zH_81z%qr4;B!K^PSbtV3Hqf%WP9#&>I@DkKvl?yTU^O#bt`s)tZtvy`eZ}?CN@bY` zI$9r#qZ(rJ^2ty4H)%h}P-A2_(t|qFDQkN&gD(|)TzwJ)NvHCZyo7|JvwPc0>Q0Rz zEn=vNPwgGCHHY}5r?+`t>H9UEu|*t&H%+sCC-?p2QDxoSEPGz%@Zk@v70n+$=$<`$ zHp9H#uao~vF=&Xf1hn;NU2fzD4<2M|kTx$~sMS&T{K4w8ih0FCt{WE|+c2k@VHM|x zg|KEdek2F^(MlL1u5r~r&btb?;*{#Rd2=}%{0h#jS+HNbSlH|7jnerf+c0sFkhSEr zz>mlBo8KmPIaZP{7503?QGrJu%)U7c9lWnpS!{0 z5u?1cuezs&t(a*7V3k)N4^5peuAbffl62-Pf9o?;R$7?s*DE3g{dV>e{PAi*eJ4%P zVbN~BRhIC*oO1)xH^(M@R*#A45{E0kR3}R*50&FnCS8akBgM(h?f|Y7AR;%e&0Sjy zyotxcv|H7x3mOrsPU%mbjs*8svApeZjCWga^ua6 zI_FR7iq(W~Q%>b8WT_|YX^$tMQLRGl1ckR&<@{GM3t1n12q?`L_K^W)ux2CsJX)_b zU}PIM)UMAt=>wAjhxeLvlw-~Ek)3A@7Z5KR@d33H&Di}#J3Ks6oR5(0^ya4|j7*jn zLDpU-*f{8`@J#4MCtuFc{jtCz~Cg~_X(-ogu~M_NSd zdsmE*r^qT|sbNbId*bptz6KV2z`SIm6Ldt}`1}?fd)A*FOo#72l%l{gt$Ap5Q)_H) z*m;b+BA3A-cY{RnV!m0nd$vK{n|;<5kuP*7?oHW1$#-dYi*h_Dvmre$j=YkBxOH7eLBSVaq;sefDfY^S9!h_^d5Zvaq1hYUsOO~F+j4aRN-Pu%o%emCEsuo|X!GA`#Eq2Ir zrW>sB?_>1IqTVs$tOCAm4v_hcTz)4Pu)(ecTI}2NlZS2M_YRU6{9Oxx3Ds8@NP*yE zy1g&LJ*>F}uLV~m4--RdOgYbV+p!Zh%WL!V@>&5&Elq6}EuTC>8IGR(;jG42-eMD< zbpjrf!L12%6=QDBvjg*J93w?^Xd+t0!lT;RQ#zuBLQuy*fmn1|!& zpIG}wVdQf&Q>?wkhE`-qLVajgLeDEl-(D-07ErYy=J_~anzWMkUb=_%D%y_toMMHt z3Gpb~8dNmFdJB`Sk;Tm*d(8N_h;XyM(m;+ro>Vg9JTTCc9cUUzn$hjhEb@lbL84}} zo1yqm*RuU3o=2O*7dvlYK{h`t8m-p0JgBVmNh}lQT;}MSl?T5OFhNVZWh=3|so@2C zb8^jk4pWAyaHgnOdO)$FnpAtJA*t5A0HOK}$i1o@6U*r8&UY@L^$}&8>Lq0IR9%=e ziM4F%wl_6I(4&%EkXPnQ%`r$UtC%Y-$e-|R6G_PE3C)rmV#jRF8?u_aa3%*aBiU!a zMK;l-vz(~k85bG~Wk$Ns_*p4;6LL>#iP)7U7uzPwT?TcBVeFb4zVTjs5$16{TNgod z^XAPcoG;ql2ZKV>(sC?}vgjxU8*w$I@R<$Yn_}_^@*Ixa4~?XWs~8cE#GE@;F3PKx&1~NWg1Mlhhy^F z-(*u+kVjJG+{c%`e!BCfr>J=aVk46EHIW>;Je1zw;0t#8#R`|b)r2L>FPeCX?fmZ< zSguE?CIt`s`i&YV3igRMRn^{pqPn{!2xwR_2L?`Eq7Iu8p2Dr9#ILM7F=G}rSZF3s zJ)pf`=h$!iS%QW=EK-%zEL!FvpnMO-R_Zu#gm9@tmWt}v-knhL&ly25#bK0e8Wuf1VXM}5xI z%g6XN&9)Q5TrIMTtDH$%@Fz32Df}JBAu+Kw5L&%3E3!Cp(mG=3(L|00X*hqWh?e!k z$L#cA=09JY3+gkZt^q|pL`aO| z-Jk-A*B#%N5?)xMJ5+=FwtwP(@=8laGC$YR=`}i#a{%txE;N)pK&&-dymmX(sG%w` z*NBJsZV#s{mCFrH)WqgPl~D%8d`P-}!4iukiNMAPq6vCAt3j6l$=07R`eqM26%lvw z^|b#fY7rDAy!dQG({5i16E-m{yV+E8)(((pBCAvpM690twO0a^*L39enZy^-e^En!!e$5 zj})5|kyvR!W!t^x<-xiIajc|Qdw*;bXvOoz((nJh~SWn z29N-pfjV+v@fxqE9YV4uOIO7QGZNCh~F54;&kg&fE!Sr%mChT1VY$lB!U#I5vY z3AYM{`|g@$O+0b``rU?ruw7HX_%!b>0N%>yn`ZldR_HQe4S(@su0e?NEc$x+TbeNv zQA&_CMZl+h7>)_zSLB&UDSS56%k`l!@{u3iq@$)x0wbfP*Nhs*R~Km!7=cOFT3*PL zl+73j^AfIjasSYeAF@p@r;P2Qiy5vt{|uqHXRhwesZ*qoqkB=iZB4wgA>uZbA$NhM z$VIU#-v``RSCVct}r4`}CrC4?MCD1o`x+|O_+h3fXj=b7j z(-zot$P0)bhYm2@boHABw~AU_gUV>6}}v4w=%&FV7ap14Y^ z)0Rc5lMDK8Q09Y}lBx30M{&n&^&{96sgO*zj`nXt82q*Zw0zt8r>*zVuc`XXXP(8z z#$xL$ZQ-F2+;?#Ak>r`O4l%*}xa-G8gPkZ4cEyP{MMii)mZ$cXOWRxp3%D4L`Dr29 zkB92L9#fW{#Hg-})P8v3il0bl1hCd+u2LJ%{A3_O*4|M9Kr+oUnU;Po=g7P*er*3bffgxHg%XE_))7yin&h36Rc z!_Hv=PMpQWUZ{H35t20Owe7RmBv=8>z$%HJ$Rwm$p2xlya&$kuf6NT_L?Z@fIIaZG z1nM6|(e~p43X`?Y1|lcFsXFYwQE=>-C5uEyg&W9-w7%slp9DaaDGMjwIZTpcp@v~j zHnVNps@FXUok~>T(MCz}Lg!194b-e?As^wcn==(~hp!E7*!{CAgDL*1brFb>`dBnI zC`T-Z-XG!l4P-ag%*4Lh)L$GmSY99c;_H6t{^rqO{5z?-UGy(kwgs28w4JVp4f=Bm z^Z}e|UfWozb_#g~{iu$Ye&ZuO{EF3qqy@E-2KEwF_2Plcb(uw1xMhh~ygy?m!r82_DF zTUks=Xh1GW@{$)KOs%n9YITV{6M}$}Gz%YH(2f5eM=_ z9}Gf5$_p~AP@|iA^RK+1U$8r8r z7`lpm1j1|6rDLQ#c}Vf?WsVip)=^=laY?AP_m$QrDy5)O=f3LuobXXDRcZ4|ciet4 
zrSQ2``KXQf^vQHZ==j!^Fe#QHBC378cv=ZX%DA$#7CVs2iIb#s9`a0zlt#@z)`^+< z+O(quo@yWM5=UMgw;8I#UYH+Z_nj<)sy=zy*&)rP>|9BILsdH@B6;7~x0 zBS$P;4#&Hh;CRHuE7^9rbT{arhG`%X(x^_1DY?VB^>KKLuRYws-53m4SOH8 z8GCh!_I1grJB+sgC=-g>MyF5Gak{ErX!I8H$w)3|(1}tPmVP@#{ z?7o5w3WeM=8OD=KyM>W=yNa_Wq0~u|+shxSW}OO|wo`btQOVI#t9MeINNM!8< zO-RIV0EnbDyrrre-uk=)BYXQ8h-m$Qge=ueA{9Z=df8WS0gGK*j&5#^i~ES6e7@Li zx6YKb6sIp$qh4pRP&A_O`N9T}bh={g-5k2TdRx*P=^a12MNF(~n-8q+rEmgrYkzdZ`Sp%m)mraT(79Oq9eD>!R0=Z=ad7P$Yg)BmKuFK5lZ2k_=sL$nc zw;kTJ#{X2Cg$TcuokKzMw_9-)S6S}+XOfX}mczo`!+!oCzyn1ST9m#Au7l-e<%@js zoBp%LYAbUbv8`@YN{PM$YS=%|W=!pa?9|(Zx6IJXgKUw+ z&aBt#ps~yA($vU4)jn9Ik8~(ze&$_?d&Rl@QM#b_@^ez%Srs;TM9A^y8BR3yiU6dO zv9@aPC*-SYF>YFZCM*G=u|FwfWf4x|i4&t|0zq6v_A~J){x7U>skQ4DPEl>vurrv0 z9#tzldcXOjx9{XUOAfa2tX}!3F{Kf=;_IZyb?J+s={0Sy`*_6sUA)zvzDQN|>uuSd zVQ>2n7%z^#OrEXy3rQCK^Z-=w9Q&>en&(aw%5p!*NS`3rtz-v&)rbF8RvU_!|F6mU z&#U-k5d1l6cSir@+=P$sIxKse| z2YF&IVLQA8L57(uk+6i0q)UhUU>W_COVIN%XV%|Rc7Gs=Dk|cwakB7=%SEI?X*+8R$u%RMf);K?LSog;&F!&(dJ&Zk313c{6km@8eiPN!>Ig;YW54Te zRNdlezIl^aw8+E=JG&m5Tls;oaY1g|Rv}c=*&6zt$c!iN{RM8%zKX&#MJM~(#DSR2 z$6bLz-&~!zxqR%lZ>KbBA$W26i*uvf#S`wk0i2db_goydf)#^q@SxOXTReHHySrz0 zYzngS(J++De9{H=Thp>uGmug)TXZ7-wCd_lS*4|=9RN6e_e5>osKArr^dLNPacv+F z|ngvi0PZe-9Ffg@$5x_0w9kPKfW^m`LFu6f)VIhfn7PTm`7 ztX_>d8q@mJeTR(e`qgjn_EbiA+)PkR%aOIGy-qQzy!tlb6kgk#83O8s8W|~pfeGA4 zOEgG?0naB+AVe}!oj8-Mi1&wff|BF&$hWev8BO+*T2PoVn#3*Ch6f_1`k*{VGB9YEW6Gjmbs>tKE+uke!@IDyO-tC-RP2iY zIb*_PM2wmY&Xv<>fZSn@w}uFhdJ806J1ilAH{1^L5&ipSU=FxC*pzAf^BI;!<1-=k zm=D#wHpen0*IGyLe%D-PJ-TUV@RP#?6|{x~ehotlXM0XXe&JsMCCw@2rOx`ocBsQ( z5wx1llh;FC8=W!~twB9bmTGf{j*1J!mN9~Jnv3W$)zEB%G}+vfSf0TFn%Qk9+nc%z z8F^gZ?q0$T=dno3V*ZdvX#oRwr2}WTL(d@6eRI$qz(rG_Q)i!2>_`dW*okDHP0Nn(vUjhKGl&dNyG?EcHpnf;HZKt zOSCyr8fZ!XAhcF1B1!Hmw*LKx53L-%B1!8>oU!%m=FZVFU2i&q@v!Qk1}z9BJ9_1| zZ*Ut(NxQDr0Kh|cOybif&nJXpKD$}175qSIH#4@K;`*f6RP6W(&r=g8;K)P@he4%bxAg_Zom)WBnOenj;T<@kd(J={H#8obr zk%DuPFOuU$QqzRiUTNIR_b1Pmt8qGHdXdv1$!r5gp!N-=dxc*J|WjYT}S8)X;tURxn4@*1^%A;P>9@4pXIJbP*Hb+ zRf^weSB;`W0G#9lYuc&=$a}Q!sN%HC`H6e}!w z5yv}Uvdv@0i!hF(&PQpW3*kxo?yOMqRrh-QqN1rP&8~Q`Vad+T52MQ!9p_=KjPV$m z(V}-GJYXnjA2`6wcyKZv#3IzfJ$@M#fs=GDmn%gn6w&)c?dH>y-CMI8G3#15$*f!p zKq`CAk#1Wy$CX}{n;Yj%Mn&b0MABX1bNBGaoaGl%o1 zkkLqg_s{9n0pCiV2gNCLNnzyAikFKh#hVM z=^Ng`8JU&+neZKhl$Xt6l&FX=ANR8$Q_q8($NWHH=cCZ1HT@0J#^*@S`)rS0U(Q0q zB&KOR0s;p+Rp|GvK;jqf8l2g^xf$vv<(rW?&OJ!-UmU>hNYk_VIGhJEgY#8lkC0fB z7`})rQfOGOZh2Xvy5&??R)EBS4q+7+0k;j-*g%;({RZN>Q>1=atj)$K1z9QtS!!5v zn3MRjWl;`#-R43;Nf_pD5>48b5S}v4l%L_N*A8ZttQi>@B`W#P?m1qn@>QglqI!ZH z6IDq{cfr!$*q3Qg1F7x&-yQs{xy=La=j9^`E~s>{2D_YeT=IpjunVllj0kMP*B8r9 zwY8Cf9$C6Ye-s>McQI;?kw5J0Ty?isb%F7c8}0r?&XqgLkODD;F&~H!A_U+K^}-m) zi^r9jzDcAi$TZeBdK3~bR(f!A3Jzo{fLfcVVOk{f{I)xiv(HQ~Rw^+O8P9xeOFZ1; zauHRXVE|RwGEh6zGF_CsAa~weMD}RQ?pal+@)V?6j0+-Mc{b1FeF=XV0Q;*PJ3~s@zjg=UqHQ#v-n)TO!RC zB(|SAb?Qi|IZ$QL5|LbK3*2|E1Pw7vy?LeD9~`{l#hM~(M518di>J%=n^Ep*I)_zY zF>8ofwW0JHOMYdbOkA?CKJ_d)c``)dO4~=nR8+-<5|0_8GS`mG59PKQ;!qs*N#J=+ zmKwQ1YvkJ(%AOaOo2x4UWvv0$^uAP}IN_Rt)kLpOsZ7ugITKaI!1AH@8INw0M4X*x zrj)t~hzjm4!gDN4jgMWMqtmC9`pMq46f*<_m9j6wu97o1ill@4_%tQuxLo?C^#Opg z9A!WD))+RXPsU&92;?P01D(w0Nw9(=g{;OHi-k zi0_xeKf>G{TN>xdRby?<+~}Z80{9=0Etn*W?(?qR(}m+qyQwEK^yKXY;aY~|40L^n zDaKy2=g7%ui&eS7zg13+<{l~?=#^A=FkE1*u~H^?f;}oGjNKvY?r8EF_i@_9`wfkulW< z*e%*7YWR2!mu$h|Y8yo&-TR|J2oPk$?8z&JEwfj#58K#AAf+BPEUC+%_*>dC{PyMr zhC)vd^|Y?0^b4aRe%2~p18t7~oa)v~SvJ_=-5^ha2*_<+W*3PY-YS>tp4#+6FX1@E zY8AJng7bRc(7EZr0%UJBPLqc_OA9nll1UFR=AgPKpsyn zN9Ff->IW3Y*BXLM*b=@+JiCCwCZ=X?hJ3>v%z5a2}jcPcj|M!bhbRQFH^goHGEk;?_ zB365fT%CHW0^ISJZTGP1UC07fEjJWdGe6Rv1f-AZ<$~9pzm<@-M7xK*^&^hxvRT8D 
zH+<0~(|;!8zdL3PuxqHi$LkR{wBt_C#C z`3MphD_)^PQHj6n-lk0Ga!gG6lIKCS%b#o5jTI;p(pjw0W&Hdg=we_X?IYlApRjy#W%$tDtp=^(686r7Y zILxL=3s&t|NNw|P*!ZGu93`gzoUJ_=n4Ocw9S29pcm^n-L_@rAVQm}YNr>#fBlxqdIKC!@?^c5GkEwpk8$R5p{0qMGz_DY;K=V;y z0J+&JzC*|~q~bK_RlKFp`_#|4oj*L~UxbM*o=l_MBBS`o?jNtxP0WGYZkPlI1qEdg z+Fb5N9~)sNwYA=+|MLt#YWNo?>(EXS5h-OTDgO2A*U};hAf1+NpZ(h<`pZcF z{`T%(!22n^{`TkPeRp0IFh(8p&{;m7TjQ;$I*@HwFTJ%HjNW<9#10_{GuJ zAPIPPZ;r}O^(|2B-nw;bnSJvPU1MWj&eK#bC2FQr>sLChjMfVy?DIRZ=cgt4w>_aQ zx(8stG%wiy+!aRjG3$YY2XldTSD8P-Eb~GjOE=O#Q~U0-Z%dzxTzP9eg(9Nw{u+b* zd1kiUIbfm~A2rDK^FxZ(vW`zo7)~vrQWa7~eTA$6GuVI!n(oql%g<7QW81eFg=g3S z|5qYdr~X%$wVHo4$z$rgpFVwhf(x~U>?psG!`4%(L1&>qQLMYTbFAk0W`}@3spFbR z0zG~kNh9uvpM8*RJO!rt(a!O)vDfoPMg0&J))0I|auD!_lq`2vv6!ei zo-Y=C?dJ{mpM!NL9T>MJQv6BLpB7BiFMM^8pnvgVY)hUEBEVh262X>(c}1s{Y{lWTV)ow9JS2dji84x}XE34Nd0hGl_$7%My(`e80u}^cz zGR+I!Vwr8a*3n7){cU7SR#owig0{GbA3o!s-EvfO18Kfr74b7`Q17gI1nS^{0}f61 zIAb$1Jea%u)#Q(25ysX$(-ZC!STl@E<}jQT0V?yyeyGg1UbMq33HdCJS+?s$jHht?@2Jd+m9x5v5fM@KKsoak9uIbfylZFL2B->$`lAfb55iYw0h z=ez6sZO;R4ajr+b4Wd7s96$sg-@CE6bAxX*q5kDQHcvmJ))tIQLg!uS9x&{Wajnm*X!~q>vg-Hm0=^-R@e+;}g!oKD6_)xYp;fB+sZ(90RqxQ~PZ5Fvxo5o4a?Y z`F1G#}KcZykgk(-+w$m!xaQuSu*V=4H)?f<3ojosoMtns&e8tsiQw$>UR4YQ|* z5h@-K(i^6VdTO1yf&R!0?nOIZPM>becpcA!N`1(-PD|R;>_;5--z~%~A(1Y>O3^&& zNT?(NdKc0;2)L8&ox+W7Nw%rTU~22b`t_$S-rhbYaoVKLQ!DN{1W*#fcwQ(GCpxty zj&pcOaiJ3`?EgrsRKK=Ckbb?7__G=EIhr*z?ArgzQI)^6E|}_}YiPL9V=HCMpVI{m z(6Ickue0D>(Zv${TYl906iwSag<_~@gaqAl8;PIWK>=ufKSE@M>N%TC6;!*_@|->p z^m_fWGvkzbv}#$@+6=1!U^l$odKNB#RfW`!P#bar%<8O88t--)@WOMvI>Q(*obQwA zY=an^_(rTQI?PS)+Q>R?c9*n;QPP2=@biX%N)3#Bq;6`H<%Y19uT-txx_PzU5un=R zN9xlc`A3t}p=#3Zu4ug{B6aIsSQ0LQ(Q+p;Gv1gtbW*an6~F=GXw6vRlRItDjCst( zoK|PLgtzyb z9>#4|A6k)u?qq8?V`MLAI0xIaSkn`_x*#Oll4|Op*`wWCv{XG-E;T3lrA2ol7a*em z!66~r=E~;9&Y+*}#_!ittA*%SPg-lxrC*)PtoN+4SJQ7T?w-iC|L2|I(mMy^YqiXZ zEr{h#sPi6e0b-A;B>4FFuA?16eg2UtG(UN6vN9@tH~=54U@MzvN3q=;d4BMJeR>f8h7Mx3Kzs1$$cn;l12 zK&;?Fn@|#*7}!XvtS50TZEc5(<7i*GS7z|IFGW`}lU5gJ!IZtCFCyHYbV@7dG?F$1 zM|dRCr1-y-e28|!cZ~2+h&B27sCLRo`?JM|Cv3mLYflp2Sa~LITiiEy)8C?xphvq} z=62U8(N=j~1ea49}}@$6&y$*1^Fkof)jxpSh6*53Q& zSd}rHBuJHgIir^~j4m zcT!gTKNt7556hR*N6I_nFC6rc)fn4gGfuzZ`h>TYhpgVWrz&m;D7wBo0YrthB$2a0+=InK-C`<$ z(xUdVe9reaO$}7BwjGOPl1Ug+75eeFFdY3x2f_ja`oIkmiEj(g2}_g|%EwI= zy^)*W7OYHPXfm*7D3+l({>~0 zsz@5NKlCxM?J0ogsEnK_r!*QcK+21hda^~9*gtVe>|{t9B01W#b%OiLi{1`7jiG;l zuA}unK!fF++TqIpdVjQ2MdjF)kG`C>w#Xyhb4v;4rn#ukMcDmmMAk3)ZrCFeAJyE1 z4^#oPc{e+|v`qPzkDqg}0o=_oPwViD0r<`{6MJTctK;6k|FT>o&QWmV7QIhQIG)>h zdb)CMprU+w(1?u1od^c>C?6wo#zhd)$%wR`s!9(&u7xaqTeyQ5v$Sw@Dy7f0=w$Bc zlEQa$NtJ)xpYeCq1NfuM24!eJyJ|n#c0g+u4c~wutE8NQZif;ovl}V&CHB`+)|Kxh zCF5pb(3sY}gLNQ8-f+fbG5|V)vegY9afXp*Q+XU@9y|N$8Q+}ZP}WWzKr9B7{@q`5 z?L4QgJv@|^PmJ~;fCj+|X9im;ONNLQ061rH@nU645CGH5@8}I{?e*WAcW3+0Y@q*y z99e=}fvAaK`iXmgG%0@{ug~INX+4t=uN3zX(|HKxn+UNZ9nqe>f#nOk{1OGgWJp=p z;X~+k+4CcbcT(gWbeWizT?dL~hKNZe%WWPo6S=Z=R1rJJiG@pDcBzUm) z@L}AdZ3tb>94>hbCqJ4XVJVwn)LcQ8Y%RB`_8C*-atB7ugR&g&q@}j9H&p^Q@zKzu zyvb5zd_ceEBHYLw%B!oJYLK_G0Z==-UHn6!_1({(PaXQ_g8nf;2w`PrEcN7XmHB;S z-!G_*N87yhQdD$6s^82=cMa&X_9zt;Q@fuE`SR(_iJ0xBsWjWhd?Dz`6jfemuxiM`M z-%kCx4{9xN)q>^zBxw45T(_>Wq zslD0erlw&3jYKnFdKh9IxN4wpOljYIMWTanyh^2K+mFlp{UnE%ao<0Rur7D{<0km- zYA&h^q6ciB!VzdV71J zKU&dsuoSdMzoM)DdkDd+$%|ReaUV%lwy+7I6#_Zzu(){J*4s7&$Mie=jpPgw4UNmZLAO4?pq?Yg@mtk&n` z)N;$!!)LWkn)r^GiP0H3RN;E~*)?lu@A>5QHNl~3-=t);9gK5L__RZn@mMg3DI5SJ zvnCRRv96O?E}myZS(Z8v-^LHR7f7gk^wu zpl!e%;|QWb$Vzyqb#lZPkw5N$+dB&c&k3ab@fQ0&mDi-a+PmFzwJ3QTq}&akt0$-P z1L+xKc~bo4#oUL=A_X8pcQ|QD4d)=Tp(ZCOax)I&_ILf_h~`aYCp9%j&AF-C`Fkx( 
zDl6R#Uko;gvQt;zIEhcjmIt*Sy0n?4T?yq@3>%(rc_)?I?2Mgr8#G82?z?IO`^Kk; zeB$na{<-N^ULKw+^8`5?NX1$7(HQIBuw(py#(K_3Cr0qdLC@+@_Z)FII|Q4J zv7nl^5pm#yyRAh+>(i%@E0+P&6`BO>L!5$bsL_T17p6mCa1=15=NGuSY6xz*VL0FR zH=F0xD(T(x9V6zTm0{!KeBkl>ke^{fPWXqd?rdtGG5&PHJ*E^vP3BqJcl=869Hq4J zCDw=e>!XR9+5U+=d^aZ#tJ?P^17-lm+RUazoVuytj~m5gJ9EO4V(z}xV+oRfRb4}u z9tfw``CL-NR65nmmtTlTLqGhz==%M}1olQy{V#FAoDE?c*;|*@5{}Otpcr@QSR8)k zW*cu|6B_1$B=o(i!_~x2q`MQ%>x8A^EdjDxZJN$`D|2Xs8?X~9_ zej7q0ugz0$d>pDZH+^Z_N%iDF^!9v7vQ1LrgXe6Bsj$u+J$SIm;ps;6VNhY%%UM-? zK}FRL<7HjtZCZ9CF+lF+-}TIMB(B&_*Y@OO^RoH+{@Aoytp!B)YL{8{?Ow|8fTUW* z6*c>OrX?wZV5_&CVONS{-!BTh@NQy@cJy@;Y@GDkwzO}-Q@1UVacq%*PN?yB!91_( z1!33l1-(`?u`IfXFBV;BiD8|=xZ0wx#1r&l68@C=djh}3N&ZXMut^WpBHga`eCt}G zR_D@bzPB-yI>}!W8$Ya+*RB5)7*iAeDlpcU@su_at(Bnh=AEkM15MW$Kdi)m-xmMs zmO_Cw^fhJg;VPt?X-oX9bH`9J-CnC>x62>j@ZY!o+bhRoJ4a~zZ9aGA{-rlU2i{`(KS%I$ZYdBjk!eDP;1^dJ8!Wre~(b@6f=z4gV0 z3!`N#EX99g92coi)-b|ln@;@G)v5KncFTS}k7m}-N#o%6y@yNRtG3a7`(qk*@H_tu z(Es&Y>1$-+w&RTlwE2EKtv^frpI+ne0jGf{Jmo%c)BUeQ9rO+V-{JqSzW5qEue=KW z8_4|YO7nYsZUC?tPBWt8QDq|EZvjD0z2-~CkfR)GzxXAeBU#6BUG9yH=8ijSu0C`? zlZ++^;Q0(6)SLYNF3f;WDm*eti&8s~F`~|nfl#AoxRCwY?QKB2 zLRoED#)`YW25K=oxuzMr$1eltRnsM?T0M|}2p8fE*zhAKQe=>-Kz})WJo+XrwlKPX zVv*nm*t4GQZ=)D!ey*QDw0&Q(cS!2X1-8F`gwDzs@4ddvEV3i%lEv<|R*}#<^Bin` zKx4`SXiQ;Q35}qszcv<-+!$uZ5EFq+IQdx0_KYEgdS*wY2)x>g`YhlnAVxjCp3`;m z#~n%bx~3|=k)XRazHl#=FoaJk)j(0`Ko2NXYW;YC+7{*B2^Y`5f0i{4`OVE)CZ`1j zO{j86a(MF#+Vncc+|$lm=x-@<#ta-Iw-}~DJ0WW3R`W*nlmvt8%NA%GEtv0WH#p0J zCGNRttKFuUv@EqF(&N4QmUAVa+&dsydZ!462)|vCm-_p!!l4ElaX+E{B7VhYLRq)Q zR;L)ox!66|vjyyi?txmJ>D;A6(tbk`042ceFia;XQ=Hxej8=Vmy615ksN=QIkMid- ze9Xa1c>O3V)u7k!Pb-(x7LO=rNL?ID)at+YSHu&nVtNh`85vR{n*K@?`e7Zag#k$i zu_qLzb$j9hE-!}Q$jx^J%;s8J(Z_5V;txml`28vUQw=)f^AEapd|zyB<9{&B6%K=i2~_`t${A<_F-Y^5bhHg-M;qg}%JRWuYA~b@{rwsQI#hNXmm z&RNEWK>8L+XP?K59$uA~@1re23DJMMV}@yePwbQ43XU&pj4Q>agUNPaqe01~4xJ1Y zoIX6=Q&8SsxQX;=(;J+=0F%Xh!1iZHbv#@%%0uIeW=JAZiu~!HTt@H7$zf{3dM=J& z1=8E`mLk=Z?o7zz$t!nvUgqu(Q~b51aP;0<hG{9B&{O(<|_0u>O+)c+L8&-S|$}k51boxwj#zeNPxp<4#*y5@~f=6TP#feLP9;u9`R^EaXue& ze@=KGV7Ap*2>Lra&!R7D!p&#LNpt*A%W1|U`#ow1N{eUFBV?qn1`eI9wRmM=zB;Zr zgV?4OS6s_qpgo$IuE$&~Jd_mBt!Z`H08VW>-{FE{Km+F(>M-z8t4#do&} zW7Ndz|2pors`u(Ok=;(&k=5Zs!_IA6@^JRE#VyQqo9^=VI<)yM>$6A~RNrUay3uz1 zV+7HS&8JuoT#~zrd-S%LJ@%5eQCD65Y1jq21g3|QaT3b2-gtkd=lf>c-ebJ>EU0r_ zk`j7DY$HPkvw6D+=cp5~Ij3!tCB)db_71jj*k*2zYIK7fJ-^+ep!oMMU%DsD(`IBa zqMi*<70X8sFbC{z4GO1K;lv zGmMdQl)ghxWmM}2@vhd)OpC^Jpkk0@XHS`#8m+y)xb&n(B9X-K8%+Bkcf`vC2Md`4 z_KV+>RfX)@kmVEsZFrwnDaP_OPw#l zmI(B8lCK|IRXI0GH5_z};??PgD)P5wVy|7RDE1w8GxFRB78iUP-t^mIO|iVnieuDE zs(A?BO){rUR2RhWd6n%A!eXr@-Dl<8aO0g7hI=oodEZg}%OTnL(PR4o=a~T?Rk}2cP4%a+qP0Z3S z_Lvo8Z%l+R;2jq7iCe|IGeq98ma|p1!XF4daK81A4U4wbJ@qsKt-|G_qUX>e{k5C> zluJx>3k2S{Bl?v6X!!V50z-9O?idP&VEK?RFYbW)7P3*NI{U`k6-G++n=n8cT!Yta zCcwOggeDC$!IVUH_13cvmr57-^FRrDl+e0pFSe^@u5_SO-(h=$sOoYg94p~8809sx zuU~4(+0^bLgD51H?BRx@@TB7~I=E}~k;(dN{?iyEZ$uktc@0-=0CxfT&>QwAEVn?G z)7(+h9uo_%iM)ZVrY<9}-1-gIpd|p?7Ze_|w{TI$7j@T9Ca^dp_n5baqcf_*S{@Lq zO*?O4?pPx`*!TLWSmB80$`IS}$^1&6t*f!C{kC9qZhff7Wv7R?rYVOv2oV^2Kq~pb zcfy_LZzN-{ZcxPKc@1x#iP+YsVPR*w)YTRCxG5XM)R;8B^Vny_%+`0z*By?k_B=UR z_44FdUHR`OX$dZ|*RrjcXU$%@at)w7O0Pe2;x)xo=F)3ZrkA=p^Mx8YJ84IpGT=YU z0G)VLy!$F9voRQL_$%n&nj4a0C#XIR18c%YPhA3SYq1<<2h6~$_}J0DAavalvd1b< zm2*5bHgO|u@EyEn)>&l!;4()T072_poO<>$a*Q8s8pH#Q=E_bsRBKe5LW1iciaeGL*w^NdKXd z(>SqS69S6@@%xo+?hMvQFBLEa%;;aVox;k_K*awSVLS)w zLXm!$IIsPNUki?DNh`vF)gHnBu&4 z@y&air5WQ1jx#ecjWQ}i&ah>>p{j2c%MN^CewOq>j{Uv5>Vh#}vd`A)1;(m;ne{j; z?*SB~!po)~3IsRCv@Md{!||k8%k72uWwZ2%BV#O=m+U{NUKz1?gU}~_50)glDhVdG 
z?nM~JUXMuhg>ds$i?-3(FMGB#&x)kU?CvO^(PLwC4+Uz!y}cVEnfH_RLC-hq5OJo9G3SIF26+TOYoBeaSCvUAKO&H| zk-?JhoXmKBWjBEFq2yj0M9DGRe6l~qMqj{DH;~$EvugS-Qb%BABk3@r$*Zo2vfAdd z#AVHdpe4U|)Sm6$c+IZdvabbg*&ekfkmBp(07DF%(Zf?B+ELuIDuYI-}^d)--_wtt17u};f3rUVT2 z&;t=ECecLb=Y+|GcwmdL%vrG@Cw91O=UV>bUd$HAhfdRR^plglSb?WOFfUEGmgWNq z@q5$^iBjwF%~zt&=+7Mt^EJQoHJ>i^!xa{Gg{z6wQUYO?^ z@Q9{C+)#*DYxg&IM!S5*Xc-xH{w`TB?R4C---gbz4?C)_a%W!$;$HVU*xsO6fLDJ% zITpQG2a?ngJKZAJ*u2@eF|jP8eBiqHjGx8k(6U)lEp*tAx_!uNcLU;Ho1Y&qNR%DE zS1++&pz2tj-#KN|TTngB+53)KW0-F`NzAq{(fXtZglh^F{VMPrvIQ0O;yesNrv(p- zoHjQjEfJHI98m~6tE1uE3og4mEDp$kt;MZJ;I<(6W$SFzUAEJOpVe zK@~u{&TpjlyRcEOq@XD=N8S9x&CR${;%gh87)2gQqlNxsOG{|w!uE4BBK;O=C?SDM zRpyv+_HRUrh9Ng>8xP9#+MJu-8PkxF7=y7I_)z;^26HdOwxfckR5?*}aB#M?dep^d zq3Rs0!}=Dq;1G7Rfi3n%iXozux6;yp|2^-~wHE>TXwX1AkLRWmim2kjI_};Uy%zph z1H<{jsVz=^=?-LR=n6;#mOQM567+n<(06xtgDQ>i*)9Me%$Q=GxsE``BN*@d33R5b$3!ism`1Z%g@ z<4n_o^~j=k`0vs&c;}kP3A?F0rWuh%{7Q;9a&;kl*#WgVa@7+ zx@tVDH7JoNFGcS5HSteRhcqC0>*oWRBQmS4`jfpE-r9?i6bfwI>Hv+Rhb!4zW<5fC zu+lAF7d3{rJ4sV&ABE;TqNhCqZjfx2|L%AT<8+3%f2&uf%#!~S>>=BQ+Yj=~bK{>Z zS0$wK9zDoR#Ro|1WN$cp`$;k}6aN9+etU2h>vi35x3rg$?aRey6$1$-iie?;*Os@r+c2~}iv3}xR~eXR zj$x8rXkTlXOno15XBE4E?Wfc{eO;Trq0e~H zCuJ%p!GVEDip}i{GVXK+u+dhI`MjzwPH?o-6G5K-39v^~R_iVbe9& zibe*T_rUdqxfH9AU)8O?dtUPQk-zyBuRGKc?49#+^@jQkC@zF~GsKBs zbJ`_{AH_+nmLk(B2{EU3^FI9aSopoW2E?nri7#IeJP@S)q-P9p6aMP0avi{C2BQZW z$$6iD>7qZO{I1wcQ+B7 zmL#~i6i-J?fU{jy^0^C>&-j3YQUK70+6VgG74q;{iE)r3C%8My9XluPS%6=Y21yb< zG*U49bHIp@PMu?ilbh-T?92@SMm}!6?m?h%5PZ6Z<0GScd>|0c}fj>2dX4C-p+T>2Q54YiU zb}u1Btpr4Z;A+(`0HF?`fc!@NQ$!=@3XxBNRKZA89P0=hBP;W*f>HBJo5|6Wbv(kd z2uwN$O*&@8J`?jqwZ@RL?=xq43vD;M-KneAWX@VARYQM{zT6VCWBkn>mqH*(7g?96 zv~b>X^kS=Y-@9YjUJ~iY4VBnm^Wol?1Ee26z;Sy(Z`qqN5U@YVO~*sWeuXJX37HL?J>4GhG&r^%td zVo!hlX~{=I{oe5TP<>iCB;+v42^?b$Cps1cv5J#jw+^O{ZNjDX^xV}Y9wB9WS465T_zxp1WAyF&Dd|+^)p)-_w@`b7Pl1eAa}M{_1<;3s zutG0ZFv>cA6S(}u(u@{E2eC0V9k6`X?AsI4ob%S-d%lP z%}dM_wn6$FDyy916D!5GQS4uHquAVi-VsZxz?XHoz`q&Nn!olrc~!&+tuudBkVk+9Qbs z`XenP+h7MZO!C;gKUi*6+@vLeX>pkT%BCG??#64`WjiT6>KHX>`z7^TcOGV=sIy7K zfig$fZtNm(9QeFmy`;sJB?>8UlOb3P+njUP7l3UEt5Ca-Vm|lTD6eelJJj%M8~VhH z3={&0|ND`?HIXr;R8G+$MadIWQ zeXKe?G4pyiWUsfFH#)n=qUju0ql6&0vSd8JHqR0?5jkjjX(^_$anYiX=LF%G>V5a5 zC+=oOWK5tUdz=G7`kwpDe5UHqY7N>#l+Z)%LU*=L0!BzgyrU-B%)4Nvq5Ge%+547N zpGZACirz%kajl?U_l%8c?_5+AZrA}xvKg3PoMmGd@u*ryVyg>uzDfu`#kCec0?ee* z3SgZm7rl#zR|t{xE1GYzZ+2SFJADi8_Lfv{;?R=p3Zp%37W(2EP_?1G17f%MU7fF4s2et`(eJ>TjjG~R{KI5sy zhbu2`)5yan7e$x(7OrqjuQ0)yc>Sm$$^}padPG`fY)NuDqrM!d|RG8H8eP6pW~5 zbFsQHnPj@*y}ix#sbNO=!^A|W%31IICBS`JEdhQZFqT9)c#1fh@ zM5^D%EeY=tjM^V^ao)e89c$zEo9Wa&seSa+b*`L!-OS|0O*fpT#0(=>50csgTRfer z2zNpcz9UJP$6y5M2qV*d7FV%DtYcbnf_Anc_me{tD`%@5@hu2;5nhTtn@T8VH1wX7 zhpk>!N2!=Svg_v50SvX;8}B0&Mv##n*{8th4-&7KKU4I*rZ3fKT)Q*X=dxb0J1U~d zR5Q%mEz3JGJLE>f?X&B@SpG$(?)Pe4t&j@P)cNxt+Ta+%t2jZUZwiN?JzZ7H&*qconlfgu4}_;2l!d0urYU zYt5NbrsQxJu|#{MbEe10`3vHv&9qB=%WjK#NV-i;!+3!)Do$98jSqotffw{KW(^km z>qK7jCmeFtZx+Jw*Ki0ea~jGyiKnRItl|vtQzPW zniCktBl9tdX3^hvP>0^fMA@HguV{j+;?Sh3IX?Y$mYifZvAY7qN}s_3r{XF>JD`Ps zTLWy0=2}>M9q~l$TvcxxZ~DQNih|0D%VZ6IPYv26u44d|=_`TNe(`QQPHgX8JuuUY z!?E%QDc1KS04 zLm&Owy>%oYHP}>?z?c9q4{O&xzi$;XE@Tf-m67K-v(%asbf0#&@Sd^acG15zG&EhR zTMB28+<#o1K=j{!r+`G6r`BXZAYfi2+u*l@_qu{3{kg->%JO0(=csduG!J3^_fQ)J zkn>+(lfAEue28^ax@A_Qk3$^!qP3KjPCRX?RbJOF1};cXygv|%wb0bSi1jr(r0@eeyzr6BrtYc&2hO5LrjUyrFX!s3^ z-4mNG8}eV1*Sq6`Ikr*B-FVw}q(ryhVc;?i!b&q`^|i;7ecbvmHBb4pb@6l10Zd3n z2Pind!wcWV3^lhlyDrTLuC+n%N{&b&(S^pkwm`=Hy_cdjPp3G$ZsI&}{jU4o5a~2( zlN!IvI%EZxqTiravdi#(nzD81+sgw&A9;8iD~^?RR1MAVWp&*2I#(~Iix>QWdWT2I 
zI*?iP=XiA&#D$B~q4I_>*sRJqa@HBjhhtGHtl$Egq9SZJZENh^Is8Gn}m@jD6`O0XW+y=@wYH`N7q zWl{u=J@+4u;n^MY`j{>dTzIvfZ)I$9t-cka{CXX-J6p3rp`J7bZMAhfQMc4!5!VPWfs>!*G0#Y z!K-~ujjIC#Z&PELD9(au%}{fzn*9Yu<@yb8VS_7idG{4PJYKgZB;wBo`t2MoQ7Ue< zG2Sy%(b}R37~%0>nww&5m9(q3O}@G6w`0uwQ=vk}B#)fxoN!xiK;rg~+p_QdMwMK< zN$xMvo6gCGHU+{}TrjfPySQgOsKw6!EGNpd4#(#fR_2)-R4f}L=>qN<1t4X_9P%hISQtLTTg^9W;MI_a7B!OfOy6Fpu2hNM zU}J(r^Q%aF$wk9eJ@L2+-OHfC5PqU+k6I+xjzwN6_ab*9Jp96QV(=1L-umR;In0fF zw2$==);tU&Xd}@Qspej}blz?-t?NHT~Y19S|f><1>XnE%6 z?K67@3jH3zXS#L|>M;APSV`=>V~2my4MD+a$rfi>;3!Nv+Ol^0_U76Z$3_{NI7&T$ zvz#+co1w0nPmxP>VXOGFT%Ir~fjo~&d zYRx-HWd=ONxwKWFKS(&qDY^GSBP3K95AcTOmrkLT*s&y5@l&sc8?!CVH-RcYe<1ZJ zG%(^$>L=t*N`Kz7^~R6p-uURewb@p}h?@rZsA;NpNvr_DkX7nwxa*zL;tM^iwiWxc z$aLU=()ftWHC#Kh^%mtKARqvjGvVkF<3WCkqj$!wM-)$=zRvR28#2m;3lWnQb*lhw zab~^=u&2BtIW7!jT1ZBlx+!~@y2_&O`2(IC>1(F}jCCe4h@_S0B-gV)aA~9xahj3w zQotR*Wj%o&Vn&-jDhHp3kSwSeen67eI`auyTyHNNFZ3$uWAo^n9|!mU6C{qDS?Sm` zsBM$AP6D=qaAmK$lD=AmXsOM%ExJmV5fjhI095IPy)rr~jrnVKHv^Ch>*_Ay4+dUw zC3cR{rS2^TR=7J=BrJux=AXR%kTjb6mq1!gAQnMIaI2%(F*RqN*A%FP5^ODX!*{E{Z@cQeo9A);SOBZ`T$4>X} z-57HlF2lX|aI+x_9}_haEPRKw$4A@Qqi#h3L@7equ(;h}FxutBtNDokmR0$)KEG<^ z7p!tI00;`iK23n`G#w6bnVsiq+C37eB(zyl;`%l)kZz!zXBVQ|j<}`;*XrYO&pf{} z`f=|z=hjjVyoQOcAQVU|pusBj3qRF)2-4P=A3G>G-wM>!%-|8e4~(1(nNtpa4P^Y)v|ycikX7T=u4Fv?Z8%zl+c$X6Lr(og^hMCwq;kg*xvVVvTJJ2`DiyC3l89pdAAm765GzZT6*unY*5?isROD} zuoE|@)Mm7aj6c??@lN!W>nxCwfNV$S>3@>hOMpUNWqvn+e{pxiZkhX|Gd32cFBcDl zC0vy6%Yk~3o7GpYrPOGni9($y{qQN1@uM6kb);)fRXnQOnPn_LqWJUoK1`($e<-Z+ z(_B#R!{+TveP!Z&DKJu2w1x!4J+q3w32B$oss^=Vu}w6GHxDt57;1P?4>PqK^g-8? ze9igM9s4}-LJsZ_-8}1w@j7S>%!e>*Kc{WwEG-LZ3POxIVaxcIGmU1#8tMmi2MaF^ z=EV>cS+9{s){BnrMzp87H;Y3sNsHJ7EsbID7tt4`zs$Frxa zb}P}UdN_CWFx`m9)9X@uI{`@Oqm6m@hd(G2|GBP!4#(-!10~nETwtZ2-yW8GL|&Sn z_}M&r37t=)lXjn!vC(Vj?>=O__8xz85@eG(JT98k$a6awOyzEdB_*-t-iSUJ8~9^` zn^6Hw-A{9oY~GeEZ{IXZUXj{c3jq2Dx6yNwq%|=NgC4W__GAFt^zP~nQEe830tx8{ znIx@zCyFVO5H2k(bvmV=o5G&OcCrj7=A5UXtQ*_V7!4lT-~R}U>ks8Ib+kWbbMQfr5DZ>8hXKe$l^{7L(uB8w?1|Y@%B=2(=QGh ze)|2nA^)Rs1_}`u$LDgN@BVwbCm6PVI5TnI!9gKy7y>O-stf5Q@i0e-a-Hx&C!B`$+qj= zjmLt3K+USq4K)-HDCC<9VYza}Ec2#Op+aiHi8Z?Oy-}Z4#t&eRlf!SkG##iJGy~1a zDB||SR7d(Uo%YU!?|}O^gJO^rY!P+8qXm?xy!r$9@x!~X%zY|x*<8sdRhVr5TPNo) z=M||-)-C$3o*CQ8BACk+aKSa@!bpZ@qaV)W;ZA@c_fB6Ej$TYhPr&9$R;@eE#zD(H zS@ZMdhx%Olu=sxNqqf?=9F$N<7lK;D5|<0PHAo*k?Mrm1p+ zrL~9*6&3SHNXP4a z?0Km*CpiEs3xfgLYqk*EgCI{|>1qnlfR&a&7SnF1io4=Zj)wnQHNb0rxGb|VzgIv$_cX6l;cd<^e@>SIc2>*E5d zi?>y}!@_Tx%@8pCNUh+{#N6U#yV7DpX<~Fb7xBNf%l@;ID?F|x5?xpW_E;Q3a9PkqEKyATM=9XG}x!_m`G_0ZSoUyATzXzaOH>^76ibuIKHG@B zFy!I@fAtu4^2CWh7v%v;gzq%KQ+;tJo&Vs6`f9H9lP5v*7cZG2wzcZNZZ>S1DM}Z& z&sCsJ;IE3@sDcixqyISGSD-{F@kFHT4^7#>d<#F5Q&}3>;h;+7_sYyL+giK?w;h+H zi@nBAv5)CZazRnA>-}hn_wSAEkB43Q+bK|{eNV6nIT-5XyY&A3d)ATa8-fPz%VU?$ zAHuOy;iQ00Ux**Ic#MbS#uKavIE%yO|G5o6{gm>ZJR<7UE+}^(BUC!})SHlZ;eax4 zUUd!G?OBhbvnKR=oxiPaeWVK0s*gTA3^;d&@WTg1@IR9)jhkRt>PG+etmT3C_bNYy zCL`lzQPJjS__4667A_c4`09{8FF%d849$XsHUMAu=0y5iaUOt>r2f+x{E6{_g8Dm< z4K`eD7m*A4^UyzpOS%!J|4S@fB45S$vgC?}0mhw2yoEo~fEgU*`NResK(# z3Ggx3xGiuC84n?YJmyaoVB|96vL&DvpJT0!kz`)U!8`cBPCe@b#couJNfIRF{9o4F z@Bj7VyL6I~ksB!9N7xOkE1g`AwrhGUBB^p+J0vQr@gd`cJwv@z=O~YbUJ->*%4mp* zs(HrZP!tdHaZhXN+}zcO!o*tl8wZ2y|2n|`pv#ykNc$(*pmTY|!XXOpBIfn|Q#Tl~ zRMj>sNNPEo6xh9cv7))h`3G`yQ7Zj=N_gkHfTkc1ku1}2;#&oLvx4FuJu3eCK&Md9 zf_vK0T54|Cpnla}l6E5B2n3AG04vS%f|ljr}4hd*{>oG4kSi z$W>gOB-U02Y_2YWdwt!D((K-WYvhmUoJz(BuS$viL`?eE<#s$Ga7VAW92@YDxiE*? 
z=8=od`@cF;{llUD8eo1rM>7KuCX2kOnWg^z+Wf;utI?Ck6KS}@WdC}{{_}kP(@)Kn zWc#JP6MWtWhw1LiD*tqjfB*d>-N*y`JIv2^|NcEc#*!be9QX@qk&_@& zUr4w|fB2wh@_)W|6-!bj^vY#7KM9dkwdLPpJdGL+4$f&6CphYwc)|KPsxRgb&>w7gGVMUe0CuzHK% zK?Nqq$s}LuY=_axKB`i#%aU4cQ%6;$gU z82$deD%G68dBX2t;w~MS9&;G|;bPr&Xk)0;j`)?NB@MM!@mOEP~?f`$A?Z1l&=!S1IVFBEHw6}0YspOv*}#Ogsg z1T>E6@UdMSgnx#iW;1=t$^qQTazkaMBM51qYKSJ%X5W^qkPopCG@~(bo~R_*7T4E z;DO{Hawl=Pi7k`UBHWI#I+B56A@2H6+|mGiqpymCNo#z6!AiKd;wZO z-?e~Z*zP1uhHSK{w^-eF;wK~K|6cDJ&&iqA?v+FTUI{<^Ui$vohrreZw0o0-aMn*L z`vW}|aQgh^*lM>E|CDf$FOLStaq0sv#mTJa6tS_fMZgk0Temnv^%hcLAjMsupt6S* zWw*};3gXPNGtE&~fH?Vx{Q&(xM#7YYqd?6sA>koC>gTbNXG{l#A(Z27ALzNxW zGUm2Yemg76at$ei!){9dd?)GVA0s{!;Cr=Up67h{S!iVKws?iF`zTQ*yyHe+3{RA!5R2P5_ z(GGRf8wCvxGQzY^WaocfAIu_;(-Ty56<-wh^aG}Z+s;c6GMapOaN8$+Y#B^OzYwt2 zk=j)mJpJayW%A9mo$7ZIXNIZ8JDk-mTWx#=u&Z7`t%v}7$F#<+CczA_i)9w09F?L_jOKBHhdp9oAhX!Vq{Mi>qITC?me-o6=eu`m&vm0K>i23k zGLAwtv-01$Bmgpkvw9732K0${f}FIiKLe7=S;>|UO{dflz|-x1#Qn0dKeYg&xk;+| zsLz>TYJDOhs`sZSn?n^mn)vCbR}S2vXB0X5j@@g%;X6W|HJr_cLa=>)O=pc52BSeN z>r3OIA=hdOw;k%pxavuziO?fHanh~%7`Jdvm3m8V7Gb++D}M8)F1fOZsZg=&0~t$e zZTe-+1pyC`__EW?XRnd{TxXS8;;R{?){TLsL17N9fC17Tb)pJVJ~*4ETJt_av@FJ2 zlaaQUCis+e$`;j9CFU@hfk(E>oarp1X=-8(<*iOUn~M2<0TYBU2Qx;=&L%Y5knQ91 z1Cs-HgUSIJf;{mKC=Icc2CL)Nv4VS_)_giI{KPQ6e;KTUQ!+acJo?><*!6b$?Wa%U z(k-^US&IQh=)eSMz=%Q?-Wtw0q)BM%&-a_#Oq*JvE{0)`u%_AD_z&7i==gwLxX4Sc zC_TgCoU3pNuLXri@Nf0^6iuR67Aw$d?@S=vL4{>^tqC)h;q_U8?Apm>*g z&cD}t|M-I)ZBcSjua=L>sN{q<|8xL*xjy;1;e6duae^cO=KtEL5H<`z0AUiM(W-wA zb7ve%GKH+WNO|d>7s|r4Yfme96EsasUbLuJ@Qo`?o2dPD(w&3q@IK#3ipMhXmd3>3 zOTr3qjJ<~P!u?Y8qKtG-7Gx_N!r_j?NtEI=&*@h#Oo-IF_+k$I_<~w^ho6Vwz0}kf zB)v|uEe;l5@%7qJfR8UM#oJX0R(#Z!;hB0@^!!G6bkgKyxAcwZAtyj7eL4Qj`w(KP6cJ$@@Zj7qy%raFjFUcSD}-oWw=9{pSs5 zX!3!V&_7(jT9=d(ahlWLi!Og{I*}`b*=Ju~M!vgPFuIAMb3~i@?eyO^Gcy}h{%~iErq?=_m{EDq*4l9m}wRRH6N_kuiYddG(LPS7P_*fI8DJAcH5 zbjf_XE>!Z@%tX%@KoNJRq{ZO@rFq~bORePW^8;)u*ILUa7vZ_KX|cmCdXrDiEd|XB zOKpu4UttTMm+`baEhviXfzbj@aerQa)N5y>`HmNOv19}4(2$>$K-AVi63SF3@&R%6 zcW)o`AqcmU(}d`R%i_J8m~e0ChTw|hlKWxUr0Md zb|4GGf9@j{5i4YkFvAL2Eo@!6qRC+PL1x%?w9G(DgqQ!fZw>PJj9}W`uQCN82$8+l z`q4KHIK$dIL{XBv%8z@x@4*K@%vq9ASq#gKRN0~6%6Z|_wnHWN&kNadxM6#nOATZ? 
zKJDK$ta+N2Yk>IjBol;Ex~<1?q!6%OnSH_sAG1<KXqESA$Nv9#V~Z4Pl2LoKaed4H}gZcCi^?x3U^c+2cFQPqyd_n;^L+LNaac&wQo zbsOZw#2;PXiWea~1nt{x`wWtH3*0h#trXj@_{27MF7V!@C(C_9!aE2F4BOjR2VWiF zhfHO(6}4A{#EW6HeAr$-zKdd_^)Q-2ECacmqHd`9d}+Q^nsuLRwRN`deFXPhF~S#_ zBJ$U5h^P!|IFpoqqbC(UZ+eX8MDg}d*%400P_$?h3sZVJ^%!J&ky(PJ& zRjVl*_`)^+rSd^4=)&%}VsvvJWW9h^+XzTNNoSBJNBT8RYJbIDSM?vLW4%B)g1K7bSB{Hlf^| zhgMIeHMCf`%-oY|I^GZv=`ealRII zw-zRU?Z$m=PxV9aqD0$*H2r}aWOyJlZVJsr1&DUvy@zhYCTB?7I%wB?3^sr13R(pH zl6wE#y66XD8$-zW9WwQ0$zJg4R%MPy6KDaopN)hEuY0o)S}#La`?4iXMfQP4%vuYN zJMmcQ=>@N}Ju9`HIWEq|8|oQES09~W|CvI$*AH`|+q|NOndxf1cbksQ#xpJQC6-xt zDUoSU39}+Z_}@USO5M|yUvRvH^8)w zyQjN#Lpb^lZ}o&YdCDBR;ln?@Aw0Gz(gDU0Z>R=&bYfx$zyt)}a)S|$=h;sDawCl0 z+j`ak2g@6AY@1mUMY*&2tm%?*d4s>{W&XW|&u!-> zzN*hlKr-dy`l3^q7#j}qJuqU%0`u*XK3VkV1iR?N>kSRV_m@gI2qrQ~VMEJS7*~E` z#Z;N3^Tt_;5wD?l`>owmM9^zoyvQ#B_^f6rF3PMf8e*V{HH9Of5S;Nf@&Wk8w!Yi6 ze@z0f-v2%RHr-iz|2&qc|Jm{(a@H0NPY_a{U7hfl@FqV)YRQzl;uDy|QVd(r+73!@ zwiUc@a=@;SFNtA=ba-D%?FET_AVRl^>Wd|{8_|hJKKbzl<`GIWTLiJN>EdElhw+E( zPKv_YAI*64J0D)Cy0Jngh@Rp4FrvRy1^WQ$@1_5ZLOi1s16F*UPI6FZyv@ zB8S^G6ZSTd+oFk41}d|Q(zvR_2uXQptJWatB@!o>+3J6XEMy>0Y1AATc7$=!wS ze9dNa%k<E2^?6V$=Y}H!-L{ArKP~)8jL`$-z z(SFsDjvI7)cG^oCq4=iE3yewi54VAgOeBYrEjWSMD zGhdj)Q1!Y#{{vCp(hJj7&(<*CuVmPx2eZdqc7l)XIKM@lSvS1d$j^7f1`a(px8I#; z@y4gi39{_wRf~N)GZ>@aXTZ!z--NFggF5hk-;&KB74T}dwUJdjSf(HCAAT?Qm zx>|@3hZM7N*-dJ6neCW>ws742CsJ{mL=P^topi>bmV5)`@%7eX<~cEu*jB+Dgn4?u z@$|0@;?|pluPy4+^)~SLWTyRXOX+m;r82)=R6=h3`Ndo?60aA2AyfLwh@H>wws22k z$RuEbgvSA$!gTG7=4V`Uj{~Vgi8KhCdiS=iy!-q}n|fObE*o@S3@ji-0OQ#EA@dU! zY9XJJ+hJbkMFM==`yK{f5ywvVG11hE@AxKoJf^|a7{F%gLNO0(4$CUNUwVl#ihfEM zGb^iXZc5$V1g_i@?xU(;;41O~9X4KRnn7acrBxd9M|jPS^I)Z2yya_ex^VO&@dxpk z5$fwg6NK|z?+uBd@NsK5;FnkIpg+-%T`h~xcgydSZL9VnnSMM0b&Nb#)sI#14+{l~ z69}8OaBpJmM|;O2YSe;LR-Uvg@B=Tg_0CO?GrxzqpsdvqZ^E{fF=>f`Ldd1c#FwxT^q(0n~%+U&JWN)%ov z(PS&RM9Zx)gR>N3US){undX}2J}}(6PQ*?|7%Viih~0I;o-BQX@yj^8~73r-*AH+`C-X#4zn-*<5 zop}*DwSKPw5`=^o^SO1hu_pbYj{zyCx(cFkq@GPvfmTEE#YZoklQSvCH&@{kJ zLT&K+%_zzK+_Xq>Fb{2hq72Bo>Yeo`Fe7Jrlcyh~L&Ljvp5la&9VdCJnx1sGZOh<1 zV@qYfWP0VO&|}Mty{HiD&l}?@)U9`vF)^V*YC#=aq60Z1Vd0hay+ZwRgkAP7080$^ zT}N?z^SgppT}r#ro~@JiGRS!*`Cp&a}83{^14?|1kkpY34relIX9Z-7)tYJPK6%w`jtB z>@R%qLpCoMPdFGXccKN)liy_=j>ke#?{7tFmWV3L%u1Dr5%19hs+hvPZu8b5Xw4jh z?f`SC+(#>}Jg;6>{RAkki&fIxgq|$bT5hc~aA=9k(;)NCQ_e8P+rzH5*kD(y-7-RT z?^j34n45)Kun**4qBWZ43ji9A2y1t*56HFLd~qeq-zmE{#eDTKX$u$wwe)I7M#pqi z)H3+jD0m?c@kaBM&Pgxl+Bzc{yX+Iz=So~(up2vSa0p%X2liLu^FO<%xGKsP%BIxl z+sC$>=z}vIHiYl?s`L6hNcDR1k-4eNU>K0inv!*RV*GcU_XR*?N$<|m&>GS?u6}eM(uQQ10|orb$>bHELGsI<@rr+eg<)&yvqebP*GH zRqBZO-2O1vr`R<$CaB#{;|2VsF|PCgY>F%$MM6*+x^Ehm1X@$=C3+hfqk`!MOO5hk zB_qmLokQR_!7kK?R+L&AM^r?`D5_vB>4g905{FDu6`sQzIvaM;Fj3O!Tx-g7M5JnK zS^$-pN}nP&gL?GN<|C;y?sZfYyktySa^7jytNFP{tk+PsOGZdfd-Gg@Rjndhi+V|R zC4nlu-)nsf+WH@m#!THn^r7Ru3Eh8e+%F%vl!c$&6Dbn3$N+HTw9(ZuzBHb`VKvqOW`Av^>AP*@IV0-w37t1O z4{lH>4b6EF32BGRUl_v~g}D?A-vNQRHouh!{pmSViE|EJd2y`V(#X2{O0JVD`&g?t zfVvM!(rA4)+g&mNCADc990FNHGuqLli$>}SxT0~m=6i*l-#8K-+YvBQ&U!v)LfI;* zA1Cw$hp3s^cDw6b)!}dt(Oz1>`C00!$A_3DV~Og3F@|x6KT1#f$&P(zFP7L%+W&Yv z!(e^w)c%WgTS~>a$4)E#xtI!Ae)UcRmb6_mu3i~LBwfn}UTym?NS9PNxsBqeAZ|kW z6p?Arr`TN}W;N4l3tIYu*Q+5JbW_q_aH73--x!DJ8t_*8bC{~2*40v2_l%&nc(12V zA){Na!PXn8AkW`k+fWv)Pz3F1pOhKYCo$1Y4!@sujO&!bR_M+gJ{~ESI*9P*F(WbH zm6r#vmfUL8DI_&#@pfDp^kXsg ze&!~&wQ)b1JTY)v{u@9*{~R6RF-IBOLL?#OCitlk9RC@L@)KX zc|EoRdc9806h4++$?N?pV-ZQxNm09lQhyq>f)aYGa5>1(-H*AaIGHvXh6Bb3Km-v<+>e5S$5l81`e&(-`|ImF%A8f z*pnM(ON7oauJF+;LLOAAK)lN2k>-i^1JKrCKe3@_K8Fx#Uo|1tUQoW<{feDOg%;)d 
ziZ>UMeRRwinIEn`j{}XzyEKNJ@3&`MGe8;YTBW{wlFE4DPp>Nv%oNQ{s5sxeP6D{_+u2g-4&@D5JF3X|zI*+0)$pwc@{u`)XUGe-Jn=5ePAU>P#0 zhps67`klWD!dLhBJDSfVf;l!&VTybA?%kRBN^=uC^LUZJCL0knoX4}XU;Muba&EM( zSod~LwMq2?M@|F#{?Cu+0xmTj&hqzbxMVQFFo{rJx1}i*o?f^kaRb}d_xk?p4+cL0 z+_Ba&RB3DxL0)G+&o40=am_ZJvsA;4S-NJ8AoK3A!@ybr&U@PHfV71Bmzx0o>8h_t zUS;`^R^0;n+rD}ZDjSOQPBDOBG%dE$8B!E~Y(MH8WM{HhpohHs?ia7M;ApUAuwV|iN3O|JST ztDE>HaJ7^*clI&exYHF$++6WGd=tns52>jx)GNrdJkH!OTpiwo&QYg=*XYDMDCHX+ zOg`)0*n7WD%YX{jvkmO49^V_`tp7Qvq76KkkADJoHuJ9HGv2{|z)WhwDI;ha?*&&o zeGzZ&fsGLDjfzxj{7cCED;TTZehqXa$c{UpZ9{G&gj;70bat5J>_6zV;Q!cW){Pol zxn-g1xvwZSJ^`Str8~8Ns#1rWA5GK9v?Kx3E_5FCGKPpnw^65j($oMJ%Xccw8>|tr zhwwSc5=cmA5v22tS~78>(buuOmg2=}0nNH&?Y9ae#5d+$LIKNT#^0LcVk%mqhJMbE zPJaZAgW|MG7MNP%@(I6HoCHoY8$ynu@fSKL6F|j?W z-~jD#EBHdO@%eS9w>+icd468Ha`V^k{Ob%{xpH5x&{RZ@^<=c(v#Qn-Vb<_FE>9nD zOFO=^k=LV*&%Kx>;o`Zw3zN)mQeTVek9LhVBTdwSY(^h$BE*RC-Ta`K!b%0;iQkU+ zcn;1JOBIWL7tLdw7t{@8lRAVBAk#B)OmefTZZEWoy*5=8bp>Ql-;Hmbe-U4GSmOpX zM`=>K9kPUncKyX(=S_}R+)XoLLIx4Xc>`|ArN!))A9z@3+K%f4{b-!`V__I!!txWJ z$AXEgdG5*6u%XgfI+}2hn^|UNru=Blu_Ooig)j(SWyYA5JdtC8qA2{?Ze4X^+iKUxdrbp`!;C#{E_>-Y_BZ##-RlL*q2yjB9ZBE zud>`TxU0G~teAvm1J2U*NoqCcH`L8Q4swM-l9F$oqU3CL*)@sYsAZ{q{v5&@`@tUu)e>gMcjs|To+s*uYnNnVkMo}nZFJ3(zWcgrT7cyAjS&!#F`S5$>nZY8L!t<{_VLJESIdNpRKKkIfk4r@P z$I?j?=VXN{q>U#_Tc%@)qj2SOTz?qh zKe+MCTN+?K)OGA87#+D~H1JML)93m?{-`Yy=h0i%^JI6*mGI1=4apddUhAMd53xON z7A@tw)HC@UL9SZX9$SABVaRUX|A0?=~n^Yd1((OvuvNrDkgxx^-zEk+z+B}8nd+ytQ zC=!)ay}fm4+9sz0r#Zb{?ufI_`V7_mq%Y=brj1qfSMxvFx>f1QHkL6)O)2xOzervU ztE~6V-*hYUU0j;ZLILue>&$7~tF`yP;{1Y!FM+8xqz0MH{p)1qubm-q;Mw!%x5pbI z1Ox;!@3BGok>#eJbp8;=U#W)iT#L$g>#C%oSAtrO4B7@`jU?frSR|282Z}}VyOmTG zozzbxV!kWNxAUTYgr;d;UIohPEc=zKQWRoe%@a7YzJIpnN87gvSX0I_g)P5(#fLG% zu!kC6Vq+%A@uu-%qE%Usx=IUGp{OOU?jMgu^o`vBwblx`;_)WM+Jt^us$_FUxRK+a zuNf6Hx;%leoq3#{IuB-u$VJE5zAMT}T|Cf+GN_yrldv&UML9K| zi5(DO;Vt$AdNh_xZzin|vDyYSCv4<=6T5hYn_PLe)43$Bb7vmyy0%u{47{3?s5G@F zuDpP+wGtC$vn3-LaWIi@-@aKbXKWy;p%K+LBw65kdj5!AlPd8WQBsZ*=ZAnZ7k1&< zcw}6vA~Ir@!w5q>W|*7kmi^^ja&YavQRhDvY} z3Y6QOHb!6fhAdQ29(!#r*|6~3f$HM%}!}+FnI*?9ZZ+JOOF?GIA;t9FR-0|zK`{xeW$6?!*9@IQi<45se zw=6hoyBN;+9?Q=KRqQ(J3E&YX@mt*%1G)Oz?uTjcqV!@0$y$Al)|2}aun=dTT)DVP zMF6~!)LC{HqEuK{)Hu8Q-*jE`f^R*X?O!>PzjX-&hv`5iE+ON9OiMZFV%HB#R>6UE zClM~+yp^&^Fe-~x6R1KGUeo2()yf{_|gD(n>2OdnYU3*&FGTW=jj?o`0N^>$VKbh=2I8edpkuXbz5-vYu2JlLW13 zY0t{1Ny*b^@YO6Fo*mt|ibeHVg|0&;spZ+tAk&y!UtvThNQ<{IK)G1ygZqtWDVdhwy~~HN0DmkTjvXszj%ou+_%C(JCSn>6S!I>E!NB zm1a|_ke53FOY&@8bN%Ucwz}O3MGGiDa^6aJ_8=zNj*fBWOxWL6?#w>S$=|6&4xH(9 z?Eat}we{n+10NfQSAbIW+6Rlbin~u0tNkDj&!*wAkdoF{+n3_j+;j&PZut#9nMx9C zIU34ipFJ+Rb~@2chgWR<-ZPK$4#vHWQB07L;QRjpGdi&UTC(zaBJ)=-a`FQ`kI2sT zH{4PzZ{Hm{wZ-+duQVmgA#7@Y;n*cFGsOP@asNH#xd*4Ux$KL3q@6|vlS^5@mueI~ zE0o5O2znWi5{>r3f;H>*gZs2m7CP$b$ozLJ#|1qNY^2^qvGTpNuO|4B~E4;OLtU^chN= zWLDw&{JdiftDnzOi}UGq6A6Mr@qkXV%D8Dk3DL z4)O*5ExkbCZN+}X6z%Vr{;er4fA?w{qyxtMI#>>V@%Wz$o#xFKaz3c80f1`<_F$q$ zZk$M))OuMV8uNf+H?aBrJJI@GT*tdaa&=x>YhS6m0nsuB!#^2}bU$XM+E6p4%L$pC zm2X>Xay|vjY{ej0#Ig2pQ@!rAFj`l|%>jcblZ3q49aXU++LUV}7w&ucZCxta!u0GW zWY#ENE18pV>boW9;W&*^aBo~r4VJZy-p&XJqqchGsy=!0hOAX?hxguj^@-?^&xix7QcU# zEXn^o{+>u3nX92mBUM&!|AzGo@PowwENeHxU}+bk%l*(IKIotqv|_FT6c~*H7Cz~3 zRtxV0`N7<3iaTHaSZ_ILr9iuM-07gslsN%qvmZ{?&!1QhQ#hnPXjjlE1u@Te7 zj$6BVGtYk+UHtQ9*+Z4!4$%6t!Xh9&>8kXAhWXw$%gBemf0Q4NHA`@Ha}y8|X%Eje z_z5%qZ`3r}UakRH`HjqLd*^L1@S8d7eDEd5t|Z`2G!Y9E7B44dbdw$$M)J3mmjXc(jl4a&Xu@?JWlfYF}^0 zhY#n(#d~~9wQX%bah??u>$J4(Y-u@01qNSG+u5CN_^btLIUl*@R-hoHs( zK`5sq4!X-O{xOT4{xe5k?7_8b*X#*)uBQJU7XF5xtCZ1@(Zq>Udzy3xB?_h2%`xO* zstFC@|6FS6_6!}e4)136Bt6}$+(%wuEf~4N`J3j=-XHPbUqc{x=WH*+I{x!n_ug>7 
z{5jpz;4SKY5*Pg6N9zCdcfpTRQ2`xZrtfP1)rtGx9{1N$|1*tkbnzi4`!C=4KYqzi zGq0l``vBC0_Kf9!zqH?e7!_R_*XqRQ@c*B-q(`E`KBRN$=KMcC?k@|Gev!s)IjGyP zXO{EJAKv5312mbBz>1mQ&`17XOLFf7cz62GFIxOpg2(TF3Ih%OtYYl`JEHVod-v&e z^Ex)=W?DfKP-tRCWH+pG3f^8OoLR{U4<=NG*SJ;#p0-|tb9m6+uStqVdgk)}`A+)3 zyyJ)H+DCUyyY9zQ0+u zWV8vcrm*_T^$w(YI2iyO8zEv)AReI{Q;0U;m$3GKgg-sVw)cCr4^r_Ixf&X%{kS*o zd*H@nreBMke?PV6Umm_G4a z0D4PL^0H~`FpPWhTS6g2X@IVKjrY93^3VCs$;zC<76FbF5g-Dl>+AWaZ%Chiz0A6^NBB6VwFEz#Dwdma#>$NP>x=UE1Xo`sMu>+dZ7sX^PH(xyYj}G8 zG`?vS(VMQCC-1MRxy0|vnuzhTwceP!$Rl4GbJXTgEYN&vRxd$+D){DlYVS2TARhD; zE+6dZswZ_>Zt#-#qI?&c=p<`A|HVyAe$GZu9POBKU``jfMQ!P$SK)^ z{d|%jv+MIRO6)-T1}w0QAl+osaq5YMg=JIvPCfFilvbKWCQ2*r(cCYdhR z&Ki{aXZVuWd^z18jM^(IG5v;f? zHu;713tyuL@?u9uMglj=( zIvUyAC+6#UR@!v((hM!P5w=A@8dJ?3_3;=Nm%TWGunvTAsy{lF?#XxwR1rA|_MJ#g zMe<%6ZU3BW*lqK{ru)U-?W3-pXlUGZBs?~h*XQOArN3NM)zL9LS#evk682T<7_aH3 z_PJr`wg~(VK~H3&ge|Kzv_v^RJG=Ul!;Y!VdcbClyybV!>+SEL9iE+~)SX`aW5UOtgKqSVOKYn1A{>2%v!{4LgX_(j+`7oTG0eHdxQ zb#Uk!oZ_2C1wd(yleMw(uK3?NyMCpi<*t`GzS4;|$I}=@F5z5?@x;*Ca zffEyUl6AnJyvAAF#w;DMRxc!Q=FC^(o0JsDdcVplh-g0;klL>0c9}kyZI$KlHqG9` zt_GB_2euGR6~%<3O@Lr{`J?iMn%&zCraob9-YaTwEa<M3$%GG_du$elo^!1xDoTgE`t%YH<_TLkU!cO;nxQ+Rtvo>b$ z(oUF^bd379!bn|AaQUWuF@yE&9KxWQ2LJeRoIVCx>5^JoU4egRdN^G zt79718+te{I{HX?RQYm4X*LUS0XL=X*LB-(vWQ9OHt;`~#b?2U*}phRd=)b_JTl_% zS;+f5WWK8=|CGz%`{%>+cU)axGuMIw1++MlNU(1>*DNZy-t_V*3a_aq4PR4Iov7Y5&_tjhm<htwYKeNdr zXj^p8K$_Q>Kd_HlmtTHiinwT0P8}bh&p9U`lthAHkFz z_eOCKk`w{fE5%7qO2S7=D-s^#HJGl`xP6J7lLJ^NUmx$OE)*ok4>Wt^oL|eT?R_`? zb>5peX8b(0$y-VLt*uJfpFv9A`uKGy_`QIa8LxG9jgg;o)pJ-1Xlsw&FL2QT!WRA_ zM)PI`)LifE%+_6Zw)X}E4l$FT1z7MtRJ*o-+U1RsVIPeuM84JJoYWgVt+NY{{-eG2 zPoS};X^fN^C1ibe8j4z;9y3~Y%C@HcT8^Hc(~muUMX8? zt@ZwKeekp}ta$R2W0KHWYNiGByxVcs0brEsEJHC@F*P;)$(r{cJf^>{1(ke`Pn#xv zhT7km|4SCC0yzrqRA+60OV##I2XXWdRRw@2cR12zeCpz#sIlu)6H`p1gj?!sB!Qnl zGj}hvB(>vt{fJwvSzMSe?;%fqlitH_q_4`qf-wa3{+9$i2zmMoXw_`oNRrfYCP-zz z={k?sgi%B8s>)7UQ>0+iQnTyKuIAa1BqTZ(_|qg5u#R(WRc&# z+7*4otlhs1CYO%FE+j4ZyAyWw0Llti5V3BZ1Lywqn=_RUc;To4vnN`6ma>1}GDv~w zF@{?Xv+w08)Gj#wA2C%(`uFh?K`%XPdR2 zNd*+fXf)#aacl(PP(GPQB%jR-8>`uhyA{AlbHs!#u4VD@bzF`uzdy#bQKK+}sR=8g z?2N5)SE|h~+;odI+R)qPY{a5%siiuM$h*BZ)j}vr@+GFV`LjUwGYv}%^G?)v{@1O* z^+)nXvq#2Gr}}o~Ii&A^IYYE)==$rk$%HQIVvQ^A*~?`xqMHh$hi9W=Gh25g%kZHa zZ@*mL3D@ll{SV#pzh10;)gSQv4wMs9+k3xf+U@_zVxJ%7)LHnbON{M06lynG5K%1+ zD?ZhO$}J!2JA3BL2r%=3j2LydO(32#PURQYpX;W=tEoYf`W+Q>S5g8c_3H|`;6Kwk(X1Y zV_Xe)K*x+c(I_0x`^>oSEB7U1cOzIH=HI%@Pw2mgiMEap1T-0Z;KW8Llg@js)n}iR zktHm**ZGs*Q=!Ne*1^Z?HyZMW_WY=SLxFx{D#nE;Y=@%r?j4 zJ7k}eCoVdiN|X2Ek`R|@FY+J`3oI?U^i=AoFEK7UVEO~7B1Q5QZKMUuX2-i`#RT_uZu9 z#uH8KO)NgJ$Mm#a(1$6MqSB0T8=Wi<-1-`XTV?mYME<@>QrNqc7j2nYSYox9+%Kmk zK1){}Q|I$~^iqe^L;j-PE<9Z)d4;NjR z@7BHz)&q;Mvr-aRDq`O+8#&1A2IV zW}nCv%Tj#f(OcvxBL5nN;+)dVW3M90UU9U77)WB7H?3CguJcb&4@hj+R5 zN6<2}v6S`OYN-`JyZ$G;%dAIk_^c^aXKxo-RvNqGoA?t??5$@0N6_3!aGxRTWq(sL zJj_)w+|n8^pI6#%=y}4+17ETdKZsFhWPigID28+Dx|eTODoA^IPxt3fW+- zakR&BG7ISV73TE!4QnWzJ~XWJ0RGPLr}vxg`>&53Jt}xw3zes`UW~vPmfpE~1*%eo zxzBg*0F3wi0XCM`pS*u|z?x&j+b*sAsB;n;R8%TZG8hrALl>}J0htIaC2(c$q6);5 zQgF#gZ7pBfTMOpMqCk2=UCe`?bDhn{d5`jv7!d03YVENYZg&XQ-QYo0Ion##{=D}x zO{f{(tw2>&lXHx!@>vP2Lf{w03%bX~xL;WhWbeHUe;-lYaoSp#!aMfXdjo61iG`T^ zpuCSBBLb(@bixOi8DaQl&bz=hQIRG^$%p5p1s{legf@|IWVUmXS})E9`_#`2asTfW z>+c%!@8#v9Nq$G5+%@-B^I`C1{`iq}nG@@`AAt>YsOqDlU}odpdROSyL_}&$%7>Va zNYF7@6Q%N`)oH&r3EAY`uacqRm0HJ1Ha?p5!t+*PuDB$Y!Z)0 z;xJ%v9jALXy+gVdXSBh4w@}}d#Op>~m6LWnLcnSA{Pu%^j3JPgG5k2AKXJSl@V9%X zAl5Gs5W_p}?hn;MEX?*ii=Q?5<%QHe=G!ihtE*2kA+SGSCf(>)^dkgOXj(qcL&jW@L#BG&s&lgVO&a}I;3_i%ZQ9|02x0dSjkx@`<<|a= 
z4Hf<)nLR~*Ne3eu_a09?jIHfRNwA)o`Q&}<RR#>(`BsvA$dPcMoR$ zfJk{&fc1!r1=A#g;hhqMA9w;-TT6>HMoJE8I(hfm>%DJ-zgB|(6U{dFX=`t_fsSl7 z=5+nd9ndjE}60ZUGFP--;k`Vb?28aU#=w#LjtRuO6dkXW)%tU93mdDhtOk* zJx(PWs7S+kSQ+usZ}TVGgMe`z?*v9hR#x{|%$t;OoufUac2B?GP-l{~ppJJLbRZo3C{O6;*SD)oCM6i>Wkzz;LKcozVQ4j>q!W-dCVn=Wy~R2qA8^$- zo|ucF>9`NYiyB?nt)aN270%;;N7x9ZOTu+!&B78*O*}rV+!ecDdHWKJ|91WSKu6_i zh4fvQpu?&Gdq+sMT5c6$zW!95ww#q^!Z+0Fh=uEu(XowgL1me{slHS7Pbi~OOM;b) z)ko`x9bAKf8IarJfWxxvl?|l8;!0uIM461uooRacme_bR<1u>S{`Viy@;eLh&wD6~ zsep~raMx&poKtxFk>@)Jw8lq+{ktdc@l4;3ZG2Se9gz&H`_t`3=t9-?o5@%D6pbO} zJqvSHe7{@}8XTq7P!CUnm;3jA+wToMiVM^r(9fmico}GjUwQeB-5UnJf!E)XF z?p4l_PGLkLniX`>#3b<4@9t0_5^b!weyyAQV6O)`GKks7b(u)@oX#NX-j`f@rYr!} z_7Cxbj;g06pXo$xe#W|ge;k@bVBZ2 zgmcG!-B*U23-8tP;7v}P+lI^v8GxpM`@(SL!odx;P#rBHc^EDM0ePRjY)W!CxJgZ{TR3W_)5y_RXvM5wa=g%ml2 zjNgLaBJ=H$)Q7g{;nD}ITQlC>h++4dWQ}%?7g55Vx%LlHEhw3tDeOWEXw5})WKE6+ zi`{jYYwc~HA~c$a{_H?`8*yD_wsXvxCl9$ew2J?kFUgMfX9AFA0&C`Ed~8q<4DJPwcG$x1y{TWpnnFALhavo+P8NT{LvzY z7det+bY`;3a6jLwQeRQ&KAEC!&AQDWT~4c9xm&IqkROX@Uh?2T$5mEVF0a1DXs*L? zKU-Qm;{_wwf4h8rdO?yxr=N*{^7b|0<_z~gh+puJ-u1XaTpcTkx~1t}I?FfBkgb)l z9i8mna;s#gdtS_PRbVoJJ%jUE0C@HX45{9$yky@FV>=M5R+Xx4GQ;hd$l%_b7AY9j z@by*w1BIJ1DKIHlRdSn3PZ$b4G`E18s4iQ(HiRDX@4)I^pICjrpA*Ldds2UZA)}u* zd|U`L#kQQkuFcC2Myyf3+s&sq1KJ;TYwrFJ?BstP<);R~j6~HPHUJr%=$QJ1uYtZ$ zzj^bHjL@P z^!X5_72)cI={rihF0pO0zd`N61sAa*d7}B?D`pNJ^;0Lb3r*ZHHxIVxsAdmGF&{ z6o;O7Dy;x~J5ea*pvHIW0u&ssb1VU7XL3@@6*jqqo1;ET%0*$aJH@LUoPDg02TV^Z z#*tPCP&9XvqLyp2N-MzMy(5ef;7O4*IVUot`w8$54vX<=L@&WEp-clVnF@k{Y<=hP zN1tg_R7xe&j>NsO z+Pwi%=?59U=9+7UT*?ML_qz6>9#U^e$jOlx?ZWvPQC^l-Cq_mNv9PRkXG~<(YwTVs zb!i_ZOlGHX$QGIq?|(|!+v{j9QRS7;pG%#wxmE)dzE;}xK*uxOT!&+O1F(h1+EWQ4 zTr;+UhjEM@`C;- z5^G^$F){x$yE&h|;Pv9-*MKwK{^QU}9pDIeI?eC0iBdAPQbBRjsykGpRKa(YN01&7 zNuCPd(8Mk%`w8IZi5}R+$JtFOyU1=Y?zu}e4_VE{qMpJ~Iw75+{S*`iLM=6Q8p;MtrZ4dY^Rls}lKMakcGOHWlxQ%zdRN}s(g002gI`vhz zP+r(i*}tN)3H|T_a`y^(5~D`)$UoIz6+hzstsmrU8`p?i=<9~=ZRq5*D9{l*U;3LD z{PcGEVhYwg(SRpe<66emjXa&?L;U%MTRYcdH=;}<#<-D`O`RnYHy_7fja&`Mtry&T zc&4-~2HMtbi6Xcv-t$+kbnl!>r9rO$qv}2IFboLHdedv=Zlwe3W728Snr*{!Ct<)2 zw|S~5UkY_dN>&*lIw=;sU+{t9Jy7NPNx@j++Oo{`k}&oCbWti>kvxKd@RqQ`ghSZW zyIG&>P-1AM>v6IP1!jIrw_m<^;ap`q<$x}rDWz~WEk3~U?z~00$d<*+5%DEc zxCw;^bg4x($x~H)mVNIC#Lu9ikml|2`I2k#C|Yvz%90>mk5&xH?_m~tr3O2yCmD!m z?8Ztj6iW&dvkk^OYn=1gqtoVb?S0DBF%5EkWd~pN$}351;mayib;Fi59rvIUQ;8E2 zl&jw)9i}|}{`{&>UdlGJ+N{iIhZDY-?tl4`OXzs@eamh>4Q8i}=jwYwK~=gA9RK(S zTjGU&+)K}r%hcj8Kc-6cZ!%aGBQ^z-l33-$Oy5Qp6XRMeQ=)L1)zNi z_a#>fap@aljzcvlPv=Zr1DJ{&WU>9@OPq$7lD<+h`PKF|j$?V5Mf(LRyMV5fE7|w~ zrytN+xV;e&(9lYQr>B6FR{$n9P*69=HCT|)$`@e!rY&hDl{PEz@old_c{aPk(Ft1V z6nSX%j{=9<@&c?j00_C4ix|Z1#_94@RSR^HXI-ZJC}6(pKah~!qP?OVA2sS{0?c!B zC{3j< zPD2AtJ)CaeHJ$R6?d&5e(=uM}K`=?zhyqj8i4 z`joU1isU_Pg>!ceh0&tMXBtm=y>nNJ#ypC9 zd226(T+l^O&)4Q|tEAE=2dpRbd~|twPVyB8t7z%uaIR>~w3)XFub=BH<{96$cKAe{ z4=`;csNEcPdNY3R^`fq8vLy%Nt-Dy4lMQG~?cc{0zP&vWIx(2xQx!p*A4!-7m@*zx z#Fi?8D@4|B=Et$o<;A~|4u>afto}hdR6>;?y~x-Rfa-9%xmm9fud(4o9{RzyVF~!G zYa(p;Q33pQL80t-{fG|F6BmNow4*XOr^{yO_r(ZmMFU%mk^+NPzRKOhobJPwQyU|3 zBG`|eMf>YEvtp+C{5Ge-0az)$$~on>VwoH>YZ_KFJ4VE$a-c#C=P*&RfJ|~{AkV;I zA2uyj{H_+S=ZB&7_}(nE8N|!3W!zOI)WceLdaV$s80}kNd*c92wyoMk`2yA#PIGPx zHS2ND20lvbJvVo7dkWz;K2Dmc4fC;If|E>U-2wp54H z06gdVv(lzQ!W+nwOl;J*s+DA9FAQEt^IeMd-L~@=K3e|gF|E^9cIvm}on_k(0t6+L@}S6Po^S+@pbR(-rLrmeRsMQ+bN8SJ)7@SkCf zXXX?ughi{G+U>WAaAoAVyNTy2)6~b$$klwI6piQ%JtY%|qCdwOe<9 znpdsO%rGcz06<6ya%utggub5WQFTgoTQZG;CGK4M7a&CW-vJ?mKPC`bUtHDhq_0ps zRCg8%gR6KZ7!lZUxLi6vxxM&=FvP7W(^`)YUjOL{ad)HjTKAShtA{lFSC#>Bc<Y!( z^j9S&9XHgm1aK(Ml?_jX_Ux3{=LRxXq;*Z(3cap>z+PQxMNpoKi0}?*kL&A`G!iN8 
zn>c}xbQ)>WGdL5dvY)}553f@LE@;ljCG7Vg5>`H;tBj&PSo$2woy)!FmGC@gP|rdQ zM-B@-e6YS#b6&d(0dG>7#vpZBx&!CF6S1go@4Nwh5u3%?t8wvc$Dj5G{=Xn0agJTuuJlg(jft55sFf>r>?2Ws*jnzK)img8Q~kiiWaa(Z0q z%aHijvv+Ki*FSwb*G3M;Q25%9F0ejMv2Q)uKKXT}Bbs`rQZ|n&$_I-C6DdP0elB}Mq@A)55J7WWG@A~_VC?SfUOxhbw)+7n5?hks!OoQ<7nCGs1 zxm_MCS4BXK5&SD~datc+a0c)#+RTq68~2KoK0Acl3BUg!t!E0}c3=HPjo&r;PM|Xc zrx7RN?>I}8TnXyDF7J%Q3@LI70Pb@U^4V*NJTySvcN?R%>k(8HCkZS_3>p{54E zLKsjIlG&wJ^rV+5$53*f;%Il4QwWo2c)thrN6 z_P?8QNPNG6iKCfWD*&3hn8HTto73N%0|Fth8CmBvgoQ5GfX0IvPQy_i+@8tYANI?9 z3iIguu>r2u99s#(p1+*`$I(ZKCz(~&C0q`O87&^{Uq#}|Zkv?qODCc1Vt}3{udZy* zR|C8$)&^k#4FF<-Khjig-VDKBxm4}>fCFw3tv8mUIc5mBWybjur>@K&_gag9w{f2r z<#kILp;6HhFb(qT^mEYru(KBBdX&$$z+hNShoNDc@ZaTY8kVhVF+||sF0d2Xhb|l( z6OKWG&cu`)w6RPrKwCz5Jb4%aJ44JZvPG;bHS%7dVLYI6X*3wlF17gH4s`w+r(d&| zD+qMRQvhWgyx6pLZg?zm@5>PEE1Tbg!|QA<_7c6F05W zy;t`t=07#>TgN|wrF=`%@>rv_@}fopvfY)wSQLHRDe_Ec0}Sdcl*Tmu$NL#i8BTH} z%U>f^Zyb$b_5x|^$HqF`2o+7)5n{jbnUzLMM9`dpUoitfay?JFAYNH}kRoqM4cRug z+OYgw@qul^SbEO4a-&X2^YYsF+iWN0#(o4*CtikqF3(0+}G9+&UG=kI9(ApN9j|>WB;0(`$ifx4dexv44C0 zk$fi@P?YP^Y8srkL9Cw0Ivf;S%Pkj)3JHVL{!-}TkTr+od)irE0Uj@2MxJb+*r}5* ztiVlo2rEsO0gWCao+)pH=hI%rA^(vZvl zfWOWxKy?gIO001BLj&1zZ1LZ}-{2UoDbxq>)Xr^DGCED;a39IlTu7o_O6!XkEKb&} zLn2f@PoGV8I`46;_sn=CNo0$^6Eu+kx?8X{T9uk<6*gtXJHGhMyTUI^(@!^2cJ2T( zi*OU+Bb$WpYBPruUw|+m?6xlu#CBH7XjtEE8d#u{06F5i{AJ~?`Eo({q1&#!yg*mbXt_|H z?=ll4x$Bk_uX4eh;JuF5Z1a6}^Eqr)G-(SO1qTQV+`g*;S65h#cFHQK>%@DMUq7L{ z8gbmvACTlPZT=LxufNakLcaKko$a zY4V1v@Go)yQc++&(Am);4ehSj{>%*ejRsD;Di_P~o)P~aYu^FY)Yf&Ypdccmq97o} z1_%O5l@1mxR0T!4O7GGobWyR>q<5uA3sM7wYN7WQS^`K35FkJxF@%t}bHDH2i(b6{ z|HgZRF&LrbWS@Q3UVH7e=9+VcaJdE0Sf14}jTMpen3urF$jH2p{eIO{2DPg?zW&B~ zL%ihXX|!B?nL<+~lX*+OL!rwW2fN>*wTw-pz8I;8a?Jp;Nw`=ux;}ltaQ&EfT3(z5 zKtRJ$6JyZd0RH!0rQlS84(MYrA6B{!z#SsTcUG4cCIp*HZyQxTzQ}v_toq7| z%PDD5t&5kwWcz9|aY}?5I<_*uYin$M*XN&igvoE-=ib#9Z(j95rdO%WZH*3%y2Gj=|LRQ#Q_>V~)$&j~5o z@{YHP9HhjDx$=a`AeX*1Se&%}45x6hbwD{CY4|d?o%HV3cs>bgtL#LH_|$#3aIDP%a^{ns&`Q zUh$`^22==A8`Y1pMhHtfE}Q9kuM}rKobJBfXa;B`((=}g_Hz$tnLL_4INMuL1xEs! 
z;v^)^i|yPHcgz)D_$h0n6ETK+w_MufsIiK+MSeUZTJ7AhR@26bpEG=Ro;6B)HJbEx z`WZho_atd{>+9YnNuH+!0ltG>*~3N6a&1HBeK{N|<@4?VhGg4-ej~T;N1uY>64JR! zA~N4tLg9fQQnJoBKXPuKy8X&h)BCc06h7!JM;q(F(;T09HUZj#ZfSjQZwrJ0MX!f> zy)jk`OMlVwF|?=~EyRPmJDJ^qQ=VE%nyN9G%SjzNh3)l{e-HCT0#xJF8YU5^J)_wo zL>%#IWb4uIPpbLKKu@j;FU|JVV5|SiBK(&dqL0Qql(pkyp6EQF@apsz4GXUjh~S4P z<@HOPwr(?7F>Zj)51dZA9-L1Pha=xpjegPBC*|MJJbcx8Uj3;8iLSq%wPltF*>_*e zw@Pi(^v2f6_WFS$mV>yn)r!-|LC=YQeN-$XAQ!D8Gp#93BSQwOFD)nUeDL9PO@%XF zZoy@oi!x&lPjT7o`e2WsbGhqFld>-3{IPA|&YBP*(&*k|mi=#sco%1Fg^mmS{wG0) zVyzP}qKVTVPwQ9UIu&r*+JdP2jo} z;3AXwdKtIcew@=ingM*CC^9sVe|g8K${DM2EXmC3PF7~Tly@Ym@uH7aJzuK#gSe!x z7J|!Ojy}zI&MHjyo%qkx^1ZS8#?3k%D4leA$uqVI$#4T<_hdMkE6UR%I2X`yv2ORC z#fmL=fbfR6r=FapmcbkKw_H_{>g!yZ&peS&3}=zB)-k_DYI;?Q)SKpnm&!r#{h3`! zWTvwhpI95KPZJa2M6}u)Jf@XtrQtUGl8EMCid-QXkkl_-h|@O}R`XCC@7v}<^=QmT zv4Udt){0p`}oqo0pb!E5Ly(h{tRXW-HurmBfCd})28X)&f)f3Sr zj>5;cX-Ph?=(}zn!}eXdA9@jo7uk3#xURE}rZBpfzXy7}o+^30U~oGb#`QujHZwF_ zw&S#w%F5-|c$MeiQy9Rw|J&~o*C!%wG}W2}YV$q8X32+z$6KNu* zA+az#xH{!-!0E5P5$o#wahFlH5Sml}UQAA@i&O7xAW!rYvPQnLovQy?6+{18vrFll z1{bsGN-39jvo+BckEAi4i*L8RZU||J^`a$>sQ4p}KCe|3bPZNI61eBz%=+|`MwL4;oBZ%0{i_gEryZr zTs2gn281JxB!MTCtP@Yvu@dVs&vxl^rs`Ih=XUO3M-TOU%>T{;Se%qS&hcAfLC~en zHYbwbRmofRGT}WK&X?Ml?s=m8LG9p7=n7Z+bP2t$+e6;EtJ<02=LvunKGE>W0t z)a=aZM9gb)qb$y{w4Ic(G&i1|%-#HYK|Y+Y!q>HTMw{@VLjPNuMBucJcdZCje(Lt# zqeaQ!E4)M8jr;lWcxa@23bnNcQkIW8`YkQ_2#?xkj3AujHo|U7l0Yy67j?1 z=A_+fY4-yzz-RTtSgJ%MqI1j8*ZG*bD^_;Hw)sd@EqxgKbR@negvJo-;$FV58J~f> zFu^WEn%J+!N!*Q3%NgHdnz|WEX|WgQIj2U?aK21c?(8*3P=24#jpou1g9{I%&=2(U z{YE0F^6hF)ut#s2;BY%&jd`sVnAe~$rOg1qc^1}Vufr+Ph zr@=NqsidC#M|aO&Mk>JjIFr8LBvHjZXTYOhl^8z>U7_49UhHTul>iR{RiChU_ zkkl(!pv!4e(XqZrx{m5v&SYxv^V#i|Mu~67J7MTR{J|e_X(f;2wB(P9>PTc;!X@D_ zxRIs`PbCm{d;NXv=$T)A1HnCw^8Tn<`SpT)b)om{NhM}_>u{;1EBZ6{_a5?}UtlQD z?2_HeP7tY=a5S(=7`gN0ltVUP*uO_JLg|3?a*g+b|HcIi2H);dl0SG2jL*V+Ynx~K zY038*QaE^$FOV-Vd$qJUg~Rx} zD(j+mE3l1zdgxYOfmTli$?6rLD5{m_mB|0r>RiebxX%-40j+DIv2X7ShytN*S(=;s zk$}sxx(O5&dz*L9TDgzjcGJyC=ytXZD4#ZGi~Ql@d|vyg6cFOInmTMB;H5Zs*$p7x z`X74MaMb3|e@8-!d|SpB?&3G+W41K;iMoVbqrRT88+F1Tyr+vtfq(Q$Uh>Ww(ge7u zzWL$}YcWl_$EEt>nBcZINAmPX`WUkC-2FT8)iv_nyR!pMOD}bc)NsIR?vBkwcSU`` zuitc;)rxBg5_&p;J}j}ehEY}ki~0+Mj$||;ghpXgv@n!f5Ah&!P=10U4zDSFr+sT7 z-Sx(b1RsjzSweW*%1zCF7Y^ql_Bd5Xy9B$MQ4)DroES=Uo|hhX@Z-^kdna%DS#SRJ zD8qfGeEcmnkJ7;dP^B76>W>Q@Wi$~;Ob*?jY{~B=JD#shKZIgFd0gUK_^277x3LhA ze5%pAZ1eE$SAY64DHf=Rs0sPz6QZCO?76vW6VJ);ZewB->4kRr^lI2vVUC%7#1SC=03q{a%09iqhC zq3F{_Y#>2sX)14&$k#7L$NN|TuKO{3MTK?Oif>(Zr2%OR^mLfjSs0B@Arn}nN~m8; zkfEnFq1$VjHv3`u`ezS~Nyl{cK+8^S_N*9m&QSer6PIG~J@2?X8Om^+=-!{tR1UTq zwTO0-Y=e@#Mq@&o1p0IE0$@iO4Y&>)az|rf;n=vKp`VaD1$Ui&k5<_Tq!dmoToQZg z(BMjqPTYnTIJ@XGg5-EkIU+;nVhi%!Wh0J%E-#m{PEQq;lR&0p7$)4SOZwlOuauh*;f~(`-TOQ;^a}ITW5IETBIh?f zU4iR`eTmAu`)_g<6a2wWZ0y_iAsp*}Wm)=rZwG4q=aJ4t2j;Q7lVv!K4;>fQFl*J( zt<4HevEuZo=5z|TN9StXFJl@g*1Hqz)}tDzZTK3nv-IO)F9YDd#csH=Ckk3}7sU-J zxeF3EaTDeh%Xj?}Wm+ss>ArtDFmreTDx90N&Z1O#(Hv&(HS8UF+jwU0!*G`0PE$ksOFW5v0OaEhXCc$&>>`L~6u{}JXo(j7k- znUd9{fuow41JxfkL$Or$hYmedT|ZIuL}AnbYy}L(tTiApA=04308r@Zh8{gGNRw1f zG@-4QG?E(;q=95<%4b3q>9b^_9fG`?w-vV&dmLj3_1;K6O|@@8tlf82_vdVnSv363 z+Y@=2$7F{w87@4O`!#`UO8F%gch*M*a|-YgU85t}VksDH@=XmUR%GC6NyU%(y%p>u zkNXY9)onX+9!t6uJwR~C)jvS{hhHgS{W1~1HYM|boc4;}Cr~VId1vE$-`#S0cjW(h z7l+NsEIwpXS>!&X`ckk`JkSbTg;?gD;n;nKLT4LJ#H$JDIk)t>zxW+0mt4-7oiNy% zw*R%G4BfJ?wBJ@BU2QFB4rBK4Nk3)Wjj*LxyH@(fWnccJS>WRH_TXCEM#HbcO3F~| zB2&9NiNGXhxG7&=tJ2L9;u<>Bb?RLilx_^FIU7yB`dc9W!J6kq)w0cgwX29OyQg2f zy^67KB5Y~e%IP(Y#L-+GTtUkYSCuC0^*2evr7Wd@)?E+#T~M)V93$G+&6QXeRUaB| zrO_?WmlrG#xzDLdYmmYgj^?UmyS;4K9s2V4_&oO9W7MvCYwx 
zGsEI{(WFv{ElEupaRNiq!$J&0`;n!TEyL5uz5*LcRkDdS1YR^-%myIVNnLZTe7>Hp zF5+ek4#jp|H)?J^nnGB?Cp^pV`rm8zDsSCEj2%D9R&h@CWwgWS|1F_P9_lK zPM90oD{x=i)H8+{f3eV@!yfly-Q%8d(%?6ApGeFtMp`~eS@fA(Fsw!LnR9?F+-N?P zjr6gb_tj6wD7Z7LeMrqEvvQ8v+c4n1Iv|y_x^(4*Kgyh$=)I4m)X>7x`gN7vldrF z_uYimpAHR#(#a|Sstij#Kaf>Y3djRS6><8o<_>=_>_Nf4+2s7i;9NGw+1_R{VX2A1wUlrNV10$fbl%R$U%1~ zeeY$Ne^{YIAoi_Qrwfr=Wk-ReKp*8tfRA5Zap-GP;arTsu29)8XP#H{V4lppvl#FV zG`vF&$dEJSgzgx|f0}GxI}m{phpecJK;oVhnnVE&o6Ro4a6qb4GHls|(o4^u9wMjp zsi6hI7a$w!j}Yq*4ZMm9bT{2lj9U(W5O0thqwbXz5gU;!sGyaXUqR%&i{hA%_Mm&j zk;|!HDAV{#(T7KRgF<1yRXUUC4yXELNN0`hEXB2ybMtnA?Mo9Vj`6v{z5+9yquOtX zS&$|QP~swg!R0*as`S8xB6FJVe?Sx046)yO-ZuFfMs=d>yH@U(e!k0~UM3{B_K%%L z=({(X3_7*O3BuG7bZmjf*WIMe3$?IoKjXgKY3Fz7w^kQ-SLdni7`cF)fM-1QV80S; z4}u?zO=Cq}r9F;QFusi)`*EJqtF9hZ0+jPm2RfDHZpqmP0Ysu-yvwfgy&>=6=c0RVI}Z zr)b3R!7S#TffcUpu;!mErw0f21m{1S${?588GwvL>%d20J4OBne>l5mD}$W0eENxI zdKx0oWfBr|piSFE`UK&>ARfB5xY7tVx2Za`Md>>7cbn5~De0zdtA#sIMm0#nU5N`~ zZEWJp@r(+8TnF2{@>(NQ!e*J{-Gm(nr=#yPAntWcKnA)jk_~A%T5{LfBPIwzE`9uD zz00P7+-^gQ`_UHyn*n(St;M08zKoS!@Jjy(82WQ<@CxmI&3eD)*KLeHbYQ~olb)B_ z#V4|s<-Y0-`=S#tpZFkoS^mYfcy6F4q(6ckVQw|kZYu@tEcmz zJv?6+QhFr0)1F{oGX*>UsXp-9r?;aBr1btC7q0C*qQvS9!=7iwRYi2FvWO+mwCq2f zNQy$kU1E3m@|&-Bo0j|4X5wtDH?uQV)~YeypKa{;W@^PcT?9@sQzQ#Z)&eh~p7)w% zbA?>DP=0O}oS3-DJb~N%q%1G2RHSznvGKc0HDzkwy_Hu}$uoWLit$IL|DNu?^dAvu zvGmP$0VBiLibGGT=MeM0bk$eB-o4=HH*X{cM^FYAojLg%-k1G(i3sdA7Vat9t&M#p z<@wd2vkcMsI2>eF)diJLfW#3QwA;^z>|=$=iaD@$SBfYQU!=O0h(6M?!KHa&3E6;4 z1#js18yHHLZNZ;|D)ag)7=2#@tQ-2NiICpSK0wn?b3G)RUp~I&uc%B->EYP))|S3C zqQmg0=x7{xXzpH!_9}+KXMe|_CD`HUN@@da2(lU09_umdFt1q8WcvrXIgm--)MwT`D6v);N5m3Rr1LE3GeyU7~npdSR8y zh`jG6Ns7l0o=hkrc*jCL=K<=^-s5LGxWL{cz$f-6=bAuPpxOArf8|5E8gzA6o)-*R z6j`^tSMsW%jObSG_Trjm$fLFoeh2;tFyCDzzWZL%???Bh19~}X#?8@0J@n<&7UYU! zd4!{p2i7uoMh&ZbT=os%m~$Nc*@3>g#GAxjKcG!oKwd}rD!L0O
Enb@az^OVS&rYfGz-TL;A zVd+tNmO9mRi?qPgQ%mm;=9o=k=Az*ImEVQy?O>OMZ+rS*(uRVceVYPJvTqbyYprFD zfAN0{k>S6ZKEK!i#_e#%IEbrZnDx!HG{Huw3G@5s zI0}Qy!4yWwF)(^O0 zE7DfYTL@~hbP054DB8;G3EIkby~d6X)&Dv9(6r??fN%4%xSwTLn+&LH#Hux4$h8!b zdtF>2y5)L7{a)jxn+7%~14!J#2ofib#=i(>kUg&!U@Tfm*$_hT&~63(f7$_A$(a*{g$(Own;VqU;EtM0yC3& zjGZc2^rgd=_Xj?)*)D36M98$~ny81Z>G0Y1FVns?N3-~Ub_3`^abhEq3Bdgxi*EKj zqq@R7gJ0;ARC7NcIV`uxZqD$|r8cMgyI}e6RLv~gDQ1#X*4d>O&4Qr~)%UI)(~57X z6pH^u(L+jlkOLkWlRQHZVXvpF9V%Kin+vt zjQ(=qADmRpG59)#tRq~sSS`!M?g5w4ipK|rol7KqOZNjN8KaO5z5PlQP{*#K8B_j9 ztW(x76K`g|*}QUMrT65)atp?E-0P$AdCbe{dUg#ho1-(;Akbp4jD%W!QWH^eFeEG! z<|+O3Q|FEVdZ&{OYMqYMivlsOD;BGZzu*-@-2ULUT2Od%7`YFogd`&mlWBK2l=i_8 z0=k*JhCGD#ffCI0E!DId0i^q;nW+J00OG|n#(YoR>;@Q{6`y@_(&Xb$NZrq&=r zyednU&yiuJJ23t?g0R*YGf=?wtqQl!GJV!~jV{es=l6s<+cFG8=4(H=u5T#C+i~`9Gtmll&;|Ra zS?Pv{QKZbE6XBQJCfnf`3e##m0XpdKil6rvIdhK3IO%{w{67Fo6$e*}vpQB3OtGK8 zW z(z#X(`ylY5GAQsbDc-{{E=@wT=`b{P`sv(XFZls#^pLchb}uVIWIH7y9V!sYB00tN zpoUCRRY!%XWtLk1{D->(b^fxXQp{shm5k&FVh$1av~d)3h7mzMP{okve1-Gb5vo~o=m@tXZH^P_`L!JV9YqC@5M)DZ)~1y z+l$7F`e)W3eQmthS}gXeS^4_L*|eyKYGf!q;oT2+r+xId@FWYRjg=}ugqa$pR!zER zEIuuGUpKhl5KnIlSnxc*I9OoSPDO3alS;%%Y)w%22>H567C-g<^CV@E&@--M`jE}x z7jdkz!D$xjga)ipnR2#Dj?;ofe!Vbr0NYzwO|tlSfqjhN`vp*2P#D*S_vtZ+2*Nf` zQE%*#miua`MA*Rro|X-64sGRcCLBP}gn&5==yZstFMW_q5k+nsh||h!W-b;t``rXR zuM(OZ8I1MpGf|!pp{M`hZvT8n_j!f=l0lkrbojwc;^~BZB{c>8Apqiifxes~J^I~D zZndA^_e*OPYa52>#XMQ?J-Rl1nDAO*BW^WZ4G`ENQTiT9kI0|?< ze$ZaJGBmbwWvb+x!71)YUdaBal}&jL3s;$_4{!b>u-wAFLJ41u7q7bbN;B{)kKFDn zJVs`oJhnWwuY2b3#?yU+k3;rtL9u6W_jl$W%ryexjiW40qwROqV=+NJ;E5wJG|03m zC14?c4$TJ5GBwx-{_DB$7LHK*YLP;E`I^0WFVaIH#BEWaWDM1;QGeQ+=j@mN^-u1d zqw{7zu3%X-LbbZ)kLFmH_KWdX#r!Q36qP*w*}kCG%I1ThtL}9#K7YdMRR8{l_0zBi zjR^N3hZ66fo1*Loz*no7<}$tdHWK!&zfI!zNsp3Aj2x0+-EieJo?Luy;~LWsMpT`( zJR8`nj?#@Ff|(SogS+TS%d{WJt%C)`etdF3Bdt#gLC)T5YCNWqtNP%SR8Vlx_JERf zN$3bk%7*Gnjlh+}3uP*{uQz_=us`aV^w6%dW1HHqJ)4+^;S)wZRwl6XVQcp))<#D{ zpj$)DjTm8QlBDdqSfrpaCw)t%9_#_HTV!4hf3KehAcRf7!9};YG%fzUTK~B_0~$ki zwWOp_kqnag%U)7gUfEQb|4064G)WPvlkqAj}h&2CQSA142OR6?Y>-0+e zSni!D{+S#P+CJe%K?yR<*Q;Wyd~PU;6tlKFR!Xv67%I+5bczcV$#kA|e( zHd;TgbdLhyX>|Fi*P(A#U6$RC7$@iy-Do2E&>xm`PL@qmb1ck8OJhIxmA1x|Pu6*a zJnC$Ua0NRUS1I&i zJcx9#1H&3xQGcLMhSt+kN}AlS#~k7hB>{h6TVD|Mu3 z4*YBgt$l{UG?GDp#@OCVi;583e%i7ttE|e(6gY<&vOC3$yd zITbCKo}z}9Y7GKW%Jl=Ih6*8DmS+tMJ2Tb+gG4R$zX@VS*^dKyy-*^t2R^J zkLjuc$k{)HwC-wp8pglElL@nL)GDG(@Uxj1wM@BZ(?dQ%TW-g;+e&k$oxGZ+ zoliq0Q(bE{%0bGYrO4N!lq?@wf?)Hxt|OB@grbIJQoMzyjr;5cl^!+JA;E9XP*q-| z^4Xqv0^i&ULTBrC#&}p0eSP<$4Z}Z3dX{--GUFTWU~KK$cz#fcHQ&yC`}Wca@llWA zpHcYNT|R4$nbV*5222^WnL1>OLH~A@QYIlvN|l)P{`&Jv1qJ;d9E@$zw*5w396s6R z*$;O1+_U?#^xokJf97yTP)VUxG(B6*cY6O;mksDN96ejsx7G(c0yl=X4pCJcLXKRU zmvRu&L>1)80SZP&mPV-xLy)BhuUjNxrw#Px%4C~hUR}5&`j%dNAUF>P35=* z8P`b^6@N-j)T^4=&(kCa(EF)LbZsL!!X`AR1ah0QMraI3A5D=zt)<*NuF*V=>1)u$ z8Pge=RnAm2zdT6j*TO7@qy#F4bOfGS(OJ4X5Q!aO`nPV00v%_bQo-bQy&dIUv662l z-c`vdw!0?;;pz(RU!2!9jB<-?IcY_=8QGZ#O7!`PakJ>`*igE|$R> z=|64-KW!o~q{u(ACVNQX}NF=5BC8r}i^N*^*VmeF0~Efx#o$(Aq4 z&(Y>K6xOrV*CzIq4fQmn6wLjE7qWgWKZVGIA3uW#)TwkHc?ydf;kjvYG`!7Khb1u0 zo%S!Z%Hu4Pp*gLR*;XyXdE1^oCqGpc{V5sJegD5JFess;kC_BlMHnbCu9%-5b z4Y#I5`-%1pnp8timpi4vp@;U=^mQ5uv=6d|9(EmeB-8g)<6oeqPjx^;H{G88fNdYF zfT%NTlaOujFD4}F_!JM@RkEVaqmM$QV8=tW5(nDv67oSGjNAarlN?{(co6Dm8$z2T zk9n};n*({*SqM-jF?C8fLif(k$0rv)G%3(YU?}%rFJSVV&PPZuB=ySj)mDX%H-eho zu_wE#JvWM^%f9c20trxG$_hh8f(s|LLumXxg~!}m`qtzxDEp3j`9fQH(W=HyWBx&HWHdS9m1k1#Z+ zJ%dH*zTXGKR^wLlFUAk&4!dv3wQizH?=@2EZ=!3(#OV)k;~(bDroa%gFV_15yKR)f^oJ8`SP=9DM5CYY1I zbj>1|vw&6)q`ahtxUC8z0(tyCt^n$1FW*A7A%r)+CEkc@%qIV|v%VLPX>ZKfK%G?WL02Vq2v^_SgdV()pMR;5OHk_ 
zifRFpOKhYYHAdQQ?9i(k95|akIYk%*%k4nMuS?Kq-6VWZ&%& z+QXd?0`MUx)ucqf2C}HJNkl`xs7FKfc(=c@NQ@dJ=!#bx=AEc-GU(134}l0ER=>8YYM=oX-Ba4P3pDBNUG(dQ6}k7Gu4ym zu7ZK;Fcbau?SDNrN{Vz$TWZ1-8x|`KKM7u$ZymfGllP{Tu4>^rVaG-uZeQIRUJlI) zrY)qlmqv(EniRg#|_jrZiQ_!Xt;3bmGK7GYHy%U)WUBu zqqyo4FDG{U`ZPbc3{8^}vTtN#ie%6Z%Wg4xbxVdv;Nt)LWvMu?vKG)Id3L83eGOF0 z^*&4(7mU9!w{xS5{qjM)k!T-<+X`;!FUC~Ttupim1h{qE&y?wP9M=tAM)M8Nf6zjL zPnJ^$0nLk*L!zYiBMlG#ru`dbMWF6A;b|cVV?CfO5X08B+Qs>+VeLr6#SygfQ7Dvk z?01}BuVjXYB=MT!z)cQliI40|Tcbx?YDow&IP7q?9{!<%BlSG=#1?7GPjOP&*?cT9|sM(6*8Qh_dY)U&hQlj|>uSnB$9b7K}l3)XH8hUeP z+vA}Pg)Ck2bB~@YEx`gvfT4z(%YsWq@bJ)fRHa9e4ga^m?O|Q#zRHSv^P^-@uF*pR zG`49qFz0P!+zg_vWYrrwg_<_6JakncjP^Xy+xx;Jvo@t3nMl8SmkRm>1$#)&azR;h zTw+yh&L2cWzSXT)N^2KZDbFY!YSkTCl8kWEN|#;ILc-nclPy&R;Gt2MLd024pIo(D zlt;Ccc{6hiHhVR9y(#%LDRDS|f41P@PWGjPi@BFllP`>$xe=*XZNfrDbzmo*=CPmk zqfT?>g%$ELAC}C&CkCP3eGg6iVtJ{`-sH-vV4tDk;V5wCw@g>hcB9qhspsNG8FxSj zQhbStIq^FCYrF;uvK~5 zoOz`!{`GNhOjfSNz31g?T6JLu0!uBaDO2B!Z|PPsuIcSCd7m}B z)IQT{>m<&QbN=oX`?q@8ohm=@FUy1LMJzVFQ}H$+1!3!VAwm=qy|21{Y6ZVn66F_@ zyr$-{A-GCg_c;)B$_oM4jmX`Aq>!(N8lSd~et+)Kvi;_wv6XY&dF4y?hgP@NibMOQ zZGc?)9R8@iW0;s`=*Gn1F9h^O{bD-ROckdB|C9LG=Wv${Oj@=*7G5B_ZmtJVsQ;+3 z1>T}<(GEs8GcXWH{T8Un@c^e5n$Sz*+Dx+{P5X{es5+`gt(+K}WL|i8O@oPcjiK-$`0 z)Aus}XpwTJ^6?CF);VQG?bM4wi2;{pvahh6EagdpMY?7Z8MS0n-gZT*8>gf!q;yQB zD6s|xzm{joKm|ka5~|YDfoJBi3C*;gHxC+$lW$1oL2fJ)6UsQ2Y;$##FhK#|fMT_@ z+e(I8wv?J+EW|@~k$s5J)Y?qTjxVfKPDWd(nZ~iak@)FT;{#Q& z^H~e_*6yDUMa>VM2E7knQ$5)E7}D-i@P@;I`L?~`YM)WT_|ZDtbe8JOEA+AMmrvgt z^p#KAIh#P(n_B*R$c?BH)+fld^M-wITyI*kmx~b!Nab!NrdGx2o)?NKUdTD|$0r#p z?{kAwb445my=R;*xyt%x0&e*ww0E*UD*fhJoT*$VdtX>MaCDt4!RTD) zw<+V!K?-la`&~l=+#L!DLi!2zqdq?ESqc+(9t?VCgp0d9>BGaHd}G%GA*Nf-epBv-KbW2c8W7ZDwoPH~l7?4Bu0|+1KPQnl^oK zZGE`*Ni16CS=5sNqjg^>N12d6Yi@0t1;>0{@u=wYSFS}u8WZ&wtx2X&R;^BRRW1T= zKcSf9oNTBYO)S?&bq`&UyEQsa{u-k3EKwq=W}JUcB1K%orI^((Q|DIgB#-}rR4^ab zjQ_&&Rb>u$FTW?P-xIceGbvXc@!2gDzbqCHh`CU7evs5aYg-TN&RD8&jChftunmEsyG*5Z^FcS?~`+=3)f z+@VmUI20+a#e)}j*C2sl!94`Nyld_C$;saT?<_~j7#Ytq?>VpQcg?%dw-EtPv_wtI z&shP%rjJ!{#Ab&hxAX0MjbAUdGR0zk6d;|yU0+|vLK%l+{T>uzsRMnszJ!uH=kgq3 zs@b&?8;U~j-w_u1N6u#&*l0%zk|xD%>apb0_ zShnx;7Oo^}LPXBM0|5(P{=9sn5^J>;3wIMC!7H{5D1IQ^dwj6h5rV@^jf;r+dND{J zdw}DXEtZ^!=#`i;${SS zKO=K}$<$^s>AcwR#;!6n5yg6fRI==XqrwwOJ9OxPgB&mwh^VpprYN2Ma;zLX+d~{m+&jRpNHVbqQ`C!ibp>eCY ziJtrrcd?p18o#JbPQuk>q;~=3qX3yt8POi4jYyuOX;`$tYhV9+r=HX-00NB% z@l7yP_?Y#eI=h!VCPc?%DMRonU>Ap1A7J0XYaRFJnf$&kKDRm#R3;u9&xV7Fb3Whtc#UAY88||{vO5q8137ykio7O zOp3M8k1iHR(&Y<1A^E4LInTmgkFTkY@V6Cfky%VN?~$AH(zoruOvqyEtSM6#x^9n7 z?NVIBux*Sx#U#^t^Ya^%>!HSW9iw0kW0rDz=J+bd=k+^*9O)9ay&Jm8nV+r$;iP9u zbEnSE593W+QK&77@ofCy^i|NR7^>?AP~9T%OK~GZ-(bV=W7I5 z9P=N80nRH)wfZnp`xG-gsrq8$@9u|6|=%)f%4BO)9fSot~ZCf59 z`Pc7WKqNJ^Prm>bl5g!k9FR>Q#?eg`Z+;{Jwo`N-OuO*97ZF>alme1V|2pFqYp&EUMe*uj=?Lbj9vvpIE*5q)T^n zxmJfgBCe(Pg+scec4T2+;#^QszIg?EUWU~yzdKkEIF}$v=J#{j{Tpe!%}~6Wu0t&l z`n8#6lwvR!p`j=rg9!)=G}R5|2+ z)KY|T!{R|O`_g)MHQnj0D7y3L?WmE}>1f=8hlK0VdYv9XsR6le9Bn7*3`M|oCVOEg zZLc9hCTBxHlV0(CnlV*8z=@c+DqMnn@}PZ3ZVzDsUzk!J2oy@neh5qEi*&klsK&hPkJE_vjyi;e^HdkE0o;EIb3h;_~T7PQ0jObQld@;)3+H zWPk_<1b>zmCZt><|72N_&WihV`uL|MW$zE%q&L4L48?9}!Wv1GfVVm}IFpXqeb~*U z;`7*vZ~`>b@?*lxFODof)!g-Jq&9ajKf?YK;bYGXy& zYtltz&VdS)VFR}s-~6p93ksKLZEmwpQ=6(+;x%>TW}3WiALr`QP=fCL3h9rK8;+*g|$$@zedvPT^`>6jQE!}p@ zad>;bZhuPjYVx&z$P1Zi4X7Z;Q;N>qfD?z6M90=$O5JDP1B%zU9+GqB#$G00CKc#S zq6YGSPN02A<@Sj=3IQY{`Li^#jc4qt?K98-vGUm}HKhY*w{8rcrz&IgNQs}s^kVU* zH*F6*nO>vfyUT84e7Cji(!2O^>SnjS+1ab7E#iF?*|&9ntM_vXBO0YTf{>N|7u-`p zfEOee^7I*O3KHd;=cUi0PIK<-;+_ zo=t1J^0DXX>$9>E0gK*&%TG$b`C()62pO>{J9FaQ#4=r6zMLT8)rihGdI3n9HuQt` 
zseervL+3N{Dy!T?fGP=cJT)HMn01*8PdVkwybBeF=mr}KX!7;rDSZg4U)ELmC#LGD zK$wLK(tB1mj5u`lHA#w0gUsL7!-8Y>Y`M}3BiXiy3#a+T_W9ciak#Fw)&cwD!tvOR zperSisR8@6BSL4@fQELqAOn2wc+STZGV-aexD--30P+V-C!Fbz^!63<{d6@51Xd(6 zUln*(_Q6hC-_Pk!TiLN+G>RW|T4C#od#rhk6dJrGq4|otKNNUw;1^#K)gO3+CCn2| zzxU!-*tt=hn!F~oo%6$XGSJN^5Kj!rrN<%~j>nU0;h;yH#IMwTXiM2+C;p*hK?_j{ zN-!I&_(1-8$Sykvk)pmMAt@(R;UKv4 zh*Q^R)EcH8S#7>`;Q6bJR{la4=RIt|_e#wz(3VLk z{5D}M&$_0TLe-EjYWj1?N%P=Y=mRtmC*MPY-bkYz|Mv9|&c-9}N3JHvP<`J(bDrRU z2c`UlfA*g!yt;ks-3|rGXV_3BkFiRde$`CIOTig`_28SBEKl4L>s}p=C{VnMY06}B z$^5aLWRNh2`tf8%P4NZZ=7)@sNNq$^$aF6o^|M&l;eD>EiuiRe+yvDy7SIbe?esZ; zD9T(ya>`^)Y|snKp-;K012-r#tFZ^PJa#yNPecdE95=|;p_TYQ7I4y^i!m`ej$KWI z=6a6O;7bmZ`F`XIhlK|(Y)~}q>lbKBRdp)4Hb{NqPBl%#+aC=k*L7Eu&fR(!Nl5DN zPGQv+%hWo&98VG`DmKr#Nxuvl>A{CcMI~xq&2LH-Wj1Rx&HO;LX}~Cvtq-`PTq|ya zy>ciPVnd1>+o`}MzFX;Z+64CwLZ9z$h`gV8CQ7}H zbJ9b|^7&b{WVZaF24!Dg_f0CE)C}=y#({gfFR0JrTF)&6#?J2l$cIf*=xxY(p0mbj znpu-vjypMO!tTq>DoOs7-Y4FBo8_JsBji07;m{vz$m*Ov5Pdd-{S}YN~M2(4> z57{}5+bh|5ZeNIB{$1f!HWGd)aqW-F>mxR>&F;Lh%?6c1fd)q#XrVd04AC0hN;+4W zV~Hi@2~EnX74lCEQ4?oh7>Xa~OGzF`RqsDTu(tdBu7y4>QVlw&b)@7L8+*2PotvLW zE#w?wS^RiSLOIZijswez*kV&dQmmnjs-j*{>VSH1i&cwwcGrU2Ee#&~2&|58CAkf{kuz2<}$(k zru|{n{h<=ww_B$S6yJXK`Lz(w$E{tl&EJbdtyXl1wfwBq%M72A*%Fhkqtco=!K}-I z-P?vc`^HVVtDOzCpMvc>;&!l*%aOD3XBc&yq#}i+VUup-#JuIckBW@uF`xnH5&;x+IU$`_lBg0BK_yd#)T~Vb=}t2L3Zmec~u+BRga7QPfMdOau+lI{G%W3ok8ntXPD$vs*`> zQ)=3uxY}!=BtB&xCo-b15-q2EddyvuI0Wt4T}>RmU~-WEspK2=UlpT|h?sEMoKtxl zBhJtr|Cv4LbYWM!kFQi-cIQM3+DMCN{=FGS#K9dk>WeV%H_^r;{1}_YTRNUxqPsHP zvQly6zIKppj+mi)hp(qQTbzmaLl$Q}AG3ILUfQ;qfS%|9-Hv2jszK&}W z8QkbRBVnP|GpWh3L6?MvTv4}5Q=#~hVl@_v}mGYtzG$Bz34dqJMuqxFk%ft z_}yCAy+qr-CM;d%S`U)Rhw9ntjr8+IUx3Hd;dJzPr!$z3noq%gvTSD zAlL1dJzTzZU%j4~7kI@!Wa*#kx=06C_c_XzxeokkJ4Pjw&~u=Z(#5B=<<2S!76^dW!ndueqcR+OGx3)%%OzsBF&!T*D%l5-Yc1Tql2x&p8@JV zXFXp!+p?jpclsaM!THS@Zy82yF~ElC19)nBhQQ-@G;6MW4r3#qk7FmWk!*eBZ%K3H z<3n9_mfkll_xEM%Wqi=14`ojhvHu>?V500o4BYS~fx!eVTHbNkQ4OTF;mN}tO8vg| zVoV#I!E>t0{I&`Hnhv~NRB?9qlfuPlbz0}ih(wdu3Q0r`a*XIrvWvHc*r^ngo`vxD zn_)-e!n{X-61e4L=PS5HkDQLO^XQL9G~va82O$==%x2Ns-h;g)Dk*nUN16?gbL`zh z;5E{Hj)KvqM^*~%KT>%Y=055M<;%P|w;81TB|l!oK=wW#%YU(^E>?%akVMjG&4%HcIi0bFtBy|h>)CWGnH9GN+$n#n2M}AV z5Z++?fQEQ=g$c}+C@(>LikZ!VoBU~Ih1cPmIo2oHO{7#W;LpO=TD}QqB?7Msq z&QbX87Kyd1zmQ!UhZ0_%mDwmxG)kl73_G!4MgAW@kUqy_B6wgzI6mDcErWo?^V`6e zqyY(DxKzlS^jh>N;8@;I`3ZINWnEN$KHCvz_UvkPG7lJI6^Z;>JO_&&j+ zQg+HhIQd>z#T4e89S(v)j1g?hB?A)rttS%yTTAJGhNgf0r$g+S&-CxpPC})r9L7$aS_VdTy0f9=V0lp9L4yhd9F&o!$*Ta=aUN0x13%j9ZTX zu?{7>v(%b><=~~&8-4$lk0=H8zaeM6-N?iIUP`f;epHKhx&urQe4cRH_v8wxo)e?$PYtPkDO;efj8iyY9q0V6*(2` zFyh=+n8;isT=*wht>_Ue?-&YTXjKbKQjIV1bL|#P4_nH;Qx{$NIVv#(u&u7e(|z2z z|Fn+tVpDY?j?OVd#`gXYh$3kAJlQsKwD#$}f*g16KHS`x9m1&YI<{r9?pGP#YFQS&E zPFvlao*`TNPw9vUP&+tE`E;YV2Iid>r(WrXm?1GH0eviWa(u7W?h5(ETwxS&E3391 zCip1k7lW{sLzHp<1)P#1&_9&8)6OjRxDHQ|hYu9J7ev4g)zJ>&e89Q6@NmqXRK-*K5R z)HPZ%rYYWllw*NayG|(D9-X;ZrW`jLF6Qsg?rq;UkM3ERM4qDLV-ES>35`iCa{$L= z`YbnZ&i;xdH+@Z&xV8iMSl5GsO9XsuYcC^5PNa@7NrkS7;bI+)D5kp6rls@OLopKH zZIWilm$-x{{bj0-xuH!qe z)`vHdZY8$KB79)fM? 
zJc%wODTMhhmE)6?)hd3JN@6B&1@9cyj~6m*1WmU2M*3c6eE?Yi-c65HyN-jLNKQ4_ zMgpGP94RA@)}f}x!kQ5wObv5nCot7xKm0@CR0wI#qMh9pj&uOG5|#g(bAfi+reXGv z5w1mAh1=V9)t~+rp*U!c`(XU!=gwvMO8bOZRk8TD-$pMt=!eh#1VA3DYWrU2B*mPv z*w#M8fD)_AJGMAKl%Pn7!h1rviT8=r#ThuBfpYrNu~#74nz?vDJbhg%niLCWr%GqO zf4ag57}(kKP-DTH_7~6IpM+;oE&2_eNtn)^|4I}U1I29cGkLd#?``FLuk1HNT%Za6 zcSiI7yDDgz>v1^0ky(zVV-VTMS$FJnV>`m;t^2`fv$UUm@wW6@yk zN_c7=PS>?H!tJ^Y4wY_1(KNKdL8me`&}F=a)ZvV*7p0qzoeb+UE|>XBSy~j zk6*==P0zsTa(F)lP!jYC$;C}UX(Hczf7TuE7kQqS?PfN51`Si|zSn5t9=s}L0elie z9Mtj{ocJuT1JknnE;kGKO95Q#BNmqlzPwoL&psbfdDu$LPS%b|2yU#UFPfnj=QL-f zZnaYi`})W3(Wt3(@YYBoh`QKU@*V2$4pD@MdeDKKnz{q0)hc(Y^CHoW%f`wS;-#@h z;>X*tWdTNXo^gg(5<#X+okF|cdLj-Ph?}q8{t>CS{5Ef6A9fmuE=iFn%Ey&4hQZhREll{f3Q$?I0i6t(2H5-QoA#;|B(+RKF5Xu zVKAvjWojob($_J)%g-1mJ_)qN$se|hW+SKe@wtZQXfRAmH!#~KVP@pTpS%^wJHHYVv`Fv8cvmJamQ_zViq z=ut{;R=qr)b2czEzX}4f5;Y}lcKKs>QS`y@Lj6!Vj{cpO>513w8cVjhg%B9k<;Rjc0lqi1O1`D-Do&W`ZA)@`TumaUQs=ALn8?WzO>jat2 zH0df(iBhcm5kqOz4JvomS?3%V0pUlXKBYufTh zu1oxEd`O;>Ate*&6+h4~#(gB$8rP45L=!U79rN_P*J$?!wh+T8(3;I&rhjD~mHmB@g_r%QqBnat6{!w;b)MIJDhm=?7G?|fl$1mjf}UuPQf$98 zgGm0Xgz6B-ylURD5U(8FB0%7b$ru6N^Zx}QvNBE4C6FzDb6}zO@f!H+Vy>j4Ch#Qm zL0{qR?(b|=(s?=Tnq&%_uj%crhmZ+)tB8%HVgF+3BCE7E!v=_*hHSiu!rz|QI^&yf zA{cXM$iG-dJHOxe(dhzo1T(pLcR&Fr;5Y3%ab5!AZw3ih#Wr$1Pcs!FDuSHGG4(*c zXLazu)ruI8>p(;5&MjGF-~j;=W3%(ylbNB0I)1()3mACO(CxR*({x>YXgMX4SSjmv z?y;c^rM8YBmaKgCXw&;eX)R$z1ES{OeVf>XW`ZA6jq_h768NKUf{&hZi6d1>_q{O7 ztC-z?+*UEf&T5#BAt)|6^S*@>RH|l<>5k&}kcHkm z{a5lz{LXA53U<`Id#5zMTkK!ZbJG-0*ss@F=dAW!rOQ-X-4EyuUk(|=Z75;?oHd|K z2S|UqGMjdiQX1Y=;_cZONAI!Mhd&w`3D9%Kg&##6nuhVW#(Jy@tzYz1OuY|FewTD= zEzh3q3vS*~e30_rn1Q9muKLF5#-_AXwsA4|1>g1M&FcxqVy+&#G}y^)7ZnEUJD@}v zKIBj0T|CDxo|G0f)$=Kk_Pr{UMrj1rhH}gmvmBguUsS34brmuP>@r>_V{i2=f~Zy! zDc`g41a)8sen#{Hu>0V=p;Oh2uY_1~MFDp3Z8vUk5(nvRYEiL)JN&JV0 zus`NN=d>pxRTkwssm*2`x&)1W-%*CrY=ly~x+yR&uWl_;O*`z*<;oc3ViaRTnSp(D zCPGS8-(44n)S35ezLDdnD7P}aPNPw?M+!vc&mv&v zc4_yG@1J(<*l%l}OJ+W~^tbe#0DOTD#%@<|L z!J-aLPW#rBy5cXq$SXCIiu&x>DM`I%n8uV3=(`~duis-lU_Zzy-3)g{DJ%26Y=C7I zq?gY4^LmiXvnZT>Vj7uo5=$#yW!Y*MmLAjl8lGUoZjg9h_C2)Rw@TuADHQo*-F59| zO!yXa(7#Vs1kAS{E$w49{eJJaYFKbmmG1B)VH*q!oL;`-sgb!j^AV!GoN^6kjU85I z;wEPFS#WXYVKoRI!O9Xf3;Vde?sYrcmX1HddgO>>J`?3!y(DH2@O?MD-=EnK#-~kGkam3X6jVW<>q{5t z`;(4-$)IWo>}_|9D{RNw01w1h@FVEZU%{c@Fz2vwD;e7cmvTCF=|!nKkCNB$k_h$# zR7Nov0VOEg0E%dBG#*|Rrrq_A@DP4mA2Q~$s70u^VK7uAd_V={9C${E|_ zD3bb5YsOF?N3}7=Os!PnmD@~Xys=Mus`af*@MX60brKl9^g@Mq&W<8P{d1{t+7uL^ z9(xc*Z%uz1{R|sOEwxI_`h}v~J!s~jQ_f-qOW3eFR!VF8gH>E*pM9-gn$d4J`mHV* zxm0u5Kcp-umMWKS8pqW`J@KL8&-6d%UkO#HfGFn)4%3N&rxeAdR;&5=qA0??j9)P~ z-|~%(mEBh}K8vH-!dK>#-5ErFx5}1v=J>}#I;VhojITvCebG?H>xA;`-RDpOM{S`x8ooh* zr8hULFFy~Eswz``l|WrIE@7zG=kq`N8iX~XGP~0i9!g|<+ zuX5eebNJvAU({k*m^`vB`HEnl!@sizLa_$E{K7*Fq%c`-5qhf82~BPG?w~y&GX@Q7 z^>Tg5zTc`6`4ukNskjIX#G=JkX{&`Q`zC@_9Tq7D8Wa?`H`nID!=^4wX-0N~E3pHt z8Q{Qq1RuR3x4Tuzlgk}6@gbc+te(eD<$B69_%;I}ey(+hej#P%xJ1&IT-~`OReD&s zX%T$BLN68`3BU$5)4SaV89LL4r>?Pb>9mZ#s2JVnlDyP2qY*inY$d`B0}U=>-M$Q# z3CcKwai@w;^i)fVe;dYP3B5EW^%1u3HC3)R;yIVV=1o@ask@zCRud_x+|zSg#PNr1 zf50@C%&rzO`IK19*q(|(#UDpb-~7^XrgymMrl#-_gKPUnIEEXBbbmN}7iVF1oG<=N zR*7b+Y4lp?G^gd~n06EIrJfjqG5yMIqkr9f1_Cx|zJc7C-on2;jJfZ0Lzp;6M6Fc3 zz^6{7cuS<$@H4*YMrYtnnVWf}OCJo}ztK;>vFzLJvAbvH<3HRNIBb`NWu-w(40BC% z8ri34Z>hYglUlw<6c{zb$g>Sqv_=xOVM70BRpNB?JWtzeSE5MFecEu7^YD4@ znzgv=MW!a!u!U{aq>uLer282|STPOM(!lq&=D!raMXMyG&v;*!h(*nE=u$C*CMfJ5 z-{LWo-sv%IUr$}zS6F^!Z@>-D0H`&U&8z{ilccm_&iHEuw5-ysh{Bw%S){MIdgvjv zW@_dYZ@iKlzmk-djIQGJ(>JBOibwf+Wk4N_y3sg_c%-Ti#DvUbE^Y&y^cp1^P8X@7MIzMQvvY_9w?7-y?#>G`6cCJClIR(#bo94yQdU_I?*P 
ziuwvg5cV!gk%Am@n#XUrgkc4VPeyk6eFqu}8PO*XYjgtMGo=tI>CLFDWWZUm>J)lR zQWU+?CzEi$;m@eXc;62z?lyxS$?w`cOz{^$ZW&;CcO8M)Gd|DCK>b0CIk1XYX_+JO zF_oi-j<^Ul&j;a1={lgkF6o3cPC0P#RQn&1-xTTxFMkYc<^>HfX1`dI zR=C-}&0})UCbOnoDJvq5?T*7c95)>+T-HP3Zd_plo%Ud3Jt>HyJ`xR-;itfFV!;}T zx#X6}^-prtsWOD>-q44oeUI&at|()YduVse5`r@==Oxc-EaqUNL`zs1D2^r@N#D4& z-n7nAC`y;x&qvM|51soXZg%?iO`@WQ$XB>yU?g0Q_#ac=9!O9~Jtg;IFyt_O<+&dc<-=G-Q$`4yzCeK1G|ZcD>Qi@^s~G)K(Ri8j zT>~~V4_A3hz}rCg3p%qD4agu>VM`_YL-UsAYF6yBzaE8r1kdbPK9*{c(dt>)=_}L^ zOoj(!DsdBCJC*r<|Kb6eqCzDtgXT1QW}p6)oGPvIf~iuv*D#}fy6(IaEc~6_mBZwe zGiO%X4%T!9wQkDy|JP#^9CT% zapICpo42|qN1ql`&U_|L#WH*2(zHAs#^lx7K@8FycQ2nkWRY@^Uih%b-d-v8ShvPZ zj0W#|%peVcFyZgY?j33|Rlcakg=8v^fn4H7drAM86U$uM*F}8ffgjf2Khmrs5BMcP zKhS*ux`C$k<~^&(kFajYxO}f)mg&qhji15cDa}MxH!9Ew`aFx^ar*{^+-C_XmWItG zfQCWrCv$_G+{b+$sfv0%T-5_ysJ5XC zg-1=KgaJVey7eCl>NUS>Z&xch9p?pn>%Q+bJFdeddLBy8|1>-#{6JJfu1A6D*Sj&8 zx+24n?BoP|2bJ@OLH%!uj&JCebTDSjovhCd5Tt!;mVHAw01}c$|JHnGcG|0{2{~4V zkTIPQo!li~W8+;GxuVm-7bCkZruPiDxR}j!{1L)Z*I7p>CV;0lBkkG3X)1w=5lg;3 zZaYSY;Vf!XataQp#1JFC0hIV<`D=A_hKCMLjdP)hmv!x@q2nI3?_-Zc$)aSM1D0Ez zk>ipiBB`%{QWFNF@(8a_Ka=qO>`|I223GRNqQZtSPk9({p#ILufii; zf=V}Gl}|G!esv?#8ZEk(kMhm#a(*g<9euHW58k~tY}3&|+1DAAyh)x5ma?4O+bWn3 zxh$=512dhX?T#4XigkpY7JVO{V>+0IH)Fk6B_g)6A2zWOWb|VFqKXwTlj5W24hFbe z=910&8fxcM*kfk^Ly4@_PkjNf4(FoWb3f%(mN|WUs!! z?g(y!ZqC{ddw19Jvp=ba$?d<$t8%sKPaQM6;N8i)!gDou&aC#XulTljXixk%B-9}2 z=aj~$Z}VAn-&!`4mL8)vhd+_o+~K_|{<#0%u^`OB)G?)&_%;FWbv{jTST4dEEcZ0s zV?%;u$6Pz)T5iaW{e8;74XFhyRt6i@KV$2tFN;*A7IRw*WG%V)!pwwWF0Gcb1c%KE z?G*!qQkp*=TZ?aUedkwh}7(AMBa+P|In3*O6L$du(cnK zH-UEQTC1eFaHE}j=O4RW)4zJ0{7$6tjmYXmP3+u%kSYA{(arNWmamtc$(5WzW9GL6 z?IZG0jGbW}!k5s8<+m$^VbYIz=Mq>x(6?v~X&c&2?GdC8>sF&Oz&dqE8&ojMZyL#B zNp1c!78rfO(=m(iw*9Q-tfnRL_D9UhJqaH0z^Wgfr!)!%w54IirrULkyxI*FkM4Rm zQ`=tn=sORURL&Q~j6C}}Z)l|K7*3N;;x2$GUHFECD5X~htm{>j%kh4I1XF(oOo_v{ z^!kGngB_E^ZHa|P9Q$u7=PRg(iKzY6FvF|gBXWtZ@#JRr->C|y{HF+1QhxOD-zl4A zLLbbaZx|iISnB*l0G3XF+4QU6OTY@uE2 zoP?E~Dci%b8PDP)Ba$5-_@zhApI&ZT44KY?t+M?3)i6boS&&+P*8mfYkG`4pH$k}e zsbb&8FL2_WGK{K62i*HQ86@q|O7U57)XQ&*j`B))(OBWmdc3 zlP*iB#msz2~INhSMs2Jz^cGp2W^7DLTzUfV;{0x=f=FsFSdg?(kA;qi{Lzhm4z)msi z?)xAWM@xH>Ti>E4@577@Z~nl~*&*P?hjHZ(pE)M(lZbsuvUDd01OQKfMgT`;ih0ey z5!_2(pP0Sm3tM6i6;t!;VH|QEP0|nTtRDTHJwo4l?qUZMzl7MoRz>?DtQlj3Q#*YK z<;Pgd;He+^_lt>O_QygZoRw!8 zx`*vs`g|h7b8j<+iy1F(8WQ)TnN-c0tG$3E3&xeGAWj7W9U z5ZzjEIAe9{vOSS-(wJUsdAV-e5YqMOLoLpg>oit7Q6moDsKFTAXXgTN^3BLS|2@g_ zR)LbR9VxB4sV>v(HYln<@?#6&qhwe?T{59d;qu0kv0IsV&#dBDuyixAux0tMg|zA4 z|J?hX<8;OXE<3{9KY)dwLC5j@gcqs)v2F*5O1?3aLfqrLsrq^w=r}l@IDFWy=HyL1~`iiPf>?icHG%Fo&6CM@u5>~ z`(JsX6EU$Nh>OoWLaOnURs3vNa1wZ*T&dZ9T?W#6sjeLca}}E$ zljZTQ)AG&ZrcCqsrY~q^tqi1`d1?7SAGzLs4NdMm+e!P4?>^R_W~0)gi!Cn*93#z- zmapL8ug$m=E|vf>g(o)6nO=sR{j54jsobA0@2Gi3FWL9&$|%84HOZE8Y>qp{YLQz5 z6&h}9O;5iRx4_N!(hXLU@6QCFJhrs}cuFT4Y=s_|i&-X}!6PXNrf9!V3Lt>AMzMN%qlQvv?vMs5ktOZ}O1CVl zQLuAPUn~cqAFI#F9zNvMK!JlmCMsDaDu9l7mW5uLU+Z2Q`ebBEF!0((xm>Q>PKaNL zjdzG=4h>=emw{{h!pq+g)yF4ybL1DZx?R`d8ycFSPY9 zAFNW%)3~u(z7I_fNYyyk!e_UBD)msd)ok3VoA*_-w#00pHvm{MB*hGsD8MFxIQXo2 zwMk^2fXr%6nXYsguf=yaQv9-|a_--%7uvCgTBuO9VnF$hf;h!@Gh+WL#o`Z3bHCiO zA7U_*3)Oa0;t6oL+ehKW_O3r7D;sbf$*6+I$O^Mvjh1FVQndl@mrmmFUlZTVI^LKt zJa{ufk)K0$EsR9WdWu8K!q_MYWG-u!_j#nQGB?2I{M&cqp1~l&fB1Y(u706Oe6FVr zc&>~g>bf0Y;%9D2zU=@p&$v|Y06IbMT_-EP8pq@PxjgYN^a|BD1f;*?|M`va(LE&e z_j?WMeI1{dyUe+&M>9K{hYMeC7Q3ZpqXo?1Qy`Q1zvm%}yjdM?$!C@|vRGACb+)>` zU$+;9qg+*@CrSD`699e`mZK17t=sQ?bQPYz!u%;pqLMkY2%ec9@8^u#zd0)v`%_+un3yzbGpxBuM$Pp%GbNb| zC!zG{4lv0qr6J{C0{~(-fKc-`P!6~sx?L=OeU}>pCh)7 zh<~4TVi!H#NC!D%Tj@+c>yviUAx9F5z~*FOE0Vw(2`5g(FzuNoK18JEQ#8enuet%1ix 
zCecBvOfWqAInmVV_vcaQ1=^N-ZokeWfE!JW=@IGva64M> z-aQ=x{-|hKos~X*l%m$IZh}>1q-Wy)wRwASf0DOhHF;DTao~Q^OZ7`o6DYmCOVvEI zC?7^#pxH0l1q1KIx=$z~id6Z&hXwwszmBax-o(-5v1gxyMwoR*DXCZ}VsE-9VAt0= zBc6W)!?Q!)W*SJ`XfRuq8WXCqfO6=EO<&lV1&DDvvz^2&Zr0T%i{T>og9{mWJoMJVD&9)j*aoxYsa#Xi76!g6b^_~B+yzyN7i^>_ ze$KzeuVH?=f>DC-UCTur2HxUr*}+MgG?3uVxFKALlC;-a)yJ9OOCm7!9(7 zGaQ&Iek+gFh2(m6ecg&^XJ2s}DZ==>CfUR=AGbxfLC#$`D)Wa%)3GJ!*aWKR-+500 z^Hgodtd3GsK5R5;A3?(fM;%L6jpJCnU_TC<@2qxLjU2z@!N)Xzxn&2$L7S%{lt|Y@6Q=;>)WJ;z7d^mPhCA%s@V|HYmgXh3?>VC;F`QGfK~GrKyah6Fz_2-OO`) z;=_AdJC5%dpT6E0>?c4OTOOcI6$GQwpC-R42OdlfawYQ|g?!O0{(4%;fzAJ9i(vBSHO*V}7I z+#|5FN8A)xD!vJn^1BZMD3&F2Zy5X|aJ))!wEi`!714VE4^)G4yW~Clu|R5!t|DVR zEzcHSGpPCsDfVq({P-=8vyQ58?7rI+^3~=HZDB^4)0%s}jt>%(WyF2K|LUq6Q1K!q zWGJi9oP6=VE(4t1JHJnfq+>y8ohy4dHC=CktN4wWFKi$KK!OCHT6YXP@ppDAEzs`e zQJz}yXidN)jGOcMRXY>wTTxQ-;ha==+ea_^a10oN5|3WKzRX*BB|w#c`6PlWMd!J& zle_fszNSK^T>(ZVXG@3~<${wum%5P{%V(SJ8)vWOuh+)1rpyS>e}<|Y%tAIg_T5Rm za-z4GW+_{xo=efhmiD)Bke3rD?z04~P|fpx_+Fd9y|?Dt94~oll`S#iTJgjEHr@d~ z($4sPXVXKv-9q{uk|k62U@I@8c_wu~EsJqcR#31%!?&%I zKGd_~Ytv%N<4*x7thwoY_SWg@zhAfR2EV>~mD#TfU-l2QHq9d8Rbhx(QUR-3^S^Uf z!`J1{YFo3GJ^()L_Ze=cqG_c2*`Km0U%7E`(-j%VwZB$t}rpfT9**GCk zb-4Gvq%M-iq&6+>XA|gZbSVN9sw*6~vtFZ^SDSzOVZ){QFVk8N@yN9xa_aid<=%pP zHh~qEB(FsUK}ba>x$K>M&QwHoXfLxVB>iGh;L8b>~aLKs+;+7 z^cxlWwaHf}Z&(7qlAh4rfK9@nYt8mFEV83?gkOnD=5B!QyvirYnA?Kpu-0HW z84Jlmt6673T(PxBd*qGZ`qbj_D|t7$jCBqLuvW`0W>gzi#k0-@-puyPG)CUgE$~<# z?*!TG9p_u>kz})H=Q#WWwqQ34*vONgt7R1WuH2n+uFBloOamzuy231Zu~7k}mE!ic zx-1FaiEa(cEey=z+WSzqY)H+Y)~V=w5CI@SISiHw3Oem|0w2lY#TkMU1Q;xxbLx*47(zgLb0DE{NxWk|S zZy`WeMf!E*W;CE)W@Bn+eckE-zAxapT>)neDOcn0`@^*HcZ=7jF_j$=@|XSki0TyQ zi2!SA5kfRlhXfVJE=9fE6ZhNV!ul{?9H|5M$PXf;|8HL&@UJgVx|ab@2~k8@1L3^Q zJT(^O%FMmpJ;fwaRip9(sSx2XD%?B^;3)Rj3K_;8o0XeDYd0(Aa4|beX3HTLJ@d!q zl*7)~4&seYBTNGQ44Xb-WbTWiaG#!bv*XdFi@$y|*@uoLpL%A+8K1Bm;4oLtvFqCW zF?aQ!k`M~^<#P7YsS8-p0PN7vNf$?@A%~~AeiK~Su68U}YWCfumI>rS#De)G4BZ8> zKlV4_^jEpFlAH%^OAR21D7*UT7A^C`CcFGeZNC2Oh4p=(>YBDukx{*-`&jKw|DPuq zx&Idgm{5@K`D3XN+}&@9QIz*59A&NB8yE6H+m5$YuB%&2W^vIwYq9qx7G3d$AVzu^q+G99z`7t6?IsPjYQl z{N1XU!^-KD_`HeV^83Xq_DEIey<(cKhj?R!CfL6ymQrO!jKN^bHM|%k*JO=_PDY+$ z;#(D&SwV>&nIVrJQEG)yaq~*T;sXm)~N#o$7;Xej^d>v`M)qr>@3;cx z!yZVqx3{jO^VG#`s?fk%UBWO2;~ zzou3|?+4@2VdpI~$~#=%iDWT}*|Mt*^%^rVxmkPU2WgjNF(ElU{rrF5KHZ`^;jSvvIw~M=eyCh@$JQ~_@y>6sp zgKqOp&H+y>y{XeK7aE9b^!_(FB(=X12LDo==wah$w)=@>Bs>oclylk(0GciBJ(oyELq_>t98%x7YaC5f(V>-^ZXvlULG@5j$7|39d=6zR1s`; z4a-_@d!bBim_^6!M0r~sK^{}}4jS|l>r_Khe7^csR$``*AZ4nmb3Cq|F1KdvT72F? 
zIZ)XC!VZ4FleqXbgrNb;GOqol{fgi%wepg(Zkqh`T!SbMw2yJ>g$tS}*`t>Y*z`9p z_yjg+OSq3J>HB6ChI{edmxFv zcHHs$ZJg&QmzfN;6k8L;GVYaz*Z$q^hR^&J6b{}gLRmQw%9>@V`dddw?^~tcf7rWu zsHn%U;Fjc7{}r<9lhh;@9s}ivXc-yNY@6iut8~+bd*4jEh@!wdd#1pHV-Q)*8E3EW zK}g2z#{t?;u{Q1#huUrsnOdXV1k2TbhAflednc*IpNsa)?s)r~%SR5>hNjpnZORnX zxybZ0-h68Tk_s#J`}TkWnoD_=hkXYqI!F{)^tBym%AnPX(TO<=q6`d{7|e0hAzIAc z+wjxU>yD5s38IgM?0pN5QvSK`mm>DMQ{-h8L@f7ZXqZPDJ z8wO<_xB4Y^&j5KLBatxqALYw&PciCXXA@E>%gCB_+`RS$Eb^foSJkvp*5cwL?Mm@2 zweTNPY`S>j^K2)b45}rY|9P5I(SM}6El_zUZ@R+9vIp2(%LmJi*TXUFjj}#7nc=eFp8X;R3VJYCrMf*GpT zuko9d=-6sa+XClMtt`p#-#L`s5BUD<|L)IXwwCYYC7riDq)R6MNNolJd7tPJ@-X~O0*uz*}Ct`O-f@ltquP4NMDTFP4&1d9W z{_2QHo0`PH-Va^JIKzn#?eWE_K5UqXwy{heI!U@3CQB8(p7>d)1N@htd8ugA4`| zo2NO>H7}`2N$9bA5`Bm1pyBoF6Rg5EToS{gMvJfOpyGMV zt}+{fX2AeH{MTkNQuO%t!-JP{Aw-#%y~--?8kqw3Zp&VLNe%L&Tb?ocPe$Uy-bZeM z4pw-*5UXow`Lvsc-Q6h?_jRvT3@}NUaGs!_9W=M`^?tK``m!e>J1H~Au0hc`^ZkJx za9^2Zpq_o&D5>70wzhK#jG-?_)422me7%Kyqx{7{t_k%Fzm|)mc|l4`tKb^BF`dxu zXZlwuU3B@@$=p({9*%;+_UwULj9<>~GJtbHciFY#!{^_mvuPUM^FNRn@h^IDN9g!Z zj~{$kvJa|OtN;UxCZ00)qu;xhsBV4uTZCt^gs&!b6o~)NN5?;opu)zN@^RlT-rrt-&G79b7w(QHGZkyI}cKBRxi2nEe{oDG? zai_-IY(0Mkt*7Rd0ytXCjztr@`P03s)LFHlygJ{;@NZpA>rgh`l zi-dhh#@k5$C#AO(5Pgh!JdZAyYd)@s8MtjYA?s5YKAi)*3YD*@(ZB`V!7_I=*>9eK z+g<*cYBSfJLaDERE|X2RKM%}loNFBl&1gBPrk^ZwD~Ps{7xtr{QbCS6!J=EF&sZ>2ix zWH`-6rH5o3BR3#O@^jXIe_e!SPF@Iwozi8iX?z3)v zZ9}!pYVmZFhtOj~w#WQ>>n!v#`F^S~h zJu;YHGmYeg9wv$TJ8Y@`Ht3C}-;d@QRklxIr-S~PHqX*P*e5-kIcQpfVKjv2!9G>r z?$*Ar6jXNif{t`%W~c(al)KmSqA9BwsP0iwL1Sj!M69;&8DiROT`8ofprbpy<=~5U zMGIW?VJ@JoE0ED+9^o@@Ie+d${oCC{Mou>F8ONF*=M8SxJbdok8jgKUS-CgTIWY=Gb@_fKT>;j;l927(tl82>X6N5oP{@& z4qW_#*?PJhd*QhN9rVf(E9F*@~o1L}PE- z^5sI5)Es_{g zkbZ-ve=vmY<*6zLbUbs_(U?~#v)(_?%glSAgeDvf@-+1K;Qz;QWi1MUbcYKbO;Nd` zi_7(4=Fao{3sT-s40|!$B;|nO{P!Hoj_z1Qin21^WyV_5$)4f(4%+a&wr1wd5C2aV z!1e9Hzw808ybM@lgf@PvKIY4l9HH_3P4c8N=X0Rrm^&Uf50K6_f186=Y2aNhS5phd z3~RL?Zqq3O#FJNQho5M07#9UDD~z6WwcmHhQwaDMX&RR5FhTO}jA4lW-luVrZ!Y{~ zbMGYzDO5zee8k-s-t~C-AH`W+F|l8iXHPRk_`i@fx@(~q%BRq;7_cAdQJdM4MH$n! 
z@x*f?;ua@^8)ZE4FY6uNSfl%#0VgF1e!j~{`v}r>aqvX!xJPs_VU5vVDn&E-SgG5P>1=#o3c5a(A@&D{A%9kEup;%|sc{W%b`v zrs7@<0`_lUV$}A#*G4*^BM<1>xu1P&J?0ff7@t;J5^VKGccXp^8)g!V+qy65cu5ENuanNj{ z%PXDbe;#)HcVgw;s|XA#k5A8kX0Lt2#vJav03}W07M|ya7>A0tB?Z2GSrimmgkRnN zOkrjm%8Smi*3)`|f#E`@j&}Y4!pQrK67j9qR{f`@ODVgm?>2Q*h2=2pzmwgyIFo$E zGO*mgMHc>xP{ab2d0yK-2UI`j?FfTJH=}>e`#Om)z&#g$MTP&h3< zE=YPJDI65Gu)p`FHc_Y=A1IBc7t{T1G?#V9Daomyi-5VtvoT6aSELWW`c`dj$h;x| z{!{flch~q(c&olrr!bp8bZ=8^c%2V#iS;p#|46_#oI;|}Er85K6)}`x4AhuT0QEU1 zwpD%OKI@_Jgu+P$>IXGnOtXY{T=0)>MhSAp7Gw6MC*@b0lCm@wDO6PNs_(`<%nM2| z_;}J<`Q6>R@n7P1D{FBcszU0`zF>DgUY9IlW-G0JbC0af4NA38GH zAQiakPX%0r9k-)sraomWtocTFwuIfJ85j1NnrDpAr{BN3a+z2*So#0J9);Z_Ndk1B z;cWD2e0jd7eRA^(*e`?VbpOPsZyoqFRJF ze)*8N%p8vI6O}75po}%EjJ_$%_{$%6$Z{rP9`5H`n^2YMlUC|VeKsi9MdfI`aJfwP zi}km_Ch2V!=vqu7zN_9UnsmLRn27QgeU{T`tS3Ee;N7hd$g-u5ExQszsO=MoPje8%hg`jZNZH7asQ z@llm{MpDw7-qj?%^VSg!(>vcuHjs)-K1yQ;nIfK{4TkJn=u_jvi$;&^=sP~xvI_-& z?yrtdWI%rMl;YK9hn4A_d$ZouzSggKi~f=r@h!rGqo?=%TBE1wo@k6aym)3|t*`gm z7NRDOgJmB3dG0bN47Bo2KzBU1Ea1AZGb%9qTqq$+HyYp+WYctwuh zHZM;L`y`W^QRz^j5n~Qddu-ou{5yL&KdWK7247yPz-mz@H7JUffw>L^e#b@rTuV=$>RC@6>2&aG}04nRkkR-WY%RHro`B!?UmoDoL|& z_(;j>Z5VZp4Vv|I>We`{(++0n?%xk=VISY%eFpP%Z88&6Am1acKzwDqi#sv&Kr*I3 ziyM#e0J=Bj2%lAWaVX@zfz802y0jP6!jV#yQ_NSs$<(!N+uQlwp>cGCrN1QQL!GN{ zO8Oc4w69sv>~9ymfp(4tNNh}T@2|W;qv7Qfv9|3`ETh+F>KfaBF9`k*cUd{V{o%jw zoH@75uEsJmwKnAC1|4vQJ~~V!`6K##U$q*%S=fBN(-cdgTx}Q2E@1JdtC!~PQYGfS zMRIS{`dydal3J3xsx2=3Xt83n;1=Ahqnn?uw9KN*za>2H?oh?MMWh0*WS%m;0su-#jWmdPcM-u=qbS`SHwtP({`MMrzv-oA)10}d3o@~eNtp7%HywrU zaHT8+Vs z=BMOh(rl9Zh-ANQ$@qbxYMh%V@?OiKZ4EWhIv6lw+^S|Iwqh$eG181$RL+CW2(j`C zuHE0eF~gQGO)t>w2V52RZ$Z7{E>i1%7w45LwL&(ffFC~hfV7^#LD|3Tj!r+UX}K^c z&HeVDW|`^zA=Z>7^KihpUt&r8TIr1?eY#9j(GQrG|DfVTJ+r;@mm8)8x0fQ&O^2wT z@*REcf%R7pUR6pzxTC?WLN1XJMU;8 znezuCH>bcU%AcFxLw=eUMEiX6wOW&>&Vrn;Ud(g|=XhNoFy|e#R#w05%VzX`05~tU zd)Q|~?E`1X6KVl{J!gJDY_amwAbs(0&!6I?gCvnn=({MqWm0rIB8p*dHtU3JR+d#r zlw!k_@pXyHASuCu|6b1vj>KK}1@Cu5->MDu{ieTQjPV)@AGbIUTn7y9)BO8`ApYkZXQiHZ+fal$L*wRpN+M^HTt6&On{V|%w;@JklFNH_~EB{N)WI_%~S~G718DLu_(N^ zm|k>BKZ|g-o^x1Bm?uBs$^zj_>;B|jtD|YQi~8Y}PM!6+Vr2&wdg{`G%!CK*u|Wp; z$V(IJ`Ef`4Y2&UdD^B&Rl_UF3)BiStCw;;Lnnh5Zc7Rzv0Hx)<#WW4!M8bh`y!lv^ zwXktnQKW=QnWfD<<&SPX#>hhjW;Y>w8Eak;P~_FWq}d-4b1j%G0ckvWoSLf021BM!@Qc3x#bv@k1N1|iH&hqg6w=2-(PR=Ey}F^i0Vzt zLOxve?BVj{!dBj~+-%xSUVEx+?|FM+tc%~?%d~@9CcFe~Qd7Of%@}1wC+{80qEVBQ z>g*d1f=Pi{Y7H-)lLOjDXDMYB{y>-fX~Q+v_-##X+18076|pNswO>Mh)KimY zQi~fY(}4EEoE0FDhqut29s{!nCh`?FSW3_L=*r?uuT%rdT^ziB4a2Aj6=NV(7R!RP zBSo-boBHzbJ^$2ItsK{nqIhwbOU}wkkHoWbvQ#%z)%b>U&}1t|L(~&vktlQ`WNjyO z)rYB}OeVdf`1ZO3-kji^K{~jtcAA6kJ_lY`Lo9kssp@<4wIf>;XC}0wyBQgyDt>(G z4bG_2x~fc+4xb6m+^tUZoTSx-cPSsx6$Q?~kPOL0D#EFj?X8y|hWXM-v@N8v}Y zG!AbYPOjJC{Y}RXc;4@Tfwt?mhXXrYHFj#>=N)P?k8vNDVhI+~CMVF<9Vdpo6aT#t zsgtQ&?F!RMC-O@%5*R=KH_yI#2mW^z{lDw{ANa$+w%OTCn$RuU5b2dY3g<1;m_vGz z%tpxG7*f&0&?P|I`bxc5HCghg*>hUOMTSC_0eJa`Ta&mt^Nsk&R%I>Yz(wawlJANh zK(kTR7Y?S3*etpOpr0xRX+r?7TMN9ifz?qzDa}L&G-b@5Dx587V||NAWmCj z%z(}q(s!bOFCgSpf1WIp1p9MXT;3lBr%T(n0t`p@oVAHFHxgyt?-@~ok7^;i!P|9< ztMzJSXM7N!Wv5xbDE4^3?3cC>ettVwW9YgA^PC%h%1hP_6C|MT;`LU!ErgTr%9Pp9 z?fY-EJG|xaIzAkUT}hdgm9HVtSFks7bMW`NLDzVM$zm0)C`ZeM9s44PlvrvZN5u=Nmj4VlaQ z5%2Q1z6(;2HLCtni~T!`}N$)>XdfSIeC=Wj6vw&3jfYV@0Z^5&EO%>(Z=BF(kBfOdGklUj|1BH!8^;u ze!!n5pgG~Q^!TZP$#`N<%o`WJki-Df@lmMpP&9kVQi0F3w_b7A8Kzzp1vVt zpbL^}PU&hqV5~4}mx&0pYghIiwYzqNnfoJ2eH7L^$lqx-_qM$dGI_ee3qD%A@m9Tx zJ2h#3C$)S-39e(+_P1s^PH#<{X9$M$y*N0VOpCxY8qXKY`;^KbAz?oz9O0QFPJ53W zx`)rETUF4@7~Q1Nzumnlwp8}>`a>S9$s$UvClkzhjkR44L=M{*#RoNz)4phKe 
zH$%0YhClNi_$)_pUnuMxIEw!t8~Xjz8Cov^(FVMsccD{aKg3po&0c>(otfstW=YZ_ z>BU&2c&TH5{M*7$B9W$W*|;u|Gn3D<-s?4JICaC+sE)_6o1bk6N$#qpbXtInmgs0c zW(Cd)r68=XUJYcI<&r1DO zxQsjX_r`rJuL|MR!)P$l;ROCuR2QOw#I$(1IC)> z$C~(oS$O!}k+T^a!GA_Y4FQwx!PY7R0TGj4P3_us^GuCRp^&k!;h3>~lgcx4E#l>; zVHp5CvABcO%!PCc>;_kZU)SL~`tFn%uQIu_ATR^U{zDgpYU)c$0 zV%aM({hV`)Td3zK`~1cv-3*=nuiztg#xw~EO<+m2Od@-pnp3-#^yhC{o>j^mNr#1S za)Ei8X@G{#Erul=tJ ze@=Ot-Nv4RO*6~%$i-Vsa+kxjcd9#2Q~Nrs-n3nTucPX^0YEkDi8{YCzGN80xv+h~ zbq;6;wn+<*>XpOnrV+|&iCf;?wI_zmm?9C|tWp_VJ*?ZXXbmR-f!Wbp369P~9>yeo zywg!nE#aa`U9H%XwD;%YCEon|FDg@{aGY+MakYu(cG28Lo`TQ#OrwmuOZZ9rnvKBx zGV#q5*gX2Jrd`b6JgcWA2K=>)3_I4{c?6Ev6_y&g2y7|__Id`KpNLp(8`&3tDV!m0 z#B7Y{@jJgDC)i3g%txE75WsOHmJX_$sAYrPT>m!yPch~xSiLO%sZZ_VRDarG^X{z z%(-5ql{W)GM`R;;5=Z1|U&XHSY3l!fPxSxk|Lr2T<_C~lgecW|@=F~Klf$B5bMDghqzkJ8&l$gWR_1zsPDQYg&KX7^O%d!J>0}t2deqFAlXPo; z^hRYT@FR<0c9*cnGH;us|B!e7+}OT=J;dqqJx^Org#4COn}+Xi>2dyAHcJ89n{y&s zUh`wXM-HVUK9Ru7G6yw$iR!|G9{M!e&$EgA0es(4Jp6V7Aj%1YodxEkAw!d1{-Mvo z9%Z5MqjMoUb}*h12ZENppk9f0q?}_ZIh$a{l2C!ZKsOCy>;ht_q>pab zu%0+8i&??bp@yu1mq8*mfc=-9N>!Z(=j;17eyrN)oa>_8^yVwaERkJW__W!TL9Ev+ zjkoX!6i&jBQF~w#w_y9;TrrPF%f~Bqj*e1#T#HlWj|B{bU_cx=MmLYd7`~` zYHRXMP)X&xsv_ud(M+G+*^O`!!=2CpY!Q9%ni>4O|ENf#Nr=>0eW;qe{4dCyTT`}XBYlY z{^r+DjvO`HmQf2}?UL1oi)arTJ4)GFK(3dW4G8WYY(s)ei^O*&i1CEX9Z6s->bbqz zm^G7&BagO7i*-xy%Tc%pMfC^4e_u}SWq)(5jW8Xk3JvW&B{2}MJeTv^nMEo9PKV3d zy5?)%a&<2=NH3g_Y0G~c!U36G*!SNN;|vdXz@_p&R(YGtfqB$%8yQQ?{rX- zg!l8&9|LmMKmV~674S5TIbn)(K&?)5jH+KtyY3{fRKGJ8-H=s#mnk_7l_NG4iNZ}( zY4z{?kgLBaB`uAV4s1~A3nn$IbfMMl+WyaWX4rcnR()k6CjPCfd~D-b!_~ER3ut;B z1Kpf7|2LHZgx%b_%jMYQ0x_?A`6)jGeJ@YOuVY_B>i=^+&r1cK>{o0O#badi@8NOa znJn*OILy6N?!qOh1~kJK6W6^H_5y4N`NQnS_Qo@5~P(z~P0^i?J35kCPuDf3f(>WcRZIPfY5E z2+^$+PKNR=_HG4VJHoI*=tP6>e%}goxT%CxRg;BK>du4jpLp6FI9iVmm==qr3Sey+ zy64$?`OUMAHMog@ks)Lc#G^sqdh8@V(*URGVp>p>*Hr3{^f{i#uJ1|;IVB-%2~L%Q zd7HQt4$tX{ryTxSxO9R32&`7#Cv2G6!0S`on$S*Uk5+mnfo3vRugELV<%81 zH^JXFPEKHHTwV6w;yUvMXYx1}=GB2sCNp1M{S{jO?p1O!JBi@fkG=;?uw}8P8tZ@vV>;kp_cNKw2|V`3 zWny5YYpEw(L7b>xY8UQQZ=gnJ6%E@i9t((Guq z9FA&6CZ+8%nu9mBRJStw#x0%8e9i}F70W2WcgPHJr)vCJt7QsuPmLNQ3?4J z%0mvX3?Gn?swdQ(Kh6+}L9e0LlbdkLCYfqLC_DH>BhqvaiSnZ4Lrfj%p3}FgCX>;+vfg;FrT0hqebAC^rTU#*46V8JL2QhIw^)W-#^69h{A=vFf)twGLH`=vyfMI zeyt5r&%FMBuy-9@@{XeSRwp9m`T-^|q++#Xhk=P_Cj2MDe_Rt{-+Y7LmI`ABB-%Yw zVIy30XLZb45q%@}O|_3r(Ea63FXKhPa~Dk`;$`~}8zvGh0PXJ6>BUrU*Y@!(NpF52 ze5&0iV?=*W3gn|Vl@5yR#R`w@AJyGhNh*Jv*a&<$QhT8X zr269BY7Lreo0KRqJ`=&LE^vweTPFR_;f4PkUoD;}oJGP-Mr{4}868yg%y@*C3e8k} z;V$=pmpt~WEpY3F$91|7TtsIECEz>SU>WY#aq-<+M;@O&h47AQG|IU@k9*ASoi3k$ zu&x=r=&&&3uIv8U)NCyS+vL;$y>3bBhOv3b9xrsWQt%6-b!Au5x*1LMb^6Y&Ber~5Bivlem1)|1#Eo!3eS^Q91B;%zp znCd2D09U%_Yj0!@p*I_))jX}tV2O!XxE25!EO8MMpk#%*LI&SrG*X8@Cpi^ z76GHc(WfD1>sB~-MIfbv!iS8s`h5eGJ(p$+4)!@Kbb8JY zEpSP&8@Du@r$tWppRZz#*~f$oh22f#fIx$^cKu2p7#G8-6_4Kwc#GeZnCf7y_~-L7 zcJJWR7CE-Da3SzPG_qSbV7=)`j2TT1suR?1iG;~S)mXOd!JW>%!leK)F1wkS`ZGf{ z86mz-DGeO^#$V99W8TZaeRsM@M6dp&md%nWH}ziJe7#IDJF~~C$Y^;)dP%!WD?!LW zrD!|Iu-c|^u+OMqY+zyLwLypi`a>Fgkdp68L9!yRZMb_;KS4OBw%y8&n$q92)ecm& zV@|bmuHqmV6`yfW>WoJtQh1f$knXY$A&s5gKB;H<-&ERv%bvG--#9@02{<^CvL)Ml zQl+@-7Qp|8*!Z!{7GBZY;IYmFJ2~Rm@A}l&XCSa1)_B*^Wvsyf`*Q4sXkFo4+GXvS z6?h+K|H0G6ZA<7-3XckPz(2?xpMC`{7V8!T$4AUeu+5nC@5DD8ytZtpyVPaoN9fc>B`Ez84<6Um*Esbh1ltyv=J z{Sw5{Ztr|_m!a9tb7*vaTKWF?$?<#eH9`T87X382>Q+=Hm;Lk2i*rbR*awf*I`4jZ zzAHXd%`q_%$YGVO`-^rH^fY@SwAG z^X`&hM8(XZ!kBZk!w&h=gig?d z!ocBaLaWD>wH1<4f+A81SKBZdQ(%YNj@Yz@_bUCaKYHt7F;ORiV}j@;GVvXxBrALz zwx?zzcF5W|IbWq73Hk=yD*Wu1I|sTe|H5^NRmj8f^V|xAMZS5ONmjWkNf_V0n?!2y zx)P#gO|y@Q7KYh`^Y80Py$S3Mc1PerFz@=`)tA#SVP@D1()?hk(Bm}^)w0$!%5!0o 
zhGp4LE)^b>3U75;s*$oFw=Bxt@@+5khO~Uy*d);mOWp~FxVB>^k98(2j%hEW+`F+i zbrCY>gK1}{iJYodF*Y|4WWy<}TolZ$wAYRz(9_+%ii7;jqu~`Uye^T5N`{8fHv}Qm zq(atgp>BQdB92YbcEjjdDI4p z1`@FU5Zj|!eo_J1S|-vK*29_7>7aH$h>zdJ#^HU;*;3iJ(z~)JJ6ioRX#lKIzSuge zNVT5$Wt7sGREWx%JTN|xzenY zAl(rBN+wM{QhBGLDf6>mg+CJhg3a~JsR7j)@TWh3hUfKO1U_Lx(2h`1YuvNeRIq|T zm9p&MWTD%RKBI0Dhg$4U)mM-Z4=vu`?5iyT94bW!{WyeJqd$Qo2! zWDy|WfiNr}rlkY<_^#^e$s0!T{s>(mTirT4r1*i;`o2F!@XFova^6X!$!92N1?q)9CTFiA96Z0| z{W7FIz1`ezi~T<9o+w?HR4$ih;xk@P%kjPqgL~iQmg5Nf)o9S(X01Qdm6lP zhrT@2)l?d0)qjV2*lYF07-Pn(;6ln$Z$Q#$MeUwy|KgS*+qzedaq}!%YmW^(ttn)|Z3e_;V)@hn`>3WT#cKI_8j$#@es%Prv(xdIf-0Fue8OPX1vJ+qhy?E%= zFfbzOM0j|Om_H13I)q-niE`q85Q+G^9>Y&hTbT9-B9V9QH{Q2EWeZ7|WNvQ{1l_#4 zF!9aNF(uBUV!W~F)hEYwrJ7C$EV5E!p1WKeU&!WfS0CjodPk&SaDEveMw* zT<_X6O~?fFVb?!GRM97tb-DRPFmIH51}_Oi<2CFDr!U-CX%=&hFPn}ivFE zS_vX>7|_@3Ax%(T_)#cAE3IH4w8x;lh`nx+0oB}cjO?(LQ4Mvzefe&4B6SMGiv@c2iEkJ*$u<~u zj)wLic&qDz|6Y_G4K!ZR$d=}rM6_ROElI-*B0E#OK&;&QGqBgc%G3}Sq%i`&_4W&( zreK8~My|6kriV%Qy$IDZ_*mI@rCC~;j6Lx@rPU=7fhm&`J_0*QK*pis6)gCOFfYTk zcrfqd>mW0uO4@HB`|-k@G6(65spYv05_h)>c``hJF-?g9lQa5zm>px5?}vjSl1uw< zK|;-LMq>1y!F=v>0e`V5l}md3q~v`>gMXV+G=68g0yIsmjk@|AaGF;0;BwzYI^(i= z3vv|-t!NUusLrt5RVO}`26nG~o388;h_3QoKPJ?a;8>TM*B>bQvvV-sH6UugZO$IrKOLhdh>U$mLlhfFEAtpQ!9>zy7RKwveX}IvG@iG!HzydokPpGYRUAb>}qJH8yaRa`2(C3#IP@R@U3~H~5Z*>0R~#-4x8Rob>2k(74GJJak;(V#Cj^W!r%W(o6Eyo@I+-_&@jl zZH<4^2YGrKi#?wT$`TWE&`EvDMd4Ag( zK+mc?zpwhz*3luX`;!w8sXP3ORPZ~o87K~}7j2lHGn=`=avQ$yY(RCqCMD;#FZ`4T zbjDiQ>?cgWj)9B(nWzYc2HJo9%|-kLh_|K0vY4FHx#;DLoz(WX`G}iH$3ntHKMtf) z8UZiq%OIo7WrvO}!atwMJeWHkaF#l8GjHhzE`)(9A;X%{1|AoeATu&3~0 z%*e6qCkaw`U8UW$P!;OvEM9sYk`YozeZ~@Nd_itv(gUDLC^|04bQ&1I>yJv#I{^yw zY`1Ef10<}E&i5oXtEt;MqY*Jcb$_Wr_g>4Y)*AeK7;@-AUj;oLJ_8Ef>Sd)&+TNI= zD|f8kkg+3JdkkNzq4s28?1?$QaO|U8s8Newx=@-c$WTAI_gX=gkaC%x_v6zrOxLB{ zp1uiuc{foy$ty*3)O-hAx6cbh8V2xV>q9yv9NSHRf)TB&11Ak7SJ5Zu2`=5r z-d4>#QaDjH9jQdRhI_`cE51w^n*7I+$G`iOD;|y^XFh$c`uYgaT(ePDouAcJ-=Q-S zF8z8p^Dwo@YfTv~5awP(vC%XI(fGp{uXRe)<2=ZTa45-LCMRHAKgEN!L(3fJ)795@w5kNu;}Q z`?KSjU+;e;-S!f#m3B;Fx$9I7MK+BG%DY%CWmOJkqdjD7Lf>{9a-xqp5H#cNrzA)k zbkyXFcx_v7f?1p2pP1k>Mj{2;%*wu@p85YrUxhfR*(9NFGQ4VTfWdkV$}CS>a;D_4 z`DiLKhWc-npH@?rFTE+zU2Oo-86L6^nGx#MxBNU1&$~)XTYRs@Ue|=f!jaILFNUq0 zpK73J;O-;Fmjn40I!x}Qnl!`6KMt;LVvIk;t&sMnCJt`Yg2i?xED62C7Z;hRnj?Z; z?XEgb+)h=x?5_;NeL+j(Mi~o^gSfdMYWv(UnnZ9-_s^Zxc8T}cXQvu=0gPmE z?zZX~Q93+8DE^VRa#1xY4>e%Or;RAv{}dsN(tkWzdm(B6&`O}WApt5a(sbHjuyMK+ zaH!U+Z%I}W2B;NSA%WJmZr39bb#tX9pHd^m>=cfuftE z241q}C#n*|hQBhrFh*$ttOF}ir}ihY$@S-{Sl3UX8qKg6%j`heQtpF=lD`B?ph?R5#?E@#-^oY zSu$z|z4;Uz$!2X|P;#lO*`107hJFWY!_if*%krxTjx ze(Nl^cOlv@VMy(`DYiw!M%c3dRn^e5EN$iV(T#>m;hk>b>5I(Sjj(8D)J(2F(&}XH zvq>9+|Ha@4$c-)DvmBjNr2K>BF-c+St%kN=-SoC-xyZy^{t#=J_pD7oW4KSYP)cQj zdRt8*M{tSUtBcuq>&sfTBrMmXEy|2b8;#oxN$>v*Y0T}IEBaP+ZW%8%+6ciG!Luw# zpkmu$B&{$^kGXd+^>!NukgfLY95TbY7ca9n(#6#V_;>^_e4Pn)>GK})NrOuLNzoU! 
zWf}j!+I!NlB-3^$D_b11vQF&B>WLUvtjibB({=%j@NN@8{mv=i$mZVmfTH z3GvF1NWo=`BAZ*wH_iUG?a8@%L5=NIBMUew{Wm6=6F?UqpU-39J#bX8c3dJ~3d>P> zM*#P8EJ?K=*F!?d8{QhJe2<}irOFYK@v0@B>ef%8y`dKbgHq`}y#vr3?%|!FA8wOb zT{@F{Dwy1*KP)+?cuBm5w0K}pQYQ6u@Bs6p>w^>-*Cj$I_i%dZvi_o%9gP>1{q4>@ zMVU-QC1i{sx?O|`k5{|C@VR%0OE-@$yuR=)5KqE9!)HW0J-Uqd3iG*0JgI1^VHCtn zq}zu=inhXmYEJ1J5EMLgmHyq(4}eYc+vi|(z=e;iFmf6ZJE5rUxMBI0)u=lV-+vje z{+Zi?ntc2rPi+t47`$TpBYkIUlVtt5 z2Cyu9#izs`dY8J8lsaVQjqPIqvqebjx#dLz}J<{s?10`c~v#4R0K zkgDg4UvXJhuyc6%r-I8Kx(5BtsvQf;D@TvaJhYbdcHTtcMb5oX_6~hv%&sFS;BdW0 zv-Iaufq?6JS-pulH-|EE95i@UrgrvDceci~D3@%6W>8PKF7Kz7l+ItiC1|4djXBBo zpmnfW_|Hy7Y)t>+`*VD)f$ z4HbP=wx05Cyel=W+|rc%fUK_xS(Q8`s`?)uLl(wZlB;`6(-a@1O*hVjiMaJ*Nc@M# z;QNKrXS9G^?scF)NWI0egGwt7mh%9rZUMn>l73xW`D}|LCSS9$i>;K{7?v0_o z+w5Mp1$MsKCAN5B_lhM+lC%)lGZed=j8}DRDMDThlbcy+iIE4YH??nEWhL}eQhXN! zopFR8L-#3{g$)yz+fUrtw9!3qUB~dUe;cQk*HrP2hlN2+`b^%kF@FyN;k7A{nE`u_ z&=mt0qq=TTuhT=nlqMU8d23l0#)hJ7owO%pw^;tJwapbayKc>^1%jTCb8l%|ai!yS zn%jUUz-jAyXG!v1dutX%`L5*By3Wst74znXoNa0e*`1}3Z)SiCC2;1^ zWB1Ew+whU!@>p2Q+b$ax@P& znnnb%ZH@g3>4oXnfDY%NrXo*jn}WsPU0Wh)aH*Wr`YTB0C1lP`;K=Y83Q)ytcVq6KL-Gf0+O=Kq+HW>1ZGbR&x8;6!n|nl7 zzRe96AxU`nxAHzic_a4M1+|}z5BiO-!-7ix+Ic=j?ciO5eA1U1W}G4eM2v_KixtN| z4VetM^z4o^-IBvPklv6({H?Xq+-!}VW^VTMh>nnCPrI%itcU>DKnSvm^bTJ2+tpVA zmb^llkj=Aw^#krfxH`z7yTJE?#(ax@aEx;MnH7AG3Y&(Y6E@HNTysU-1 z`jzQ}PsRz%5IgoKplP@W`hGkLrn*NxpfbM3`QWF>Ysq>n2@Tfk4QBRvR4nDmv^5+wzz2c%h4kVnX-F zPtYDauGBQn^t{*`$~xcR^tNOgG{V~yLhQ)NzGZ|cD}ZJ{840}SU4HK*!N@Zi$ZB{O zo==^b2;DssFlZyeVie3z5dJf^Blo`6c9%aDO|w4k4EypMv z7>?IPgSM|+cMp9CUplWi{lKI(QE6QXwzGSY%9Tx0-A&Q1!e$RC?5e#zdNG%ydEHE} ztkgaPYd7MsY9~xE^d)eep9>Bwvs74Dd2^-m6RR?EB4bZ)0ow2@#e!IKhxYNGBCIW( zdA4{ZE*#1LC4#I)DYA}v2MP-!n)jKC`^wrb8WWu!)hA|0#+(lOHK(dXc&KWA4rWv7 zq`gaz2iH{b^URyjIVY1yN2s9K;f&VAIFVXj`ZRDE4Aa`KbEA2F-XH|OrQ#i}b~q;? zYTfl!{_d%GLJRD3%p?_@x;CMC_ROm4C5dAak0~!Dd~eMv?#=4h?)KR)$~Igws}y6z zUR<@4K7H**($NEeq00DlAHwZ8?d@yT#k>x4yGqNyAqi#Ayq{bUS=Rg-RNjH zbFB#{O7JA9n%)>P9bW|rX6&1G-MHd9CaL30pQR|e?9fkF^x!EmPeZ0Pcqr!LVfE5; zHA{ltU-$N0EK>+|9DR?h3j&|M7hD(bxH8P>K=xwl=~(j^zVD zTdAvbFwlE$a@1>=T%nD-!`v?{wtWHL2b#FSGY-q(IglLK>M4yq=KpXd z&U$E5gf>4f-{f;n0kp&}sHvyjrMt=X3p>Z*53`|$MrOc&BZ^{>98f&IoXfYILw*Gbg8tsT!y18ZJ)z?z@%vgL@s0Ar;}= zE09UuoK4v@lV?Z{*jrxvasmFS`*%$Pzln5`l)_1I__0R|a-K)cB)X2?nq1s(4md_5 zM4#Szzqnk;c!zIoT|%|-s&;MHwqHZrKp(zj5WI6l%TKX_OB`8?DevXz z-=e?oq3(cVX9bU>*4nSXx#tC|*GN`I(kj_iD?9?7f$R{);P9%}bzp;tP65F(A?YA=7rc=+&^J&Zs>!n4b8ftwKB z-fUPk(AXLXD#Z1LmiAF%YzwYbHgiy*kfHJk{*jMykP9mFQ*>q=Q3^OO*CG4hq?dCo z0~h0M_)2eQ_lKVqDY7GE`!m~vpJRqak6#RC1}Ge1_ohkmwys^=+;xf92u0P=>;G zSYofVMhoP8iV8*!Q|M7_)Xfjf)oiZfqXetY1fvzV5QC^;9xUEhi=$e)P=zB`)y8Z! 
z=qPxoyt%9zWw7q_H-{5=bLz}8*}{fvplj`^WRjYJZ>Hn04Bcqtsj+t^Cnv3|six$Q z>q|OcoZi{DJZB3CFI?@0WO`7e_V<9qJGI9g1-LVNX(NvqQ_e5BXFyb}!_)S0Q^*w?YYuCUFk(~i|mxkeKP&>&~K@+{@0bIQ4PVrRN9I*`#ymj`n;sc+Z{qv;4VAT+h*=)ve2OM zrL;TZj+?7?NJdA0L=iAZr3bM=J;0we%w}W37!x?TLCu|XCD%^s%Fb4#V^qy9ILtU3 zVCh$tz*Da*$Azo<7!fyG&5am7el>GpNi^+T5L~vA`G|^7B%GQES3Y(Gr)_%LZt~;L z{Cm6O8WoX!9ibe@j=|#-J1`YanGmDu$B4?eG6NFgaFmg3D6DXgHON_f)7hl&wzz|L zd$`V|w9TATz(9m{1lXn8;F&e}HlHZhi{*>EtH<8NEh;vDTv9q@T=(A1h0iGvMs}jR zi)IReOs;*NTeSR%hcIUvbQkA!tgmX)?!gsW^CeDa6hFVV#-ea{YmZsafVxGORyJHV zf#wd~&S6Qzo@wj@{dE0}+hX}`le~!v`wZ=Q-G{#KaCOlH`-B^&^R4=hNukxz=E3ks zyPlkMDqkkjH3^QjoE#=fB)hQoS!m&i-7Yn)1PoTIO5$S_i7-@Y*IP=d8m#D`^jHy< z>R~PcP$x>(tlBMr!!WW!wiSPznH>@e<6T_(A4X}vI3DNp6J!D5wR^@|rYqWYqbso( z&A4Qof$ts6WTWV91cOb*37TW2aaW&yE}h+g8mtxudfTlSHfh~ydL}<9@(!`Re>D`8 z{NBBh1a10KC`i~BBRWkuj>}010zsKhf{4zbm69}Q4M-ZAQqps6694Sq$&t6#6${PJ zer_%poGrTl^Bl*b$g=|XdpEtpJiHf!TaReiMS1w|ztk1*ar%WWU!J_ubdRP<=$$IN zHc$FsjpVT6)hBnU>H*LzgC&^nb)dNh?qCkZ<&Qz}&lxb=BKs18a-ZS>gm?m$tkTWH z?M`78&FS{t?WM0@3S@@5K8U@r^!uCNx+{Mv-?ugg&=HwkV8tuzt~yq-u;540G`I=5 zw1=wJUj4-kmx{Vn%P8ols^qYp>S36|`}Q9O8|uPzz~W{!mrvDP19uLMQVcck0pf70 zC$+0zYg6bzNCA>tV8HJNrC077ek&-cZ<)cVZ4tV>vHmcb7j+zrk<+ZIF=PJIA7&;E z`(9Ba&Z2njmq4APlbvgW+`5KWc13MT&a53M@o~Df)*6MJcI#A6!Npx6FOL?K;1}yM z%t&yYGpvCEpSKZHm}=!a7}s=hLu*&~w31S&&!S)S`qYs(1wdMhw9hJ!#n=c`W+Vwt z*9Jlae)DD?rkr~Xqj(~VfB{=eQXu4ACai6yiElNK9cP}Ua~ zNzLS^#-&pI+hS)S8e5QTRffXi>}AN);O;?cF8o|2GPo8e)fZU~8FzyQDOwmtATiZ; z%Url>j?Cx$e>>U0w0Upx{7Gr0{Z4g0R>S0D<&Ij8l_p)9EF_nzFG>B)VzEWY5Ej$7 zc}^xf`sXz>z9~@&^+-Z>&v8j+m_lD9k|D71fyUwFPgd2iUdL@yJtUD!?;h^^P_6Z9 zE%N~PSgyXX%)Zwm6E1?B#kgk6+5tt$;J2g6W66k4s$==DXc0!yLdS@~8sIJpz>(bs z)|m>3>%m?#ih2(YtZ~GAl+d;BQ{P>Y50_J2u_!5v-#{+vp_KZ_ST)V}vm?7!cXpGf zFI@f2;6v{V54QTjx1}g6wlq_2(x@<2fXtCStSKN5sCAq&R!tK+z^Iitt#TwjOrf<$1)^HN-)lG(XPf3fWAo`F@<};grX5YffhD=vEOcKpdQ|ec zf{p`^$}n~UIStsYH9^T$DnLZEO5-K#EGtjhH4~uT#WmkgMcSVcaPCSyvbwq__GV_- zi4ye>yAZqf+*&_XQo9e-TbHq;CO#E8@9?%-Q@-Z~g_X`XIWBAO>fWS$ZdHkqv?t+~ zTAr+99bLZi>!*52>qiIDVLA3%)eU+&vD(MD%Rh2NctL>_dtWH6Rxb&Xe6vOS-JH70 zO!mq>lm;I!4Q4JP7F$C#GDECK~ zaII^=NX@WB#oDI4AE%Ueh0rLb`db~oHj4rsqN!4gb*WL73L5$(DSokWm)eA3U9Lp& zx799XkcfpIs0hDW-l_0f_ui>r|I1wWinXIqc&L_Uvi~eyUK&V6Zsc{hS`WQLKAP%p zodfHT^IsakpFXTm1&&zvfwr}bWQucaQ83g*OkBJ~tQt!et#^@w_uba!5YqIfU=y{j zS4N99RXb=fx8D7#FBD;7Hfv{*8+zu*V>Oq}xeKClMiSM{S%1?L3&5%Jly$J#rt{BE zSAZO)b;P$X(R~~C+jWkHJr`O;wN9II7)1hZO`0tCAqp)b*EymJN>h0g&JMBEVFE9f z0}B=(2@S>EY}uzKMPkr(^R`Y=Fm)p{1_5vp8M4Q*_VA$Sqjc^}?j>)q6f}No`qy23 zrNr;Q9VIV}!CO^=J6)D5i<`}CB^Ug<9^@v(Y+k)^iseb@aOj2CX(buq<%$Scc4m^w zxC_;vR(T^w8Ih7Wg*f%<*^`)%o$wP>akZs2!O_WTlK-JmE6VsL5Z@*A&)^id2&I%1 z5Xsh_@3g!XvHtYZ)^+HLJzqf{UOg&;quWsxi1Qt>p+Aiy&%)!VEuvvHD?EAr^siVv zr#BJzhGjLlSa{1-t;pB+Ey5c&&dv+`7C*WYk*3~C04)gml0SiN?NMfnr7 z-=^7TT~hC+W8i4mX8H}L`XcIW2M*RJS qe+1_r!TCpU{t=x2Z+?kNJLz%w+VYG=Z@vTm4(@Z=TfFOW zl^S~Jp$7;dKnS^;=bYy~=braH=hN^0bMF`rjEwB;y=T^1bItmjbLY)t4W%=uuAHKw zp*f@c=%E%34P8DB4Q=I#W55{+e^d|+&1qQ&dHKi6^77XnySv&tINQ+BJbDwSf80R( z2UD7{7DxHd>{sJQ&v75hdiC@0BfQvy^Vi;AI�sjET{Njwj=(yuQN4g6D(hq{A)o z;k2|irhJ2O;@s&=s0}0m3c}6xpNL)iRL7)%`yhIn8&{@00zrcGQ%7hf zbY)7iXlODbsuH!X#aj$MuRMC|#QOEiojMadmo@Y!WZmK024ancG%>o>Cd#*H91}RY zZqL{}I8y<7ovIM<@dSWSGIR2fUg*4r2`PMJONv%c79!~kR1nB8Z{ z?BjE=e^JjaIyGBn(eYfMu`|HssYIh9!|`x;PR{oMej!4MM}$o{=5;I%aj{uKUo)TF zcscbRlzb!BGOpxj$mx^G*)v^kH#feN8qjA+p5)8kU*bEYloy@HXw$@XUoCjB^l{>) zlOJyj<)bnR5vz`_kLPxOr02RM;e@nXI zh1#okZ_g@P=hFL~fB8D7@v)^)_gV2qsGFo=8Jm&F$^3;aE2UbAqaz=D)jb`+CzPybIv-r`dFT4>A^^~X9;@?Z-eB1Kt zx0T7RBHNQLw<)(fFTT%ETF)8Wm(l7l>i|De&|6|(%KR=0%^hF5Ggx#iEmqhK}h0yTJN)v@Un1Tno@lD6p?sKaewTd#|R2S 
zpwfKBZGZjxbrNqf=ti^U_qB!k)4MbNq#P>`O^S45tja0Is{+yt6UGl%K@P{h9yOpb zmSAU|I9nGacjBoK9bN1b*%)2glu4QJ1cLdd)cX{L3}9M&^?buE!YMpjDQ? zbMD-A1>0+cw^~`J4MUw39b=H+Ic9FSvn||n;iP}dHY2)mvgxs|4vVN_VfVS)7i>OG z+RI3>bv)$CRG2^P!EF?zms$Js*yattbI2&K48fm)r=Rj93fP87TB=1vDp=$h9J}?+ z*5amtq+VpdMaLS*yh*CWY?)J|aY5pU_bXnBi<)13x2*%1P)E%}@n2ci^!9l68SoA0 zbM{9$UmW=s{*~j#g*zv;Pc*%9uNSMoU+=KZ=Xq}Y+?$7{8E3kWfWo;Oou@pe&QJMS z%B~+RdAXf9sCoVE=|^X8uxnj>qsnuQF?Qgc*W0MrqL`XT?B98;wM*$F6H`=}-s0Z2 z{gD3A@?HOX*LOJgm5aP#%s0d8bJQ~VGM%&UY70E6xed=t(qYM`7fnl2PgN~UA9$jv zjemm8&CN|j_~bX{p3t4k)yZ9cBAK)Ccs4UXJ|D?r?3t3UDPEM|IURfVkE^}T206xglgO20Xo-cXb#mbngd6%<4VV;k^vc_ES@Bu{vGTL(r#?*0lZum4@|5xvHgz(Maw%H9 zGX|{@_mEq-yi~Gz)GwU4xLGuh#IS$D7YqIr>@LQji>N=ITQFG1SbeTf@18($xY)a1 zvXW?e_HxppVccT4p)PbiB<}Qx_;S=aM{C!MPIt%b!=EHSL6^L7dtEZ64 z>c;m_wEZs}-ddEjW;A7V_4K(%%${yDYLnKx$ook*zW)lMOb=fOE{H8)HWD}T9+@9` z<{aki2wolPai%zPjp&V-m2nOWuedJ{jF?s`t&A;~E%UAjN~WJ(J^LV(c$T%{WP@yI zc!)kddLei5yE~spfoH&c*3!kvyIpzR3$rKM&gF6HMMy|bC6^^biU@kRTFwf^kjNDA zQr{UTW`E{1)$F#4C*@DRW!gqNULm&jf76|)`9_Rq83RX*Jg8!-DqdwUD@WZ7V-z_cLYQ0Jj{&tD$G51B20SiTTs z@buAB<){8JdXFFctVuuqZSLELZx!F594;Ka@*1zsKWLEGe>FaH3F4D-MZ-K%X|VnC zz_b3fFKeEY(btzbZ=CC}`YCK-r59Dr?7+w!<{WYONlI?q6H+EPgG-4$ilg~qi%zpl zB{~YYMs%@^0oR&m-W*U1Fu zHl>XgoH(O>1N)9xUg0M5Lc;@WeQQ0v_W7J;c)q0W-3o9LrM#&(jggj_BE|~uOdh|} z;M_cVncl9{)+H^tT?O8MXT);r#gHlF!Q679YHG3wiR9Z!Xq`i*!tbJegVtY2(y>v` zZ>WMxj7qY$=N{mLS(qooUxg1dM;q07tyi57{1L2$>)GfLRP|96*pu48gqa_Yx8(zy z&!LFmzoMmYlwV>r{@mSRD3id0hX(8x?}bL=*90CyF+)e7Lxj*jWz463Tl zs!$&EgPdc80N3Gh7IHnQV%cuBSo*1F`ro_AUB1GRaL%3*HQvMk0Fth?^?1ywWTE_ODlR`t)23o zo$3FIuq$#aGOc6tMNTW$M;eQ_j4jtSq5GlIq%DMWur~;s3WJS zXQFzMg!zWAq2TB#wlQeC|K61HeypG=U0P3CyqpMBuXm*fn#J`6w?QcBsx^CU9a2de>K6q+Uk>E(+Z!FX^H(UhZxk4TpPcTo~d}I;u|t4z?@s; z`si`zu>GX8gtPmR+*>JfGZez*?>eU6x^S>VbSeS1V)FLvKt2&~W1wuSu1>=P9G{?} zJ#?Pt2yk==_@_B^g@*QzV;UNjLoEMwtaa%2-|sn0Llf#ibM)``7y<7Ge=)!dp#J;& zk#`|9$AM4hf!C|_!++jQm!E#*&tuw3;2O<+ZFyy7;9c9w-NwcRV(;qVrqQbooH+UH zks*YJhUw

(binary PNG image data omitted)
zdL5jY*iu`|ne>5-_DazKz@tge1FFv}Qx>`^IjZWlWw++jn{O>chsq@w?=mCE*sE+So$hDwwj{McJfgO$f;Lz;d znbjYSx-u7)GQ6!NgiUS(;fM6^^D63rQzpPB0q->f*qYQ&tUPQ{OXyEYrc7%<~)uz<~zu>WwPkD9GUoL}3mniW>$48z{b;w2nSZR{{`4O|Ql7t6!s- zw6E-D+lM`1dp|le?yIWP`W)BUc&b;s5ME{Z*Hd(cZ-z2ujq86znImt~eB7ekRq4C` zMCor_l)Zl#6>gq|LMIDTf`pyDIJ(C!+aNmjwe~)n8{YcxHqbQ_@aCsRk6qPXU6MC_ zxxsn?lN>;FNSGRXNIqs6KC=t!4r!OVe2ne+6Tx|XH7xW%flt{M);0ejbxyOZT-!J|3^*_gzd8Gl!9y&6G6P$MdqEe@L=o=cyx8yTpC=*iSj2k!rU~?zr(c!#ZxxfCZo?b_5=GX+|_3>$3mm?h{m=LO{yK z!6j22eT-lI+q=(vsXm(3V4v^o|Eu9&jG&?^ZJ)ix+jY!Z{`*1w^YH(?`G1!E-|x`B zxcbkY|Ht6{yOa>0A6NH}pa1`zfq#OWeZ=yxrk_esdH z9Du|3PAR~vcm)NwK-|h+IVkN=?~==MSWN8q0w864da*fn7lMW)bAtTRS8Xgl-ZmeL zuA6h0KEyJ(+zD?+xMBrx^BBQV(^4U(1G5o*{f`6y=LS0KPV>3cmkXl~@-<^Ssl_L{ zTEh>&(<$2~_ z2GINKNcNEZj#anW&nz8B2tb>%K~u?AtyVhwx7~BDQyax%7ERV6lr@HqH0k2RmaWSr zer?2fe!4Xj$o3{-#4tQFTg^ zXD5mS+~6zcs|X}@A&T6vZGcR69H4fdukUw=o z+vsr;L9yBiBf(3H76yR8AJHx&r2AG5wOC082}x2i*&_L6!vc_%rPPo)bK?y!rf2 zKX;GvwaA`oXOXiUD#3SX8Q9&j4XgmcjLsBfH+G-`8&udzVMv=j!Wsa*kK5^sz4L>N zx(SljJRI0#+y_hjt9r32c=)?q`b_5fmLdD72$_|AH5B_2%z-Y z>om`kV2{cLzB=xi*XKz(9Oidf^*WGLh+TjW~SQzaN zOrNyE)K0n5?xs{wiKk-)$EqEq0cGx^PP>tX0MNJOs7% z5DvG=%xO2k_NDg?9gErAURnIEW}5!~W2fjvt?Ch=DGNb%WSLp*^jX+9GzOiR=(lA*0>f#Y{i*jJgAY^g1qoeO|v<#`F-mv$V3< zw(ZQ{1kf1xzGle}JHK;yd1xbGiN8Kn5H@6!4I9g{nfA|T+BnA~_Qy!zBvW647LRxr z>JGc(pQe<@5s>(Z9aC5TcFd*&cH?(*Dhny6WT2pN1LW&g7#Tu646u@2pxMt)VTkM> zP=1tHf)bH{;(c;eA*cqBBfxLXZ#tp_QM-u%gYV(w>Kt{<3aEA2Vgcx<>?o#V>3X3( zI3FVTFfQ@Nsl|}$KJ3R3SkU(@#Psrob{7m2Q)kC+97H!^F*cQ_oF!A|2K96>|0-4i z&{gP8OPR@UMY5d;1!@_ODg~8Uz9-1XBKI;C{P~-sJkxhpy`Ix@ht!spTN72IG1%GSr9{KW$k&ny5z?;3@P+G{vi8mqKgi zI}*Bjw$hx62+Gf-CJOayvi*Y`r%AZ_cU>|tzmH&mV?9vu#7s~Pmc;4M1kg(_QU>Ku z;bkXR7MvJcVEl<|OaL(7ysabOc+jNtX2k9E1x`f{qik8{gBTS_?T()sYk9YjWhly2 z^C=ytfk=#`%FYXWKb5i~vrgxQN0!O3?bMiz>l(~rB%=yXl+)`S90St8hnmj5);11raTSEzi5fr3UBn^4 z|8AhL7CPa2ZK|DrU#9E{drjXReR%qmbwpgvnye!Xeta%W+-+H9- zypwzBN2Jt^KoQ@ihI5CaWt>J*R5dE;PE)qCZgmQ28b95=OmOtnknMo`N6JY+abLG| zF-Hny5A|gU78L+n}vAA}R z@Cp{V)gxbT)OA38dj6E}oPv0P zi$^%>Gi74kOmJ-I_guT4=%n9s+B>Tr&+F+j6QJ#%tY^otxZ4JGU4X@Hob)kinWAtGqB=;Myr& zG6RQ?V2Y3}pd}WY9c|84OnRY(p`wwNBXf;v`;sy2b<*gG` z#aYkjO$U>;VmTDjoe#|_Z6L4eUYb${Yk|CnTp5ejP0z0TPsNRh*R-~<_hw&RVXsao~4ER2z%0432eLj783M02Ei z1VhF%B=An7w%dau{X%7CG%zVSp?i9@szY12`n5bbhY13|h%PHNy0T8dE zXk`k_2ADhYwFAY7nUlMNm`7RMY+{uQ@0z%k(;J#&1XQOShRuO;_{SgmtcjOL?^RJoy}_jJ*d8%U z1&8Gqvnsci&+3i>EudSWEfL64z_HG4pI-U6Fat0`+7vN^bIoV#xx35kPC0766Z~S5iOa%ZR(iO$mrRrd}{cL4>vcsagWTM;V>*eoss-= zaL_PkjTqdw`JjtR&!YMT7cJA8tdF8nO=tzAK{0KMo^dKD_wC5D`3TENNl|p>3Qc-S zRgJ*JFwxnep;20}VH4FK>S2!QZ_0H?-=`o8Y_Xv3z^$}9Ih<|UX;P2SqvOAfIE~j5pKYd|BT8%}BsydQ`yHKN3eP$t4#^t}Tc5Um>BJW1i&g zb@|b6J7uHkuQ*nB4V#+ELrnX1G`GDqn2Y!6`3p4_F6y9C1w)Q_Mp@DsJQ-9$_@6(belPJuWoBOsI>5b4VtzG5m@!67BZ;R;niZt zEi^Zk^Yl5dt^cAY>*kbw+RX)O#y_5JngTSEfHI73{PfvXy5w4>ae&+S-7a+9!wFe7 z8}1&OHc)ufltZXm1G=7&&ZxSD@k*5KBTZdE5~AZBf~?cnrO%3F%;J(=CXev2Du={F zYmeBBgO-ow-ylqL+pyg9pOB5Fw_W;ciesc(`zl-Z-Ue@fHGA`XV`>R&BlBxO9?yWF zQzaXH(@PcUGJ|C}v(|;yV4^zeueoB3m7Qv*i=LW@>aJ&Fj`E?8`L_wLRxWNtDoZab zNPBgY33Ux_WV;P?8Dqtui>QsTde)|X!hz7iPti)j5)UsRYTB?b)|`(#x1i*rw=~Q> zV;4e%-|1SC{ZT?u_X2_$?L`FdvYe|#M#!FtZ3rq#A%ZxbiJtKWjajz?+L>|1nIo|^ zj(ROQTC#lwAql1%0NIsLU+0T=7Yz5+M3IxNO+G#ci5tdd=RDoVfBx#OTjV$OsyC$h z*XrC!hon!}6tbs^PE&sH0bn4EwjOkKEk)AA_L5%0L)aEZF`^PzrHqh-JI?$XXZ z)}Jr6adeF3d&N!@H#lPq<4Qv2tj6oS>06Dp0HJliLS6J0 zG@v5F>h01!#5DFx7IZj1;3xc8bXCILi{t^Rzij_}v zzz+%BS|9>%wH;8b0hm@K=USyPx`eSAInh##Cn|)tIuTqCI@4nd2gI4hED+(%Xe78|X*7&9W7#Ko}L1p91;A>PS?BpthT z`R~oAJ|oLfg&)c`Ifr|OOZ_#YOmq95;3&=SrI&C?61|(7!)5}(kYwlg3DnD_IVM|y 
zzlz9gBS}#b4WB@rkWigO>JYQ;?}L^+MASMwPh9M4=sZR)O;yn-5DX1yEPzl0ubY)5t>$<6@$4Jc_(NbGnR^ad?nj(Nm$il;r5FOw`at65p?nuwM_8Q4{fbZB zwR-fW_X%bgroVzuuT8qCe7Mg~FH`}w)AiO{Vpx;IOX;Pn#?RkeY6{v4>FQlXAP+yC zteMQcnx8=9w?ftQ_LBb0j$*Oz@B8XgI#*uILtkEctK_$Do);u z=WcFFl$~_7JlLb7s@VjeVxny7JuSrjI1)NqdQWLnY~NbrJi?hED48?RLhsn41~p@5 zCau~r=ZdsDqURnxEMlz6*QVZBnKlM3T^=_APmJ^wbk#^^>;W0%wfnhEEzsUHwBZ(p z{e-hw`NG32QqMiZkF*K_#+EtqQ*IG}IEyEJeT>!!xP@r3a8>ZckDdMDjeV2i_%DU} zFK@c_+mc}k$-0)>>fvoLC0J}?y` z+_f<2)LO`o$kZvP7+yiEef`>jD;!m}O*+$36Z%mFlS4%CJJJ&amGvD3vr0Sqfz_y`yR?3R&4>q5)J_$%;&h zx}c=DwO+b90&uPJu;ujH;SH_$DHhkxh2Rmo#BBvQ%S5MyA6F1|{L#cPjtY-DmZDb*oLqAqK%gB1Fgz44)lBY%Zg~{0VD^7Tf%g}+%6$~*jn;SV2XLI?AWNl5ZKty!@DrLCZr6;0{eC8VOQQ+0-N6uT zhP}(SHCBErVI(9cj<^{a;@%etJ$Mcvk&v;Xgc2D-#(nsQRBN3QFP_G=>y$0VGjQ?4 zLCj0#yMF-?2GR$}t!zMYNS0NAnl}v9m`l;8#xcz&PtnLEYfvzBi z(Y07Rd@S!U_14+>a5y#z81+|4@bU~rH6$gElN79t`-wNUu8>0Pp^gfQ} z(N=+HAWsXeowC6kJ~?g!F5@I13~c7)tSW;O!2`@7UuABLzd?vOs;~B)3*ZdP7Lw)h zqIkD=P03GmnL~u6i=#cyFX}KB*V$*tyteH6(B%7c{Qe`~y69o%4q32;bP(yLvF)e#~Cc|iKWOT@WWI!vfZa{WTEUdaA|?$|FZi~b7XY}k|~ zmvhbJ&7fk>=oDJ2cwk8@_Js9yCvQzjVely58 zFjIbNEn@sn6L#P{T7nT_`vt%CCj9i)jU6#q*9*NKOVy{(hw7S#sx{YNvdI+U3X6mX zRegHH(QB(k?C#lsq(FK>^w35}avX^xUUGWuP5pbI;37by<}y~X>Fm%1DAsns5tB!V-Hs9Gu$BY?h7skCcRco!47%&pe!BpDa3=|MI}a(0NJ{5=cdad$Uy9T zgq8cU>}W|Bo%_2c!t?}$7;j?NBM+(T!|`D4UyJtc_XmxQQ@la!%JN!2>4r98LN|xZd)q&W6x#BIx0Nl(%C9TH40|*rCi+ z-x{xl(~VuHb3LeK$4dv*I+*iKTNHfpHp>n2e`+foX9>vh%4^xE4h945M8R^9ragV( zcrr3M#qtf-G#>1M$y!vntIEB?zPXh-jCoM6;R_pSLV(!;`gKS9xI&V%>boSnUfF5c zF>meg;TDX5qOE8$*n!3>OnsVAoKr|do}*08t&0oz!N&Dsy*?pfB6jU4dX@XoHM$*dC=)^)@dDm~Y zny}FX_rJtt@bm!9TB&2QXFNAYRIwVee;`{npJWTri=_fOI$*rAanZea5L9B*(t$p3 zROX@KfGM$?2A9(-?XZ8n|KY38`qvBUzW^@etzUHp6#Bvdv-k2`I*)_RNRdkg>NC*w zl>1X(H|HS!;1hXv=;OeC*xjP9mTq4+iMEmrHKYUo67iI=T;IOfil^9=G_x@64z!3MmZ-yNG zA2PQOKx2<9Ia7>mu$sQVXz|M`RypMs38Axfy_K!%Y<#86fH29U=O;w%g6f({Ubig@ z_b6Y$h2t40+mO=o)rD@eqYizqw;W>{7+Lo6L*j->QFLd9pa<)+O(g`4h`)a%rDmE6 z7zFQzI!3--_$Kq|iS(-V>Z#epn@(-whqd>(LBY9VR{I~7#w!XH+I=#!tzfnRQ}hy9 z(vbJ^Wp79=xKtGXmuZEf*5UWrPuP})!2f z{}lOFtHkA7xl&pv(r?`t*KqSovJV}Tj*zNvEBf#PN8g9hQWTueP~ijIAt*c_K)XDl zD$&_mu`X451z!|pZ+d=Cz2%j`I0N4)2f!&vcG9XJF@0c!T({c4(fkcAuMO1KR_!ft z%{M+}eC|oGg`gY?Y&GivlO}0c?SSmfhXnU;$T{_Lc-gffd##0^f0y3qfmQvr_rZ08b;C~dwg_#jvZMANY5ym~E2~SXtANI3$gf#z z#Pbx1vLrRxnZwnZjdZ6#`VE`?j?-SPd%$rQ_hx)Wrf#bE72dg`JvLC9SZ%M%&u39h z(MWy~FYX{5pI6Is$6okOb&?i8IldtNMWW#tugjRo!DdE~6QLQG4OPIwrjfsbRC0na znE=;+G1y6YJzz0y-!pUyJ`utQ94s$`k4_2*%Tw~pXt1;-h1``$W|RAXZY22l!z+8q z#J#cWP7&ASzKP(tnW;Fc{X9EGpyH|fy%~=B(K|vy9w~5Z=RRlkRlt=jeg}*5iomAf7og7qA+Zk zVUcAYX5|L|1DICKG@TC@rFZN}e?$PK=Nj{-gc4%n4=#+{Tm;kjt@aAc(qkhxZo0Ai z#Z^cK=8>L5|I$XTk~!H%_Lrtcm3BKSzjY3$o^plnJQ4bU$&@mU4iyErSEih9c{CM8 z-dPjoa+$at^U?f!#B_R9m8~wKn0mV8zHO80mvwjZ6LhEin=5Cqfs-FvPttApD%~t{ zRDFmGC)A{5;J9b?+w}b0=1b3)V(WJOY5CQJ3~xWOu}O&3v>Xv| zKW(>IZ&hnn%?lhd2^rN<^BiCnY6b7j#qvev8j1}hKMvYieSC$<*PMf0mu}#qs|~%V ziTiVg1H;|=LRY-eqEBT~(eLOm>c}!n1#`{Q8Y1v}FU2d}`jg+icP&UluC5qfA6}KG z>_@?b^_rgiRx*6xVad{!D4gG56*BZv4g3xKlADg_;jL-m4^R`46!|G6Bm5#eLs!_> zQ@w%bbiei#$a;#ej2wQ*x*Gd&=AzWeSQaAT?5+@cwmpZ6@J%|rGWkh+EM4$}2(`4P z*h$V$!ZY#m23*C^YVWP9epaI=Behr3%^Rq0Wk2k=E>6eXq}DnqTTQhvK%O>!P+? 
zwTHh41>w-zciz(;s`YZ&>9v0coztn(8%G6D*oRuy#n45w#aH5@_Rz{XXml>C{o0?c zCvVXSdUKo7n7*Y42U5=LKr7VbDP%}|g(fz`Lq|hGe^DK4r0S8F(SVlwvvpsLE^jNK zc8!Xmt%sM50X*kndl(u%vekhp5M!1Kn|mD-At#{wJ*v~^vs#YB&<^kjv0;ODVKME9^2BLq~U*kg1Qm^urnGNO_gy<0Met7J#nmjr8n6*Z)pEh4F z)HI>tm5c9kVHZf?lH>8&srlSveq%yqFe-TF(W)w_+v(035e3UgI9}=K7B-bj!1hM% zT3^SrI!HtS+w^U?Sn~O>>FA>U4+Uyap&-pNjmwATDS>$|Q0Y-4|LMtGzF1X;lcUfJ zJeSKA6AMozx{#}R%KY3shrWL6=bTqtu!}C-T_Z`CiAdbP{(uK}-JH-J$1bl>3un;x zubg=wk-qakl0?dh|CzEp#~g6TuQGVKahXgA*mATb#9efT9r47;oeWi<4@etsC;$qH&3a!FS5 zey{NS?CQgA(l8v`$WCpn6)sxQVRSy}rwv=-C8+5aPC4eZhT^{snKO@(j5tyzdhAss zuEr6K40j6#O>YGFAe3A!*sTB2w>Jg)e9n>EcbhT*Cv}4aGM`)Gm-j7s!el9=kmoxwS)>3x-65AP!fdbjY5n7u)@osm0*=V- z70bqe5FQolrcq_au~fF%Mh3RZauTkj?;^Y^&dArx4@s?hUmp+1V`BB$aW9h#Sp_h| z>r5;_-Us|7`esAcG%14k&nLlHtQNeG5r#v>$ChbA{rrE{I$Wmt;l^r zf;kAwn3&r?PlAELxJ#(^2N9OW_gR{bGS8v%hUAWTA2JfBB-=jAtZLsj;scI4e zGl1m*4)I@igk`9YM}0{%3oLinwV%5QxjS2Bx9r5kVSo70dO07jOYu||@Ql)I;~Y7C zh}g0nsXxU5LtB&&xiCDH@PZTnhl_LtgF3RIl>1ZLeSB$F&^Jza7Ny@T`oT((UIzV|@02Yyh+;y&g4Sktx)h42Z& z=n5l>nkSKfOF_`>C0sEpx6)-U`@w{z9H#sa6iVZ<6f_5#g7($|A-bOksPs%d$zKa< ze6@XXns6Wa8CHKTEBn5BbfvE!XK;(!E;2Gr*9ns3E(qXZQkHX&#?(vCpyY zwjjJB=786+FJaBVF>>$6ugQ=n!6Y}HrzWa#6;g0EL^9da$wFFxeR88f$DAg zq)wM`MD7x7JjIz6ux(=G%Q8tt+($O1QGgu~2)7dRz5KQ%cA0tD*`nRivDE;-pu~{Y zIffV^w@0D29)M}~QAPg5!)|8IEGFwAEVWs=B?z5oOfdLu$e~n@z3~J}n^4K!Cw*cR zlza=x45ZA-8_#-=DGOT;s!uYb`SZ1OCg(a6Wd6KH6ck6r`Jsay8}qm*D zMFuJBdz@D+qrQ%F)n6clps20S)XZ!d^mZ+j@pSn~^{b4%t&4aE-mbxcN?-8hcNpFG z;=rR6I210qHQ13gwsBZ^C1XNp~DwDQb#-0;J23?6P+!CRK*knQ}mBpgf9r#6Gtg1C@sTGZ-&dj0%0^uwo@g&TtZ+2Dn5?uRVzP`UNko|IJbGg2Q~H<*uyY z3p<-0q8>t77xkqu*^+Hz7FwY|2&Ee&yRV(oGAa`}y1Q+Ek;az)j(p>-fmp>5X3~CT z!)wk*SAwJCXnr1{-hQ7)Afyw%ONNbGnOd9N;ZnXFtD%0%>wNgFiu9hbRro%4_foJh z!*MNMX0;;Md8U~bTwG2|<_LSIVCPa*-?%$t{d6Z`D>X}s>_h2gmqUrjLn6yjQeen) zI|v!aj@~B0_K0C0aWE4(ibD@clXDQy`x8H<55J6W%(mB`Yj49KZzWl5Bd@`4xmeY@ z@C6pS^74KfM}9MqeTnN(t7;C{8ocDh0=_EVi~`&+6do}*D*YjT8jv!{?9g}LB0-$lS+4G32Cra<$o1|S|3MS7~}H!`L4PW!T%)0Rzveo_wVoqpb-?dS=-B( z6p0!Olw29=(O$`#N!CqY`_5vp>k=84%Ds576Jrx~zC@l@qGm6FuP)mj&54%{%Cpd5 z3l}7?iC*!@Iin*lcKRH{al1mbSA;8?AHg!>n*04it!sEAbW-Z#y(6=jsr8#Uk|mP3 zbr*5}tFRMIaQN;B7E|lzZ#2ze1TSd?TA3+|-`kI?+;%}rQK{#lk%oI^AIs$FrHe0q z_MeJMxX))iXU(?W3dhVyCw>YkOI=J%#jGHQf3E4kJjkgJ46Jo1>ki}*lBjs#b1V{| z&O;@>iBoHEYV?Y|;Ad)~{UM8w%N8(obE8s7a5(Hie;8ag<>0&pTgQmwUcD z`aU>M(S#pUqVTifUha+i%yikCGfn8IBTO^d{6>vy?@)P-(y5Piv$fb^#bc*i{6~6O z$x;_AlQetI_MIY#BTpZ>Y`tB}U2ZP#7jME7L>`5a6n53C!&P;+$e%leu8J|L+{_U4&?udg^#s&xU&C=Qn6o>tCfY>%r6J&*P3*x11)XV-hgj@ z(shYps%~_t7Tm(pCjotaax^&CTOgseO9LimEbgd$&26K-#O}s!5Jyi2PH87NYLU<&UyEqmb?A3Uf#M%`*B>u z>DXZRGL#e%KA5(J?i?!Y7Q340>kjO$MC~l+haY4xL%Kv~(3ZXd<(EAm`Nk-`l+EMA0a(yCn37g&w*@kWM}< z^EISln)fA!n<^LMm0QTQXVCY*M-SZUUPuqQpIqdJQ+bkF#ak9C+fuPLz4e{bO0Dps z|C{c9^ZQr2`{2}rPunV9CK5{}b&fR~WB36T6SrO8;ycg31gUBb9(q?ZA#~O2I zM;2S7B0HS;(wyR|x%k_+FDhjZq9<8CSku?QC@Z>`>K&5Bl(fJM-EiSS#^q`=3O&Ea z%k>X)>-}M=R@Y2w>z}sD`z5z9`Y@E;Q88~DnT`8fwt<;VKJUcO#%!%8H`_1AhKWCw zTyE-mLKyCPqw}UQI7q~W`>KAfWjnr_q5M{v3jU$Tl{8OPASv=KfM6()(k)R?`(c% zuociGU7L$?xSV>vnHYO)fd~7yt`DIFYM0i`+lCv6Ak#rC=iEqa%E}8sJNPCf%V+v{l7&dD^X&h);%EpCP+T0yUr5ilFfB2njkR}Ulx-I0)bUS}In_vkS$R(Hly|=(n zT42hRO%&V@=oVGy1Yl0_|4sXQh!1#mH5e6GOqS@6<-;YUw`9U6*vy@a9-f$qpomF zN!Va7{>I1n<((|iS{`L)dg2E+eIGLShDU>mA9_U2eSsu_%7W9*h2Ppr7E$5%_M2U9 zPMLmzL%EkN+ad4dZjOuPM9PU%;jvSFBtAmd&24Z&4224oc8ih3P#{9S+rls|xFTx= z$)s78gRZnI->LD>*bXj`crx-s-6BKkEK1*}W%YLvs3`{MO~4;gkCnHdnqr0PXf-~8 zl{b@ndPN-9)CaW$k_pkL%S?!k*CRNzV|Oz;h-L3(#cYCWmEO`%@))g0L=KrYO_A)? 
zuOv<5h7z*sF#M_N{(w&6{*-uPv^+7biWS^#T{phKlI~KX$1v4(UMFn@I0^HYuLlS&_N4T#YPHsh&4SrU0IbI8Nw6L#UgfS2n7Z#HacxOx*KCu?tT@S{#$o3@#aZ)_* zbxXg_(+mNmh*$sS5|&sGI4Wad`AJc$iCJ2bj5290HHf+UP4lH~;~~d&w3iylAdNxY z2sa!o)3L}{D7Z`&O(>w|x?fOb-p8)u&6!T}uAZ3BjQM?+*mQ~m zyiWrYTd_i)W7INIR_-z#42IZl(f{n`3HPV6AA$>6`7z}q{G*T#wV{Y4ES)+8f2cY( zFGaYR*ybF#4h>ALm=mozKU*$P9&(s46wUrB;8z3ii;Rk}JwpCNbFBA6QyOW{D4MxR_#(ILemTTJi@c8I-N zd4C%evB+gL*OkP(O7wlj+u(q+Qnea_k{|^mMt3^u{hKo018)lxTi3_ zJE;A(XwCxj8`tERz@gf2ZuF2HT{j$SA{3R?p!&UNqF9O_4J0S?+M$sYF6a~57u>S)VQVqDDyD2jRj#nE?{g)#@L^>>lOvNAoWfmZiP8WsY84)B3h{G z4cc2HpoyDyKcNbpn=-ww=)K4TqzlC+A%n*eag6D!HY-PNM7Bh>%MTX(-R}9u*C#~o zFF*-dkLK1Yj^nptDoQnE>&%Ci;he5goj1&nTh5!l;rtj0RC;ea)u4GPs_@Y~w5t}Y zs4As8PaD#&orkR$v5R!zYFV^Dm0?tt_flEOFZ zuJ(~|lN}P`n0(%SXmFoau*ARnE@8?bER+SIJ+g)O40YnZa5Yj7<;~P(Vc3u5ZU}w% zDq;Sr_4%{knhP^YzndD<^3jJigflu(^BfTXce3V2J| zDz+V_r2+WR02wu1D+^6nb!7GFgRt<~(*-sm*+J@7c}h))3bm!D${JNk2uy z{2S8(*q$6V7&Q-{s1OyhuKwHQMn=iF-zm5{1u=$KVmxrvIwj&&Rue9Q!3U!sn7}vM zF0@ATp8;2!)q^NhDvpj$-Ewn!FI+R|-}3UKFa_NRmD*yRoGgZ__LoMp)^}i-VEO}h zT0CDf9;-pd&xyL~5YOF`Mp#b==lvi}HZ%HPJ4h$#abbn$ytgw2v^t*To$vYhd$9@{&<=WAKZvs5jBl>31QXuQa04*m9zu2$-%HG z4e0c##kp7#gdA`;g&f!<(BBK>Q48v}_W7cz=F1sj3Qb3g+2tMS40zy-9%5Pf{*0Ux zN(*~>HRmmln89hsHANMp&UZYn>vHkAvW9^*YFa=>b5*FAa=rA!LDERCxkV4>VK+JI z0XI6R+xz_1_jV10UVEBx8fc)!yu+y$PyD&S)9WIXoZCLc?dp(65Xx2FeC-Q(^pyLM z?Pbfx#o4XlHFc?xH2yC!;#zte9An}aM;xJRyhejjB5dYQUK-0@+-LGfVU#GWl;Ie; zWXm!=%Q5>6du2l~m-wM`xN687V`rUXbjsBE4upT^YT@nO(TqOtsp;0)(YSMEkvoWX z_()T$eZ9Y3@BC_Tcf?fa`c)fDeSwD{L?YlZB0r%lO-E8QvE-zR?9iEyGLm1$WyKKiGSRzZ$tS0 zhns6@?-5t-k_ODUVe zOH)yR3&~WYeq-In#Guw9{jFR>M3~hRoFmX9i@q}yqiBmaQl9zNRBGiB5)1xZSNz;_ za|@^^Al*6_rQNgTno{(dm6E9j6JVX3#nz*(%?UBgA-TY7+fWLUc7WTt1>%JuAYNGg zo6!N2pLWp1@F2`I={ixZdWwPbKO&cTEf%AM;8`;o|IZsRV( zOQn(Pkm%Jh)qm~wq3H1dEtv<7rZF^BA!29r{U-eQex}^qqB0hg`G(~_6~jbFLfd$C z{PO!~)VFXSnis^&z}%m1c(ZY1cKl}(xS+ty0eoq@RBxZvD~asahXkizw2xM@!A|>~ zaK#l4yz+HZl6q&l=>_k~y6-kJAisAL*v<5oe5yvO?yKR#oQQ);bI^J_X|P0*mNsd- z-U4kpy?VLw^)?W&bJJv7^0w>(y18N=b)D*xB~$|^tOX-!b9+mP{ejY0KvIfuZAvjr z`B1Qf!mk)|Jjf`CP8~H}?wY zwP(fYiAsM4J+jgHv%Aau1YE;49_zQ$ zj&qxKowP%>f(ucyeI@>cPqH1bjUc|~bwXY-G%0z`i8<@5r5_=>%4`{BPh_#yWH{6{ zp8R|6=M_ek6K%NWCo{O|IN9Gl$Q8KV9G3}9t>cKq3OwFzC^oImJNxn!v!@TlaOt0J zy!Eve*wK(YW%OlH8y=sW%wzD_oZdp)v`#Pnn5!2!)23c>uK2hhCg9u!)!vbP?-Gd1 z!w(&Cc&uA-P|N3Wz4--;qF6IsrmW4wM`X$5Y9{EEPlm^+5;xmgE7}%WX1n6>=XGBtsBPgC$0pjLXw~Xd8-^fWkjfS(-r8 z&p+Zyh7XL`t>kYSor*xXn1Kf0mJJ7^(uBDa!k;uA>>G9eiJIJa(El4Xo zwwWF(en!8>2BS@^E$|>5-c{lepv<`NCe|F`#w_Izt|6zG6W?Z_6LHvBRf(K%^FNJw zIU4U-;&38fN?3;f`I3v_*KzBlb*3?{knb`8|3Skeg2zWeyyLeZLSxZJ=#u}>Ct2ua zdH+)7N{YPZYyIMwZn0;TTik*4nw9AOH{{4|W^!FB>ds7KG|b+@-4jGrmv{<(Z-xJf zI5z$O17)pp_akWEI~A}J=ChWBrnxGN5gy(y`>okU+U=v8j%`*5gm9PV_HcO$$RQ0) zLNwC z`{h#J;p$j}76)bmHTfr#m{()kuEMJ=xZtddZkN0s5en|m%OLYVp3oD= zMv8~@EVT)t%}J>)Pu7;^A772G(L&m`#L#hq43Fw`d&hvFb|-!(9y#0JsL5nJK((Ms zxYf(Yvo91Mj@~3dDJ_xBD8Q>%ks8$?6DdYdUhD%^uJRuSKv-10>t;$H;^@)x8_@7f z&Ysg|^m|rRJWuU1Y=)*_VKZo`(XhINcEQOFa=UKtE1VBAEeZc9^wO-c`k)JnXF*@2 zWnegsK4GyWR<8&9l^bRbY5EG{smX~g4U%uQX6bo1)ZpjbK7oCE{066bvduN9jd)W7 z#l!J9{?y$L)*&11`o!zzEz<3pWadD?umAD}$`Pdm?TY={Ri#}nM!-O3k&n=fFae6A z-?CHV6K0JTkth;>fNs7wrjda5aPDJk-@$u{P`=h(q-#R^pO0awreV4be_|DwqFitM zRb%~aco+NFhv8!8rm8ZCN}(RUT3(Ubv_IG0Ma;<0!NTUP4|aYH+`((rPKmhhL!(Fl z!`Zj0h#CxyNAWFpwfLOC?N~vZ|m-&Y=0E(jix`NVYs&2KzrhlDJPDIpM?0v|7V0D$m{YN_E z4@)x^fKl@=RfNL844+gqy(w;_p{UDxh(j$VQ>l|Ed}2wxlG5uRTgkfMv!8jbhZwcW zmoDCp-klQtFMJ}!9*)yk;3y;p{Ueef#^X7o9M~BpF6hQwGXw~_9As_2JO<2*H53-4 z$rDB0ut{_7{o^zLKLFdiI;8`{M50|JSvD=bO8Y1fwvOVK5XLeH1x77Bu$NbyvC+t# 
znd@WeoT;~n{7|FBI+ck@jeC5Wqy+S)sve2{EF(hDh3b_^2Ee{?gijPXV{PU6{%UG8NKqO)&3dA#7QDP5=gz2H8M8r}8>kw%z8`?U1|ym%qhuw%Nm^uxL>E9=r1v=uGYiIZLn0{4$k{cA`i-E{XSqd%TGsgd`!ixf58^X@>R4FVBso% z_0nE<0L|k;05Iu!Bfy&;@vmnh)y7y);nT*!WhH>|mQeaZO>wO$6^q@C*#7ZCD}~3W z8E;$bqoByOZRPa+$Uh74{^RZ*n>WzbIau~YGkxmqSYBXY*17FCFHO_!A(aQfi?jV) z6y+nvMzGOw@}94{R^34#2SyAy5soK&d$34$?lA<-P>C$XTckoUGZ#Vk-PbSvV2s#VpchzR_(P=%~vD>suRej7}} zAN=1GF}*X_n~po67ycCLm|wpF+C$M>BG{wqcx z7>Z;yMr@c78V08r=>=`(s`9pd_DK6iTKEWdQ{>+#`)BIuKiv(2@h0RN!xaHzhkTI^ z_4%(GNlFaZlW;`kJ_*>55g?@R(%?jcwExj5{J+QXzx!j7>91I4t;%rh-zpUT&6knN zU;?%~i5~PLhg3fW)>=ulvfF*$-26S-yY?t@P0wF14@Z`jP79Lf!*S-D zBZ;h8;u~}#fJ8X>@3dWAzl*Zhh-e|^?ayVW?M+UV1)CSCNUMOz)k+pCBkO@n`^nAe zpW>>&A=C>Tx1+0yyIrYTTWA934XPaa)o2o=7pN|a@p&{f*AS=3l1@8yLqw|Z8Ef*>zzYj!wXyr$~OqE4y;k9Z|t8&X7u(h^0zYZCP zO)~)aH?TiA9-PEp6!K`cie>ei%9zPp-hT7g{jmCRa6T@ZX$^}?HUxm33mSF*>Wq>l zwIE!RPNTk{zi6{ZiCNXtU%+&XrJC|G+`s?0F&!L_NIBw(>-kv;yQxZ#K8ufTi&Is- zat%p4HQO0+zR@fk50DcAX@9g+iHuYvWU?g_i>JR#wEKqes}V8o5cDsH&~CYA^0c17 zWt7s8`#t!T025Vl+rPl~&?ck4ZI-NQ1`hvLzS;IzKbZ&^UM#(rQ+#|&ago1r7?C^a zq<{A=lG-EjUSC}vlAkS)gMtX8w5U!CoLmoo1Qq^`8NGZvPWfC#ZgrqB5C8}i-U-?1 z0azeAzfBsX2tf_1jyl3*%%t|8iFh}=oHz>rQ;A)j*NT-KlpP9>eAQGg`mMqXC3oiO z{vx7sL^bn4A8g<*aBgs>DT=LLpxTb^Mmg(ktp=yO4w$^4ICC zG}SmFY;Yktk(~{#Z-`0|*%#RrCihw{ISz=Z&pC%sfxPcECFrU9n8K=CJ;O=3`cgE2 zkRwfFJCX6i)UynHh5m5yHo1ECXl+{PO&>f92 z0f7onzn2^u)}Aa1-JmKX$zX?4lF2Vy9$M87sdtwhUt(paUS@E|Z#^WxfA>rcb#*EC z93-ax?V1zoL8X%*n+Bkt>+{9iBhFu>v1gcSKxh}*lV&u`Ezuo+sZir|CY9AW9oyKd zERVD;z7A(O-f>_hcn?HLZWmVD#pHrma@haA0A>_jPyd#h%)#@{1R_DXJqB!59d-Gb z#*;W~sd%JZdT-#=GRD#C(&)v8ZKigldlJh(S?vLh30uS>`cB0yuRl1JfqNdY#Dak? zEabFtOmvJ5q!JRWzcY;-0B0)nPMpr1tM&FE_PfB{L-9EHFB1lUNvWZCfEIivAy(=p zm4=4L@q}e?#a;}awvgbxci5HA3v4>&NqX+zg~<5}FPvL}DF?H38xPGY5AtxGsfY*wn~ddY3>b5%BIWr1eFLrlQI`L=H+)viokh-)bCubiQ1d8y zm6{0aDGYJa*17hMdtRaH9=p+YPZC54A{Kwi zL?q}TqwV_wyE$LE6(Dq89P-}Ll=-3}?nMic5u#=-kQf zs^b>%fVcS4XVA%{J3xDjV@aJYOeXKQT4@+R;GyH7@GTB2=J4rFou}AhKjx7&FTKfE z{a?)^K>sc=>REu`pioyqcm8%I#wXH-PRRQadAOHUC)PJ3r_y;?#oGSdpezm!<0U-# zGu=@_LG;*7+%XIxvCdI9(6_M{ZQ&+-z$HdbV|0WGnDv5uY)L5L$0yP(f`lA|5kMwA|3 z7yQU9*BOQZlmL}H#Ok?c&jHYGBI~5fJwF%j-B&Fa2_5gM+J1@Dy&4-{D^eQ~d#Y@; zQ6kpwZ}>#-PL{$5`DlTJPYOWm)x#G2mHCFZ6J8ML0)GwzPZK^^>8$zZUcTj!WHBzC zON6at8Seq#5E@r4OVXmyR=KZamy#KVWdFUL@a6ekBjZqEpP(^8CFE8q|H-OmDztv- zbm=K{$KoD*!`Bx!b<_~%N^YL)z+Lw^f4;T3H>x@F#NpW{bu5*(oWyX5+E=8M|BUo- zx&gATIT>V-E%)#knv~=D(x#Uyz$Zz+_&eRB%WUb4r*7V8(|GKijH_@O$J1SO5_rRX z8oa)$J!$aL=KcupkkZLxSFl!Xx9l+_v^GqR#EArC`x1V8|28kS{=jVcY;Zo!%2d!G zhVCh|taDT6W%rl4UgUkwxj4FTBA95}@r9W?k1Vm7Y1-@=jNUx2V~AE43alcC`Kd#~ zui?hF$6A<@kW=8-;KIM~A#rn_&Nl!o0U7^xP@>s7pnPt6(u8VoR-s_fJ2V4xUkC&! 
zPB#Z-u#!iz=h3*vgb#!vnNtD@KwfuffPQtcepJ= zQky4M;IRL9E`&{AgYQaUqE?B=wv0%g+Ud3xe-nyd9m+1M^)(UQ`Px5XDE>7<+XoP% zPFc2M|D)+Q9O&Z>b|h`n%Lf5cU%sIg5VgI7>6*r}ZgtW{TfT~+cSZfGO*9@*vp_W6 zkXc}V+ls2(Y_gISyg!*)+EnMUCAWDWk3dnZ1aNnlD``$z%ZgQg3G91H=Y0e9%I~?m z0426*e#1@Droltb#4cf&<7z7Hs@qYj*G(Y$!T7a{BESGrkkVsILYg=_k$0BSDGU86 zentK_G)IYdOmFTC)t^4lJ;ZgV>NttxVFlRL`SNvL!5`jBCgK)}RQ zFvcluEL?!>Jx2l7I>{i?qLP4m`bk}=v6E2Tu=7GS;_obrY!!pyI$aS!^VOUDRR`@2 zu7tZaZPJ%>S3mmv%B?jeXkOSm$%XJ5Q(EFk_q@`dN3kN|t5EnwM!*K7zGQN~v|QR@ z!t@7>E)IRJQ(t!KuH9@sCm&^xVVajq;2pi|v{ni}>NR8Hqf1etJZZXG5L1e=d7$$Z z4Pt745M9x2KXt!o2uI)Ce|HCw)@8G$P$ytxl_pfTB5j<1P(yEe)**8r(gSSgx1;OU_KwcWqz za`DJA1SlS28~5xyx7k`Ac_%Bn(+Mj3vxxvmBsS-d6dx2zf!7H7fm^?z9+MY zhCnCyeZzPrv(vyIS}yB>76VCK6|5v44@l+auT5NtH0?-2?oU^gX^XhF#cZVQo>!-{ zkG4!%axWWOLSr9vM(e+{@M8OZOr# z&p?sUMQWW)RU_`jdOp3~s`E#0Js`>_4t)##xA2gGWJbcb{aKRSO!=K}-*hW4=xm{v z{!-C2MdgqB+4l_&hQUWsJLZ>6nj$9w46xY#b{W=+_?JF}TroC+ciFdX!7O@rA)%~CscMuUu*BTM#& zk8@a=(iM+}Ut)U^9AnwdRdzA8`g=;yrE~09c^*C|wR(BCmw(8zS z4p9M;Cu3I{4xq&jWutvO38XUO%kaUJv9z1~nY(Kjw)-GqJdiDf@n6>Cv1Nq+q`-N< zaxo=+p>cKaR-xe!&MrSkVEls@JefJM=MXjb()osS8Xyefw7FuM5T&&jqg`JGN~m@d zuif<)Ut&KzOkX=)mA6m3dqS@rJM5XRMb_Tm_KaB-> zvGkNlt-Vxy5mZ<$E0#|{qm)N;sW5u{4z{s#igAnvg%|RpOA6KPv%n3>5H;V}7`-oA z3(Aw7zpq%KQE21l(`Ewn@{G81q0+N!SuGtqTZs@_BMWM0uAN@hzz}c_Fq8P&6ke-p zq3-0*Xeb8vI}m9rJuZSa74FCrH7-xb33e!!niUL%oMy&A(Gg-vc2jVhhftVY-Msw? z^q{(@S?NbY!?Be-^RVLq1XG5h8ruU@-W^s$D7V=p|`nY3+x8`W!#e z&~53WgSBXJn&W;D%cg5%gyr$Tmk*A$)U>4@hWgljG`!WVtOE)Qfmz{!Ubl{G<*Hjr zsq2c)L~|8RlH&x&pEylo(I>4#7{%^hF2~!)hJk(r?4j@$hS97n0`3Hx9jrATS4h1R z_*A>EHhSc^4qckGH7qST8UqiCBUKKTEL0@=nxCChl}ME)suaO~o= z7_+6X=!MF5a?0oHOPxRWC=7+PyRXOrmal%r$x|p*)YU55(}3y()?U0PG@lt zAS($f1{Cs3FE1?(Rzi01?NW5YTo`v9!rOKo(T+o>_;6V52J?dX6+O8v~3ECOwsp(i*sMD#8G zy9_-@KK*gS0VQC@J|Q#<{tqa&APVo|oGQ!`W}#o;LGIs1iXS9VN)Qgws}~`~&X{3s{3F#Y!Ha;nKim7T!KnKUdLmEkB6-jo_Sm-8(Bsx->-MXF< ze3wzqBtc=}wJL+__ZiaeXFy3cn%U^BO0!gB=1+tjR?Qk^!xO~1I-p)9o2wR=xkHix z?%vDbaQvw49lo_eJi&_=mhJ*HZw(C5kt+ZG(N{Pv+4Q4S3VCKhr50btT(BHl*d4D z#L)I)nmPZ}EqUJgXbGOIU!`ZLTO-#iZJ%vb#wp&FAxJu7x^Z7TBj7s5*+L(-(~uxi z@oSGkc@tr|O%jLsp&X6)Lp|Y|;b|oiaO?VMN*yAR`&_S#Px>}NwNOX57V@{~yMYvZC$Gzd$D$XhU7!=DRIpn{2Nx|oLupbJS%dvr&M@*q(nkDM7_ zTJ?7NqP6IpgfRd27C!!z8m6-inp{93y6gmxnx1v{gJoQh?yP6wVg8S``WHc?OwBb* z5+2_3&`quChn1p}ES+YWp(EEZffwFLD!HzCzdcVtt+YvC!Z3e&`A$XCe)e`jjC#PQ zFy4cl*q$tRXwJi-LEi))=x@LeV###RDivZDT~W_&+w_Hhz0c86_lR{eWB)|J^~1-S zhe!{lS7w<<@-)rM)s=r{0?uqAFIrJij3n^+Er^mubiYy`z+t6&8{8?~=!xXdIxqF< z6Iq-?C7Tm#WDDVk(j!CWs=l`PV?L5y%{lGnrh8JiK@HOI9pC-%s}U~WQ_^{`3!MUX zBF=ViBxXpOK&6mt0FYRan&y7z2MWY@&~D0Y{BuO8*UC@{AKN3{n;FI$VPk4)DAsr| z`!RT0rhYdWYYvK}PHgdo0gl6dHS5s4#*+^tGlG*PE5WmhW@#Lmcl;03c@QJU855M9xuU7^_Qm+nk=K(Ao}%n zManj@i7=Q7+duRdFyp`e(clZMeMsbxL!>9N&*zs?a})kFMG2E;Wly`rruI z57CLm|H?~EV>bAA4o8+thdbxy` zC6+s~SP4gl#K@iOd%VMz?fBLKyl__2({%zbY?7;8D~DHg;|)%MEi=iI?XKQ7>giY$vD;=YAvEP_ge>sE<8b z@dJ?QR9OWIse@3rE6!%aa=Y)e$Ex+Da|t%)UGfls ztlaLfij>i1KckUw8AL$u_h7a>HT@5VTAdbfcex3FC)FxHPhOaNy=dQlX1*OlUmo;W zi;uG(wZGSAS=TW!pVq2wM-eYJGPWpJ9M*~ky?uS!9sI3NWes$UGO^to*6$$I3t^pn zC_c$u2Cy|QrOi58+VDwa?W!YgCO`=M?wq~Ga+GA}YAbJa@HCc05;cBLm+*Vpc-mv* zxnrDmt;X+}TArdAB{7rAg=NHkpk{uwwcm&!aNq>c91IQ844CCyh(A&(Ypvr*A2*Sf z(xgR(xWjxANPz|qhHK%yvnpL;WlAtR;;=w&2#7|C&PzN}EaY3TU;0YB9`N?a?bP*% zzo09-ZP6aj`WXNNgNJ{V9U+9tejzM)sS8bJ?(HEjEnA3>d7N9TOFjyBbmm^LH;Emc zrZeGN(25_*-sO)O#nmnQJhvVZS2=1wEmk=4-il~Ef2jj{O@iD2I{c{R9-$H?(*#J` zi`$46&xts#{_vt-)ZT`>-x#Sn&Jm9%a^|w%eAn-IGwyZw2O%|Ucw`~1br!c!?fZo{ z=R`P{2%{j2&O}9o%7lytHC_IG(__aA@#pwIb3QQ7Kt`{hyXW|XN;T(k) z$tSnaMo_TfW0-aslxXpFA7N@McDYyJj`&hd=+pmp&g85ViQ~8gjjCJsNyOv!6^w-Ob^NT?7 
zaLbxkC(K-`{V3P!!X;uoix{hIol&__p!8s`qw#iEyVuhJ)=1zLE9WI%nOuE3j3KyY zvM`%nEHpZZ!`}OTj^q9M zenXDK8TsQ}>ssqP&)?cX@WHP{GrQ&&_aJPyjR0q@7fGc+dAPrg-!=ZpbN3j6DC2p0 zy~rVE@$1)Idn1|9)4g-3lBXS+yZ`0}nBZy(%(N3XClCayorFhuNRAXQzS|uA)ph^U z|6&qM4V_P*m40MDpgWf>?|**v(q+j;E1hgx8B7kv@)z@{e$r5~U)HIl^fwYCb$};s z35aI!^G1_RBAZRsoq45Pm2QU0&?Kp1=jE|Y657>s9XlbMI3boL`x`rSHYc&=suVy^ z4BBI7S=8_Ua(^Uks-dpwf0FgAVBQ4XR_3iqoi&LE| z$BW+)ph^3lmyUD=(gaolVCH95Z(22y^=a-Jc&zaOS zpHfo?w%{hO<_xSL_GP;Stf3wJ)VK8(<6jqvHplGPR{dIb$dWk02CvSM-n`PFgs>Dd zouO0?GsPm~I;UP0cD8c~!z+MRU&S@ouofW52T`?MI?r@!6R1km;DP+W>4#dyj&0iC z83saJKre&OM1i}U^<{T!Ln@-WLtDT|%Dp#E%*w&--SuGd-~!|KxI#WN_L}ys7)geBCr7s;HOg1_^!8}yA-KZiQ{7(V!do3bfNA2B>lLX}}@m_>S($Yu!}oH2t9w#Zi}sh_U1h1h>y;BrI@xlI zKGPXer8>v1-2OO13-ykxIz%=d6FCRfT+UPomr1os({YN1wHfT}=8&=p{-RXj2 z{tx3a7-^LI8YJfs%D;Ozc5q#mBOuAvKcp`r*tWk)RQ7w*_qe5Hu}G54X(fax-Q~?; zro^1$UNj7y3T(WnKisn@gN_+Y6E860rvJAJ9d$tZb0$p&*$gdhc6k*!gENTSqsTI4 z>%i6OhEFWOT(jb)%b=-yeZ`&i*%~uNcd`w@RvShYLe9bZDTa>Llrl#FNjoVS5o19t z&32NXgak}%#sz9DsTyDZL9F_k^O{UbiK+LkkuYFu6Kt+tQxbFFx>(XID05nP#zZw+ z;+~d%bN3;@waAULv_V;!c`B{>j`X1v4^zI!0kXOEDc~TSGz@tsWy;hji*N-D%g0zg z&n5bfWuvgZ)%Y!4e+LY{DfvDrJ2*^IKi;1q!hO@=>Q4QLoCOVNi;LFT)|LT-6_bz6 zU>4%#*$r}WkE!zRv)%qu6$}tfk8D@btmxH8bZXuSFXl&-sC9m^p^z33Iaz$*hMwGD z&#pfCP6$vzHbPZoBgm@w?k+UCB3R!S8nG_iF>i^4ltUFX+w%&-nrjl5w%L#E+$XoWPVGP)IM(*OZ)-?t?T|w zaOG>4*Ve_inXA;iW$O=AnDso|%s~%wJ^?<+1&Z@GD*3*4iQWCer$^e0P)EQ$ju@7> zc;4UZ+*}*7noihs+z61iyw%WSwz)uEDU3=g%j@PVmRdZ&gg_%4*s|Z zvm7>d+Dt=xa9zvR>b&Kd6{sP^X^9yfhmv1^62<;!CtkpQXMG~+!1K7pr|-p&2%^wW zFTL);Gv95hyGjWQD-&HpeSp59?hlSF{JJ0uRuZmzhgRO^2an~LVC8J{?Krms;^{Gw zkJb=;nnc*C!R7vJL(g4a7uS{P7}r+#`)%S+)HwSvTwYO_bfng2b#{0TpNB0Vns?e^ zUR7GXUAEggU(%AYt}9jb-!RDYhKMZoXBr+n$g3j+cqPe@|7HPP5VswKo9cQD$H5v& zY<=6_#Uj$vs1m?l72o|l05VB+&(L7-NeM$UO>ZjRco%wjNV3{WoAUDM-ueO7_s;EY|VibZXcm6cA%$;#LfORejaFgHsiQSP(Xw zeX88$3Y|8g@{ekr^3bwOWi|hezGW1RLz?FN@CGpHEJXz@Tso`fs_5NSDA<#PdNG}7-@AmjE|_G& zWa)MAP}Irixy@s3cXw~lha}?cOgUj&97S$hlg88Y z5iFD=_qm>()^33){1r*YBV}|%jekPgAZCxLfP5stu~N_pcD9#`n&W7pYEjM| z8L9c{MnmFVf9~C? 
zu@+GSZ-sxC%Hz=d%S1EQltfrsgxn{LM}4lv-y*#M`8CDtMmA#h=p&&)lMbtPldtlr zZ=K|KJN8U9?rCG4zB#{hE&wXEP;4~@?q?Y{Om$D#enOI17wb-1y6$S*M!5~n>ygqn zxJ!!CZ+Mp&2dXL2D{n2!l=1j=tUgpG$d+1gA2!|6)hf6Ovnq#48F(x~^4M=+zEe-( z%a#;>g50P3*L4wJKi)#~%XYh}I?KXAO?T20+k?w+?Fx#7WGOnUukWwkCvt{jDt)v0 zwN30F?^dr^kcw0rO|YITW2owAZPuRmkcN1}KNE$Cpa)9rIbwktAx2HEKc%g%oX?u% zK7u#OP^H|WCF75s^GN&8LY!Y`0Xb6THqtGhm$j!AycRla1_|={_Y4JHJ1G1z%7^x8%Wv1$}wlSX5#K zE9+N?_k2ERlaZ~eGYrJ#@P4#VSfpSpf+oqniSCuxdOxMuW4cz+MQ$awQf&85sPQs{ zs=Ah2*YEVid_0xiRQbAz&&SB5(Cj(4vp(HJbik!8^yx79Zyko6sJSD;z@Dg-A5b^4 zv@0(NlQqLC-VpKHEhViu#8qxAiT6*=p;umeV<=YZvWvY6SM{AgC_ zz3fu?aZOG%yp30O&^xCA-%ov_r)pxihV2zJwaBh-;*Wf(yf~{72cfqDX@uk>UzSEc zz5;sBop5d(bM{?AtJ|y8#M?fer8cIJv!?!z@_jEr4O^`eV!;9tD6!Y(ba~akA8Pet zI}Wt4UT(;7|nh2>zGZ^)~?Mmu&#q4i;cGqQ0@DO+{C|j3 z+VQw_mY**=xAMO*#C6?u_BLn5-uhF%(!c6uQ*0iAW*JsPYyu*vAIHk<$aA#x1`sku zg(ZP%n$=lK0B}O7w(ZeXPm$b5y(Vy>!P(dl1DU(`yxv+&H|J-YBiCF7D|$Dkjac&Y zvTyrT+Wa?{_3r(<#BqWNq#I;B)TPAFE2O_Rda?_xUwmrty)rC6XF(`)gFO}lpvhHD%Ilf2e25 zaMhCRZKfZ39I!H~O4W}(u`aqX>s?}#Wj0OXr7F({y8jwqMx;s#4-7eaO^rvvj1|AU zVXmPK%iQvi7-{-~{et$NC$s4j_kg>8BV;nBeCkWs6txw@s@wTgl&mlGM|(=}L4Ttl=!HV|9gl z?pt6d_Q7ttLx&vh^37HO|Dy4JF{Tl)@QqUoRE*yEC@ka*r?}IEgIzcAl&ujm3z)XJ zj2hOU?}q1;zh2*k#XyBdDJ5sn*;4=b3Rn(6)b$r}w^ic-owCDxng&H#RRbs*WLoXK zMh^pQz7czdmT`LWv*~{Fy}0$R1rJokGxrCgty*4wyJevO$&0$}B!g-^GX%rbK}<&0 zZ$WIWpTzSB&Y0=Ha}2Kp8DJ@^#Azb`GSnTdzBurN96y^BVDfNx!ov4fQE5-5I1cU9 zXi7-a_v~-jlR2gU#%r4>@|B$&iOoSF-S#Z;XJr0(h}X7q+TR&AgX00$2Ck;QQ;W07 z-Qg1SYzPc69(_6e1f7n9BWWS`Qd3O?tSTTgDuCMM&WRW2JiR?o|KEMHX0u5%7e9Uf ztLb8yff>hf7xi!LVnz?GN_>pfizS?%r1*Jv!Ouh(iUvZbSEH8l8@9Zxc>!b4uovLi zGct0i3DF%;dM9cg%rXpuys)@)q8zqxj1ptN3fW=2X?WUYBG3CFpMqNznH*P_3_4fw+Lz4*HM^gy${DXsOnv zX-eqp<^oL5fkx#Us!ohl(2ue?6DXtLIj;cSEM`ZRk+q3rk-v;b;c{`$2N1mJ$$gfr<5U= zq;{&~S>DPch{8EQ^|V-o26*!V9wDUKbZb)?F#a;!R%flQ)am-jF?PkT^@{Dsl~Hfp z;7zZWXC>z!5LQ0z`9tR_;h9%1oxOLeQ_BspSE0R(#3UyVa$ zceA!VpEaLD6T=f5temDpIA(_n{PIK=1CwHcAU$77MZ)(sKe9Q$#!$L;)Vg?tW8mrH=%Z%M78{bXNlqx}!q1N(5lLig>J zKMheXxWl~CIgQgJvy*+57ETgh-6b1NyGpi3{L=wu?vnY&%e#(T!ixfXdXbKDqhu<= z;`T(SI6O4Pk7IKJL5yhnlC>QwbirK}8j0v6^~f9!3XR3Lcd;HJ!l4k&@3{hcN&(}v z2fNL@YrSy_P=dcBy02muwkBTs>*5p$vsEM(onbc*3UB`cB zu!MFQCuir%_frmdwdl{#SE$2qA2^*sU#$9IHh*;VOA4L*l^3HR5{1_0rpB}E5ZL1{ zQLiS(dko||1Z5Ax^@k+}x@j?kX@L-=W3Xq|-(E4`s4SXH#jC-5ZoUnn&ZiJfqSxDo z3l5dhHvlGm$bhveQ{2f_{L1@NgK}AsQ9b|an$l)e*Bw&Fu-p!WRSjEz{;}GLRkP?B z*q}igh^*Qow~6v9pRcU}J5{`@y!RMTMKx*!6$B+*{bXFta> zsR?{SpVGwm;d#e(XGK--u~pcBc4iIKh&9D(7-r@muO0l!u_hjC!Yry8OW!Z!c=7gs zogbP5qL@;S^qPbpYMi5zN_i}zM!Rg)-LR-?X=>(Yf#KvvC}i<5PUCZcm^U~rbV$kP zo^N?4BVGC+OP5P~DLwGMXC~}1&UE4#OWv4qo?MP!HNd~SR$I9r*{+xiTwXsCLgee;czpJdIRs{w))y~wmPFO*a<;af-%X?-o@Kq;Z zu0C+IKFYZSX(mFE{{o_&vVWx7qu>SfmTE$$+$H0;(raPR2<9og{@jum&hK#laa%%o zR=7G@>tv9>F)h^QMK>GilkdW1UvT9I+sTPW&(%#~cg7FYI*)XtykMd*%2OS;Fu(`2 zPwdaPj$lDJ${(WJ#hl%=ieVbT2paVtqT80(4!yk^kWzQf>;AV>z=%t{il{4$x zqzDj7#mYJWl>r_lf_MD_P?>S0M?5kcJU^V7?5P^9C^MNS=ts@?e2|NMnjo49bM%-k z;ru#wDf2t)QB=>Xs%ihj=P#CQ9wAZL^dsGTSe(E;K74q-0Labyh}&JUCw*ER-m8#6 zY1ENgl;a-g`tms-`80C^`iHEsX0S<93(eRwvD@Ie#uX3S2>3+7U$t=6iwa!W68X4` zUW5+TdnP2je3k8u^x{jCd~h@7?y%b9BW9L|w<3lY{z)8;(en!9KS{%X)yVJ3-}cNL zo`zS;6&)YWU&EawdZG&KBi$V*Oq706`zv#7dYyfQ*NDfOJS@OH`E%bPAZrd+4Y zoW@H_}EecGI(aclh(4Z2JS&OF?sdd$N}?4VC{iYrP_LQ3DC8oM9ygcv4abZI!WeBxjODH z?rCk{AnF?@{UQ+6&pp&jE%#emOfds+_DbZH!+T$1vS-e65NXw$6DYZPefaXmOC^Ps zPZg09&QCO{3_JWz4+HXl^-@e5#{*$@j2nTX-U1vl zvWuH<+=X6`FZWlND2UaM3=FB(xJMZ{>HH)eR7{5a|J|rhM*uL0_Cgg}SQW zs~;iB3e;ci(gld={G{A%Uzp2x8YoaK)VO-zOT)L{vHyA1nro3O;Con`x+ln>O)0m{+0p?!&<>J(*D$MizN>k1Kbn)9*rpRX9?Mkehz|Q5G 
z-d|pkq8tUP+vr-qVXflr4dU%)QjReTA9wXAdqCkY{k+L%31#@j&brZK%qADWXHos< z!@JvDgm%kIIM_T&;+o#z8U5Es_Xu7BD?hT|OWySHcyTH64Jg*dtrv5+<7fT~^Z`yU zB2OBH0W`R$Y-g81pOTih9pe~pP}Vj0@%OM0X6?ha{(wheft{*5j?8YSQ-73MPeL)^ z?)u~(cUMi=!d}EVO5`T!`@V13oKcPVWz1VnEVi0Ym(17{L~(SuvEsiuJ69x51pgPG z(@ziZDH{4*-))*}Y3j_(3slyT&AB|V2ex?%KPu3n)qynKIqNma%LXv%D&_2T!%^1c8OW+ z1Vd@HN-1pyIs1R&3H^0^z_B1G`oUxSr$+lE41Dy7ha1!$dfcJfnW8~!6hMeq#qoo5 zwO!cAhcJ(Y6Xtni|`^Hz?XJ{jUoez_|`?)b|sic+1Q1Hix8W+lb0L1Gz^qFCdjy%v| zN_kwp)g8e02dsKhf(=D&>}Z!2T8XySA*$j-*-IT@i(%4sFv3?LE@2uSRFd<`J>60D zeBdNc&wX!tjKuB72~G8xi`}?#{}*5~;VD>Kxp3neY!8ltAp?avi+!Zp+7pGExYRJ`R@ZXu_v#Tz^MH8Nm}zL(vdZqShF)yK0^B_x-r zScU^^rPoDjYwJ}vL1BZzlXHQi7TxJBRzhEHxQ!jDx?Q??iF-$L{51&2vKTxT{W7=B z79_K#%C(_<31KbzQ{!~&L7O#GgDmdv{Zb?QWhuf9Dz=`A(&X-)HLAQ`u!n~-FHF?Y zO+TI~6J`Pw0F(D0&to#z_|MU7&{3bwzFwo>{r!6yy?w+^ruFSD<1~*%HMXmJ^MH%T zk0t#XORYzT0$Hc#f&TFEhM)MuI5H8oe`zjvikwED37{K^Xb5Ou(TE zUmHe`d9@OzD zea{R^dqtT98h>1W^L1%GRY2AJg#T@ud6d_a12^~yYQn^(x2qN7xQOZ;7dz+LAQ#Or zm(462x697FCqg6{^T3jm*XsA1$L)XE8+F$5VQKI;{6bW_DwTj=j3(?E#mC!{?=gMk zVy4mzO_!yAO6MKYro6BT^&k?TqfnH00EYmEwszX&=Kn+N{tEpnxtZphp@ZP(}2ZKn3O`^6v~N0BChc7lx>(?LwmgY)zhSOAk?`>QGe zuP+LTQQ{*z#XJ@e6ov|&;?jhVWJqG36}L6y^)UNVH%W%YVW4)?IC>SRYgiIKF^@Oc z*imAMN&hK_v9e(Jc*R3|W3j^`XMHH8cSi?}z4AIB#Ovp?1&`gtTl8l`1e&^e!3xx; zza7SM#HxoJ=O?GbaA3Ho{BfBVI5}z;dFNp!14<=G-FDB9b-HdT>N$a;CWCfYTsgM& z@T5oRIPyTwk?sPLaK*6G*p<6Tth!}lQCXWHW}wd;FJ{mlLvMcjaDCON z1*aO01f`cp5ipYvuJ_8ZPB-?tkeXa>O_=PogkEV33(~25Qr>iR>|tgGenT3vJ4es0 zMKM!oweJk;oK6V)XfGfFXqkbD@);dtcJrV{yK4?a%Sxx{gyq2ZEr0zv{FNH#oh1G2 zTK$Dz%=h0Nd+W~+Di=1IA^U>~B5m3QsCeJrd9|OGDhp6Hz%zlTHaJQjg|3%I_c+b* zBz)TE$xs6Ec&H;HQ~3sbjiZg&*|XNuY_MF3pL0tfRRWDWK(aH@E8X5?XV}mB=tC!m z-^hj=y>a{I13+hPD`#&)un1+C%IS`9+_GO`6Yb3ZbZQsXarJ#Y=a`*he~d|}F@7*m z;n7vj(J`){#b;B)G&C-+LziyJ^6b;mGZTFyX{z4`o3o)r*pJa>m`k!Fiyvsj{QGvA zS{DUrVS0ikNfK!Z|>6l?xuK&vQwB0{q~uQZBlGQ_saEP)A^+ z&j~L8d{*q|1A!=~{f7Sp!}%YfgO0w~^Bh2XjM?CyrX2$>VO)-?NDm_Lp`jZo zcZ`k>AMn` zjV<+`P>BDVxB377sQ>K~{QpntfB&S!k;yu)sbBv6>HFVQs&})MFs9A1tRyzAI++Nv zDE(QPSaJatou)nIC-!XmYIWwL&RQpi8>-I&)XV3}ddz`sPLYesOdI%L=;$SSNchB84>ig-nDV`sQQ42Vmruhsl(=?^F4xSf>puUv-rfZSP_EER0b!SM1VZ1?x zn(fbVgiS^J7nSf|>KK20la4Q-F-u>y=NG|nT1rHLnB2}Poz)$yzdrk)T4a_k>Xn?_ za@$D9j>E$kpo}@~)7XNgeQ`v3+$~fcLK?I3=dHRxWONK4K7ZvG=$Nlah%@{M!Za{X zC?3C2A!OI$032(npJyoD^{$<$loBG6#8i(OD-4*eCJUXi-pfVcMr^t^Aomgq2BgOO zvKE>WQ)kn2F+UhyWP`bVr%`3oM6=rGD{9vVvsfK}tNnHnh?!cS*Y&nVRws>Ri!k;| zD=8mH0GEE+rj%9zEqt+`E-TL27Rv}BP%6j0zO#1EkXUtk4jVKg1oygS)6BMVp+*+! 
zzH8)wKPN1f!>u6)bB{Wuc=aJPmb^M^<}6R{VyJ`$3|{nwFufIrm^x7eY)lMYTyW-2ErV=1dmDOg&;O5!dU$(x*DjRTJ0Zz z9j*mC?cf3X3_d&E#JQaD5IP4GLTVILJW`!v#jMm~GDz}N@%Z=QN>D=2&h%?lncMw3 zv-SC-7b?H-)QV|<3L7w-APya_ovh&b`=6!o9m>-37W;UUWp%?*(f4CUw~Rr~QXT5; zV==$gLQ)}OG2GmInuoYQ+(-HN!$&f>f1Q7CEL?A42ACY8&67oE1aB|BW=Q$YbDfeI zj8O6p+t7?e`^5InA)#p^5)F|C!Gso4rtfN${gN5KXJIpHa~|>?=;B zA6Dl+$n4{frg`K&%(xadxR{Ytq&4lg%z>SaHo!ZV&=KH0r)1RSF> zScGiUW(VyegL3JKn_qWO&I`THALzJ!@a?wDtvn6U;eJbI?5isXrEWAGAo(Yv#;G3QvkX#nU7X~*{r=dzc;FC5x+ z|EY|Ve|1SM-?P0nM-=?NWdp*``?lBH`Zcp%r=rz{!rpDzS(`9$2txoM$^!BPSzb_dnob z_k_u;4%s$AXpq|;_G(3pv!!~EE^<+4HLUe6#j4R1RUfHcQaMWf%eRIWb5$aVSCJzs zg^q6nz6FxiZyeYqih{iZl=G4VJhy6g>ybJMWLUC=4Nu$lE|}8NyuB$n|Eqea`!m@( z4Q7jEz95x;je)=00Hy7ic?P(B5}%N8zX^457KW>U2-iq%&w;fBD3zI8Z6rMAu5yB= zO(63y(;WKw#J$huMZeFk*cIA4Pf{~Lg+sWbexBb}gZy-l`&VZ#!wrZ=L^Amu5)jzy zNxA|q=`&HSOT!=xHKr!JFxOXMcO7nRrVU7|6^S|Y-D+8`*Jx!Hyo+&EVk<0KCoMid z1Tl@~_B)(|`0gc`-rzBOUZcU=a2aMz1~QV1r<;U>&NJ%PY>zWNBxG@n5}ji{wLHIF z{V`#(dM*%HC+T@O&aOiXt&X>UKnvHKCU z?MU}W4yVI6i@XI3j~c8s70mQ(!$ z^(~=E6V`s@-U9}kRJPbP*(SICXw0ta=ByJpD3Y`ytNq+4iy!K>2Jf17AX2;)4~ts& zt(-coOT|es;~gp=rr0_AJ~;28^D6YcALoG_&}cO*m&XO)WNJ#7xd@2{6{V3Qh$`Ao zqeOv{C7r|N@}C-#f3#dR7;Km35w5%Esm5H+j#Mqhi&>(&8g~uapF2(?ooy~uc&wn8 zBvk&srXAJJb^<>8U)rbJorKw&akWDZC&GfHhjs-{Qm88wA6U78i`?3AcA@@P9%6C_ zok6B+U%MKTkeBxoYmX;e&5}HimJeQofPfi|o6TH2rQU-4eO&19_raWBq~mELIY!uw z1s{Ok4#?CM|FY_J)+x;_8gHwP5T#rkm}>rIi5mnkxbv{Dq-qws?Jst5oA)r5?L(i=eG#bB zA(cT4Vk=}P086PPl+&u=oXJ1|8(j68q6g-0=8g-?havpO2&kL0%OIKA&H!z)50jVp z&qa1IjI{Ip>jXCKy2UZ7)E#wJ`RZ)FHOvn?cgn@?ZeF81IFiz_X-dVTDky{D=mF~% zJ7P4zj1#cOkFJ_=JMYqILA)zl2f7AqL|7cUCZxD(DS|-OV!z%s+Ac91bpBGH^v=yc zrQ+2!_iv%r?>d-Vp{Hj%NKRhx-r-%0C8BT`(==eM!kwOd9LQhY0o9LreJa9OZ{N${ zb+Y6q+K#&qRQy8K>$LUr`xNKMuZ!k@UykZ287pqdEWDPH%%(iKftT8k->MLdgWbPl z$oh<5)AEr~?q+rst6gOavns;{3Zj)9cVz0ZRlZUObbG@SJS0z&{|jfrLdexi@_1vE z790K{p#__8@bKdcRJ%t(t-{D3L6gHfz3D!$9Ke?oGFml`rN;weBLb3-+lx`n?4*b5 z;7w=8x>K`5+XZ(AKOSAAzV1lrkU}UPcksmtKs4W&yycN(=(=r~cXCP0Yz+E&*Bqfb zz4$PXW=dGAIJJG^mcS<|F|B0@kv_8#KI;N%>h2&I8dNTiF z&MJbty$kElqJ7If{N2RtzdvZHAGuFny3c8eowBRilnTz`b+5TOnAN!{D7|_*zDWSP zUtbHmyMdNdB>!v$);&ds)jjLcW(9b62P3@g(W(`-(lME)m9m<@5VTqFOcB8=%i_6{ zs!WR}#o_kWLDhk&HCC?KOF<>@S)F2!%VxK<3E#DcXG;-Z)9l<}DQor|FcHFy6PV3P z&#bD?M%W|YmiV5|^WAW&?VccJ63YX1dJUeMzG>6t;GLEUVhD`t6%N;7VS{ZTs_DFdX`G3!XKl zAk?zg3)elK*-Sz`@y)>r8){HENJY|%~HDk}!ew_tI%t5oUs^Eqn_&G~^a=-}Mz!cEARI;tgm+qOd*kz}+OJ1)N6 zE5Zz%?7FA&-oSlL3CVAW`ESQkf3>^6>!_8dC#g%t<|SnM43xor7h=#m15PA=b}rJP z%b^A>$)4R!RoLAB?yPCLY);0?^7Mk-1e_bjFww}_@%x}iw3FXznIl_hNYL>Ila>wU z$5_hQ@lDA$!SI!?;Yr0ghu^R0pH#j0^a@4vxc7GuEXuC!`wQhjDpZoPX>VS+kiPh){k!Ns@d1;T{`P#g_Pllc8VQ=_)t%{d@lByNR~B^>=!tbg{d3X zLSg$N*M)a3?7N-j7Uo*i%3VWvpE3m8zQplMEKyODPc7n==pFBQnDf&AILxNMj5S%g zpFb`Bw(nr>$EAcBC5}sn?{YlN+-F-#k0IzePdp&_7;&M$6XE<8)V=5VRv06^VGqB{ z9v1)r-HAW83%lDft!XJ5n^H*06%gGOBC*7sk>H2I$b|q0qR0gmZx12M&lM*ra$Dpl zemKyKwBa3++hEq8a`UHdxh60m@5ak#f8@91SXszFnAQ1nz>Wh-0CBDe_3uxk#9!a) zeFgextGDd_7g>|}3Kj&*PafsDM^|oOTl@Pzfg7E=I=xE>HLJ7 zke5e@RxB#Bv6tAd$VRCPf7O0s_GeY37f%1);=bERe~Ng;lyLqD`K;Z8m}aOgtag}i zbAO@| z3wGwjj>v+rzSqj9JKZ9<>hW!&z)k~Mvlq!)UPaL9mmuMrm`p#5em5aS10Ter3|qUy zC#K6=GXtwW(BPA*XAU;<-we9!$#EQLlSrfNNG9AnxAPtNnJsdy4o>JyYB99KpFBC}nuSV{b<`@t z*2;Rj{qAnibd!$+=Zs>ea|h)`w_mCgiKo?r&_GU$?0Y@VPcO20oScb)fR+*kv&IgW%gXDJ)E`T zdA3g{WXYRShTpuE)=rh$n{DsO|$ZH!o z$hbG^wT_R9r`FEQk}1HnO8Yi@FwU1;28gSkVQoq?=U(pVV?t&XklhcxM&lwwN*=ze z9APgxV1h!a#rl5UGU(~_*Q?oM={6Hdh*u&dQnBOlDER8jRr zDcslmK_rKZ$L#|5GwJFTx%myYrtOGbGY7~ligSe(IjUY;sqGkr3i^FUR)2G9eL_`X z4KSV^5Vdp+>P8pyuOGqtH1jwx6CU=u`Tp{x8mcn{`doSQR5Ws>*Ns5F0G@7io095e 
zSu$&WdO3(v>H0O`K_+2qNFz*uhwbh38EZ=DWa_A~nC}{GVOJw-)?;3KNl${aKz9|( z+)P8OML8Ragl92jxZ2a&yfosF{j{RvxmK$66K0c?RJ+2ddf>WMQJ+h8)BTi+^^X?5 z$TIb9a_s5r8=6XBGlUiAb!6RJOG;9q&GEZ;Go$$MEG`7A5F1+p3+JSSj`%gzlj9p} zI9;FH%!L8#-UO2-k$kVnNH5M(^$_OuX-2J+;p>>vU*y`EMiQgFPrCqF5ro1}IUQz_ z>=ayd<$vM;8~h4kWs3=MDEFnoeQi7KX?AgsPD700@muofFiteS2wPP1RwB%1de@A; zF!nC+JQ#j!^a#eYb6*5W_?`QVa*`nMJ#I8`W#|CPpJ#x03v*#?qrv42@6|aXLoP}f zgw9O?fy?3HGuDN$)V$Bz9 zSUl^ixrnzF^3^R`D;g2TvQ)k*`Ki(6LjUbRMOK1Yw=A}{428CT!bPN1{YG1#!92J? zI;}Z|JAp~E;hH9RdLk?@j~Gl0)@!1<$%ddtz4Dj57M?eXMNuM-ro!l#>h&bS?jBlc z+{;~g@pvz~Q_x|S@%VaNKKZNJUfAZt{^uWz#}rt4<-OjY2vnSO6ReSsGo5^Ybi3FC zk($~Z`Q1oiQehflj1rS2Pc#ybS@8Im4LZglfC8O(MixmAU870n4#w+etroXhYk6?u zTTpB^+u(FO9C^j<{ymIe-jb^yUaqX5Qu*BVDpytyyHqj>fs^XE2P5-Sb14-KSqZw6 zoHoa$7-&Byd8LNwwtBUA(2;Fz9~iopKT9f~^NI&Nn`X^!+8+dtq5t$%Y5o=D2QrN3 zfNM;F=1ddXsAfZBi}&4}FUa3lQ1L`b#^}I(}n`a8W$-&ycUZ_0nV`XmFnG<>QUy zw7C}h`vA$wt&p@m(Z%1Tg4}CUT*qs5E!^IgV%Hr?n^Z%PpJ!;(vc?z-W1p+zDHSI! zjgjvH;+dcHj}%Xtv$WlX35v&)K~Ifp^!wZEkzo$grEh*l$GQ(}Ybbu*j;ifJc={PW z4eL~G2ki=uA^u$0Aipz<8sh~)`3{C-18oF~CEZr)~Es4^xNDJ_~m=B;V%cOTnGkbA+Nn&bL1uX5g6MBvYjMddURFcR+r2_fB02_4sMf` zng(gtNzF^W=)`i$T+^(K&}&0D-x*g*D+KYSSk>gAgK^+@K5L4RJwA7;Pj#R+k&1iT zr56&yIJNsTk9&5Mt#N-e*uxYcN)Q&DQaA2?s~Kq)G_0z{+M^=70m`9RxAU-BW~uGT zzR0I-6gdIhazjWomKRG&L=Uail=SzyC;vGN{?}iKmoX@xt;lhE?QHJ!E<_|TZtPTO zse|J{pO+fG3i*zNh~vhp%=SoQoG#&rmDc?HZf{+r5}S7S7PmzGq(6~Cfph~UU>`Pj zYh<$<>i!M`y+`~_yIa`)&@-bLCB`RBA?$fv*aI96ORajCrIN}L08|H~Uo4pjWnX(< z#%D#coDkRZ`fMyPZWo%|xkpIyLc zOhR`v6?fs}BJR50ol}~aR83yBrR;|m=yhzDhKt1)2zMVo|2r+4q&{n%VL>QbU|MUe zWsY4c?lWGpo|{^y`kAyU^>gL>mhB6fDS45OSfcMXQ+b(s1ovyJT<@uX{yMo@ZR(kc zb_4Z;W)b!3#=Q`(VE#b&7DG57-Jtq+k@Zw=?6)rMOAB)F7D>iGf84`4B}T#yI5$6D zT&9cp{=5c2Y~&;cMaQ=)eV@kzNia0?x-5f^!W$X@G)Ij>qfM2DVgj2a?==k+h1C0G zbGUpcMs#ruPwq8efuC@{Ap1&oU2EB~el?}swFvnuM)c20dl;1L_5%{Rwu_AvlW%jr z&vW=}?jclU5Rn_;NGuf7U1t{@;2%0)n zx1*RnSWa&maSG=nfR6%IL~}xvlwSiFKF{d$pvhtH%R=6ZFB8W(T|y)aw}8ouu=;jv?+tiA!&_yw-`Lq9 z1Y~5W_oSlIeHUt%rO2y$G-q?KoUa$*#~g6y7lCc=C-0jyP4gfbxFSryX$L%TAKOo2Jw5x<^czrQSH95g7l_ZulLpUD+lhr zFJqHr%a8&oA@+kw5BSO$$K8!e|K^HEt&Ue>iBVFaO;Mb&Q*1C>f?WNvE_b1+e16~; ze)}MtjQsv7JL1&^jO-#oSwI(w{_s3an>T{d(G1Iq;tg(bCF!=+!&qQ^_)B8P{PP)m zjMb|*kAa<^LL?YJudqvLzsxc{d-xE;_?iskt&M^W+3Wj_1B!DI(bY_e9T{9I)g$yE z+B$_7gU5s|G#d4fPw$q2urr%!(cZ^_tEaxfK=YfgbMdc>&CPN;ob0uY`Dt3 zVPIlQ7=OsLTQVbM)bPE7&5NKUlKBfe9T~lAm?oDZEg3Jn4PXs)pd{GWC}>>(yn5OC z1H;zxkFsXpVH+H1`@SA#29 zWVIzvjo)ay1C$01!3tM;MQb@C%NAf!_3IB=yBd)P9B=xKB3b!pUwZVsVcL|Ce zs-qxkuUm)r9YO(hdgg|D{GP`e5~ec!>~Fcr9Hc^8+r?Tu}8BtSgp?mmRL=EY<+DHmh^wT)|KIn(lDy6_qIG4c9;%wnOoiYDpPB)kE19412s1xf!edxOP-(k5tR0 ziRL5Eu1kG3OzYl_`2qVe!jG$aCk2vjp5b~p7JM@vR8~iujp^6qWa#t~efztgQtBb{ zQTkl9ceVx(9x){E^3@l-rQ&0hw>T3985H*)5GxFx#6TBNeQz62%yd=(;oQTn95>Nx zGUcW78&SkE=<JPiPPFM$oybfuELbv-`Lp4P11Qq?B0+3)w*k>fB8AN6n*V2%(&Uw48>5UuM_R~mu z$Hq;1j(Bi(=|`2)f^oLtTNJ9YBvx?_-swK&JkZqxX{TzAk>_j2ls`8|S9sp<&NMP* zFux<+j9LT47%CBW&X<*WQSFA;>gBNY3h$wgV&XkTIEOt4Uxi~J?sE@ z1w{rHB^R=7QTc4?8X88TIGmzD0d7HIMg1DvxnZmCe-1qJ4%S!c%X+ z^_m@>UpEv-HxSmW_Fs3)gJ#?~EC2<5vQC?-Fy|@hvqPf|-fe|PKbU`9d$VW_JML`` zM{=+E-7Kwp!Q#anw^mRQJTe+=z8e!z`{)s8>~k_lqD%xGyA-&ez0dC7zjdE(y4QZs z;+uB!x|<)w{j9b0w$n8L|1403Y5+)pZL-x0F-sHY353ZaCd^S%Ia~)e3Y3h?Zf?8M zBS;T}<}UIOGpnmxK@sfMV9lc?o{Jp_$Q8RbS+OZhlUGzwnk~K4SJR`xWxH{vl5p5T zdZee=L25k3E%mZ!3*xHhs@?Jx6DxH2a;5=H6s<*Pli^5pLL?k6j$9%l%?7<5%9q}0 z9P+b0I8!>C#$IprkVk6bP?xJFLyX?4Y{9kBoA7}%*08H7P|f-n_UQPsC%2AW|FyJ} zr%3m2o9ua}V-i%(_{D?+YFew7&H;(8IX}%U^Hi@K$a;Nea-UruS>`HPB0Lz_ZP2yx zGk0o{=u03cL+dY|@&(r`YQQ*(#*LAbzNio;G7g)P8=q!&*QvhlA-{sqiC(0RhK=ELbz+A>9s^J=B4{C| 
zXFkEY8|}9v&~s$A4mz7N2n*r3bj%!urq#%0)oG*1B(I^GgilSQ#^`_9*W2*EJx-nN zv$?hR4Y1eRd3u||y)t!`H=OdpUEi%SE*w|dujZ1BX3^6BzPaYkL^#%xO5|9j|@5L}Yo?636mS z_;LMM>`R!POt%|dqiV~4>=(DK}Qd^N-#Dv!rY0tO7Q}& zh#54gc46IceZx-B{%6Dp5O{RU zQ|1|0?!u6D$TPMd-LE`tr9N(yC13Kq8;WSZ;NHjOP^q;D3sxA2P+6u(C?4{a{@%ZZ z@Nh5OTB-Em@jSBOpq2}9((6y=SXF6K7G!OfeZ7hhAu z9x)krwtMRZ9nPbfhlo3pny9o#vu8S>-iKyB?Vd{>PGRv^(4~^C^Puh~5}bEYVg`UG z%|!?a<+dVA(4Se6+gLX@udS6W4vb#EeJwvS{6YS0qzHP%+^j}nC_eV&*=U1j>dqhr zJ^(Juh}9o3%^E88|UAz)jn&&?|uAgEp(}r!oB4%Z(5lwsQuV<-08RDy zAn%F5lLUss($8e2Ctr%n1g_h^BX27pJQ2;GLX_@R0bDwy%XHdCx{UOu+&SfK+5Fezy0FOf*>m^=IR86=6+EMpnbEfT8#9*SK&O3W-S?<#HNT-6QN%hynar!A zyx9NcanyUjS8G3%U>sgHFwNFKM!q(-yarRJcCNGU<&{b{i;Xxv?NouRz$H>ssaoGNDwBCi(J|VXPjbr*~Ehjp3Qjc{rH*|D8psU7qycS*iRUC3)z7x zIAzGe?_6lHlSM{x;;UP?UXgp~z_2LAd|5BaoO7B>qMf~Ot%1!sY1Y~h4vskhkbD3Z zN>P|XR-g zHgoH}cx)|yAw={l)#c7v)SYP)i`=&l=n?`EVpKA)Z`b-rY2MFQdWpA3_C&IPHF`v> z;TaCdPyED}s0Xa8`q45MUtp;#`Hfi3_>Ca@pL!z`HQAD`2mk>&4lLw1>}7#W8KLgo zMWxwZ8#RYOaA*c9N3OEGMUBP#66sUEVa;--d_e^lg7kr$-fz~LXeGn62EYWSJ${nP z7$@YJtR`!Enu2&}o9-`V;4^qGA5+7_K>wZAflg_vutZ7mC3z~p7h{IdXMrh_4iOZP zt-lx_2|+m9u*TcWcDCxLM;0-{sxJ_4SP_e&DyD7cFik<0mN&1wI?*cv^Q`zT%PNAe zhIbM$;_MtfFp=ntr4Bm#P-%w%b>3k{$C4Co8!rm!-z_vl3yv5@6r1Ibz&g0tnPOgq z{v825QCRM)CFLx%>D0^C1M?WS{rpuk;Yd($#g*$CPt8bo~9?aH&b>n*&-{yDCxgJ{(vKXZ>WRq=o z588BpQ1u+zetG3SdaOh8O=||oIk(J|3G-Y2R3X)b-bi^go&{TtFJTEnpTL46m`t_r z)#dgg9ve$_iCdZsedYp$M<<9Gx zPT;lL84NC~;6LVr3@XzNRwV5(<4vE}nnQfAzIor)km_my;1-eMR5HJ9kLuB^E8^xg zvcg@tqn5b?OZ;c~+vaipr>pW5phKo7(m+knUvmW_l91;;r6qItAG*BF;1OW!Oc2?` z{v4nQLD76_$dX81!}@@Ltxc2}H7uM7aRa9M!1vG8BvvLt;^8y0H#IMcPvR zcXlrnb;L0DjV(utK6}G(%h&Isw*?fHFZ{u7bbfI;14h;P>J6=Lff|oPOkL--!bcKs z^bJKtFF$t;tk9%M=`_J|6feV>z(kuJ!YGDz78>b55;t}5a}G*vJnb}4343F&Zq^8O z$fxlK(%!Dqjn|H=_S4hi+WFO$4QJ)S1Y-buyo~#`hk?lSXCFo4qD4CmE~Lz*KPZ~p zX}u`gX<{A-u*vzoiStX?Am-nbli){L&;9C+na93OQ2ju9HE>AaQ?w!A05-a_kFF!O zUjj&%nXbl61>rfe-lz%dz^Hh+oc39RJqR%z9;?k+4`Z=BMc~ENrQj}(E`^QS>9;Y; zJpBo5ncLYhy`+ePOC47!&P|?P+Ky+n7N@*biF!z_S8AN(lqVFgmGn^f8{xaKXoiI> ztX{$@09T z^H$MK=GznuP_zB1&b7VX9EbiQm+;B6rS6u-&BH(^-MVvaJkq=h19+xD2^^L$0ebR3 zbgef^_)%V?%;@KRLXOm+YfLP)XL*Pa)Ovkw+?PyK`{GFbh~NQ(mr+=82I9d{$Z85C zzRG>C6T0PIf5xNl)hFG95^VZq&F6c|;S1hD$(<>?D$X)P1`H=3e{ZFqm)HH!ai-S( zQtK&NUf)*cBj$kygXKpqR-aHe&#>}FEtRVz3_&N(U6-a4BhYQP7$A1=d8~~nZz)3~ z=VALJ7gm8tIi(3Ejf<^#J{ar7qXtd-?^q}KarXo&M2&d{nc{Mtk+V8uSP>o`zwA4Y zIGx#WiV+l$B6y)8H1qVJ^z$=xti2TG<}36lgQVi7RdoQeg@@6>O^!*n^$aDQYX+TR zueJD7Q-&JV0NUQDmc@7xg1}VVbF~Rq9+&MZP%z?b4W!|z+T-0%Os#Pq`b=ujYHO7f zlwjgo52~+NP*w8$?8EkTIFe7PM4@09G_!WzIV;qmW7X2}c-rAgPkVjvMofD}m(m2D z(!ehopzF_m=?ZB1d%r9Rb7B9=v|gQ($S` zW_?nCHRF5p05dw8W+fiSh1c~u5S1X5Wg1bUJI=av$TS`kjtrU$d1Lq}lzUi^{Adw0 zN6P0s9YQwT0u7fv;8@=j0&tnn)}1u$SUM`PJA(7uSBHq{sDc}c^o|& zXvj(x*kDjrFHtI-VabnsUs3lB0C%3*8R%0Mj1NtQj_IA^5kGLoz?Y_7J&&mC_!?Ow2 z(H5uhsp%g4&gbtv3U|wQibrd_hO8WRSh@7vB%hNE$1wQWRd%+&HJ9lrvAe9hShAqp zNca1&ptBBv*-zEf3ZYD+i+>Yj58DQT`ohfZd!xX4={Guh3CDv>cm>ysAT-&tdF=BHvZjL zkCe9-=C1IP(VJr^}CgAfAH*aihx?N z)NbPza7FLimQGq)-hc&RTO~hUUc~#K^`*nEq<+NZfq!T8w~^=7mu*#PER=(pc@Vm# z7d7Tf%x$IZ710!BE^Oi)!5w?98O#h@<9YGIK3+5V6TTJY&l0NU@ppLu)~NqI&>l^4 z@1l=`IP=`YM`aT}pM&O=8BgKYrcu(R@@(z;T5gb3ixLtZKhrsw)h6#nx(TL=P%5g0 z_P%etSfYREjr^Ji+u4!M;(LrC)xaE(XV1UYL9+y3xLi05sHuMYL|}{{(ql~HTTl_4 zA@NXj=L_R_r94=5O4Dh5hKMUOa&W0`=qawRuD+!)T7I?7xTaWWGQCy;*;@zf7Q0KM zU|goJ`#C@WRvJ@!vlK{}6Px48hPK<5>j|!ipvBfFEui z>Bg)yhX`z)Zzo#`2;Cce9j-S!k?{oK{0t!ZLMZnU z<_R1#{24L|w?0YkGp!nx$YL6^7`JW@0`NOBFkenfJ_BORpt!L`ugX<#YfzBs?pNBe-JCEhKI7C- zcETxbcceG|949>0D}g1;YTN4?4BchM0s;1yQ21aKlA@OfdW3`X8cpvH#kiV`e;}aS zJf28?(p<5;-gcDzZSH!CbOR{-{e 
z>)C5Nqj4u3j_gD+v`XV9!;p@7-6?eiQd`|3mv}ieNGm|Y&7ZoT5L^~iH1{15L3#Z~ zkKdcE!rF}!3=mQdP!^k|wXa&DPj)AXi55cSdG8}h8-a&2k)5U15yoha4$IyD#()UH zLTAgq2mcl`|hrf>pHzJDEL<0kicEDrDy=g~!r95)K zLzlmy#x0hVttFIbOr40`j;@fNvWBp-Mad7lcY>VRDF6EJ{ zoSs0PWap-ZbLGjNncB7vwVWA!Q1PJr-g+_Is``+^`J*61pw(1k=KurAClm}*>MR&8 z8(-8NGqP#*A*!GSWAfzzvd7?T)td^DaRKP{RVSl#?{;MHz8k%tl>M=meCbM#<%;<# zv9u_E_YuI&S^bcg79QMZ4K5vzD<8OK&^{ws1M{j7kW+l?0fa`trQ9Z zI6|=dDS~&^Or2T1$zXyaPqkeRTfgu?ir-^W{A787?Xxf;dqLbIEik6kz5BBV5pD|) zhJL)hTAVEl7-AlIL=?*Mb$~gM+rXvhkn>a9c-CuJl4U~5&suZt1v9{uR;-6MUI}eg6syr@Y$>Q!^yz~-9n}6De=Jf3- zt}H#8khQ5bfkv4Ey$Wv0b|`k5>a8n}=Wzl$^rtQ!e?AzQbguci`EV+Z&2>Pxkqz|C zfuAeAPK8W?im%Xx$Ax<57z-Aq9O&nKn^Sjjz9CAjF{As*iQ6O|XwQT~DCDGMZlfaJ zB|;fG%Mf)VmzQ6f&{}?hYkd!SOH^oz&AlX@&qd0CWx)_@ZDx;LybJBAeBSZ(!V`8f z=LS8nBu~pY-pb<#$-HS+kLh0RwWf6eRigMoKvv9|461JPs5`w?xIPHeTfcZ%rgfIR zu%Ed3@;S|Mo|NiN!s1x*_`qRFwJVm%_C$$c`Z}VpTeC-=Q*iu5(CbprC)sGO9Fg=# z2+Q_R}5(iVX)W@#BXZS^#RwDeMt)d5h*Hs@nqMxV>{7A61&Fk4Wf!3)$lDGn#LwdeUx zgADeIico@zX{Dz_^XKBBk!x~c2+5V>yTYhQZOY}0=W@HwmLhGdmxh_lgMI8 zlTH_SFCeiX&TREzz}0z-I9mS7Q}@QHp`4RXvOZDF?^F&BSR(vKzyLUJwU4KIm*V_+ zJ_Z}292qB-FpvXm3CJlPPDi`u(4NgS9BfE(UGvz|2jqx`w8_1w`aiJ=q_c=!p89s0 zedO8kTrobBkba|Y&2U2d#s#>>0rmZYII68wM%Avi*~0_RDd8a1KG0Xg+&7-3@bI^8M-(o zI_dRcg^on~UK5lMYk0$9fL8KI=Ow-&kg&8dI(P1{f60!!QwIeDjIsU*EH7;l`1+gX zm^eu;Bl=dT6%zZpi?5kMh_Q`;FgeDg0sJ>4NKir>;RHqQob8ed-9}Qoo1|WrH_diKTs9w?7s{LQ2ij z^s4Os&tfpoj8mQ*d51C(o2jVYvvTtRFc}T5Ddc`(co7r6KQ~K=#>0F13$7aIpe~y0LHYjF-S0F$A!1^SkXoKN7qE5M$4>OFA1p z7`=3nwwkU>Be&eXt>6-^mw)S~Mjp=~O%}W@qEF$K-A68)VTm!XP0Xzcj?UvIKNlo$ zzN`~7;@doKDv8Dx%qkE1n#&7X^lh&62G}gKDp?E1p7^^5zVM;VxgTT%-^;!nnPwfh z8j4p&eESvpz*=-rd8VX(c1arGcpDYd_RMO1aJko{ZsY?MiGCp96nF{-1FTBtK_`HE z<%~TH;>{SI8>=>+7c)xDNORL~UN!$_$P+StQck2wI^L<#(LJkI>gE72Y})o$$^*J2 zceoCFf|iG=!)sQOof0f-;dLt9HS-!$OUnp%=St0`h<9#?hgBahxEYGyn9CD0C6wZ@ z0M-ENVK%sVtt~3DgUspF@@?FPc)aB1WOCrzBuv(can$k$Q7@^l-iGy83^x5XENTti zWyrgnRZs~ZXNcaq~sfLganpT7qImviN48@pDN5E zEz78_AEEm)ho!T3jsyqNT=*L%k#T$!d-8EIDTe$_DI5*H2+9H8x0D%Kx2%) z9`;1Ra~VVqBjWb`QS}iZ)G@|y)uF+ZfAA>P!wM$@KOhz8SzPKb_mkuPVx>GFI#%2*x-5by zitp?et)AVF#^9L@oquwx=_D5U7S zSOv6wN|+*s$F9@Awbs!>0V@Hdkx4+%yz25F*B3`HIFj(Y4>D!Fc6pQdn^*OYYY|VT zg1wnQ4gm3!9rKKw>$$p)!@ElbNNG9wg;5HO2}#21I{#utHSXRkVSwMQRw+@k$1}9j zBQFuQJCkD(DI(B;07cpU8A%`DN7lCo{nB*>X|1IdPDqx*KYM5(*I%(+gPgN>x+nOB zLj^_%rm!(Z4M8XrwMtKJ!%or%fOSu2iiYT(eI$_O*ds2|eS48PWKRUQPR;l9qvmj( z0>--T;5)>k>IZBJm&Bz_WCd#Ssf!uxQ~FJhjfYI=& zP+Xar89ByJv$|}Zr{6E`%Vq>KfR3lR5bjqiaL^~N$;s-i`kvL7yUmZ17bjIoBG{``do&z z?cvxM&D8nfAu*?&q7O45FBJRci?ZMGSk2?3xuTbTZ7+zYJnKtc)H%{WYH(E5`1(Ui}7>6^{V3SUew&-4vA*_Io}SUT_!rlVS?o!CNn$5RW2yceX^=u z-*nqEQ4<*tNOPzBs7pOc9YbDS&%r{HqQiA7J#u+&8T_c5VIZ3xLS_xBafA4NXt%?d z%0)*m;2jdpoZwZ0(RBFlVkf7;#y#DX>;8|BWlUu+#RGCXa}zyGo{t^E@N zXd9;A{86wgjS@;?$4?qZ1OAjhfCJ&DDi%W{I?$jjEVFYA? 
zeCw;&b|n8Um@Nd?7txlloVPaQc6kv~-zBg9DUR|B)Sl8KAQ6_AJ!Pc%AFZEXJdpws8`uJ<7F76=|9dQc@%w-I z*8er?KOMpU8>#6}nELsdB&#&W4vk zQq*89Z+z_nf6w%5LU?rUy}{VKpU{Q}uqE8yee(ZX!@@`a2Y=PV03b}G-Xfw&%<$T^ z>J+5f?9_V>@n|+&d-cUJ*uo1}Fwmok{i_}PbMlh{SH+y>*U*&T-bMPSoC){&2rVr# z3AZO3ZG~ytF9wVZCHv%Y_~>6+s(<+>g$*1q!!diju6?D};J(qU9=@Df(3PC#Z4WiK z*J(=LdF02yhU*&JfZaL2wwHZ4H1Y+GPX+=1grt*d^*x^gxV6@nV0z(kpnk1y776zw zjqk);wos=}DT00i+1a;O5J6S_Di|LY;mj68qv!(EOd{yOVliQ`b{hV^3P>?g5p&+) zIqF`If9JpZ*b#C^_@c_h|k;vfm@yfW$`s$itP@GD@ygPI+68tSdUDoJ05n zrQ#bCoS6!j90w!H9m^>4;Oj_6~1!| zU6)&c>K${>{U22CvZFWVw`X2bK$vXypsLG66@b@lEtp3PUU`<2o&ln>(v2KLtNvaC z+HjDUES}dU0Pam@2B_{KC(__%OE=8vIh(gwlV_?4T?ij*girwN=!MdTZ7GjT2YAbc ziVxY+@p8O{g=T6nsfesOI+lazVUz9S8G!@Esr%$j&j5w5+Lft-xsgY^f>w6-5&(@O zZ-7QWcMHfSeDQ3%-?3%aGAFNAtY>}I>F74YGM0ZM?Ra%7by0T|boUmc@Py5zr%fWu z-Lw6>gq6@eIQR$QF|`c$9hD_Wg)>91yW$&re~#j;bn$s?$2_XiYuM%WX!6A&-#;u}XQ*0`m6nkSd6Xdb7@0*>f)u8ZvsY*4K5hq2YaRm_ zixO?b!w z5}76&nVqfxE!Ow^fWXIEpS|$4bUdBfxVi5Z4uRIfobzT$)lBW)v?$6L4CB9HJ=DT?|^;Widc7xD`*2HT`6Hd@>uTf=E z|EOTt(()c;wmftYkW8QsA$e{&o?9JQh}92cyji+gfu+>wlNbtHv@_T;cT51ht^V78>Lnn?9Ni-<@*Sbn^hJDgvmw6r0U%tSPktX^xwfxCo# zDsqvhVe&CvnCB%23a5ADpB| z09-uEaWn7Cz=q7Nri5qNn97h3K$RS4^0k)!M-?zeSHJFUa;trQjnoXF>5_O2^-n$i#(q|L0oyB4`z8bbYbEel4d(dPWUWbiN*mmUvADMd+QHLn z;sw3Pcc0?(kJX`7sB-r)5H?|Pv%OoOYA7PyDFE)xzm#~kb*3j@|HR461Vq`w6 z!fPA1Sjp%S8?owonN6sA`a%tEmUJW7Sb*q^7_E!6q-pE8d|gv7#=DsB^@nWG8zYpu z&Uql2#|LB9DPPuToAiHOjte@wwc*tImxJ(MZwa5}0Y$c~;RNnqOyciv={#Yc8~a&2 zWL@eyqIQL9xaH|$m49^-w+^CllCS4o%fXkm0bQKvBU`9svmfy3r%5YBm%YzMskd)ov$XGsEudHE?Ia%L=S|7 zvQyVKRIP26uYPALx!xO4YF#wmb_GO{Ry*FI<;H`^ua_HZ0Ade45WWY+-iTezgrR8I zGJ0g8-e0~v@jI{h8uWk}5Ldg?6O`bCyME1hputLW=SO~bcItLfu3Mm3UH7!=u*i6d z0xKyqK+sek7p-?Q*Bkiadj1GuPJkFzu%HJm2A57UU7iYn(mG5Q*paRW^v+0ahh|-* zym(1~{chCQSUyp7D$(?Pt_D)IV5M3Lpd<4tA+pjZ;&gMh_ji(~E}Z1PrOfQr0aAok z;T;-T-%zA!7UWN z;Bo{`uh6zL<2fcdH_^n>606T;TROiL*jpve`zn=U35FC0bo_K*rvrPoLBt-~g6eif zLpatPJ#yk7;Goruw%W>Qz;kibb+NZZo`)!4fYA9=!za`rY%1`_gF{aCNx`Mh&Bn92 zE9EOR^r45j(gt2Z=>Bs|_C;Le7GM4HHVUPLNK4Y?l5?Q@4Ovyc!RL#Q9bQ^Ky?f5z zKk)zQd+#tn7gt!)G}bp0JqLpcuMBCIX`mGJeEr({ z8&0_DGa!E6hlx}2Y`yh%mnsEerrI1(O|Mclf_^IB>zj0J9})8de2zyF8M^UH6Hz4# zpFMmo`(^=}rXvfGdrUOBF*NRgshSKkOi7G)4@-d6wU8!k$%h?MbtC}D>&DT> zWh(LRdHu~$M7P6)SeT3T;x(eJ5;M{z!S{IE-{gr2Ri7+N6h_?Rc8^#^4%MqmF=u7BFoSRS2Fg|=- z1JVd8zSwfNJ~!kQ&KGCf0_I`6aX}|z4;iEZ`MkL;293H`8O;lQ^g2z0u8hD@50Wp5 z`_W}v_UUx*V{8roO9mhTi$U86(=W>~^jV*9a6JcnqB+-J+F0o~Z?{L+1}6UYvV{&; zjXt#YPjuJM(2|q=3O8f`;sL-bDUVw8igrh;(inAlOv5KyZNNGVFyvd*T-_Gag%$L( zyQJN0(`%TpPCY@BKr28lk8?MSN*3+%>PM|w?mXUSm3V9UDcpJHXuYpE>iQy{Jxubp z!8H|jO-a*-dV7FsacJabPJu{pEKF$Z?K|nUCBNJ11OWSh_xK)Nat25RXfnCN3i2sdZVZA2h;zpGLKoi( zCVV5+i)!*HQFUEisu{QL$Hdr$iOjoRi}DHrXpAdpBe`$Ip5S$z6i~}}vE`Qo;SD$O zg-k#+_l~Xl$0h%Jl+f8X5bWLvx9kk6itudTIT`>`yVO{OB3;PQi(~IGqXEf0my_Fg zO3?>?dG2Sm^C}_~DJ8P0vy;{Ai+Ho0)dyf#vO)Qt_GGEu4_XN5*TISbTK^U011c|0#ZDW8}1ZelcdPm42t ziiu-VZ##*(@;lL5p{j=hz&z=uk)5}hzmA_Mx%dLRCNyjdn4;#ZmuhJWq%!NkZqB5) zfV}~Eg+8~OtYyu(dyz1T!dXHqWwLi}e$N5QD|`eHd?#+#UoG(j^%r#*#~lc~hg(En zh?J11O9RKlWMtB1u-R>4V+>)^-AfP9^ufA+!&>3-j~) zA7$$PFQ=NAkgd``wEzG+b1vxR24}8(A!vX7iLko`@7rIa@&&EpM_i?B ziAY?KN*u1DRHRasMW)0vslxswpQSUOpaa1rb3XHL%}$WPd0scO{gB}!PhE4bZ`{Kk z>1$z5ts7q0Lig??{Qie)3jx3;o77yKfb|8pYEm)2k=!4e_tDx*O04|(T-z$=Bi<{) z`h1HFpXvhDL7KvvLX*Bnjt8$dkaSaPBr6YhXo?-Js+GqII-LLU#6O<>+bd*c3cG}( zwT78p9E^I~&S{Fj{+JMj2pormEL3KuW($wlqzGcDGpC-{H$sek^jUhI1|rpjQ%~)lhpR);lB;YKbM9OMKEg0g#R|;VNcS6D;3mYWWikD zkhlsC=dR}KJjZ@6v2g2~-&|`2sa_X(z=85V5Bv8C=)mR5ho4;eaY$TqevWXaT5ZU~ zLVQ=dtr5m>veOb0o_pGKz+zI%~i_m;Iul72~}ktd^7Vg5!z#zO)L*h(kt 
zHGcyCZ`1kvL(J;LK0o~=QWRwA^F?E@N4xH~8TmN|~Y(`afXDZdxUPU#Sy1(d? zkPxDgoNLf@!ed&cx{O^xdx=wWKKf^$2plBYgci);4o@5LHt%Y$;|C!t;cZVpm4J!r zd5?`$rB+|~nR>6%`r3js4ca@#P7t$2823Cfg+Sqf;rwNngqZIy_IZkk z>C0;uyn1=mFe@6ovi9Z3__wl19MtKnebGG#Mb=<#WVJJ^k?_B6G`DO9p!A&EmqgpF zm?f6Gw0JgA;^ujCDHm}$noZ%-9AEJFtzaqeGCAA`j~&bTn5;fUO+^w5)UoB6nZ)82 z-szEHxH0U8N~|P?>fNy_D8x}ih}iq9uEwO9*mb?ICKi!M4`!`DpGjG#y5<6--tR9O zJjAck9RKTetb{|c7MdPrcK?NTL*VX$o6spa7Zp+}x!$pI+r@H@kpK0g6%&o{zIge= z^^>8J{o2P5?poR;l+2E+644o=0pmD0;mKb~#ly*>pNjujlkk4acXdrsNt!ew3Q1M^ zV-x-REB|9;>@Q@m@-;>~l*$b5bK6tU0;hDuC>G)!)`Tk&0TnMKy5rr-t7vb9e}CW~ zU-Lgk{W&1=Lg}(o`ooXc+3w{Xj_3ADrx}4+;%f=f+!Cy4ig$b+*t8jTQG**zib{~s zZEb46Zb_%{HR-Z&?ittI)i9xZ{zbs5<|iGy6b&&`2jAI)yN&Q~_J_j7zbRkd?8RR% z|Gzw>{%2dk-pxb-1~iGF`i@!qV|@Pl)fhI=2Mh}8r~mTMUm1;m*)oCDLK&VGdJ|&5 zvZZ$nJJ28r(KkXTY;O`m{^g;6-|2s}4?F3e|8}N_BJp4M?e5?GeNHGa?!!%v-!o$Q z@5Wxp0uG@dc>Oi#jxGE*L*mbdfUF`9|Gw`(9_)V|m;ami|C{*#X5Ih4-tpKtS|>%t z%??MWMY@#MqJOi@?`UFg8Q|-)%Jjnh-6p-`@l9}wsfx`??^qUV#_l-9ktyt>P=7x7 z7VX?qIdkVG<5qhUvd!%Z9<~3kwlM_{BC<;Ow;Ped|EeSYBy@lQ4O?(wBe`Rsetmqr zmjg^#mQ(A|e;GVLNkKiAG5y`sh#5GN&P0a6!#~Y= zZvrqAgh$4<|H&$LOliA{7KlV+0=>AU*N9n}l?!;N;D2#TA48WkPY-FneM#}>-Hx3D z_fm0|G3niZ;A6ZA7cw&Xe9e4iu36z+zdP5e|3d9KQz(<&bJZ8>$N2;xc;3{(H9)C} zeTb*nRTe1hBT>WQu>2!uto9Zy7lV}NBw|>Q%$&ZmT2Zm#5D_iDH}a`C5YBm`v{98r zSr6E`B`jO4b zT~Vtthua58NwMto)K_?Sz2Nq}u2T`LF3_2*i=?8WdNs+NvL2*x|Bq$v|NJ4aE2LZz zhJI~2etQ?0(f<7B!>zI6T5mzG68W5fV9dT7(*0vXpsTIdr+Lmk=P2Vv;WQzUo*SvD z#j<=mAv|Me<@+o@yD}>}Q9eUP>Vv)6d_!;SN@8 z7_C{Vyd?2l&Uv`*&7fP5eSJD!6|p;30AP}Q33;vF&%18iLItFtw{Do9h(1pLuQ4MJD0#cNwF z{6is0nJr$WzK**ZMw@6xVI)r@=Dua34GXHXy`L8_7%eg8Jo^5+j(4?7U93)cyFsS8 z+d}N5awqNK{*y$kKg~+C9&pak+)&>8#WK{S`w4m5b+}eVXqGc9coYa+R%tBY*P(}v z*QY(+qcSP*mnW!N;V8c??|S|3N2M(eoQ|gH6g6cFH(C)=MHW?P=g^OEjD zU8}6*rYyet#;YY|V-v!6Fmc4>W%;oRw7`;Ws=&R0{A!0g=3)EMw|qLBMPJ$3 z`J*>BixsMAM9dmu=pKx#ugQ44HFAAe2L|0dQ?;n?wHWB=^bQz4`14NviASNZ01lx1 z^Z)*ssP|=U2P-{mKbPIPAEi^Eh==oN5`{||{Ww4_eb@C`gu-r-FN#TD;6vz#NVp5* zXN-(~1^2&3YS0Y)&m+d@$~;fBl&kj>T|QGw=n_{M_sDUMUjZzT@kVY%WyqtYf1L{EhpGhwY77+2W;n*>mt`LDhA#$u)IHy1rXoA`uM( z!dQP=K7_Cvov8QV9ruxT1h(flZT_8(%t#M*5ax2{1@y=%BxLwrprD9HpU#m*iXg6J z+b?tBG^Neertkg3HS*uYhsnrTe&2Gj5?2*v&+?RGVz9HKzRX;;^|#;;ev;`sMd)I3 zIf2!oH_;vtJnvx*E?7Pt5BT#+A;bY%X6!o{`saWCVdm~5obK~dTwzh&j-)U3eIq4~ zmOtbsY&=?(Ors2Uqwrw!(WKANgn&jDjpf=rJ4THkK}_1gNEpY;Kshm5wW_jWF#`WD zI3p|W15=<1%Of>f^_mcs0u6y|PPolEGJHlotMy`q7M+iKbJbqqMf$CZKT?7@tx~im zn|vr<=U|K30C_zQyBTU#`F#2u=~#3_xebz`&o!{c153IhVUfioP0GhUc|F$hfJDIi z)s)>lmDkyB>|t1z*7`s^=jxtzlS$%CaL(K5BAsfnFdnQ|s&1#RZES3k28K5emRsQ# zkcd-&!ofayDkVCS{qb}w`d#K4J9f@2)wXjrx>7OE837v|10)BoCM2ze;vZTPKzP)P zbU-XG)C;w`cxU&PYJ7riT)&kPO%`%vG2=P*a@J32J3zd(@Mzm*=)SZ&~8+ z;DGqH#-;X%^r0%aBdO>i!+X^a?=A- z5*zx6bj&`1_0ma4+Cmb%JyPy{cxwQN| z81c`f;~PO^LbF~F+o!s`3(-%?PP*d7d(W|`r3dhbvh5(jYq9F$eT_H*j*D;(BZGq0 zvnJ*34Mdw4*V3i^gPN5Krq;tJvZYb}H(t_!`kqfW67h(W?G%=i$@SO1!@2aM1s6*n zjCC@WP0RYt4o_OM;jkK8m$&))WE@fgNi36j{6x4=g?LQIP8V=Ca_@LsL3~hHj9a(g zCTqB@b}iC=?3~u(ChH<*m4!=qkw&#vn%5;t{c+Ug;q~|B!xPS>dI57VzjA}%v|@>c z^9P*Wl;?IWtisfCm#?Nu<_riZeLP*FPxrKtZqiUr&g}_EGQ{CG^}JR2t-Z90t-`~nc0g|YIj=ZL7;=o-T^1YR-=V?R7S#<-#S z!FdKr9)>m?K&3QkG(U281yefXr#nQscig|Ogbu>&M|C$QGgbAR5Wr}*22llaJr z zJLZ?ox6?OSU|DTj8<*NWyrFAAFmZIhY;P1vb;5ok8Pc#bV(B?Vax*KfbT-ISQy{P2 zns=;K2*$G;wO;$akrtB@hsZOr*SG2NWFBH4_aYa}BXrEmpzHrG!G$kgAhoq5aVW>}VkK^!NVYcqIfDMG5OBRm8_X>T{&WuhajGC zAYlU~ZlC5z;W|}Z<)pSx`+1;0jUa?0nsyrYSx~0e3TO#x<`9EI%`=<04^7kCtcD!( z$Jvk020F!(goGRKVIYqUe$nEvnWwK^@Mx8PQ(l~RKg#d=B;5OOdCt;oeCpm8-u?Zk zC+EG4(v?)TwqQ+a?)DIYck!w$6+agsECT9fD{|3G^Glm 
zIU}ONZ^nQAE%}T}VzD@zx)!u^{V6!~dl2_SD^j#?R3W&F#@3Eox$SzP*P6NDNsZE` zugdk^um0bF{ul0l+8H0*{kc0pp1(sm#NG#iBR?5JcH`0z(|jIbq@IFkPnx##q`FWu z;|?yP1~b+hpz&LJGs<3Vd;JNc4Nm$3-?)JI=sH=cgfU(kcA2mN3WqmfNJYW_(O5q6 z%v{~`n*e21;t57|Wp>8)%+{91Ayc`QR@cF_G-4@vNt1k#9vPZJLD`WW*u{f2_f%?Nya+KUi=V73js%0c-&hZwaYpEp`^T~jB|rKm+(q$PP8YK#RyCg zHan#D-e`=+ZNskXhB!5qo?)4cAYPn7zkUNaVRRV2p_!}Z)TW+r3X~VimkU`j2-*UJ z$k@nWzw%HzG-;Od#_vnm67NI9F_hlq-L@ZxznLY)xW(EkdhV;C>ztzRzau*s$230? z3W8dlhK+k@1%Hl+V1@I@U;qPRhwzH`1h7kg(M*-?8FcKzJ+o+o5l(@^-evjJ_AyIn zN+l!ib2mYMcZbc@6PYPOegnCz;mK6no}R>zL^{?!R|RH%!z~Le{Dn=j7ihztK*xA5 z^6{kAa9^~T_*yN7aJGe8`eaG74qO4=qZspE@D*E9#VcQ`0hM|7hJr-r(vsfmRlm=3 zYIPzgea_0FAR4#cLijjM!`Vj)eBKu-gN-m~Crm@48T)Ikr5Qlmz{zN+c!=R2kNbb7 zX~vn>xqFyaShoP=I(}Pd=b8Y4hT1$8TL1Vnxt@5fhif6dCRt2j5-~6@;R|+a9eknFR2RfDiav4{=U!Zz$DJt(~YR*OZWb6g%o2 ziW;O$&TA^BEo9`w8o}8#nLRc1bHCg6NNg3Qmy~>!&;8Fq(?O{{VNSS8#2`<< zAdCZhz|h6$3q!HBe@~Bs;CqEW`22XV^vL&}Qm5%VC59CyISi%Rl?Yd9y(7-8@&}{_-a5MqZ2eYitbH3jYQ+Ko^n&Bu_FAzLpLFi(UJOxz`EtiMCd#bzsUKC}V!aAb zrACvVpi?SRu}KZybB!SggU590`Z-w8=`(ZTy}@?OKPI?^5k%Yjw=*Ydp?TN$Vv+*i zIQ@!z{wA(AoiZ+6{LrXPzB2w87pmdo=5)+P#YrUwJ42oQ_hLx#5cwv<0Ai(9!}vLt zc60&kWqDC(CK!o4xf)-vpuEaqUqxrM%8F0(c zdR{^DTGWJ&XnDta$B;)0gQxCC6@j+!%lQ}5Ik=z*0A-4VQQKyaJ6Xg0}eS4^_kuEq6rMye@Y0hb2 zI>TLaDI+9Z>Q4y*d_J?blEa)bw8ZmMW^31)Wyl|B;t;1?>Rmz>D^el~Iz2W``QdWm zq7+Qx4{ujuT;>s$4=$#$FYQ{nj%>KO&-bN4?TPHEg4ivY2OQ44v@B1;Jr)BW(~)bsY4`c5Iy%Mx+sS9@jO z`bWSUrW(w%6z3Edj#{W^K%pGEn*Y+q5B+b!|{@H1Au8{2q+=#&R`DL z5BHZ(S{~fEJVd&cfB>auF=HDvt}(_r zzj}m0V$Zne5d1simUsoY$jL3UMGW~)i)#&6Y?drz?iZ29zL{3G3qXeFzGJrDC=-H+ zm3M@N9$HpIFKa+IB1$aY(J(onQYENMIQ49Pc{?pW(e&r{9KS9lDJQ+ zh(Wy>AXQrnhF%Ccb|Kz+4}gmA-GVz#6Q+iBvqcBIc@yY@c)R09V!3$htrdWo9E zx%~F8(7pV&pdwWT{(d@fR|=<`)0~H*RvrV4hLP>Bt8Ud54zBymxyjGgyn-)m%(lgF zWtOj|I`qZi-Y%i%P7e-$()*7RlRmLea}>``MN2yFY%Dk;Z(|M&QhhY`zI};g@3w5( zAyb?gw^HPq4`f8hr-yjo8?U@kIA)T5VClXiGrf{0ycNqjrZ=~9Wv|aRj}{!DIVwH| zUsu{%ED#>zDVw$U<(D-))#j zxb>Aj(3AscO>~lDow_2ChrPnU3$!a+N_++ULqD0JE-<)Wil@gZO!MGHDH(}hHGQX~ z2|;ObHP(1R%B=VoNpfB_?UU#R7i@e?)YyxEbYPHI;`2@L(4^ndWyE?V-p!lM#b?)t z@&LC)92`(VQROb}TeY4J#I>I!jb=Xxq$$hBpho!D?u}sM6kizA$;9x91#nfH8!YO3 z^!~1HiArz$7(5n;f#}7%-abTv>^=!0`^%q``Cr3%EhMbc3484Hu5dlZ_8!hf8 zr@j~ur%Smgeeb93bULj?wq5b*^oH}Q!IR#%tly18L^KWLOp7INF`VP-TL%{#N<{^Z zpD8&WShx4qwO|_PqZuYa&$RKPVw>-4tK8w#VF8+9%*IMGF}I?V8I-TMGmkHSV=cRA z-O4kdUV1~HkN~-;7x*)zMl?5>5a8MYFBiWpXOwAK2fnb_$qL75#lPq)up>J(etZn| zU~il%-%|IBPCsPa#&er8k8&YAzIdJLIyl;A?&j;`w#fHM0BOeMwS9rM9w!^?;MIrS zFO^I zP!g7W?G6Mm^C5+*9*)jl;A0=3V1=L!tMr+o4937H7gt1R^AT!GCKesbIH^2l({ZC?_HK<;{q+iNZn3@fSQ4dUAF=bN%>j$$^J7R zs8;j=*BhEHTi$bF9~}S6{cYf!wBC5OAA6zuc`O^3PwY)pyHo$@#$Nac4zzEuO7v4Y znKtS{{0X`#{=A^{nb#ddzo}2Vbm{05Bbnz?vH)AyYmI$Erf|}Mgd?t^&5WJ)wy;ir zU@1#*eNE4(A2MP7Q5Q0?t~Qew;74cLW>1X}Bbo$`Z(r9WG{A+tXLY41l!ZL1qe5G* zHS{4?>I(}&qQyQBw0-Q(JmQ1}0}|6Qny-0JL$;DoBUgz+{RDjL9QE}dA*r=jVw0yV zc?MygyMJ=aum7Na&?`zYN=!g|Z9g4(n(6tzwylnra_@q_e673a%M^i^?*UZgB?xXs z7S&iozu6axg~1Usj$HlYx}K4@VldU&lm(65l>=*eZ~@VNC;PJaa=ht64vB0lXBo?B zBj3MCxuwA_%_^rs^^_U!BUMX6$bzj(w#M1Q@hcZgkh*7Fg_B4OH3I(^vj)RHa8Dex zy+7`^%cVBMv3xGA9v?ZNNo&<;eWcM0Y7oYvtk}xjWL3SEDzPrG*o;$H4Os3)dKmun znM0QiiU|2Gd326fUsnNl!+OxTu$08*%1x|r3@VFQ^^q{XGD<;u*>3&8!B4s_jqrOf zxmPVzJK-te4!|~18H_b5-=HAbmb^(=+jsQ}RMS!L1d)c=?`b*zxp8z~KR7l&5{%!& zWwXMMFK+u(?lSdmOH!w-reiltvgd!zebg*<`jg~aRA0XPPP7^e9o-E_~7qQYkebC4~oFGOTN&vJW_@FTtcJi z3g^N)Q+$))UgsX8(bqf-|AS(Mm?_uvj8LKitHq@@TiA!RE0GcPlEoTEzPUt&7~A*E z4MCnF1FHAV&q+WDA}pq<4p2inE4nwUI^>0+dERvb1qGK>knTp>o|D+ zvn32LW|JE70iHR5HBprJ*WZBV^M##msmmdY$SEa?PBcqh>bWWJw?RZ=YA)U@bggEq 
zCqBA?99Rjx?jf_lpI=Z16T36us6nW0u*=JPU89aQKO3r-g11RkYPbv0m=6`^qLxCv zQ&j;B9+_}mrCEt2JuA$pRE6Aku!7SzZq@K~Ja^*BXyN@DmR}JS<5E2@A43`AhJXs2PEH$rV_{NZ6#k|O131NiQ}|Ea-ri2?a;a7C>2c+RbWOwbqSr&i zQ%v^fZUQJBKVexFH40`!$4qs7aY}hf(7M~n3>BI20Vv`a-;HN%$%pJ%?NKR&-_}b? zGOt(xPRAxZG{9!QLjMB%YV+G$>(_MtWbZ3R9xJiDqhybMy#LUs_A`_(2^hfF&eZY% z#Yy*Iyy<4dnYN(gDwvN)<+snd%2M%pcThZT+53}Zm=m^-~-lrst&XOn#{5vBr z{iRP@oZP02lqY)IESg+RKmA!_Z0gz@`cZZuB^rxe1v7C?HO>Y%;1wODCYtgk7zWuDN`9DD68quei98o{GJ zKN_I|l-LfN%d1o^*VS;24jK}_G6N!@5>BRJU$!Eu+LA2rRO%Ac#cKaL_E3kqFPyms z#@8ypi7T$DVt~8!gbqAAvN9>@sU_nRL4lOc)S{W`>r~p=e|;AKg`3t>LrJ;%MV>A? zjP!NKX&xZGd#g^_y4t;vgkY-8F0L%cV?=MuPFu$SND63w)WgfwQ?1K3FdEcTZB05C zemXI4G~!ee*G|*W4dCfKZJd?n@isRbg0+_N1Y^#9##9w|C0M^$>ty--vN)nH7WS$%%P>iYDy=E%A<*qo=^^y*=MZF?t)q|xI97fscuyepK(-&)EW@-r- zI+VHn?L6u3ce7iX8sRJha_Q`;nnNE$F}uW-^Ax+mLSG^5j`Xb1cZAb|%P5{iSsEQ) zLSu^;spM)ktfIgZY{yy^_#UbO?%Bj>(W_B`OMgI#Omxcm-plc zhiX=7KLUCU-XN5Q)e~|}9xZRfoG;&CMQUwUv%|%9e>&j>u8{`*M}z&F;53!NFyjE` zTkQVt>C@Q0SYFHGI^6x^)@3Wz(@%B^NM7hInqK);13`x#?<(wGFViQ&)@5in`}eJp zC5+=di3^g=PGwf8jPyb{?o?*?RM@eu?D-q@DQwd|Hdrr#d(h*g?|NfVqRA0}29BIE z8v+xPt!{PA@Bt`ZgR@P8bbjxiUn7!POMyfTk;k^^hqD0TuJm|w7f$O5O~29#9lvI3 zSir2`P@B>L)qyP%I=}!$GCO+Q(Bvo7`r3@&h$=651_{^jky%?m7hC6#{!XY}ad00& zOf3E-(JNMg>cF(}WC?yj>7L(Q>_d#@hpJe`(ruyYj#kfAd?U9sD{ZJr(TS7(SBh5y zdk;=n+B*FMuxwtk_fNDRrd)Q~%u>o^`-1f$D!^AMuPD8vkeBnI?`k(Uw@Tw#BA{Mh zzkqiVylXbd-=sg5Ur${w{$+IPPu%hh{9-TeS)Nxy3I1W-%J(aa9^>a-w&8_RNare1 zw-BwbfrY)~k*&|mme>{Q%%;!&_?J;$0ZMjc@ZL$9nMe|;xgnsoH}S6onv#B-P*ug*dTMtzg zhwO|jIs#Xu;pV9>=oxFOh}P~cMJq|@Y$Dvd&P!b zwaf|(gYL#z`pV+n->fRK%U|-EF7npyL$%)-UD#hoj;5w%F=J_oMZT7D8T-U=fwg4Z z+j{w_(bdxD(1-5dX!Rz`L*TxP2W0nle)bxK$LyqOZ|22ucSUGs@oMU_XV#XJ3;fwW zt(pb1{Kd3B#K+H1kMQ4ujK(v6+NxI%QeUF?`FizCTwEUdXx_ngT@>I!V^Dx|PD{ji zj!cg!&TT5tVn-#h8#|+azRDiL9NGMyq#zXI-HIN)OxWcejTYRTacJ%++yk@lsYm`%}*aZ?@_NvyY3Qp8Tk zy?xGlt#Fu}SduN&5v91-hxBsU+}!oMkeCEm+CW({%SUEKLpgd%k|k)NIv_lMGz90g z>URdHEp+R`@!yTJOZO`$XKdfCS)m|bh??cLrufkhdrzT@y<<5hMn5lX!ZttviX;|u zbDht>`aEF8B)#t&o3~gd>mTVy4%%kHynHH31DzXllopd;fAm z*fziuv_rp@yGotGEEl;NC)iwI|1D=anwv7J-%MN!ncDSa8t08MFmJn#ErisQ5SbsA zKDN^4r)vY)Z+DUq^`|F=N4Lh2Tp2B*VAkC83*#02GJ*vQ%b!zX8AdMDZZH@OYmCk*$wRdW5tFHDS z_;d`b2lD4^=Y$g5Hh$)opB(k9ip+{w+j@uJ*mTe@?D+u|Y~pWc+eq`p zXW$p&Pio6Q?fjpb>>La3uOjOn*GI0a{Zo;x)U&Ld{r$y;Pc%g(U4EmfTLRd}8$H^2 zBM%RSp58ZUm7B+NDA{t{4(}-cA1dr`zCnUbpf^FZG2;-8o~-qSVhYS)tS=qIlVPvI?N zW=^#-A_aPwx`pPM1q7y(;_#R@?Qz99^i{9BR6;o!-nB%4udgiU3z7IXB5<{X;@JCN zy_xu(pSdshDxBM`mL2(#*cP)>viviQJ(c9xBtGh9vS&$6nZLhUJ}u{huehT;eD%qH zar8cmIQ)>i_RDqkfso%g(oJmku2(gw4=MhZD2iG7L4AKKGxR;lY-^RY_h}i>PG*Vn z*%l;An@8O`>3#Bpxro%C2N*ZzH1q2#K9#Fa3gA2+3@8J__U;Cg+{-tKXfD(cYtxSJ zIp88W?OX`K)$|db)C^3pPo<1Y!H~^)2>IFsoUIWs?emv{{k!8SBUcMD@6f)GBh~JI zF5;|@oy=$$4yEz>D4<($wg>A43DcRqU=T|@A4Zjf<5tg~Gpl&!Qg|}G=u|IGxx0IJ zRvNBciyrkytuRds*l{PZ)0|r6qH*rNYUJ<=Tc?Nm^3Cj)=@H{8R|7)m12hQF>|%H= zcS2DWTc$eXx8MM)_gE>uz=iORHD`H0_-jQi#ubstZ+Nt-&Tesei1{mk<_)Ru=RjIw z>f>NlTq#*jv0FNxg<0V3zOmkvhhLdk77yK5ZtDo@^az8d%kG+A2_lZ>xB1^v{hWp0 z@v3d3st3?+#cUi`%H84%Hv+fs`L*j3vFz_%veQilXhFQeL#<}u0fO3EHTL;{_3@20 zKfTG7Q{+VDZ6>tgS`&hcdKLM(oH5m+k8zD|vDF*htVTr20pp#)XN1uLiz&umLWj_{*-3I(GvD?^uTB zg{A@eHjjpnHEbT&`YM%a+M0pU#EvZ>7 z19&Hm7?cdo?$BT%YYeeYb9N1}`#5G?558Qva*F_4{&}dnV&c53gcrk3ZG_QhH)w^z zLf6NHw#yMAfwgfocoy2ccsV{$vff;9E!&|;TjSmKoYMbm1@50b@7YI=pX$N71-qxa z-4<@yv|rnLp^^O$VOBzO+5jLIF~jQ>U{^FVQ@&n ztzn}sy9HHtWl69Vwa+TquqBXL^d=W9>A_mQ-}sUDTrxz)FuE>GCKMm)IS`3GJRPq5 zww>cFVo$COY&;cwYtFc?;o~zwg9$NaZln`_0blS&aqx8CSY*tGEFcraN1UAv@DFO~ zwX%gU>MB4fZ$4F{sbAe9@`&8v`AKP1Z~29G|1tJ&sw%cE=@?og{1Y 
zqCH=y`6&Hj-^o1MAhTfjD$+P6B8W6;)-*3yw$9remOxQCV%rBiM7Fz1zfaQFVy>L4 zQzoKkd^=zBF+>v13mD|pTMykj!SlewY=A}{g zl^!bQs=zHkIDO??Q1GY=O4e81P)3187}WO)SQpIpUy(Nwlxw7mM2U}$&(J7q zc8F4!#y3|ed+mmt|Vo`BS*TCruJy6=WJ1JN5rN|rfUq#5~8XFx-BiU za8c_pa>5$*QOpBw2EjD!p)8(c`i|cx(&7IwC*-Cl!gYHT9JegWIgwL&ntkq1oa3~-zKefnD zDEdA6aeJ0>rd4h&E_v4JCSS|RD>7$sdF@TsB!;eji*e9xTb{UB>vYwzF}y_#>UtuF z;DSif_lMh)rNCM`cQ+TNiv?Z6>@O2HE4O|(#cB5CdPK=~N#2jME1ys`uEGmz7g~cP zt1)c0Bk(snQwHT;{6O6waFP0g_-MY9i?ZOpY`;NY8{MQ=djjzKW8)5|T;a(=Q0WVN zFCm?O14nuPA*)*k$q&)6bxSzVnw%tx5{ME`82 zQb#&QKNzoevg;9H<9-*}!a{zGHy!6h&DySuZWf^VIA43M^R{wJpGvu0zc3of64lxS z@sI(La3WFCa4c+Sm~vp5$BL-&SzyFmb+7tyGJGu^Y~#Ton~i!W$vZY}N;XcI5D!h$L`BCvfu1i@!`Lz2s`!HOF%3%4GEbyJLSuSU@dE1Rp^Kg?Prkfw z{e8tX&4HL)I}D}AE@_Hi5+908wiSzsbw`K`P3u&T{XYBG`r@KnUm0ARL~6Ck;NQ9b z*Z2ZI3|2{Pqe4)!vDin z7)~Dhh_~*jp6w0f>>hrxRYY))&Tlb8 z9H=XlDp&+Gt*e;P2x0jcE?^fYl}(*-_6JDt?hc~&G6UM|YtUr#&4YH&@G4TjwlE2& z$s2}z64f;_qXH1)fi8AL)I!H9J4O&ZFIFm8+_g(frSu0cl0UxQx>E;>^N7939=rU4+R$IbNL3g3w9lg8x(!AlKY%sX=~{ zi|SbWL=WP{8cu^)lH6DyNa>pTptu%vJK{#2B#g8m{FBT3-1^g8cb(skv{#p?T zj8gG9Gfn#ID(_PjezbcKw_{kn0J^65>4ZviXyu=sDmBcuTeV9sLeOM~Dvcv9p>1GW zf4(aEWhwl(;HE(MRrJ|e<+?c>*cV6}J4afJ5e(@qhwZu6`|9ogUh)IU%}w=D&{qMy zDtsgRd8I=k_=SZg_EaarI?*fs2E~katRdlSvC2g|*YzYpvyop$SH5oszch1y`usTV zW?dJqO->Vp6g(2@*V&s$W9>$J%auHfjF&U@%uKO$-AE9Ijrf;(34as4LccL!{Md)* zte)XT4a30f%HsFVMS;aL1G;lH?SD2#tq@}Z>pf+;r?>+Ml7XZt|L39_N}SBFImt@S zIm+qc1>a4Tptk=n4e=1}+N4HDPq9po0yOb5$bQI$Z-t{yr; zU&6iaXlP!sRY*alrf~a;O^kKTFmd^SB)uT@G4R_CI(lyJbxqz7*~YtGJPq?DkstM> zvR;_5JBygVOo^|0+%1qv`9RqO%L6I!tX>Uc596YFs&hhf6vbdHo189J*`V}T-eeIk z8e}VHj*2K75)pvoKcSePd!G;NcF;q8WvYwA`uUg5h(`E9y;gP|MwPg532yu%ZK$F1 zLwj51)rEGezg{UkD^a*xOr5i>Dw+0%fY-59X`nO5Aa14Fu#G-rDg4VSrg;K^HvZz_ z^}Of(&+$sWSpfT3mVllVmk8d=oklTjoYMCn*I%yvgb&X27sv(%oRt zocTJW)j;stIn9seLA1}^p}nlTGo;%xRv%$w&p&Jln2ZZK#yYk5DAjx9?oK>uNFg*C zU+np1bhN)Cvh8UEvC=fT{z^0vu;EE70s5v7NXE~?%m-A57EE9sQYTH>6B>QjHOHTL zG?=ekl=169!=zv%!&>l2Au5+c&2PT+k=-rR7WZ*eBTz2^!g;-%q}xJ{?9sg0()2ZF zX;vAx*0k!1iyQSU@GRI(D~4iKN_cU91gfX)oz{!rXa>6{QcJ!=L0q))^J~&CVi~7_`TGq)|?#7lXOXn(*U~E!EU-)VK1#;rz1%g{=+;YJ~>* zdCBBw!>W1(lxx0Vg>7Hf$rnF&M)plj{7x@9;{b1CAf2sP#>H~5!ROnTyM*B9%U5-= z9QLK#Q_Ff-+UMowBy9crGu=u5h0JBQi7vaadpv`YD=iz@3*B(QX5RE;$F7f$HgTDp ziCz9~`L>rp4Xqm1lvR=}-WpR&;8fwmJS}g9F6Y4t)gUE75gcg@qEEh|n3U?G0-Xk$ zrQeJ%+Ks%wfK-Dm)>=7%<-DTfGU}~29U(s>elh0~T!=@dE9e_U7f-VPvhuGlfC(rQ zMM?mi;6pI*wtBVALAzaFqr<;Ms^q$}YE!uA5d|vwqUCv@40-}rPW#c=UIP0kG*O=I z1BJd+96I#s$#!4Ag;0Aa8a%ji{;cb-;dnP(AXqbRL#^W-N6@Vt`hsbS-D0iUM|@w% z^wwS~%gDeHv$(`&P}oZ%Y<%P&x`_2a8dafzst=#j*2%$G ze9G8%Oan8RblF_v>!(UE8|kxZ5P?(!WAz* zjA5A>?fjwsfA$4r{;@CcNzdr}`SCv94ro+Nkjx1?sY=d6@O|I4YTmu6jM13Jn^0&YZ4cux4-*m4gjdz^#49zjTan!LkRk`zD-vt;KCWDUK4mxb^2cRo4+ zoMg4@ho5<7`U89P+)Z;MFU96#={o*OWsoz`NoeoU>XNs%H-9w<|8aEWMPpj$Zh4;JMg1mb_F7dU>Y&wn7^_6?O-d7F{S<1u@nN-F9 z<|A|;tKq$oCLr9j-?ijr2Ai5OUJbyg_%X$};QQ99N&EB5Oj*0|(O%3NxF^;&lQqkO zg78J`)#7{heAwi!QHR9@k9PFhe`e2oCQ1`~PXj%=Ch{u!iIr1=g*y6X>&|nVQ@QVV zy4-Ib)!C(tq}2l=VfZGdu-UL4iHaK-^SRh z(dzWT5~!R|m2^dAf529p^cEyF17(BhgvR~t2A#4<^Pl`3-PCt84Hr&#PyPC2yF}{2 z&wKA(9ZV6-lHMSWxv`C9QGO|EE8=vfq$A;%kQg7=k- zk9p1=?DEbnR>5AVTEO$}zL@~~T~|}Ix}rB&18=$Lxa$4dsz2{pvucecyb`)YAyKQB z7dQ7T7rROn_SFz<`)f*2HUgD^HHa-MHVm@-TE6MM&w-iM!zP_BYJOGl2Rjw8X8Z)n z^LGBI^35)S;$4v?TmY3QWe#=71Ov_c!2JcwK4u) zs*ay|xyDzee|D1ThegI!pP5(cALJ_Nbu?MdG6fr=+kbK&8dhScvEn{C}y^0$%I zLN`BxH+irCI~~G58ldU^aNQ13zIBokH=;(OG~s*DK|x}zKBt@xMFe@ zGFvYnJ4ClgPld}!)!CueCuw16#sY?>*uZFc3Uq4S_QJ-cP*KOtKpNOuuzAEeayF~> 
zM^*=EY$gvwPv}i*W}EB5ZBKfEkpV6BCp_VAfg--_Gku*NE3#dGpLR>XKIgs(YzhfH8qC{BaYljYnuShR;Y@6>?$MqegEwK){V%EVu9KN^Agq^JyHXN&=W{P;O_XI_xIiV-E+tJ?~Du&kZ13;)?9O~IiFWUnAsBdQ!gJpr0X&h z{LXV};!@e~%O}D$C72{`Z*Dm~$h6xMgB++kihb0`(Iz1LBLaW#u{0jqtY~%QW;cs! zK!10`$EMq=)V+o^zeCK&$lFhuFuWVtN(eKy{1H0Z%-f8l&{ka4Jw|nt&#!pTFDgFy zc-*x`;td5IsWP7UTGT7vd7dNemdM}{U6opMX3I%z14pJ1^sESAOn_3$%)bK5Ng$(EBE1%R_=gH4!A)->F=K%gV zLK$t$up7`z)}{?Bopxs1P`Co9ojst=B%hA{_`&lUr`n31bTS6TXV0qz_NRCjEuP%{ zjwFd_V6Jzo%H(Odf+dq-)uagUfG0yf*A2Z_;1T_xpu|l9I7sKH8YIHbsq6BjT_Lsd&R0}Xls0yX*zA_w#kZRk>*JC{1sS{6 z76tHw-$58cqW9>G`|QDZjKG4%i$kgKuarhj)IP2InDn@Zr3V3z{lNm???cv>MIT3g z*!#Lqenoqop_R$C_I=qSD~Y8H2ghzPLHSnPk=}C|_b@x}vow^Ze@^jDu8YznvZOma zw0-KOAGYhi`yWQ|A4TEW2dUWyJ&b6nEPI^fN5K&RVo;B$U2^cAz@1&6Fha0!-mKN3 zb(OQPH28eaiU}JWHpn;t*|OabtZc{KT0F|M&KfO0u5|Az375#H^IY0TAv%wB`@tD{ zpEfRKa+U?Mvi_x?>d`*I{Y3_oANsSXOL6Tr3$~DlB-M7!X(TGDVE3V z2pb1c1utmqGjA(^Ne;@MTN-!%Am41rhdr=yvJ~RBa~6dIVx3$D6c0C8)m%-#Yd!+WXyW^~fD^fh#V2?bGVcXRt!ja}1DC9z<{a z*yL$*ayB}2l5c-@WUAj9$+hCve$Sywa(4oY7AMV2AIEORtu0$C+_5i*tPic_pw~A* zovRiDuih{;=hl)Xiw^POEL`j33(Kq%J=N2C1e;+p*0{~h#V=@|K}y|-nnrYkH98&q z)lW{htIQ~HB2R^TeUhDFXqkjJKGQTmzl7<4jeORiQau$`)8*n{mJ`}0^T+FaKu293 zo66TBg9ph@!B{OAv|KHz+_#eCyHm_Ahv(3S#pswd!T^2Mkp)AkS16_UrzR)bDGsYN z;+200uG=_K>@SVxwZ47M6URrFa4V?*Pn~G8(?C%_aoS;AHor_ibDz; zO-`WouWl6&z%V3K`szEdHf~rI-3A59T{PDBX(+zWNUuOEob4$?`%HUR z`Oc*AaZXbC_>S!~ZX+ld1PhCItD%^WUo8*R^Iw4E@3NG-Zl0q}R_1W>InZVHRsDL0 z^Qf3Q&4O^+nrh1Ih6KbCaSl2`UWNA$JFOnGzh&$Z%-8A3)p&4s^Q{7>^gWR7)=!0g z@bU?nxyWf&JO+Yl(PlJ*u5~&Cg|DcPbY; zjRW=x+iA8z3PW`Dmcl59mNQRyZd5Zxa%irk$G>7bDPm;K0P4(MJt;T7`3c{!c>}Iq zG847oUJ*Q(&+m~W1n@!MS2YoHnee&XwR5_NZ$ja}rp%$HcYg5YiIT>`Ti~OZ_moI~ zn;tMoB?-8Yi|!|Z<$&Z!T;k`xMA*gSDL3&Uo=05gpD>IzF2fFN+Pc_$0O_D1{Xqg{ zDx~}fkdb>=e6XU~{dW0U)MoMhih~1K8`b?}c4tK_FG#$*PyRF5kCmV_H&uX3bZNtH z2hYIL31sl%9XSye&V>Rh9xX~}lYJRN9p7%Z7+!V0btWPZ=2Kw5tU;-oAbeYMRig1v znQvDlKz=x+jlKJX-Pmb)U%rbj$16czeBBh=agoutFp9vzNt?@$cf%ch6Kl0!?w8T( zF-lu@TrAm$g$6#B%=ULH6O<^Yhpkpg?-P%20{IG;|9J^V&E%z@%K&$`-yBNlvA%$V zzTMLGE;MbHUE<9i!D!nM`yE9ap@u#}6YoXKDZwor_}tdH0~Rb6h}n_|HQBXwzJnJW zS*)i%?O*i!@rcpxTCharh2}pR???8-$3g*wD2Y)1)(W(Q5GCX0Pwtm-ZbAZKBG+ZX+NA`>mxgMnLl&*1FG0(fR9?~Ym;3CC7O!>b4D4EBE$HhL8c#X05 zu(W3yJjMQpn5f6*4aJFSPuH;`4qE-b*U#UDXv&*yCMeO5`X0zdQ+Ho}7bc3?wD7sV z8ZO1Rs>rVRQYPjw$FYj0K5gk!AvlCAx7%>)RQN_DRpsd1zOjdAHt53~$*`tvQMqXq z5|l#b6Cq*3|7C!ntbgIXw$IYCPy=CCRmiOkD5a|K_okLzyBYUa`6f>%Z~J0deVyKL zl{&@*ThznM|u~CdtQ|UVN7)-x8^p zILa$#!6ZmqdHj0t%v}(xQ$_@LK)tl24D-6mL5X zH5E~efxC|SbGphC3F%#jPRtYF*D(<)qX z=C3#OZ9_?!5KuczjxpuReyoIDt%L5p-k5#!M5%v9cc9NB?=)gDJ@%-=CpefHNb%~S z7lhdbmF_taN-=XI#3xMiKTWmJINPwj2ck9Co74D2wx!>}2TMbzd7G=hrFow06uU99 zAs?KyKLEJReD2aoqu95HD$ZSA8rZEey?lz}G1{3@IQiMNIZVwNGY@;%?o5*zkvhKU zQrsJ^sLa&^=o4B>i?d0he+sfJmvwV@rDbn1t0U$xHJGQ_b>23$U&tmXghw^=B}X@% z|KDx^b{49-)rC8iuUV)vw6X*1PB|;3mI1|)K$B6aK(-5?dqYGJ1iqu9{rYJ$2am=>@Ufx*X5w3INUGBnv4HAZpTHlT^BQ37HPA2 z*iDx9LZk0tPtD1oTP?g>^i4+m<)K*SPlK_plrXhxVTO4=xFfE>abhN9e8K_|(qF6I z_=vQIGCj69XOV8E6W`-|C>4?gX~_63R_v}Y%N~wszSO2Lxz?j_L-eOMt<~?P6I#-@3HZ>`-mmIg} zB_3E)ac5w;JVE`tk~5|B6ifd1HiAfafHv%e15hFu2Het&w`f;N)MW}0wDg(IK^2+k zUUdiDuz4sl@>TgSwVC;yS?>=Sr9MRZ2B*h!jz*XD8E%xLa)P)hp{GJPyD5vAT@@mb ztgv!~z$G9p#X@_5`BHoE_PAhB!PV#7zpI0z_ z<%YF)=_l^A;H_kFZB|bgwbQ24zHzxf{pr!TLrh5&Lgqe;PvD~Jo}D>C+q2-L(JUma z$CYP`9@p&bR+~SdeRmKF;fp(O_*-FW%XM-)XfY#ou8?Ap#Npxqd6({om5Vq(OVn_v zyty1JouL;&^#}5VtAOoTWm(;?3uw@QQ0~zp>l~bp#hSVKH<341J-4gcj-G+bR>J8s zgCN1=Q;f6+?55|bPR{LZ?(3Uke|ax>9!agRv!pL6>jbK74$b~J>IHOw!YxLQQrd9D znHG^P{CxYR1?*f|{lKa)$)_^&hR9r~cws_##1SGG|A|eBFyKJF;$J)j~CrC^SivulrvO5^FMF7zO 
z5J0_vsVCF*S6>Z+UghJ8b(3B5k;Bm6o&5NgYK!DnTkVC%D5d!AVkZu?pWS6DfBWH* zCt#22aBB=+VMYx$xyYK_m5lzVa&czhx^wbl63a}6Y*GCF(v7Y-!w=lC26uni&}V?5 ztAe~30E58F@+rjo%UbB5EbsU^kiEnue0@v{!E}*nw*I`~Y_b%o+{5ja> zU0SP8p|!{UjInrkaPbdkrgJ0jR~<~t2N zOi-#M%gOC(>i1{cxm68gSaRN)uGn6>%q)x2>c~1LJ9cYWWAWWm1wbnB%hgK}@CTMk zJ5v$Q!3k@m@H&9(oHhpJ zn|JR>%HHD^6yGwvbOiV152G9yaOEK@5H59Lwl2ThvkZZB%j{a7+!A!Ki*N(tC6De3 z=Y^Jro+C#bnzz;xzZ8qobEu=DG22iGAv_DNN|^ z+|I_1Kl-1IpEv~ro?@aoQKjF2NrzklkESN#=Y`Xz29AWi>#vhy6%4ptf%1vH>*F$* zx)?9`{{)_!l-cXtqu`LZZ$74|8FdV|0F|bff@B1J{w?xJR$`iy8^NfqxL#2zQ}UB5 z#3PID0`Q*^+J!$Ov=rY-@O(8@yOTDToPg03M@#zN*bSDKP8XZk^+y43D54T?WI4`O zuAfr76MWDS$C;&9d+e%nsWq);{Bf)DRqY=dh2PB%mNGYkL8r2o&=69)007Z5>+PRJ zj4r=&T{K`RR+W>WFo8~uT_hpb52M^ITwa<#rTc@g=M9`lSGd2JrCz~}Oy#L9i>-Y- zZ%~EGg=E#DymhS&{zV~`b)qw?ZhsGjJgKy{i~I!LhwFHZIPRzJEk4*Uf|5~dOc8L8 zG<+6+#4*H{H<~SZ^N`qyotV{#{jybut#`ZY9Z)v6Wtl9?{14f6*Ut9#aDAjc^f<=6 zUB30K6xasHZBe};@alcc)YgN$iqpQb(ywJc-?3IPd(li4JI$b71`-;+X(#AAT$nCS z)9B}vGy1c)-73;s9F)5z5;R_J&4Cl`_E*U41o=eU8RMCJMY)F$B{S>^+h@-7g_i_< ztLDiIi5^*cZ{`sRSaq>%o!T38dor&ZiWbjjU=@>1Yy=;pzaf9M3oQR>vf>%+FG2EUqRs@UkJP76 z$|t_e`m>tdT!;>^Ga5o4Nlx_r%DUo)iJqm_=D&B}-K>(QNhmDjtnf|xv@6jicFA8L zyHVQjts&coDP!dj;-YIk3}dE^abh8F5iKKRjXhD2DP{UzeyL~$8*O-Ew5Wm@;i4Qb zLdXTnR6gwxW=w45z}!w9r3-uc-rS9+uV2w@_%i?&B&L3~Cic&7C2){~TD;uUTEBzk zlA2qZ^S6!)ot+VMI^LJz8F%+O&6GRq>Q@UzD~1K@WD7Jdh-Npd_YlV+n2o;=APH$X zMvL8sn^9v%IH536wQ7!=cW5XK@`_u@Me8!B$5iWHP~1`z!2DSEVeFi_bWuRu{SEN- zR*OJ`tEifQir^=un$jvxMNIcF-8;IuJiK&vsM#vzFfnOK!{G9-PcrzIx=~_J6QR+r zMSN?``y$^YVJ^Q(p}E zZr{R+ukdffux7;h+>uGG9? z3q(L>jXi~cGK-ML9sU&;a3+bJ5SK28g3?*=@F%=A$@THpGQo0?vMHPJ8*M);vTAT5 zE`&&-=KNDIwGCHX*-#%}-a+5Bbco%r5U%@N&wS;wY;Aa6}qO*mPtm0^*+t=+G zl?@Vj$=4pqkr}a`iDre*3nud=@A7CLb5|mqRWI%R$o;za&D3F~okVp$UakG#0gxe& zyEMDnr5|gIN}HGBi<)w|eOaev4#~V|Ud_p69yKkx$4i-d$qB}8jl*)aoqm{isQjjx zf@8B5$y15W{86JWG{URa`VQAu)GsxvA&6LTI(@7%fS_p=Vsa_B-*`YNOuuZSF|2x0GfBN z#0|xlWbr)qrt)W?3DEd@Jg~Bh?b34&jEq^_UjhufoKj{^%`iWBw}PE|g2fj@u~=DK zD`nQ}VAr@7zBNLwo?ZqjeKWO(!LE|8Vs4*_wZj?BWt35ZmP`u7-79B!>hC~N65LL8 zdmAiDJNag1PcOk*|0R6tzMLBd8;rv>)On|lm5FBwyTRMA@&*?9>$^J(tkf3|Q^_;7MGt~Vf-sJ{NunQ1 z8OMUYaqS)7o>9X_E_Rn`>bo}FqQOAG?U6&};{2KkdcUQQsmGblCeigS>nYTl75Gb! z9aX-|*s6>kzlX$Uo-c5PlM4IFc>!t$tVoq-iD@fW4XRqkuM{t)vphFzRy>Int;jKM zt+{4d+ekjXz!tmo3Mhqv(ZiJM{fWs2eeu!50GN?=zCBh^ljT7ich8+sxU@}A=nX#k zbc0p;4k54y?DkrdY@xfSb%pLee=#A<0Vd;OUzblBu+cXN@66}Jr^mM3yJdi-5A95F zjui0JVkAx3>MiwLV~`3l`K6b%z*$%gO_pr~A3S6&x%X+rCAV zNJSKFy_|8!)x>dFLt@#^9&ez`o4)4Fmhhf3{vONylX!`_>5rKklO|UU_KIs=gMF6* zPRRO)LBPfSOf00}$YCx(q$k za;h2==nRaj)*+%4cIURig&x@%1HWn#&JUdr;5-=n-(+^P--MCy3y_(xv z>bxf#z*L_yRoOvE%y~!J-qbTJ6!qa>k4TG@9;I*Ys`AA)W=!dj8wD zlNUHLYXZMWHbtu=I2Du@Cxase96{SrY4iCxox8lveZy8JI+!D%i8nkNPVa|vRFvS( zyNM>gna=(lh$u@Dk3+1C=0{6vDHS{Eh~c-J*g2B=0y)l|^^rRhm(`{KVQskZ3HxwP zHT_UNB+BDrT!*K(rCrRXU-nQIty?QC>26D6U)g%s*hyi5&IuqOV#D^nn`_W=}0QD@Y7UFs`3ds zh*Cjm4WI0&@Uq%Ve$jt{R@|eR@FYq5Ux+gRuX?UJ^}ac}@Wr8GeSUhB!uwE24igSu z=wqW)5iyR5(avCwyy`cmTYaYqpV$DSac^JSbk1~c`|Wx1&wx2j7XsRGUoAbD@9rJf zx*;4bsKZh?TZF!YNA3%^Y9E_2`gYzv88X7#Edx8MW9T!**CC!H-a|RXpH2=wVxM

z_u(VuMorY?(m<;X5Uo_=T%g6aWMA-|0eV#)3kK*)h^4#BIpT{_( z0_O(am5D-hthnfdj93m#Mc1^9xPV_*PGc9d_U0)A+d(7UfO`%}t;vN~hcEFp05F z`&%ZIR|fRCeH3U(#^rW)i4lQ}MD->gE2c>X`)yck_xC#%hxcF+5_v*;(j$7*$sl49 zrkA$AI!V&<_2Ai$B#GmEq;{9ZswJvvNvkh+oQ`-6SxF&x2COaiVVJ3MdtU=wptg34 z=|cHQ=PVb*TmdQxKmt*5RxI6sTI1`FUmNlyr%;iQ8%kNfS~yzvbdFiuw&vK}NCwJq zt-=rLcAM(B8^5%GQTE?{#v$_s!-ius<}2QoxS`#YN8OU)WPwX0g&uL>_so9Ny(fo0 z91SwkOwMo6Z(KN}xtM>qy8_gKVYGUl{O2&~Pa%9HxzbzG=mBic9XahXtwtlV2b1V~ zQ@SNNBT>Q9r<`Rj?$d7m8}HD&7gz&5LA!p%=IuSmwn=wW)Ip&a+&}v=7*}vxCh}k0 zZe2}?)zZmJnz^4J>rzn&xqc1TSsWB7JGXhw8Mw23a8u`$x@akrIEV$-YS=SdhZ$b) z9@!3(|0d_8X=!#0BO`ZdI3}*0eiQD(_8y(5S4U?js#0#Xm|eB&+y_-upR2PqJq<7q#gicGl5zR21=$J*wF`jPVN z&p2d!_l;Ub)H`Q6(}y;NSet+`hD<*R% z=-I7%C9Pp@GBec)f(pL?$>>2^0KS#^|Me#Z%V-nQ8;t7|X*6#h?4rv*_!A%g+n+eH zWg6g_qU*j%?&prU0{TN#tQNiF!WuA?c*^N=dDuf|ep!j)7xd*33CbG1lZL)E)k4&? z-ZxpF6;?Hro~Pfs3lhbY!W0nVJj2p7nY>pgc;Q{L`(M-D7L~w~Vc6MKcn`H6LE8>! z!0+v{e1khOx!2&6xM)b)ID5S-VxfhrO!Q}9(fY9WceMQE6p1O8oi-r$fj?YEf$me& z2k%um`oT8d2neDjojFR(6Wjo`=?q;p36eT*s?G1eI+ zbpZ_fond`$Snr~H1uk{p38?3WMY;bBLUgr|B@LDum34Xrny&y`O6gmY4;!2AJ?nV+ z*H#5c=Ujf}8(}@pSGT)T`AU@WPGYU}zRmhngvJJPs7I}{f3C3QsF2f_LJb=p@V}qp zpBgky=(Ib_?zmZ=KAcmF)a{YaLwe5NF){sO_|EmM1n3O8!oH>s1)UIJ1J|LTD!3UtVtRwN5;&JsfzE%q9EiWKANhe(zGOqFU`R7Q|tw z6kcM*V}+VhjFY?}Ivmzf;lEJ0eoV8?Qb&2;nKi>e_d$87QU^+Bb4z((xBfLUOc)eAoC?a!q@7=z%JQcnW!k}YcqW3AtO0C6~%Qy6*V!gwHJ!l;82Ox*asw&*=$Y1 z&c_)S^%WCTXiv?IhrmU?;>H=#3dD5pLkHtNQTI8pA{C}lrSy4P&h$?7z1yx3Jx1Vs zrA?=kF4#^sZz#C+*oHno~9Xq*eJwvzHyzCg%v9*(Ot49&V z%S0%r&==<9DF#i|;7)}CLgXW$0r2&tGcy&#rndHZ0e`tIBI=%4XKU{rMtZ@U&KWK> z%tANoit9PHE4_h35+uG#BCn@%wT3uFdfpg#s|+&*w$myhrx5E`V%6DaTadM+jPhO8 zifxYVV@gzAv6Dx6o}#awgzw-_KK9ZlIpv^-#ggM%06`64ycKb@LDM86de2;odOz}D z(Bm(N%k1vO^P0Z| zk3Y#;hSoJ*VfyjcI6lldT6Xit0nI1gW_7!_99TxOH%5WWuKhf8_VZ~dAqF6FsFF+7 zkIL5PygaH){W)NZ-R>e*N$ouF_Jgti9NTv0j=UaD?SkSYu2_eC|G!^I^8edRlX~!mKy_rqn0xyIb5WD}H%FKG z)^Y~>oDNA&s=@Mjh#jA=_QGY7lp1P)?N5WgAvAMSZlhJT!42D>ueAj{O9r>(XLXr8 z4VE*eD)<^5u?{%<7+14T-g!vmDth0o#x|I0)cY){mFPSUzDjYg16c9~sfl{Yg_ycPO%xFj?c#DX%EMJ9o4XMr{ z*2Hwf(FM36lS8=o;9PW-#Rzm)LJRdnJ)kQ-m(SDw>}K3+ z+iU$-F!t$^Ip23z?t_t=lAjotMa%c5Ct|?%tuT`k^-z&6sB=7tdp64jdTeuN@O}?o znWywYLMy5qniu0*|N09ET~0u3*F5xNT>`!Yem-fz#JT$=bTl43R&{9k+W|(8q0)=7 z;|^NZ^bM^Gm|Je;=P}Bnfz9B_Cj^?BZ*an82c?*9(P4TYTM^f9f6 z4q^Nc&8iU7!e#UvkIiJ<;4JjbO(J%To^3_E?zFqGXX@58$iR{_!@AC*v>aL5x%dED zYt$kOUg5IEC;gjGW;m+QRs8jtotD!kBl8q8`KLU~N2{&LwvqO^T%T9t57uH&@Z>bk z5CpeZokq(KpzI0M(Wt6m`|b*%4G4-hel^d9eysvcsYOiYLr{2RDF34*DZ0gFuj^F? 
zIq=Gz+Jnz|8Yxa3V1t8a)*`m8{0GwWT6dNo_xXt%8Gm+*3C$2`h~kaT``5s}c#Dvu z&G*{pyJupp)>ZB&d%GmN)4R{E@NwgIV4iO{Zg=)+sdGKZ^z0@qY&h6_Zp#oqMH9X6QpdGIsO?R$@5Pc4Z-$zK+~qEE!v;>f_~`POD8k4caNM~m zH!+#G8nqCoftI0hunX4ePh@4Eoj=Mm}M_MiR4Ao*ErwWBX^He9W6v!V!RJwgr*4U_`{@6Y0cFzR=Kf?~h(TuIGvfu;-pa`1iwC_c+@90jltjhHhY6l&i4(3T+m}GP?E*2GMg$IAeW?jd5ZG(4>6J>?EfB z)~9AH-(5w|MIUrS@Oe*f)6R}n?~^$ur5&-Dknw;g&Hb;2y!|C{xoxQo4$nr4m3NG1 zzX;;#7*~T#8<&$$h&?$vU#n7xX!1HjFw}w*zZeUYrCsX05f*> zPgdI`U?YsG-r1Y((Bw9DyJq(AvonvnkHvwkFx?W0q?DGQco*oF>|YsAo=2r8eEv|z?uA{g)t{);ckkX;+q=ZiKcn|t&B{vM z0M6>LzlrD$3Uq~>&de^F977gFv3axo2%#GIhzh+Gmm`Jd4b#YFiXiHEUljaTXU1DQJb< zcPYh(-@8|(;8|`-l`5Z^Qc6h^>@ysZEs4Ej^4tOO%Pb{tG#PHH0tR2u*|+ZC&8}D^i{eDo0Zh>U$K&f{TYVW z4avVPZ=uIpy5_!!!4FJrgv?!YR8~lL^vN~)Qmz#~8t-3IxiR$qA--y3$nD-;u?RP+ z7jZO-)Sr|onR`gh<=mKTk)dJU#?L$QA8C5$%xj@BE?7BRp4w6Z(Id#$T z;StP=p++8ac7a14;hKiy`T3w#ZPd1ISOWtXJ#MdQ>G`=a$Ctuc2@(sS1zkFSwB{v!G^F*vT|8-q% zebKn|gCnDpK^x~R^3m##t7X8J;gvSmweR`&}3<13SgYU-HxFB5C1%#~z+)@QW$B3@eT_B&k6 zBTY;7kK$b{_W!DX8#}(Dm*KI~TIf_OARr|pkEs42_;mXb;X{bX64UAlSL~%v?k!Fc zO{CuJ)%FH+abi@p{&@M*?Hm0Na!vDw(;)#}YebNH8zcFXZY8>@uuIf-+(_jhx@WaUm<3Z@ZpU*CRckjp+QZpY;4!rQ zmt)LXP$a^} zb868z+_Z0?M8v|=&LD$GP>4s^u2_E#tMSSsxZLc_?DP?7Wqmi~R=f8h!NHaZa@^I` zwT-KIgP7_oywvt-^!GS5xnBqA2u-BsHu#n92cwuB1nWs@h7dzff19e-S_@SBQNC#Y zvWSEZm~Z)!sBWZBo||uYS)kNbRw=d;RTiSI#4E35%-7Md{L;)}s8-mKpfMJEM^mrV z&#!=0ejG{f+)m3$ zF=L-38t1#A;^0cQG48mw-N&9@G%Bg=-YSa38ZO=i2>s$V35I`&mNzi58;X*1ZiL}^5|7Mmcn{J^ zkfx-G8r*kXA$?mR9ps^Xl%48*y9kTu_hV=u{i>)!TkD&-IC=>t@94JY!K8g*9L*>_ z_gdo@#bUJ>)n2rDm)I3X77TL7Nr`fO59<)SSd#kQ9R(6?sEVX(b-lu(gN%MEB!T+& zHBSJ-(NefmAL2J`p+%XBF#KYt(6;#KvqLPK8lG@r#BjoNEX;UdcyRUAWR0F;{856` z7xse@^gwYFADGvdLtjFT5^sJ;EBj!-3HfZoT6Hgn60Wc{xzupBf9}g{?GGo2Kh0`B zVRSH1?x{>TwiaCG`)NvQ9(QoM-)i+0cAwrIG0aDe>W8>>UCzJU>=LuBSjLQ8Dd*o# ze+C&TwK+H8X+*P=d?vq-Eb+DBnfEU%wtU}oGM6RL(fpH!a*~*` zVLW(OF$q}VycAqiQX%U ztJJ5%nR}b#)=CYlLb4wThE%BRXkj$`^34Rn%~CUD2kquZ^7FB=R2pKrl7-qaUz@ZK zt%86|CD43>lPi)hmZ+WYldLsc@&*xYn?KFa;={5uKbm*1kMgz2kp%f1u<_8>I?wYf z0S`Vk9*|5Yk49M^IlE-os=6vp??=lZV94SXlbuk)I6NMr1cocU4h zxrLT<=D9Q1>1TSa zK*iq>A9k1;4S%zK@4BHUtL3e%laF#PkU!WTdU&FJqxag$g3?2b?K6F~{nlemE-@?+ zfd*T?@}x1t4U&(oZ>@H=s%F__aC+~{kMq~^&(YL*=>ICVu|ORAvxxjLALABd$)>u> z#ZB8DS=RQXEiN8M^nV@gfEw?qiHeB@C>>?tVM;*4=96DmrAq$u>$*b|>!#bHSL%N9 zKRaClgZGdB7%Q<&OZbwwd_H6%5m#DvkPKxJUh`1s)KDD37;%Z*M$G#mkuuDWT~l1lA(7hK7bF$0?u`CS6}Xjir6 zv#@7ud?VO#w<~L;yGN_Tll($Z16rT5!jV>|yDg!WN zcLdvmC-mFy)Lt6=e!!arFA>1xREZD_9lk!$UjzMz$>rHNm_#1Hku6>zP`cmi(;%J^lBGkw! 
zAJO1g;Z0h}FPRgD!VA4e9+*F}O(0GhZ7R=z^5STrhuw@dY#n?ixzz5f631TqMoOr2 z(BYU4_LL+-7{}?88hPz-!R_SloYCcGu@4^HYgjEcCww_C`MD@Zl_2-^$hUfeCw{qNi_(_gMP{;0l+nR&R0cy8#U;_^mWC%SnG~;;XNJtJ8`5jZe78~t>4o- znouuGFBISAWi!pLiGc6;Z)(m-TI6qP5Da)=S3mipIxu$f)2+u!aH_xNKvBed54@f4 zyuGk{doeP{Oy!}#bGtV%IhA4*pZ8ADoY=x6KJZI%3-0XZWb&sTaC^?baTGl{rs2dw#ZJ@$u&Td^6^(?LsSH_l%0JMe#Jf*S@0Nw0}%8n}KzkBGtF zTyjR<^|bk}VpY#zug~xOmnIkoVOSJa$6c=kn^z69_(i#!NBFfV6Dcpb*!N*ez$H_N zpqWp6DIKiyt_!&aPzxceFZ+Lw(GTSG!}4FCNuF^sqxo!?aIORm;$csE+e4{oc2<9` z2taHJ=LffMB^7xQG9RA}lmgX-uPLjL>D;5FQ%@m=uwvh?H zDYQ{mGBgV^qkVrQ=}Qr14{12@xi8tfYf$uu_r`>1m+zs8qvm*_7d)8~nBN^c9ZRKr z7V64|AA-T(tsyKR%_}J-@@Tm`TBj5rve~JyQFSaf!SQ_FOpz(!W#cb$U**c?<~?xU z_Q}udKv%2rDpJ|yO1XTA%Vez%@>O4VGy(h|DlbjCpk6T4n(a)ilJj`PmCmZ?sYUbe z%5*39j>MI$cK;pVKKmaVTN`7nb*WVUBi~5xLX0c&kq)k&a_BAE^@pEGb6I>B#5YAc zL%1Apm>g~C89nRXe0kl@#ZBTV5B=T;>G+EM^*F#c?GgGm zFDd%_YO6jZ_$KowA6{EbequSo-0tJ;s_i7Du(RgO|6k1h;svEez<|?9$`egMlp@$DZxT9fiN`?lmw0iWwKUA}?0X`hTV8m+)VY=m^Ac z>#$u7vl?F3LdX5GKFCt$vL`Y_O$dR$XM4|O{->H|`L|U1LqUd#Nkfl}BtzS7&lscl z%H}m7FCke9;G3=LWPT=J34*aoj!C3|N3{}XV)Wy1Jv^Xo?w4K7=$u6yl)@8M@@Xw{Zd` z_e0$bC3bAHUb^>)KWdce;u6=_dB-`^cVcHBrynqA71y`iD8swhUbWq+8OkwZrPml( zmYK$YpB^YMkhKLCBN?P`faRv5-t#5zB*&MSimYf|w-4ZQA}7V z_jN9e3VqJ;Ih)ojrKb$L?MF79dlyDgfRd;TrNZH(#z=WOmNfqKJf=VAD6Qa;nU`4~ z@+Cr~=ur*hhlBsnhWw|hiq+jF?%pPvCudFWKhP+TDZzoK^_nLRdtuyY{hZNLC zzE!E}IOxao@#jSo9y)@H#>Fnlw|l;Rj=jqS@#bP~5B;GGknjDcq`jWbl&?-|b-)dFc-mey#Q?CYokzVw9c~7sa6j`e=#h!do|HN~D z1zpLMa=8gPQ~Eap*V@WjgK+H*lMxrF2LD*lsvsJ#acx&XuRym+fy)b&3=y}`KJ#W(XqN~bEfSHt1`;u@ES66IN zmTo4!nE*%C`${(eJn||_JCy>gii_s}y!xFXRs4*`)$b*j(*}PJXO1*FbwIQZkUzoJBSV5o@L+RVd-z6uDjw1h$lH=9RC4GY_B_J!>(Cto^*{aU9(Mx~l zul_brCiU;Uu@MB{A?>_mnF1&`RkiAGY%{?Jc3ebHb7TRi1bH$q9jy>LDFEbsXSK$E`16Ff;GqWY-|YHq2YoCbqG`?@of>-MbnvWdf* zcPr!#S`q3~1Mv3J%X{dHWhb)dM_CFT4|XMpePOTBc=pMcEc#>hoH8IE(JB?)zGA9J z>51gR5|;&NE%|f5*eUr&)FfXlWhw2Vu2e`&cQiKW`I-PMtgUk08OX&WN$#QH^UVWqdgl&HGv~6RQ*Z zg=7M*7$Ixy*0msddt3W77xTuZM&o+08S!tt)12?t2syD3+%HP+ve7!ymp7%a{lR*B zFU$a#W8}A+gAae)|n*1 zyGWcq;4#g)OG_cv^w2IH#xI&hgZ-Or&Zc}4ovn|gKl_7eecI;`MMhb?hrb^Dr*HtD zowozU++9=;x)`(+vzmr)zg8vwiC!i{{Z<}+r>EX8=U_9srD!V#ukqrqP^gmx1&sq&A4eK>o`q?+4KntA{0x;Qg(K5`G5xL2j zHiqIeIbViOY4|G-kgmaf(k%JAKJ7RF#6p#EuWS)~kB)lh8dm+(pf&xK>%E2ASN}*- z#rFE#YhnEb&Q|i~5|{ZF*K_=|3pZ`)4<6r_sjX2wUpZYj#qFLvEz>~a%4qi?V z3zC1jjsm}DnpP4tU&ruUvmpo%VU3Bv>s8M2bZRCI%@KP&@s-%8@=uJIy7WGV+a?^d-m4^Hmok!f$p1pp*RbjPuaAyN4NL#u)c~ zwM&vF9%34^d1${6suf{LyLkHD|Mu|d-*?|H55Xh)chauj9e9Qn8nE6dFq1>vVyaij z{zB!!BqP39E2?Wg;>pLD5-Kesd;MYtTE!Fc`T^OX4akPD!T&(kM5XM|J4|LfEVc$fPQeP)>7B{{qVC({nQs68O9z!o-HJ-Av&yL><+RS~R+gKh z#B@|SA7+?ssicxrD&@4IoEbR`!$c*CC5Ji9$YI#YX&akuziZtU-5>S&eDD8$j~MQR%IFh?Jo626b}85W9er}vU1{>sn6r-y z6fcERcvuU#;>Pe=#X&R6*GpqU&;w2VLJ+ofKPA+F31hH)5Vammz3@E$XiPP1Eq2R^ z|LP=aa&_`5ddupcG%C~?553ga??Zpf6VIJli|I*~v?U+um&0yha?dj~<}k8tGGI(b#w23@A5uaoW3x68QOHuLWe*g8Y%K^4C9pe&O4|N z!Ydw71?_r#*;y(Hz}&Akh-WfLhey7~a^NlPOOKu>(k>k;LZ7UDqWk%H;I|U7zzd}3 zKh37q9{}I@HD6o32Ob0JML2x(=vYh!!()#|h-9q(+#}w_2V!X~*gofgek!=dx&y(f zu9!r3D>FMi3=kp$oYE?$(C;pe?RG+8)M5|1mtV(jF6S8I*?z7=YJP_YRT{zms;|Uf zNm)vNW{9tqs(oDi2jydq8LuUOy-2li>#!P90DIYw-FL_Kj@qUlH7wb?`9GXx;J-fyUWG3x_|l&yZD||i}BhXQ;)db@j}*Fg|H`0 zlyIe8uaB=gGlRe~a z)hP2$&Tjx!%dtDBi!M#Q(havyyHzI+n!=m+2~q}2=+u^318g+}kO!Bbi#^)ubQoYM zSI$5Dj4~P{tsplRw0w0pBtT?-XFqI9c;o<`c2h9(1aIDUIG&aN<@VL2HHx#&X~iD} z?F^v&(~GMh$KC-tPav()e1}!mM{vTmzYh74u@QpMkeqvlclb}DYwd0 zW9%JC+O7Er1-ZH)vJ&{qX8rz2p%}@Z&`jTdVAiMi(zw7&RW7NBB5nQFdyU2W>4 zgg^a0k_m?wyF{$~v=h}tldUCM6@Ca*-LCBySlSggY6hBzNA83ltaf>ln)#^Sc<}h< 
z=UAD1akkVgLEUaB>B`rTZ@#pvNm}yKVri#y*=X4Z+EebIV$C_zJ9hcdiy`#~Qs`Bp zslOVqq~!h>Q;OMl!q={ZR^@*Qqbpv;2i~;OGqciP= z83BbGD`*6tK?h&0HkIO?TV`j~BjHJomlq`nEHx=~AM`8W$)KWoPsH7e{>ZUHGh|mC zli$`2x6Vhl{#Ahl*5%$7yc?6)!&*TR~M?51V|fKIWi6-bGe@A+rnf8-x2Xrm>NhXij_z)G|)b4E!!?vd+2 zG0!||il46ZSHu<XQI5IVzzdf@4?2j3bKogp-G2kP%UIk|*<&Ox zP|P5nT7KLP_qDo-_o>fa&m2}hll<&*nrMM=-7|)*e#$o~^j8o8{Q?rmSHMOGfNh!- zu&o9VT90<}4*)0yssYyCGc7heTReF9*xpt_=EAJnsK0#=eWdCxpsS?_7-aL~vl0M5 z3bIKs0B=P92V~78$UCyVATI@D!66z`M%`?{ft-k)7vrUBl49$r(%?5OSf*|_FlPN& zBZ|WX@LklVeOyzLQo>`Cxcl+2sG$8(T6G|KeI*doR+zh#Hgw|RoMHQLpbbT;$e__I z&;$FpNoTT*$8}_PeR?;YP95kdqux?bE5W^dxz-;0w8oaV`;5E$=gN;CuR}q%UhGm* zb@x+G&&COPDp|qqxgfEBwAe^(`A&BOgZ{ecd*<{DrF~Uz5?oUCVatp`n8C(68}!xN zfvduVLZz0QbLAdw~K;8G15wHj(4&9au%6`_s z*_7v(g|1xE9olX4DjllPLvhQxcY<~NeXPP!1?^=bJDEEWvj!{>WmgQWG}*BzQN_(p zhB~w|FRjksmQe>%hsxlzIbm=>MmyT}yc~T*&W`E><1?7gf~nnt8wsPp{96}qZ+JEP zB|MmOyb;D4cYWj8G2C_* zFUpxEB>;iPL2gD*@%$w1p-Fvqij-Hon=mz5I%qTVD=kpW=mNaiR%v!3nc2tH_tw30 zSPa@z-?`Pfxxj^&K5vNDJ`{h}N0`;?;m0VOik=TSV*%3{!L3e_)Okduzfy>-WmrzJ z&iNRCSgTmlcRi|F~EF1O6$qBX45KARVEqU&!x+&>S_lHy=@$DuVy zZ(z2!K)t0L*ynEF3KoY3i6p#i6Kxv|&0I-&a~Zmjilu(?0$geqLu55=?`Kocr8A&_ zN_S5LZ|^1F<%@eNY@{B45rE2g#M1M=3dz!ODaX@$8SZn3O0^0eoRZcz_hV?Atku@1 zhIBT;lJmhC;fo7;&2~|U;rzdW^k2L#y;_GUQeLxwQ_b0QieEIX@960?0HHu%n{;vu z@Z*`P0dTK(wP`!e!mjxOYC+nv3hSjavZ=StrXBxY-wy6tY56zfcVP2nrl94hq03-G zbL`TRGLP2@3f~-eah3Klfz@%`CB0mNOx?odE-B>SBCW~pB($mx!T(?d)Eqhl8o0|A zqYR@iQ{Jrq^lH@nIISsUUbX7wS8iut)xW{uW1)H$`1@)}{EhqdC$Or^wCsXRcpMr; zF~Oy^V5fl!RWaVur_qSHfoU9~ipa4;)2pIR`=cJAKPisa)c$4*m+%AnyrPHGm4q6u zojra#5L?CC`KE^(ROD=LU11xP#{zRr+E2Y%n_3LQSCKJk)2qU?Kjp&eA`Y_yK@a&P zcm7id4>9soTJ5_N=MT&yhRd(}k588Hos4sO&huXm%7!Vt@1lyzx?8U>i+_d@Mirj#(FwkWP5G)={ht$PalP+PW zIoe2cgJr@k1nkp07gZW0?FDdmS3sw<6-_c{zj8UO5f3>6r^|$%W6c>P8W?# zO&ghUu1&=e?OUliO)Hs#Ik3LZGogZ-XB0vxDl~?Ylc{a7 zzV)^}xX1*M!l-7k+WN20b1}Bc&#+F71`$RJs#R}a)E0{N)d4J;e7jn8g34yjX;7%; zv#AV6e?i=l2tA|a!-Cx_TbPc)m8}qcFAWVVn)K-O65nZoN5Uo$idK(`cW7Ja?hxkD z#1}L24b7<0DMsxsIv+nCUdvc0Ldz02Kj*?S5)iN?FbaTQj)<$^#3rl-qX79XD(iHc z9!%|dN4xHH$?KKgA4Dmg|3GF+{5z^T*N}oIvr&E1G(~+xm%=atU98*#3gH!g|Ir4_07lkaQ+epycC_< z45hEO3_K=9SUUytN)T-M1$sw8a@ZbIRPu5tb?tosPB><{0rtBd$(&qCw%Fd83RT{1 zLj4LQEohWZ=0yNcrDZOl(*v#mG8oi7QX0EZdHra?=d_TnBdT8+kx-z(iP)4;AUFxw2a+$+|gI4VJ$*aDG61CZ{pe{$*w21SDnGIo%4>AC05?(O|l&!|1mu;p!DPlCh}gSSei1=%pEe(9obIO^Ejd*6d8|D4>+vq0^@~JcJU#^ z&1#knP)lA|nG%8!`|x&Cm(YCG9cZ=DO`>BU^23u%R)nwbj7Fz|G`phdN$I~cNS%KG0i%vlce_|Mab%95*K)K! 
z#`giX#2ZBArugK(*<^wXC((s79j^k61AjWp1_lyfUS6I5+s58XEsvRA3x38VhjoCCP(@}Jk>in`P ze6w&A4$6s>V6G?x=UP$Z=zeCeRf@AymTEFjPn$}J>rWH!39GN47Rvb@lU9*)G959G0FVSK^Q?MY zfBxZ4U+HYZNWy5c%j&#SLvKSioWgun$xGm*Y8s-M%gVEH!(-iQ=T5$kP@UmfnCt_h z_17N%GT#r|SFofV-S26%d^-N=x9JVI=2Ku9(?UwL_0L|WL?%@1HUwmgw6jRr2cK5I zTMB1}c`wO6q%P}H{2oH~~&hGA8 z2uG=RouAafH7xMX?-VDH{M+^vD<(D{;b>atT)!Cgs$l(%lEa1Wng_}BjpTQ$jqhHc zp`md+OmPA*s+9sGhYKPnQaz4$hI?Lu*I64>$9ngs*23L923>?m-n$m$c}vZ{cb%V{ z&-p@x36F;Le9m~BW-DJj?VdpHJgM|@@N0UFN=nl09%KR9>+JMx5w*`jJg+_xC3?saoR z^%eEbr*UTokB8m*Wj>80V|JZ~Z_?|NZFNJW%bTg;nRC{IwAW;|abx1Qk}3agPRfEA z&yGl8AW-*)X&T@cHiEvH6hkpwYUy%l>6j;8DI1AqO9{QI^^LD;ECCRoyRFZ&@7cHL z@Hb(z$1HxY4ZDi#Q@~{v(Qup^QSXjAX+|CSJEPXcdfsrI>WRDeLfH49FK`iyqQZMc zn?bH>TE)H9vI>D!WYWvRwH1>n*UxU9;H@_{xRMR|J6^L-41PZg0QLE~;UTKgf}Xv* zHBQ2hg>eXqvDSI5O8eC_)smZOkO}#f z5rGw3{v5=hM!;@KUQ=pov&gv-UFKIPO|ZMduIWkJT_Xv~oct6IpWN0^In5TUkglQv zL+k^e2ju3VwTUi=(?VmVrH$$GC!k3^A7s^Ub60_3%oX+x%Xj=v))B6Aa%h znHe)xeYVeF-tPT^<<(KTKDDgYb&#)90(djmrb~YZ58E9ii{7&*&~d$Ot#d4lr3H3X z0+}i(2H4FgGNCn_j@)v4P1xbpFr1!Jke-S(-bn|0g4<^QLAW6W1$AfHLw@=Fw(v9y zIP@}XG=MKHzacNJJ^gp+b}7ZXwCsq|I*RKj$>-3H^n~Q3DV#{KH4b)~0F9Q!T+vj7 zdiOtT&UUaY3%tsdosM$Remf>S+9(K7d}*D;Z+wC!xzLOVXdy5Cpb-($x_Z2SzUi`* zm#wnm*<-AqIp-;+;0$_|QgBv>VXQZAW=hVfpknT#yZbq6OpohmUeBHU_LMa0tJR+=hb-F#B!aA+cYrDe`vUMwvZtKT-61!)$$twsATLKf+%^1Sbq}JX zc)J-O(zKM&%WOYtAKIqVC$N;2wwM1|+=l+MxTOLfZW$wK$kX`j(19@jnJwQa(8e$Z zuqMXMWzXgSqmxw?O)V`LP^neA>8H9;dTU%G5)*;S*$eWx(}k~1GNb1R2KyP-kA_(zjL4R11fCimEp20g_*1c>NZ!WqEWRY<(s3sF{>*4g0cjQ(%Tp>e}gSyxe zL|<0eB<)t?IxZ7mvLcqfeJPgKL>k>hCk89tK5i_uR;_gJp>KxG^v1CRIF2DgSh#T>Bi{b`U!jgK!8Vm2cit8kamFYUXb z4GYL)gR=1^fE7h1P#!WLwgoo~+x)TU6=ZqJDaevVnNF6zc;3k;F$qo@lnQ|z>=?m_ z)s$`Wb|MH|u*6L&RFTHBmJh>aH=c<6?xCSyO5=qSp5bQ^{_*ML0_BIT)*r>?ll=Czy3L)gN zOoR_z_THVi@M{Y{k=Y`>lhyWtyvTKwps^F62hst4C8Z zow7c?`S1axM0g6UUe`LSO76#k00((v^)nnzh#v0Dyeg{rlSNt(;b(gA?hCTK$Vf{# zq5K)AE2URG_+4I32k9t%Q;AYyVBxytFGBqVBwfWMe$+#iaaetaSPWQA%TsgH~$>+lC7LaJ!G#mU|2c}XImh(=5jW_aqkLKzRIk#Uobh@GRLsC;@k`+dxC>lgH z!^_52e$EiZL98REQGQ_~|5%7)v!dp4?I!Z@aF)JzPr80lx`oA&iMIY|I@7*dI&{u> ze8kz;mx--w$)vuyE;%omf>{F}^_WSLjYXVnMmv&HsFU`| zIfbLzu!@W28zYH6q+N!Qw$}SP2Zlpp{|Lpdm?FFUiISg98)FJwc0{k3GPP*Fg;3A6 zTT22{I`S4dh9HNJNI3|cSS;L767`(@+j{&)mC>g)#cvmrUP zv1z*$snIsgCsJ(N=fVyj&h6^TMy2`sSeCfFsfeil)yT&#JE~j*Cxz0N?S-;UR^_{q zcxw>WFW8pc>u-DaEq7d~N|`GCGjs(%%}?d);~p0x?1q@HG9h1_ruv9OM7hr`p^AxR zyhqTqu2T5i*w#PBEAsl#E1FhNzs?5VWuL=C&VPv|CzH=x;QBvAwG`9 z#gu$GIPvmI^aI}=kKqM-hh=uSsyvzU^TX_7;iuJv#+StOcdqIibAc+wBiVUTjhlzB zbG;Jf&>J_BbJ`6j7zbiHGqJD0S1BFWs3w9t+bruduS0RPHOHU#r{|dp4r#q2O7qpE zyt>2P1EdnFRh#kF+Xx2l^ma9TQL|FxWZ5+N+5O!6_z*wCPmTu_H+T34P9KCrj+yTJ z$)V;4)2$oQ_!A}9t9ihpbpN|SBW^{@(A=(>Tqpm!<{hy@jx>%Z~`H1+gps$ zjk2+Pb-b{t4VSW{{sMn!ikc$qcuSOMC{9ZwGUM%5=?0``=?0|R!o=1wly0;uR`k4- zoorc23{XJFC$z528E<l7#Y7kkFM_uEG^_BZcd?pv0!ddcVQ1GMUz19(zDCow zPU~?XM_t;Ug`>c>tw99o-=IsY<8l%OmO=o9Ta2bo|v z=Vkz6n)CSa;}-BDIpyr5qPEx9Kn4C8oItm=+2$uG{f7+0=Xt(q@dJ2DQ75Fky%T<6XSAd}-O9aZ|QCD8pOB4G0KWsbYw+VuW&oC&u1c+suod z7c@W#mtjHlfQiX*!eg(vvI{z$iLBxg>ASW6A{608q_QUYl|29P-!Gmu`(str4Yed~ zEiGVVPoHzgY7@JZfrcdRMUaUa`%2 zza}EtZ+phY_S=eCRv_6HOq86R=XB=GlIsrGyGx>?Ys`PG*hq>gH@PK(lV|&Iy4uc~L{2N4&n@NV>I^m1~Z-0qKeX z4&UuMitYuc_cN|>1AelLf4J|@KVJI-G&!HW*g5~l_ZK(JK5S~5PbDIW(`4s+-Z3O5 z7zm4E%>vdCLzt-M0F`Ee9lHJ31~zUIgXN0n@vnQxl zSl&s#Ub}sG=wmOKIleB{;)czvhI(1spRD%BXMY&c_%i1H1+e6`+HR}yPn_M5ZcmQh z#1;wpflq9XYAtxQfwT4kp&GSc#iWdV9-FoMw;CPS?XG8VPzZCv)(9tj!fuBD)~X$) znqa48e5CbO&GnWRPrDjljcFYN0E~Ydq;UPmDxv?o4oVjQ9c-%Ixo%tsR&yFk1_0zl zf){z^0+09Hov--B-Yg}h2bL3DNXb!j!sWQ=&kU*^`fDOhGcJ@#uC6V5vaIv<4#gjL 
zkxs?~IjJH?`cES%HpskGYeK$Lb534)dAaB7(HrxWeOG;!K{ym-Wp&P1(ObkL0(GU+ zqP_O4{Zr9MDfl@9aI@(kzr7WlN zu{830B_;cEL_6t`3W|mH&HWSlML+P@q+0YpJuCevdP(1Mg&)6X$D9F5W3bbW)l3Ks z=tj2s*jt@5(${BPxNxBa;;?n|=7lMb!@^2KYhaN(+L`3BF~9dt{(?q(oQh&zh~dU5 zp$M&B(T5|MUe^#f0Vx#gyg} zgytF0;38!1XZ;k`CSBh5MO};mVdo!pvD9ol6mcP4iT0m<|DfOhw>V$ni4}77nO)!i z;dh=kiBauc3~cx4=*}Gztm=RNs&p|34n^Xs4XY=lx!vXs$gd}O*8lzm{#-R6Ixf<5>B4`dxlI;eSY7@R?%D*>aKhsLn>-mXAUXyo za?7fJr?~_Wl_49UvJ(bn>L-_ptZy?8#Nbn?( z$nD1e&I{k|0m?k<7QS@NuO8yJ=CEmp)Rd~iSN{?)P7-&3QkSFSPF6o~FxX7|DI_52^MimjxXPQ!HTwVw;# z|8*%dr7__336C}Z&IvW7$WAf->F!^aioaFdHEH~7OWFCyzq1-eX$T0a`MBcW#lP3m z*za1ueD=Sy8aZIIA#X2Txc7fh*SLigw@F2}!{P3~OFVZ1fX$YD*|zoIzf+v6RCNE} zy8EW>im~T@A@@z&{LSM{^F~J%oZG<>6$U<^H8nN!y1LBwGUR3N${jxJ=o%Ut>g;>@ zgHz;FuguJ?&ZRfz4a`dzhij8~{3;->km93v#`QPxDp?_Yf3I;!6%ibgVo6w?qx$5E zXvWp9u25&UfphMy^YsXqoa>!h4U`q}@+qsA|1Q}dum#azHFnO@iQhkNV3i6xJVz4r z-G{xKGL}j_*}aK}bbE0PQRrQ|Fox-^8-jPyzWA_0n?bpHC74#8f_TFSPiY>L{J7G? zxw*db@`2w*iQ`~Drr@@L#pKSsYq1o?`d>3HQOI*5<~1(WH8kwi8R4G~`drbMJbW#u zOQ!K{?+KJJXu5(}d(q(|&aRot@ndm={=Hl;L9Ls39{a@f%4U(%sd{4 zbm@`?lc48AoFI4gj_ux4qhcQ^roayG03-f)0KffOQ)3Nwrim3(MzkYb|4%B+~+Rl=gY*k$~|a0 znu$B^1DWhuR-_wEzpEN0KkaD*wFi*MC3`$IcMzVSUFJPf3dX$5kKsL>2P>}1jvDG1 zlB-JdS7|29#cu^?l>9m_FWXSO2Z23l#C@ur%dthFq}WUS=!sG#sooj zkG$|y;gu>FFS5}#v#XKs#b&*9Sh;3Q_BSeA&@ThF?zC+gq2v8(M8$in)Dzs;NgUe_ z$IYl3xbfZP4!%mg;|;{|U|xJvu7`gPB%lDfr%v9ZGsulAleMhh3EOA)9O~`!kAkRI z9H`sSlt&0loAF3t(opa{Jt-#Hr}+bd$R2i>G^lSW#HR7COY9x&w`7+$W5s>CrOMf? z%SVs8j=VRMa!7fs=A8T*)8?qwNcD?NgqT?4WK9dgKr4c8gdA?zOmLvjC!i?vCFJdR z>NGxUCZ$Lu+tP}bZK+aerbbis)rl?aBXqh_OQU_1nRnW&CnhE)?C3%v_4VFBYOi=X zv!O`Sx^gl`-G7-3PhSajb==KAP<{w#sa?ZL){qQQNZYZiIkCJ^4 zulAd2&VK+94ZhtiI1~*C<>xDIhqSaYZ!$GB3uB9!Vjp*)G;YvpC z!-!-XUe{4xlcnSdsSHfWjpjCEO!XPy>`i2u8=lV zBu{klW58>+pfcS$?x3nNqx+%dgZp86PG<1uytt3<>!p5x&NkwUk zp6GWf1vAbEebM&)a<;|9c9@wqV~`{gV!zB+HDF@KzR>gKR7I|QK1Xts_@$Y~^8T}? zL(BUG>TQK>&+v3L7XP}09uMs!m)cud9g~8@qzh;&J@lG+~9PXBoA(4IVyY9PuE~kyO`cuW;k|Q9swwLb!TE$ z7Bel6=gSYSAhDQycO?VCLiSWXl)&Jdk4Du}wMe*ibg#7Aipg|gim^@4$@i4d-!KEk zBXa<&?ZIZftB^Mu-p$shiFEld@qV#j#zoVXpqADg(tgZEK*Keu6{MZ6 z{YQ(|GtC1U)s!2}YiVP2Y!|bvIZ&h*PV*H;oxY#Hv+)cc z=UZo@-xMA++6k{055dR6B|$Q}MEH;#H&a*BDI>2A&Q{8eh%$U`BoQ>}rnzQ%C{sfU zNAgsTVMSHgS@g`g!{5FEkf2R9q18 zti~v(=psDPCjZ((U-K+0^!0q0<6TEfxbu=d#q zkD?Y^x?$TX{*xgHyGUHAPE93H+l#)0Alf@rRr<6O@ENoo$M#%XB_p;UvNV`*{Vn}C7QDliA?B-TBltV% zj@xGs#6`HcX^-B*xuN(h`8-WiXkJ#QiMg=vZ|a-?L0E;Iqr zG?mflk*Jmy(r~XwYf|v=x8=<}xJ$Byab-S*S*oF>B~VMk(c|JDaD{M@VDY8Ekdgfj z6dUZ>_c+h?8CB6cR8fszhH<0iVPK<0FnkW|JXADnc=|POKg{nAswot=Kqph}I+jCw z5I|`nhuDm@3^#Dl664a%j_oA)Q!RFF24;G+SQ zd=RoXVPjnhsQRs#_l8Va)kWcro7wp&c>XQX`DukIQJS|29gT3K^!PWV(Gbj5!?>n} zlQ~k7+&Lv}2S&`6I__#QohEVVUBxAcdm;H88}ST25kNVs1QX<@FD`@h+Cjg@ zoAUSBzX_zfAYe>8`gNAtlp~;LCpQ*a$>NgXqX$k3wP_<>-cVhAhN`PXm~VdR2tVTK z(@T8t`s^%XNtgkM?4}i*#t4IqLo;&hgpahbW(So-3S=k$5gLgTyOQvhs*P#Sto5uL zFoBDG^FzXlMt0ihK4c#1ArA);xv%1*@WZ~GmkZvFj*yh{5U^BLiUtpCt3yJ6B@K(6QJE9Q+`o&pC{JMfX2% zHsQ8U08LcbcD*^Yf`#B43R`TDRE1V-mji9iz-*;xTJ}*ri+y5`Pulrn@P;Lpm|X~Q z$Bxii*{y7xUK-`~wLq>;Fi4i|r&ks%_7j`HJ#Qnp3t$`lt5QyG;RR`O6B&{b@dZ_lFs<-XVBU=&3lU(rXJw3E! 
zjBc;%lu3&gO(vA4t8Om^czzjb#BmC}ui^UB`ZCbk@SO z?ZtUQ#@@$MJH}RxDH-vXY7yTTGI!KY=9YKv?&5()<3mJYT2Z&4zh2d~|7IXUl6+Ii zNXO@9!1ny^le8S}8*YAcZ|+ail;3gv_xi`K1r*MzJ3C;0B{SHeIB|jS1L+hyXJkKv zrzgsyvWIK<`*JQS!I7e1HjE~?o7L88Ppgoamxg%p4e5@$blf^I%Ei~@q~&n;hvYQ& zdLy?l`$p)ugxt;;=8^pJk4N(FUK6ZA6)<%gz4CX*zj_<0_7#(&D%G`*o82@qsknr> zp&J=_M_Y?a3Ns2(r5icJzdVuwnGNBh@|c6FV%hxtqD7t`FJl6=dZ zs5u5dO;~(enLl@Jzcww`K;_dVlvg~5bJ@^FG?lMUIj@wS>@RY-Vju06Pwc270uNJ@ zG{w}_QB8%v9!r$6Lmle?d;NYX_De^V{MQ3vm_|(89cO${^vIKg46lrA01)AS%!!MS zYf;9C<6F=i53i;b!gB*zn}*)ykcO|}4ig9gG+@>t!WKH|OSJeQ*auXPb$|s?7;F@; z*U9pu(S9%;p9*$1r7QdLItS%_;**PzsqjT&C;kgLGE&~JIPdZoFIG!*5tgg2%VQs~hcR;~ zu(1(2eZ@q4R#acBh!2_5-28<@69mIO^>bY0j(v3ou}HQqg`S)HWZ01CPd)e<#HAwMyxLR@jeq*Kujh+QWE} zB_T0k z^@XxHwOo7UYh8a0913#G#7&R|UcbNM;Paz;cxWXwr#Nc29*068wg;LBcvMcii(OQ3avxLi~xbXy}&-Y47O8Or%%ggJpsn^!`4Hxx!HHrp*J4LZX4Rkf| zt~n47acNCcP_*lA0p_JH$l1eL0tH;rM{1X3!)c+Kt4pECl&^&4BH1Fz$gM08Vc*L0H3S)Lhs@zXI+fl-2ixW6aY)$GO13&&c-$+qM`%by2I!%djX}|DlPVro zN>99zWKhg;e$Vwgh7H>bHKQfTCanw}OMgLhHwM7sT#2B}#iNWKI-`Z$AJm;qO5fcL zwI(=LovUh;K^dlswu*DXIlh(&xX4IPyieELOU=+>p+umI_w+p}8XLPR{2WGNV}yLk z8)_yJ-sHU9MJ$Noc(_O^ka>i$5M(}JjSM4Y!Fkq|*)$}4V@rf(ei5;5@5GOGH`{SqMW=~iN=TDh#P;f) zU*8@PH`a9>aj@dvcl-O#9Bic|W6n6U$YI+}X?iD(b!!K{t&L|LQpAL*^cBcPG8_kT zH0O{Q>_Rrbb@=h!BaWwszkDfAUtwT9@H_;w&IY`)KFo+#`Ed4n1wzhQa>gawx zn<&SB)nGqARSGWG?eh0IfoU?;pj>(`-qD;X!sU8 z$I1pzhthj}O(tV-mz?$oRB5m4K@jx`l>P$-O&EwNO<)SR=d(Y)SlI|iNpSSzIYS3rMm!V)K2lyQl3gh$C zv9~x0iT!bFlVtGp$#z=LhF%&`&@}vdVWR+KJz}9aq!85pqe`<`Xe!Im`nMB##XIIW z?&yMxR7Yk@-t=F_n^*Zh4|wzKJp>no2aaZo=8>{9npUy)2Hwq0wbmtOvJnTolSZDQ zyn6ZkZlH_sf(1J!qn|Fpmo zwkgz!(46NX3}4!)Q16o8rGvaYYOy53kQ+T|`Or7rnb;AXe$9}W6Cev|;6x1*<*KJ9 zk#-tq-Hew^m$|xjP4iz@ZZ1)HyZ*k6nb`l?-20AevCkH0oW1jUrt;FIho8LBslUi% zyYYu#CNaGFBER@VXJZ`jg}hcyS4$t7;YUDMW#Y0aPS|>*u$`PF0*N8~&^<`%r;88s zRj|X&_R3>?=x7ak{q&ppZE=JjFXQQ!bC?OiU^WAcwf;dKVJ9R z4kzMnx^-j^`8{ITLlVJmh9ALp5PL}UJ|Cj+L#lV{Sg`hI)TtM``Bsnk{dR&Q2ah|! 
z48McUNk(X_j$9$V$9HFWX<}Da>BbAof(&f)JczC66)5 zNJ$5IOgmPdcAZ`J`t@$Uq2wb}g$^Axht`PR6 z;^2%qf!{hpB@&6}JqWEfHg0Cn_&gq2WFoo1 zhlm?lJS*YuhN>F*){258O{5wT9x9OwWl5@`V_y^FCIJvBFNoUAPRyP50y z=hRj+(L>j5li0+9U2REnOm^J75?}SXz&e6IDmi&%bw`|MvN(@6Pz&5*h1%kh5Kl=1 za;*KYxKOAVCeA27(uWacv1<7he=rdBO^g%XC)oFe6NU1RBB1d-M#5lRFOP4DBCu+5 z^zlA$FQ`CvR?~|#?_p*xiX)Qcv4@0IV|JnM^O`Rzb=nWZs(uYdcOd=iY!N&8r+a^~ zyD7(}tyl*kJizVf=x9hN3g&xwdpC&KE+aM$Fd%?>)r+OdBTd3~vW)B`c;yG3aa4|3 zKb!swzN0uvhB9A1rY|&}d?BJID%W1!^>oJ!#nk8Zw&ap`w7CkCW#;e2or!{7tl*{D z)&-N^UOTLq%LsJa3&W`*)YR0pDV1)zCNumh(JaxY%=D~4BX&cB1V!1~GN|<*7PJ zlx2~VTW#@48G`f6;68bXVej3$(TJj_dDKrW-4641bt!H9lonM_p)^riR(yaPZ2iZ&Ne6z4d9#)?IY@#F_2z7-_lWnV0pCy$v{AyoR~a(g@H)ED>C|j| zA#~dEU#RY@V0Nj4TO7Z>{2wO&?Z^5*yt}GW*==@E;DTF@_UL^-3*eeiUY3(&>*AV5 zrYyViAx!Hi{!x{oAP}EHip|I8;SGOlQJdC+n{HVAVR8l9Y5H4oTG?JmWPTYe+KUe{ zxRb-3X^)7jtjdpb{}QD1Euq5eAXyfIM=|WWkP|KF*vQ*yWN>ZM`+I>|5xbTqn*w z$%iQGKPPP$emA({J-|)0JKKNpjFAV5b+^%S-BN}9rCDZFT?0x)G^|F=$@b0_lDat6 znxfyjU-Wil>%`;rzJK*MuWsH5EXm1cXLs8kgSvhP;{FtG})-?dBBaI{f=R z9ufZ#>Hqd)@p)idqE|1L{&wVH+|&MXmAgv%@T=r1oqj#uG$CI9hjPQD5v1b5h2@iG z9fcJy1CZUC1NN`FZyRFSGS>=hrZ$j*(1BbwJQ-Ot8FLa;&c-F{`A#hTI zu1*3iACZh=WmCSW)X>96Zy>P7;0>esgaM;5n+DQYmo~EKH2MU@18x30Nbn20C|)-Q zgn9+bfSpb$S4h_c`kaEA(F;Q%ex0kMTDMcxPT{okr(y#Uv@AAygouZNU4Ve#Dyjwh z^|V`0qs^?-CSy&1wYYB&kv`+3bSWyEJBv4BApV+5ZQb>k^?#^DCt z6gbf<6(teziE6NDPnUv$cY2kHzdC;6H#aPPH=F6)a_7kc8Rhl;_0}#3P}D`~Bnsn( z1!s{+7A@izTFRyEXH!!n~1J1M}bkdQ&rh7$I z7Z8vN`CygVzpY1qv7&}mAkDf|pV?eju{aki!SF{0I$rB13g>2xjuo7x5=fc(AuNyI z1KPf>hp|N>Iv^#ua(qnJG67; zl%MH;$=p9Z`S)8o1_75RoPYU*{&IE;2@7p4_j7S^*%YS}jv9`=s|VTkaa#9uz6)f^E3xOa3B|sI6F^&f)H{AB)uHz_XbVO9bC%?~#Y)hg} zJ$I~y{>5W3&+Xr6YN{aCmh8{QWfR75&#}sLyC|gl&|c zF2ZSz<#2N%nNn!zA{_Ja?e;XH(4@SLSe8#ThTWy8+z?Ev`BOKqNHiHu?Yoh5UM0vG zA0&yRd+ziu`h~j!1mL8Y`x&n~vW|-#C69UW8*~ylBO9CK)K$@pO#_$p{FvOdW*ScN zDna)Fg)Sb6?Cv*es_ZitjYLM{(hs3O+}JV+?(tVz-G-(3j&&v$PJM62f5$*Jrijlx z1}EE&XT*P8HCXvmgQFVP9{xx>MAc1f{(tPfd03KZ-#0$fOf{30&6ruP%~F=8X63Gs zQ|jcJD{2a*<(ilaDkylH)1qch<-V5Wh6|dSf&!JPxsofTB9$Tu0*WF6qQ9&AdGBZL z<9Vl!_qgBRpYMMhdT?=`=k;AapYL`$zJF8b`11X@@BtjKsRFezj*YNjX*ySYDa}j2A zWkxt@7vqVTW)yeUC>-Mr<2U9{jv~QK`YQU44X#c5G{d zNcs>36zYlC|6NkvmF-)kz~}_@av9kMzWcwo_P=j@T&Z=~3`SXUy+wyfMBricBnuux z3+o_K9)950IJOgahhv&K#TM~THz31XTNX{Y3q0S&BheXur=ri;TQ*`xIkeqvMo`M6 zn(6|6`p^8oRjB_`>}!X>}w201ITWbY6&Eb;mavK>+3mqATx(groDhbOxZNQ=o5d#?fH95(F0szc)#a`im zKNMi|Phszj7zt+Z51&Gm6wt@btuIz8+Ezqu?)=3+9fZ{`&VS9F`k!Z&bn<(j`B*pb z1J!hxZPKS91y050Z(HM+O~g~i|ACyn+4jk&cWCK%+bOhSWTxY3`~=YXg9@R!Z_KmK&_!nXk9*3s@X z*K>IH?%l!Z=TkL*3rEJ{KkJIF$JF_Cag1; zU$z6l?<)G!ZG5sy!}R&OluCQCRs$RxWtCpm0GZD){en7w2GV?Y^*-=)qhN#FKIAp>d{#J2w=lUWex{Oamz@6BZ(u$Wa|zKabN14c{wE?K^m0nEs- zqG%(2JME4CL~J+3XIx5%?6YTaww`Qec=o;T}@Rra~G$&U-I!| zW&Gkh%d6)_t0n8Rn*JkFt+1-U%lUrpZT&CI^{iYD;F}|m+v2doe^Anhlr{+<+5T8D zFqlKJpdSyTOslD@cip1BrPI?1K#j#qvkEX;(zKibKuQ~)$|!g##p&f?QF`0CzYlJI z#(4ef=MMk|0kh3K9~$;`qLPpXOSAwbEQ>$I5`fW!KtNRly?R!(i;Y>kiD0zWIV^md z5})8|_Mc#Gy=p){VdGg$vMwMz#f{>1m!1QE4RiZT%D(*RV)E7)qX@W}S+EDnhV4-J zOVr=C8-036tLoRXTF*o?a-s8qtzYoYgFn~*q%yWVxRRjRCLX&PNWDvq3~&1TJRk6w zxbMGRY4MYBTPbDl8QIdv!7tdAcUE#&Us(%(>fk~1$49V%9degHtJ)C9(}~*SjJtT% zuZ%v^?FxMN*}ffDN3Dfw@tTOuH;J4>82t4iQK7}6syPDsB$+@o0MSma*)>5sc zQU?Pse9G5gv4Qk9J{@0u?Xxkg#UJI?7=OtAxs%Nc4IA+F9~mcFZhltZpV=My^m4fa z*Lu@-p@!Z|3=+RC<)~X=UIYkp*h4b`pONCE;IA2vd5H?ElwB2+A1Y#9XpD4H;g<-p z;f0G>L)@gHu(V7`b#*nS4XAeP4G=OgED{e$d$?r$J!%AgKPl(Tv&~gb;y2@m5u20s zo9Uj3dOZG<%?pvRu74j^`9D?Y>iuVfQD%ejzd3Qi-Kgx2ipLoMjvoXA#mKzkH)(q{ zHG_#Qa3~*6Zy)0F`9)WbOnjCjv|s#sQ_2Ud-`nn)+#Vj!F_yb{aAN~t?~~$NhSl-r 
zeGzf>iP>|Xw-3NR6t^12V2M;;w?spr_lVZS+Snddu{~x^y=5RyzCs&!64F!TPsbR0z)Wo`fi( zAIXc}myzK_{3ZVDnLMk{;-Vz3(i_GkclW}sbF0TB0nuTOkQFm?&S3;UIo2(9>u10B z^|!jvi~gG`6&u%+!)oVY7RrKl`RK|P^gm_se|c75VLzP@H>DA}B^{dHu*usgC%^xKA@j7mI0GL0LBaEa-SP_uWdk`4f{U zzr$ZRf@?o)xj=G8>wl*gcWm|j7jmIrdWQa%Hl-0A(&u&Wi+In+9U!Byu`Cw(v%c>$ zR`{RJCQ0r`Af&kU`S9PB#eUZC|Mx$xxdZ4O;|?j7eikcy4v_zIRl2_e@L1_vyOu8; z(C4@R%v}*!yt^-wr}8<^`7ej~w`b6L@YC7tIY$4_&K}|pEFOYj62dhx=yMM29ZNwhZm z*vuqvQ!OD4o>BFA8+VPl_31S%3YDV13BZE8hYj52;Zp&W=81SCt zlz{MX8fcZ6oGjzL21P%Xu_s%C(ePvmUN(;91ty9~yqt|`8PNJGz9gE1l*GuZTQu%E zL-hamN+XWk8%p{DobiSaT9sT9wSgiUrR5A{g>;=C-z+6&Fm4a7$MVopv{bCC0@UP{ zZT;6C|60tyJ{4%z`xlU?GH!6f@|u-uJ}%rgab+zSh#-xm?Y{~7>m}^*Joj#*xr5gw z?0D18+x2u_ZCn&KBCF%?#og7|%)>_~$|@+svJXn&N?VhEqw4Scwvxps@zI;3EocW( zQQ~6gwr`5wP%;w&!W|>0ZDbja)V6TR-0V>%EQF=|@TSyOP%!78-XC6r5ijO6+ZEO+78%Wi*NZl>_=Y#qw3 z6d!t(-7EaLAd6X$l7WUV1k?MmbR%j}+yh)o zElwe|wdOpY=@N$0(^Y80{8JuHJu2fe1&npo_wc1qYzB2SkFC=4CsL1(~e8(T%M+~wUhFDY7CAKDEqXU&BfzBQ8408m5 z3rb|wet4ZM{tSa^*u)sQAaj`5=gPuS7Xe^8a-yMw#}c5y~-DSytVmF?**gqvnvS-02#Gdm^>^* z!NF?~OtUl;EX;Pw`!^<^l)h)JE_^dc)WHx5ujp=w3IbWJUPbBb8`+;M6DyNCs- zi`*WJs+t9d2aXF;*;_K%2Ij@k`XcM#j#!;;29FbGJ)K%8?t#3DM>0n`H0f*ti{4Aqj& zwC>Aeej3{>_|P z!5B86YTp&LnuDm89g*f?_G&`e5k2SR5?RdX2%tCTvxoK&F`x+4Ozg&n|HY0Hn#2Up z*$45|BZOLHQLX7AZJ@9^6lElvd_o9bI+3w}z|)_U%my4CytI%w$YIIeNrr{I z61!&FI+@mL>NRKH$A6C6J=ad9V};KK1e(7vm8dMOXu##N%CGU$GoNXy|ZjO z;E<00-l+?-@gOHqE~>4Hg?Q?KexEs$9T4L}engY$f@o2tnrFeVQCxKR))DG}gldO! zT)EQx7;TUg8Ba*wi`&7Jb8mzl8t2wxib6`|+NxBJ)fl5A*vO&Q)CW3FCh*_@%`Icb zsSjqFZ`z$f7pK}Ex;qH#N+Uqf@BzOQ z-rGA^fdi|C_byTLof#Q(%0fvW`i%Q+Psn~B&)l`d%#SjgHhOq_8%-t%hjK8hv^oS> z#GxIYOjr(t&Tlj(%1+X0?Wp}UVZ7)v(?ITRCHe6Fv43-4{{!tAmp$u(yNQ&(ku5ra zHKsOSL)+u0)qtvVBe*@~T12muIFS^7#egmy4hi8}AIpMO z8GCEg+NDaKn96brpw|Za;A*|T&M?ccI)M=DQ7=mkjT)PL_ojyWmUf17T%+staX)MeWLy6Gb z3v4=DyXE>gOngr^jq(F3eG5ilgs9M&X)pTbbJld7%-_i@;emv*7?@ZFbbPAsXnF}= z5{J^k(p$URoD!p*zB-z%?5gR?Iqt5Pla7PbZ}EMdkWj_Dme4G_J}R=w;)u-XLG%K0 z!akc#ncpjeSDJ7=Uw)$AP|X-z7lKR`+E;5MA|@}nUGRRyzRGS2)_CZF*DVh5=w+uq zP&j6w>D2TOL}86F#4`g7snEh#cSm!sxPsOh_zpd4cDyOSB&UaCs6)?IZ@=F?D_PCwgfx(0D&i+4 zN|suV>6At5d~wM)0rlNKFs~h+rFSB{~ITGD<*nLg7Z5 zHMTc&FOb>Vbqx0fEW0f!eF}))S4!Rw8C6LGgx|!Ax;Zp_77ZLt<#G-Jrt^ZL@V5tt zeFf{Oe+pc?p39A|_FkMd^jHDR7VKmQ8}jqqm7^kD!r@++UhW`MtK`mV)*ii&Swdxa z>xlLZ)T7wK(5GC8eL?lK-QH$h;vuF41Sp+cdDI>h7VM+JjRyX&-&X2mHjA z49AuMNXhCUV4Cf@IiFI)ubBMf^Q_YZGXPDErL?{vzM>>pToxHZNY0}PPyI-z-nwJQ zA;MdPan{GTCnsw*fXX0g0D^=imU_ctR+eOzz9J)xx!bvZ|&!^^sb+a^;IODl4|Qw#7TjrMAu2R*rfXBZVGbMY<<$r3 z=dj8{N{B`4C+u6iTI{F=0r(A;z-_vJah`N3C-#) zSw`Vs()&^^euj$^ashNtS~@@EA5kggD*&FZF=!= z)i`*9#wl0XZ-ldDuevYcDqN{)UG6?_axy4g&gG$+SAMERAB8pJM^97JysHZC(>mVB zR$4+Cfmhds6iPY;oO1NT%Q%U370X@TMcN)bn1+TEh_cZNmE)8mx8av7&D_CPngSw%_J6YW z94L|S72+R8_-{!cw4yt#Bv!w+n$a5s?yJHJDIBMZjem~bK)wD$4WrYuE) z6jneBXP;|#(A(-8^*7RlxQQxaYNn_`B73&)goP5y``GR_jk)ppMHAe_TLYay6@whN zY3i>6kngaiud_v+LX@=CG87VPWc27jk#{+uSfX-K%9a7NKF%1wKbjmJGS$GqR4}m7H?kU+ zp5yqy$NPqs)&l%tG2`z?mrEn|ZdWtlQfH7u#;M6m6PN$F#*3|ga=%LU z8E^;gRA=N32RfMf(MgJTDCX8HY zMaqfqfnH6W_cwOkRO*&9ZYQ6r?uL02%=L&|0zVvqCstotD9O2mTrTvoLBaQq9Fwl* zH%pwMHd3P!y8QY8t&Jkw1gu^gU$UAJ171hNG_GLu$X>Ez+f~M>r8XBnPu)C}J|%xQ zeoIbVOMD@e$*aFCTH97aBGHXuE5CKITS&nM!u#iobD*tk*g0j&Fcy+<*sp!jElXSD ztcsR(Gs?Mc3W~JX%hbg@a=p$)H4)e(w+1g=9an;lFEygQaCM7sYp9oIWbGNRl6Kv& zV$OWo^D!e-=u^nW&L(wnb~oqVxFx5O6KcgVyQ%LOUh&KPs>1#`Ey``(cp?RKOd=U#05tmY;Gx|{y3ODc%}D8sJRgmLfE$QmNsTI#a*u=UDRNq%{UW5X=1tD? 
zhA6kx-UcARKtGikCPP>vn>bR8YHg>s8Lnb5MnZ~Ajvvj~WNk8@zayWM*^s+8oj;o1 zD&AC`QE6!oPu(tlD6L9O-jVj8M`Q26nq0hR)Vec5`?XP>tHmbbmYk-t5~6MC+S}H7 zkqGmSXq4sHjlsT|6$16h*Sokmv1AhOH`(sxssp&2oriKDmFE$Lh>4xA8Lv|*>IPi@ z7p^n?ndv_TY7HvVW~gITy&QJKLFn?kvv9TT)s*pFyejBYHxV z>M#BRdz4Y>5vlD)5&x0l|0r-sa@z1%G_+?`;&?)#!*Z zsgja982Mv|s=8{l^yxW^FkZvY@mF5KI&LS_=0azNw*AEKkWYMc)NjX(4>PpO2i^Jb zZO|TO{uIhtR9WB*y3NZC2bD&ugygbNOro^niA#89C_2)jO$T2cgULBc^&gnUDGhl$ z&3(EE{izcu0NsQ{hcTBg<$zu-o-g(f&=tNI@UBca(ee~c&^(FM6_HCZ4{Qk9qs)8V zA&&)?aK$L#8eE-fOr~gvIvfGjkA+Cw;m&7fe>|?X35+KPG+5x zP02MhX_nw4{_xSLNOCSBH{Qz^lmmo1)J`-zMF8LUu-|bAhO0=3Xu5~q!Bo$3>dp-t zb)u2N65`}dVftN_(~vswAhTGqni^aOaFPgnRnX}bUdcl4D|b-p^l@cYWdgG4?aqnE zP|1^My{haCXF+AFH?NixgPoEHI9eIYh0+yUI19jRwbshoE!u=$&C(X(F9SnPKdUjT z%wc!7FA5=S9ZhpBmF0=bFg8nIR;=D5Hz;!9?bB7N}7UAuf^+em@H_#?W zUgW%oYnx^9uL*aI9aW;1hcg6-!SlTsqd6(-hL<;>Pa5cn2M@|EK*}2owlCYir07jo@4ez!XbfrfimCG|(3h9ti zeq2*%1R))i#qNmr$)wSG2}LgnrJ_g6hK90uGq5@Pj5xZmRVlo{EI)lZPi=$xhs&Kk z{^gBZkt;=AI*8@CvSh=I)Zm)Ya*!U$z9f^3Gb}r1R25l-J4yF1sW>^N(&=-4q5Mvb zqnm4!Zd7AR?wl`7w0wW^+$`bX;Ia@=TwrO|Tyh)aIXV!L8Lxsr-<|0_wnREhinJ)2 zozuT7aT=Q7l)CFw48&FCgpih7k-43gMIH=SznsQiQuCnVc8l?u-MVLo4ew=1rg~;@ z=IqG|%fj625?|%08^ehO@eGfBohB0Gtce!by~%pWu|XN%Uv*~$tf${LmA*PMxP4pWKloiNA8yV=g*puv^9_u6d*_i(pg(f&1=b&3Yr+&9xSPz|ST zJe(49y1AcZ)FP(_@k zI+ly-)<8U7&x}}fyc83TvUwS2Q*ocOtA71T)5`U`0nN6fZurYSTOMCYqgjRJHLC+E z)DuMQzD8b<^Gbzk^iD~L!Ld1^dICE1 zy`BMk&M2J3tUCZuD6qqwaQxHL!4{{6jrsVRsPgT$VL&Jqc6Rg)SMrp=u2r{bY`%W2 zNw+kX8F<^ft$VKh@t1s<^wlYao$zjGF~VHnAn?XX*gSdBHT?VIA2KlfblwbJysk4Ag*C3MdZ{gW-vG*ZA$H=;*r=f!^OUr$ zZxEcgp2gKyOVf3;f;1yw?mnaB`6rtwC$(~)qKe6oHX1D~Re~m``^PNF`_l()!YACf zDnnZSlnnV;uzd_XQV?cl5G8ekp&mFpRVcp9OnEVNxLCzZwDNOtPRy*tL>Q_}sWmES z0VpUjDVBdH(qU8Du{`QXEeZSpRm}nG4S_PMm!uorTr>(3G?m$m%nv8rYyC;PK+D4> zb?B0BSEZ-Mzh(h6fM`K|esHWXbrggg1yEldFzf8>+ZAsrzQ8T~lRRI#ACR=85Kr&ew7s?3iVlWO*OrG}3z00)jUDI!o#7x}8o%dtd z?5$m?g8Q@X%^90OCxb`G{bjYZIfPl;KG;77TEiY#PB&$wR9`(GVG20EiqOH#kQ&2s zHk6%U`eD_)N9lv~!sC;p0DJ;q3bBbxdXJc%`TqfD= zrs=xov_EWv#q$x0RtuozALlsdHH{&g810%FCc+gWui&OkZ$~1CfwdqT_o3U>V7u3c^9%tMw z?o0{C4W(_{Af{;1@;gMMx*sPZh<(Q)J$yMPek19dUAKMX0sKcwfa>(S! 
zImt9vZux6MYmt~KlN6)}HG^oLQd9K{*207Kwc=L_qR_N5h4sXVMI)GqMoR_qHv?069ogv3#8-e8|dP_QhJxUA(T%NuExj*09{dHUu<0tv*l!Y6EkZP{){6o1Wo0TemC8- zQt~VrI^IpqZf5SZ&6(YGAljuWaI|en@*CANxl_Vyz>rwGmgUyoipI6ySL58t2~*`u z_RE;}!{t9&zW_Z7d*clbodFrAogZRq?yf76I<>G~J<>93eMd|)?erF$q-tJ#b>K0T*VEsfe><5T6BJ^KFPO!mWQkT)*$+H%U2YW@1I~S2_TS>cl!^RmFkK#f;Gy1rXsS4+Li63L& zd}-oY+YK0oL8V?H;WJInWeo>Y)mv>gO-pAs;G;?l4{w^< zVL8rR3}^h^{BK%@ctc*|hkL|WkDWG?6(4#R8; zF@LTPdp6`wbEsywKh9K?r(U`~Vn%FnObhc289(GoX>8VqW!W=dMIYt3g!lKRYMFlGmd zXv%0G2DWb~D$G2J4U{BHUqlN~8XkW(vMF0BZoW-Gd?VLJ#!71TeHe_~O&X0k22{>H zl)ANJe!*d_YDlLAT0TA1V*lJDsAf`LprGLQA$_n-4eFbC))x2JT7cC#4o&7EK^Tjx zQJzJjx-Ns3%!mRv1sS%Vmg0j+7RPRsc)7w{bu4E>R8x0qi-|C62F z`c7QxVLEE1f+U~f4DKLMQA6S9%#EStwd17>QsI5i#=$q$#}5VHJc`SI?=^TVF!}%( zVRH)r4z0B`3T#;;c`o9eH3F>sq6N^a-;=mM_*!;BobKD#HzQ;+-!QT#d8ep25?1gm zbLrxc@r~>Tq_Wy|Ll{N7qu4O$6BM!qjJRL#*NrCDb}m>u+k6afmV~vp?t1#IJtITKm z9lc7a5{NzqJatYP-I;UPOf5~PeHa|l^HAHYuXa=US+KkxZn9?RyJfGIGDA;#txG#= z5SvlwYt{m2Kru_D?{*}~LZdJreap+Th;LqSE?or z9ly!BrPP%4Txz#pD``up$s`-6YS2WAZepM`mrK68XRwJT7cEQ>itTCtMnB}C4#{d6 zeR~r{^aKDB&%pzw_|Jm}7?4Lv+0M2XHM+{l)F3%==OFCu5^*X=8S}I8_$GK zrT8Y%dXRwac^V=}lD(NyYa<2p>zEbQQZCx-$Tz5BXzdno&F)SVdC0u03pO`uv{j$Q zN2UGv(kk?fG;bA+V^vcn$;v-5$~9QBDz=^CNB( zWx|Jb6*E)@TPX~}N5L0iNTM7RBYY7n?4m}q z!A8yEsDSq+y`3^4?@~5NYu!9HjyLZ#X~f3>*gw3zq}g5Y2sX5ufbr!3m#i4|HiHVJ zwEYn~rg_*?ECI4&Q&!MnwMNsET52(~4u7HNKdYpM>(g!CE5?a)nPDCkR~$&v6)h)B z9p&-yXOu1HnQ6C_V`?!e+LgGS5ZCS0k_7RGv48q-BI)}e>Bci?7~5rDZjG@#2c(ou zH-A_P#jb;pim^|wUUAD|)7;>3+5!^I!B4KxX^rKMuc4D^N9Ci3Qc9Es%SEDnNBKiT z?5r|S*%!wQP2a!EO|su-XBn;HN?WJNHGW$b%qW>r{Mqs6Z-6r;-AEeA2#(^MhsbLNpM~n;N{U?K>VIM>@-RVtIfeU*8(cE zB14QdTb#e0yzUm)iMNtl&}cX;h%8LAXpb8JgTX24o91?tvu(=tVfo$zKg5^Mgp7*M zceOS=ZC$wK9fG@1RSo??`;1wXkxV4-RM(nmu8i`fZuU=bd%O-0z4veS5L!0Edv0^s!>*LTa8MM8Zc1DKK)aztgR9K@bQfq2Id@M5C290pIe8Q}Ky)oP<#~+{A_?56mm@Ri_ z(Pv?O*s+{>_K*<1+d(PXX@bU`p?()pz;8{-ZhrS7a);l8c?a@(_B}*E|BQ@oV zfQa(}RM@;uIp|Ah7S~nV-qYmC&|GacPg}yDilmnDh0qHxfp!`n`S9?Y@01m+qSy}T zc0;&@V)|VoVVAV=pS=Qi$}?I*hiI(6EG?X?$^ArT#(GewE5d2A4$bm1#-NU@*}6Lu zPd&|bY54vm5;x|}Nu3Cc#FXviU}xaG zrhp|X7e)#vOE*g<9p2qEP?U8+i)q}zh@YgF%yRf#6?=jlYX-?Z3%kBNF)ka~t|Xv8bb zMO43@Otz^ZM|5t8Z%FJH5GRGCVQ=q?mg4qric0KSCaK};9cBX!%yyj0`AAraCNGO9Y&1O24Z$Y7Tn%3Jhn!GhCFW=;DOK2T zk5!I?v`U{t&b~rPmT~FORDmIR%mqC96MRd)HTE6h20YrgglzsyEy>bZLra2c-pIw` zmNb>#EpuDH1vE~Qmg7;mhRK2}nMW@~gS|)!asz4Eq_jPuVxB&GkItl&tD9Wghrl{X zMi^v%LQa+HIvcc4U#y5$c1aQopwy0-*CxDDU`;X=*+`?~ERYdZ=vAF)Wutt|r(FWwrY z6>0vz?gh4eCy{FYsxR>G8#sYE>d>pN7=PahZ`l+O`&(abSGj&Q9B#6>rN{^k0HT@P zK&+#-rj+3N)eK-@4u4+wr=h=JG$;cEe*cf=MqMI}Y)=j8Jok*(#DbZ)9bTTyDuHh_ z=?v}d!juGYgic;FG<)mhFU=k2xIcK42$92zPg`tTqp&qHQuh9KsAbK$nYJ_pTkw~w zL_YUY?rPnUI~4f!jw{c+xS*M|b+=u!c_B`a*L$KI@8;M=Okzu&iR3(<5%UA zf9gE?QUj$3h8C@PiNX-&i$w)omG37*|D~^geiGp&17hlyK&*&z;P?Ps6_aovg9kYO z+-_=*477a6B^jJpRcuEVNPR4A+k(&y1BU+VT%fBBzE$6!S=JXp2|HM=f+eRWsd69(Am5L>fSTq6&bsbPuETxWJOD$@( zF1)nRzaKQs2f!?I*yAvT&~C~odw9cxLop=rCjEvG+nt^V59SQ5NQlPvi|LF zt|VZ3^dA#Wc~3E0dE3yQ^xS%St-z}vC`jd5j9fa-z0#pN@Qj0cktnN>z|z-Brow0l_K!7T2JGg2{@l zisDbRW4pB%_m-c5o)#?FSHAJ0tJ_ouWU2l=h zj;IsEw}5_;tLbvO_H4J{fcmc@Awb{k)Dx~1Rk<~&1H@C1THTT~sYqVGN@BGsCefl9Xb{cOzNCm`4I^f~WYp9l&0Q8D0F~knYH9|w}wLHmT z3Ot(n*RDh|0Rd#bWXhgcV#;gND7;>1IN_e{PkhANf0udAr`q(XFRR!^OVS*%$5cU7 zyl*!1b-fGF@@mDQ$EXMdWfEU+U1k8_eyE&D*ZF%?WF}|;1?$uFT*vt?yyv^i5(4uA z+$VMuK-shhfIInOb`8=ud|$yOao?><8WA%<@f)C~uyR)tC#`q7j)QN(+~?~(`CYn2 z@dw}k*NSNY$k1XpUVrQx9#Ar)*S>Gq(jvb4$z*&$1diNdvzaS83v4cGNnSkg? 
zeI4<1$p9t4l36eva^Z!`Kv<3RFD`s52!a1sGzu!GOg7ge&3#_l3i-13e}R+sSHGQV ze;2O&7glWIfYu|9^Q|t=ZM#1Ez5kDY5upSYVlkfJo+Nk7cHA~CrAIJ`f3^U46kT(8 zij06D(Pz<$0vX^)GqOVp{;yx5#5cVsoAvFhHFwc)nNgQji(gxn`yol!*5tQ9Y*Kwe z59dhlK$4(>+QD5;rsD@$}J! zy& zYseJdX ztNk<+CYl^@I7O3O#&j8y8%n-_mx8E7Y}a-7FQY^WKC?xup52G$jD@l^8ozk4VL6a3 zlT96<0~;R`3VvB4>nU7qGowADr3M&bm07n)QyMwHv)ZB%P!?8w=b{{5H206J?{-@H zR@{M9Sxs!eH?gb$%F4S>4Hb-x6Y2aY@y907QLoFktIAhvG?uYj@}y!lNhLjZHT`-< zwJ|)=!^z*uJ2nRXkA+k6BG=TUn2+8AaC5`3>gsagJTei#CsA3KrG7VG@O&4q%HHTx zirX?>-^2(I9;x5jFVch8>(JhaVZ*N254YdowoD#{jP61_qpgM8;q-+h|`E%w2ZGg z{sH_~8=1r{OlDKHl~|-)qpue#ZRh57xzet?TMbM=ky&5ylw$$8oW1~Sny6MveqX`) z&LS1{38h2{<#;HqWWf0dm4*IB=!v_dXg>QWuI@p8zJN#ZK+Pw6W&8!%P_ z>Y9M|zj(#QakqzYof?oXl#t>5btgbD4|uW+pPpe3Frm#J{+;s63)pzNC`zGH4mOw5 zx?>W*i|-atA6HiR;>pHnso2e*_W6jd?)`w)Pd!V*=NP4B*Cw}Bv!d-tH=8rzhZgz; z)}N;&&cF0wVXGPC7;`17_4&j(=bRZnRGsMmYCdy{w3ksWg$ppg`hm?gtd&viTaGy2RCT9OWCya))0R#?B)1s#YeVc!6rb-|!we0qzMtCc zWf==20gN<(n{pCdhOdZ#VSw|j1M{~0fjm8R=j~3CVcqMb+z1#No=jiNBjcJpiH^}i zAVg6=0m^U)2>^ro!vHFCIEs>l2$bejz?sH=tlUlDq z=Mw+dqTq~P8DRU&0G-#$+p!nMR&_#ur_EJAQexZg$Miq;LXK+wj)LR_e*oaqW@O+q z=HeBA7QboYAw%%>w}5#d4`^u5aQsV;nETrtkU$XQe+h){noet*%=Lo_*j@wd=Qei{ ze0X8`UnbpK0dU`Sx2tjo@nHuX%WH=!c;o|Y!_UR}`Edwi0DC-C*49GhS}(KVLpHQ- zDu4f%xou}be7^pNHgMR%S@b`o0U*bC-rA5i2vY0^{UTMOhvX&&*;_FHt*EP~n&Sk9 zg}{O9XcB{yO0m$K1J|*d45eD{e+qd2LykT&L|w{Y4E%6@4tlJmcVw`uLK`R=c&0Xw$pfah6_{lBm96_o$g_q^YPQsRD4f%3}JU@bgE zI_V=4f_gs)IFkXZd}2;8C=fldvhWep8`;Y*wqABfA2GzPO&CnSAg|42mfW5pW4<<^ zvT4FWaLmH*Qc>BZiMyTmzceTjUiEdL;TQxjr2VfK%Txm%Y-`?xFFgG_Wc<37eEVWo zyY&m$m?>=`haYF{UJpa6Sbx}Ef5h2QVu{EZI_S4aRKC)WBIp zx$?jf(0_(fOnoba58Zv00+7zEo~nDt$tnE(cb=0W9bRrss0YyUGnaf4*%})I>>3$sLo=kmb8Xt4 zE(!B;R^k9H8Y(eGNwEf+;nAB_(RMn|H<3C$cD0%VYhE#(EIs_gwoPK z0w1&v71zy}dtB($U0zvFT)Mz6yZ4zYw{qrDg{&$xMMX;#knVlN3qHkIw`^j z2V()`aTjCOa#q6rvJhZKH zyR`f8v}e8up*gGfke27@BLuA;fE6QM3?39!knRQr%1L$EA;O3@=f=KmtP*>S8y0aL z4I>eX$y@wuZFc!B1?B!;o{`v5?}$)ecJo49I%Whw83Cz|^mv(RGGe=A@lBhWt-HI8 z=o^S#3x#*h!d*pag*Jeps(Af<*IQ!fTbRZ$*_{z}1do2h_Y7XnQ$!0V&C5czNT4IO zwFRj3_^$w*9s{G(r~F}GcGrl4XAnb#!)iprZvld#cS;3U>~{&^#1GeDB`ZAMK?*^L z9Eh;s>OLr%r!OCLtLpD51`y!x&u1HWMMi{2Nu;`!i7-h__dx#9IwV+4!q0FtIZae4+-Z&h`X7o+?L5LCGS z40uh_kbpx!Z=MPE9Sw+U1Vd~v>R+MVI?(XlL?RvgvJOa`wMb(s_<5JS}pfQs8u z4B7-BM7|`AVb~<4M?J=eq1*`qoskh?A}I3KAUv)Yu%VT<5Q3fDSGFt?P%L?b&<@Hd zG(d8=<}DO?tTVm4i48&>OL9J-FI2qO!no$Z=|%-#ze|$l$(N6uLe6hTjR;oNd@Zp5 zJ0<}`!=RtYRUyG%if+Pj3V)JOcj950i%n=4g|Lb-H1%PIc;t5t1x)5hp$<;VA9!VT zv+S>~C0;j(QYs;hTwb6rPJ8UccPCXZk8*L``JicCqrhCV=$Cw9Ew(ZzNkSsD1xX4j zfQgB(zX}*mkr$^ws3X?U19H2{4C=3;hZoK9LD9B%6N`xiNAV9wy20T>kL^>{u~LJ0 zkoU#V-pmzjWvNT&lK=FiLzt0-U2wMyRo8d-jrkQ2#lgo7p?c zdId z9BPU+@Y!|~qUS!|pK0>)Rq>VAwKc>Tc*tb7TaQTeCq}J@PiLp zHrvtnOq{WLzC1^CtUe;Zx)y=rhsLrur<$T0!{uzwmKh+2ZjS|a=_C^ds9krtfy9Fx z;c?tyH*FI;!bZ=cbL*iPYw;^z6p$PGop-@T;yR2ID#5D`AaU7Bd(NxU7ul)8HSFxe ze`y^sCZYN+?i^DYPJ)#TKKofODILWCN2xs$K52&L;Xvtkr_p$@S!+!SUO+p}9F?Fzcc&Jel|(#_6j~6;-2vp{a57ZKUba z%FIgeN%gdQ45J!rHFYy4d0ww|JH>K-p*oI2M}$b?mOv~3lw*Bbb}NUM8qYUfgZ z+Kb=ib@N!cN#{~LOAmd2B+s?y6UB#u_se^7BEOhz$HI7qwOTdf)LxmNe|eC z#r`t5%tWq(Of?9bJdQ{QL?Xp!zxZzSLa@PMIFopCpD4TRp@?SzPA zr_O~`_PbTQk5i14lDw1{If^*dx&^lQ&5* zM=n0iVFD)|F;oQfVf;tTL2!4zt{_e?tzqfvDvULS;fXbjfR|iPz@4)V&vj$k+;SL* z#u$hLSWwHW0FO!2fVep80O}S*E39Mz7A5_icv#tN0|^7H0|eC`f)I`q^koAIvBwe* zmNA%SKC^LkUlfs3O)HKVn4V;%=*$9Wg!FJh5UYvGi!?NcltRAu}W7#a(DlLq@R zob+5zp0&L~0EozYfKpMd4W5^(%dmg(ctuJwjcZ>0%v;aZRmUH3+0I;*5~hV=@xyue z3Ck?KuBmH-;Fe0~?k=ZFA3o9FFZ&_`(_9RZ_24J4;mw`kE_)fo&eFoyriq_&GI(cx 
zc@`Kr;&8yryrr@#+%Z(Poc`eY$V~zqw39EgM8L23iLj-XK*P>8qU(t(0dYuYS{sgrF{pypn{ZA?tuk2T)tG)HQ6&1AKM;aL0f9PVy((aj<+)Q1y$n zh0mIw%ytpnfXLQuIIr0HSc~goGJ8i*p^o4TYB#jHrDTaHmRo?#*p9f~IWsKs7KJI_ zZ9^O)Zvj|*R1zYwj+0!f56kzH0~&MwMmIDujm$ zz!@vQQ!JKJ`q516z8`K9`D0C`^9`tBo2dos@1Q*BD{@qaKWs}iS-%@ta?pmI{E3E( zgZU0LM^o=r%by)>@HcY;OgnZiKhbMzO>b>nQ~UORc=|Z7gpsc+c#2};Xp;GCOU1M+ zTF?aJe@sjapl4qW7jcpme!L1$DePMbD^vdTKi>k5hKc=YzE?-UrlHr)5IX<2!IXFJ z{fjsT4xD&i7>bF&$@kW(x>;2K<<<6m;1XRv+`sHoGlH@o4mun1$1NKFbUpwf_Uh&SW z`k6*ezPuR-VL`!{4aiQ7MJgpxKbgWH7RVr(2O+o(-+sWNB)O%#^;8DJSi3{E0CDK6 z+!x5W&(#07)s#_K03_)LTMH$$n=8vbkm3pVD?*P0v$FmxtB>#~hW$|C@!dPI||9`z_A1JjQ{1= zlu2@C02}4;nH;>68okj%h+xe%$e#Ax9LB#+<>U z?r&iq7ObM1G)CYHNT_A-sb5y@E_x5GQRXwIBH*(|3Q}r&PLbAein%2=YSAUX$HX97dU`!z!ApnIKf1jmHa=zH&}1_$rVH!O z0hs>YLzG6jN{<@BfoRYYaDyHf(8zjF{6zqy5k zItDyK;CzE=zp%OT8=I2*wqr8lnl-ijx677X=5%_>mHSkRmX%lT{U1*a>7;nLkXbFc z@5Zt@GM&$^%@H^k*R$QKZ~xCXD0>~?6ZTODE`qwUneSPT3FF4Ub6$++Y1I;%&sBQN zrN&>0w{81}A$Ffq9Z!;0?_L@6c%^N1{I~7cq^Yv+}JkD1~v;_&*NvDaj*%*Z3n9Vz|&?72a?Z(Tce$Y!rU;qHFIgXm|v4b z_-@Lv<*)`lWU?$>N^an^_JFq-@`%ZHjZ3r8%AWC$sXD z$G|D~TQJCu$l2clw%~{9uE3;XuJtbl;<4{Z2BPC{?t018TBY&HBmPa^C zg%-N)v!uLF5JN{%k9&8v>vk?bZ+~Rfd)9-*#fOSb4jhW|^vy|c@|L}&DbW}87~}7w za|ZmicsJ#S!^xkj993+0fAd<3P!`oadR0;^H@d}2BkD_v6~6CBHXR+a#nN^15sE%m ziDuIHxFa%*nKdZF(8^=N5QbR?3biswjVt-g_5^X=ra{_=Ie?BpBOLgBarcUsMXCU- zpH<`5^MiogbL#EmO`IjiPSA^%f^g6cLYZs1}r)C?c$e-W%#xHXlZn_im zIU88+8u_Y%8pkUYh{)Tupa7049YHd_JVD|PQPCCYTF)CDFYd4YqSc~W9IO3QGq(HNvw0LG zZ`VHij;8mme4plo$&s(Q24fc4V#*-tIZf(t+fLvZ)u(E)Vx0&J5B0Kt9ml~lfZF8eNp|aFs^YUU z?e@FXTWh=eV;0W$pMkZ;-LpfTMF(HhJscO%txe&O4{{P5G^wa+WCg_pi3Uv2w&IY7 zGOcf{dl1sxEvIUFBM$#oJDRrL+Co2pgLmd%f}5Kv za-<2#H@WALrEv>VINPWD9jAgZyz%A0nvhIEsNl*H*WURqN(tl%gR`>$gRxupB<~DW zUHa)HWHA`VnA^AQ?RCtLwVkNS#7vt$FkEq)J@vM@b_M48t+2L=%rQ5KVv;*Ay3E%u zRG21+j`Q72(}{={)z6**#iU2yaPz1AoqzbQ1GFKFXGF2tsQNQfY|mEgpfc+kFKbuM z`{M8*zGY8R?{%lmvB&;uPS@E4yO|PI;-6{GgXU+0 zDofU?;+pDC)x7>jrw*y%7PU}$-!^Mfv1jfLpb#_#BCK(aaOQD)Be06KIFfMTlBJ50 zIOe#R6n`x}bqa}_)P8JJ;?W_fgG6 z9Un0!W{)t3)P6;?3b<*`jItASQ+9yavvVTm>XaLkgtIj6AsV;LtAr2b%4rxP(sS4m`hniY{9a+Ac_vTLZCJiZI1b# z@SmxmZx|4y&wBmJ1_yF*1b!hs5~I;}0AVKdak6x}$OBu{E_QYGusk_gPxKfOBpfu0 z%H0opv45DRu;K39@xbGf(Q3Q&%?dXv6uhPFD4c<4$CbSbq)oz&Q;0>WAt%H1+%(8 zR;?^uMYGc+rNw$%D>mn}=$C>WoyjpL9g}9eU;mko`APzF+^kz8V$HG7bF+FPCES1o^ETbVC(>OnedgZ;xTc%ReD4kVb!k*?c8kSl-sthG zutzl=-c##z0*CzplgJjl1A`BdR59eyus8PA#}$o2D-*_?IhxBZ`bHqlQUj);@g5Dc z?#KDU^Dj*t&1AE>kEN4Sh*Y&1a=gq~I*5)+N^(^8u)fTpzv^?~*zKpZX z!n5T*I=vT?Iv@88ggX1^F8z(a8X%aJCxA@^r=q#xyTV(LJ$MYX)!j6kQpZ+KLn|m z6Oz8N>i2lBLkzlz8pUV(jyY+Z6x5~tREvHp_{I6IrqR4#ps5PU^VQExJmdOV!%yfrsVn+pjkO&}eK)XxA zc-iq4+dhD+T4mR>Udf|yE(-r@+VY;_$?2NaRF9OD$y(PT3DK@u$%>pad3<)l#)g9? 
z3zJ3c7#=h_@G1G!{ljy6Cve5eVj z5=>Skt}h}jW(mnrshBCtAE~G3K)=4I^FBk8uLv0EiS!r}u+l)mvbTELT0uG1RFq;- ztU&?Z@FKwunaAY0VBP0T@23I}LHn~R!_+sDDJs-IVs^LotIjKcZ^V9!MP8~7aJ>&zix33lRlU9idw^NMW* zOH`^ZpK_m6#TZYsX~}vYKbctky*A?;0=N9Tc(z$L>>#whnX_FQ^%Yq^{P?;iakd$E z2MoL@m(bOnQvp%L>Du5?;m zKNZP+NXl*DJ;5$xq!6?>0gJ_8e>+h<#k*N83KeKFp#>|exK>mS{03gn_xs|Su%KS~ zQj!hDmlqG>3_Nsp?4@0vI4y1R&lekRgc{(kgzU4E3-7zFjj(DgU#`($V$^b7dDRu3 zL!G<`wT4;F&XI{Z!R4{VQpnD~<1F$oeWlSsq+FKohlj=3EQWd6benzHrk&^1@7WSq zzxY0n%J^>AKX2ZCRT}I@@LsO5xyw&5sY}}7$iJqCA=X5u*hGtU0hzVDkFfliU+wly z`=y$s9uVNvUev8+VcGY&shlUGnGKg^=Q$Q}Cz&_*^doV%cdOx6JsR`!sGRLf?K7W& zGDB%PCnumkk4k>iGLa=zHX_@t$r`CRCJ!lX533r> z>H49mQ{l@6BEy5#h@f=_@q20PNUHUi9u}%1{=`<~{c-dg&Ivo#duXP82ZSX6B+4$$NLdW!yvCs zU<)Kg;v)tf5S7ISUn%I1h;_>tWGyR)`ZrrBm@FHqk+NAwfsWcRyQ1*a zf9pSrIjY8|01TFipzxlEJ!jB4s`#xPA3T$w>8BXAl`y07_RJReCh{bn3ZJF`KLSI~* zh^7X28V{T<7+$8ZGg<&mvOvXdl@)T3k_OOY-6mTkuJpeReW?2@hQ`xtwYWn^b;Lu3rb zzD*3~J)@o;{XXye|J&zNa(`y-`o-8_}VY_nL3s8c(hw0)Lc+RQZekf2ABctP^HJpJRX z`Pzh=jFmRti?59ZV*-=*aTOMzA^6B7Gz6+8bso`K&eX0jM%_spVH7UQn>4gBW|Rvt z{~TNbfne;qblBxc1!m*LB>p4FP**ki`Bw>k#`t&<;@X1Fa-44Q%FDdsg-WR7bKHs8 zXHgnnXWHxfOYIG6dk24NS`<|d&y_7?4~Bhq{yzIz;yj2`wtbP1^ezJTSFj;vY4 zOx<|ykEE)$?HC4XBOAbldXqu&Q!($A&V2IMLhurIwx!O#?>l$T_|rc{5-zs_xb zjAQ=JkLkCbSq;(VZf%9nbe(nlzJ;!Qvhy#EH+(XC#3!dK-?!Vxx-a`gnO0*IE5+fq zbC*3=)w|&iA=F0F?@C1+8Zp*PSXR4uB~oP}!bw_V=1BdzS0GaI#4k!Qo6C2Y5pf9w zs0$A-TmWUKl1~wx}=S=q(S8RN&Fc3ozcu;_B>5w~!-T`v%S(zJHX0A()P#c7B_~bcdM&fdB4kR(S+$^dGnNZM@wbef@+)&qY~F zi_c@>+CqYnvZhVa(@G9{lQm*Z}j- zcaU0_o$XVuaiQ#v3RTHty6?0OYUsZ%-uneP-!GtyqeB$y96fBHo6FJ$4mS>|?g5pn5sNQ4!nc%$lp)ZbYK1llogldPXi7Q@qE)aw}lIO;;OSsq1Om?pJ%Gqn{Vwpe;>9T=uUZ+gj(9%VC&L*ByYr6!Z<4uP*6*b&#;?K?cIe$lvpgE*!a^vE<}(+t)=$`Y%?MzHXY*!(XN zkc|nlz?zrLlV~)JsMcdmemZX9mcH=4eiFv)uyS4fz-mTAoXEiNbe4c~jb?+kW=Nu8 zFHdDrSMeM2S?Dur-YMpE~d+J7t80m4RXA&Y{6`VGeZ5~76(yFgc9Jp=F`|Y{o_+^0 z2+=h#?Dlr>)xUc8o~=IHeag?txzzcu6&)yt>?STP!rGGKeRAg0P4`t@U}_4T?$Ixe zNuRM_bEq(CS^m+z)}vRGHXDK||Ms0(RInnYI*FN#tQlLaXynpURZdK_;R3^|y=fG6e#7(C6s|K7Wg zViLWDPg-Zdmhe6Kmo$cIm{WPwsALJj^Mgf|1MAXT$+%T2*{Zy7k!tih= z?$m?K(9t_Y%b3k}=vmebXhg}0?CRgBONXpMG(6UU`lp4RyU9De9ko<0@%E*4OM3?)G_qKEwI?m+8g%%&5C=z%6N#ySf!WHB*! z{CS5J-dqNt$v^P5Vm~-P2ZZ8*r2^GPbZLNXr$j@K_PfE}QS=k?PXCvo`NEO-2yf}C`$c|({j z(cM~G1z-DiX>T__A|{2#3weq|R&P0WNq&B;?Ad^~Sf0V77@=ybt)oIE_n9%GR3Pf- z0#QFx&@ToqRh2qT2_mK6J^y7L0`GCMiV<1pXAIpDiSRJgtgLd{y`eXkTOB{(cNr5a z`n+U4+M|qETHocTL7yn-0$t6^CF2O9|8vK62$DurjRGP?Z=s~5htj@+7P^@FS1C5{ zEj}xw%ty!w0v+B`Ywj-rxM}c}J4i!e9#7cUkYbb0i?is8lteigQYg{Y_U`C$KvPJZ zAV>12fej(MDszwO{dlY+UY4KKWoz_m(EXs41~n3_Z!^xj0n3*Bx9b6q?^FmSQUJy2dB#psp2lyF^om<%~EW@L(sVH7iisg|-(&M&Fd^5NP23qep z$Cd7^)x?K!6ZE--z4bj4o*hG;KxW))hA7vCvXUQ|Aq-_pa<8qn{*vK(XWFpjUp5Aa z=PD&HQ2*BL7heiz^|Jhy5ke!naqKn zQdprAkqXiO(#CGFg^sdgPQ|T}aswd#`JJ`IUI#Vt zB2h9x*M76UzmcUL`)k&M0=M+(Wgae)Ex}{1XMKaIRlA+uk8sEiO}b$PO_@&EG?j?F z-0Ep|DQZW5%rF~h((BKCrkCG5{`^L0DbM0xA6*5GaTKHp8$_z!jS3S%X7Jsmfu96u zJ>9{X45r%+3i{wS7SjOs6uqLI_e7Mt9K)-tg97CnB6=4wn^ij7m~$ICXU+Cn=@lZI z4dH+#f|Lf}i=9|on|ei$g_+v~fEfh}PyM@h0n-ci9xVzu)9V|4F`D$T1AbXs)>6)` z=t=Bx4~0p+`s&6BtC1Dgtx09l`4Pgx5yF#tXzuYX)PAEL+BT=<_A3JjOJv8UG7!>WbJztVNix8 zDb+`h2lD+iI{Sdb`J+Fey!1BxZL#Bl7*4a^UR9=MHFez-Iyg znZ=TJljE1&{TUvnasmn!qxhX%-3q+PU;m^sfeqsF$x0zki^Ky067tdq zUa8g#Q1I-)_Ye8n{-^Yh3(DcNAuCFpOMu)sfP7HM!CruEY?vSM)GdQMF2M0$oKnfP7p~Us=_aF}UPFk7t;! zJI@sEY@_QUwQ>ep7BQY?ec2I?+p}4%e$-A=e)Ju9thzL#wrlOUIYW| z0CKpT8}J$y>@V zB|FX!I&AF`;OFhCi*9^*Wf?Wwo7GzDO`MXfb8lN9=G1ME`&l19+H0+t*%@NV4T-H? 
z^iyCcELG=tC9J<0xHFpq)0}RPV^8i)(B8&)6EH@7oQXivVOQH}JDzY2wGA}>NsMgU zMBM|KovB&W&f&+nb{-EEOM(Ol7ZINx;(YHd{J&V`&S@onUR!LP1;?G z5G<&cRo|X@Ul)N(f$NU#E~2Ius}zLk*+jydLPQ-KVw1B_?Q!Novo5BeH}I0km_kC$ z5@xls6C)67=uuQnsL^g|jpX-_U`Drnd7Ut|iehVy5ophj%>~O(^uul213C`C`b@o{ zp9+^?H`@WuH(d$c$4Yz|LvC|0!k+t43jX@!z z@b!z!wZuSEQ58G{!cN7iti4&YFVrB`x>+Zsthf-(?uL|l>>Hs*Zh ztT?~Yhvv8-!JtTne~sWF-?6io?Y}9a8J2?uMO>iA;2Sy3{%0rLTI+U4zO9x=%fh4ey z$oog_s?SGeO7!Vj+oLO;%v;@R*ObxGOBHpyHF($4k+SBo$J5`1$kS%>G2dbvk@Wge zIPZbO+u)`_KLyF}&q&Q_DDpGIb~hvq z>Nkkv3WDazQd*@KEWdvdPNN79y`HUH3GltD8wunLgT4x3_$s#Z}(@)tSIZ1GW8rLH9t3>GOr1P1y_WnhplRimCuyCxo77T@hin&_=~d7 z3$Y*; z{aOG#2j}FxRIxRYeOCk9nc^c@;8h@(3hxG~9rJr>;wGO44Lwsvb+YOgF~a#W|8%g4 zS@pf>pvv9QqaZ%x5r0}rgjg5U{B7HQbmoi_ag)m)e%H>5Bj^s-kr^iJaP(cOW4m$% zIF&lIg?2*(pQ5W#*TB_QtVRxC;Ddc}c!#Y2?+%q^y|Hx{gGo_<9dKa9L- zJrL33grYxus3B&=bUYz=@5wAv%ftGNHA=4q>+$1(VWWJKm~|ydT1BTd?n=@g?datw zY0b8sEvIam16>EU@#0~zjbrFlte`<*hOn+S)FlA0BsZN|=c*Qw%40_Z;qbW+=2n!^ zHx9^|f%kMN{DTi3RH|?+b6ja+a~{xOvY1_tlHQq$m%u|62iz~~^tvP7#FUbinCZu^ z2ma@09-=Jc3qQ&8aO;*Bi@nV#LYIcYswM8j@I|=+-CUO;gabt6#F{Gsz?0@s2k{2gaXwI4#I5Q$_p{gBALlkhxxQ-EaI@j1D3WZFwil&H z$%xhN_cm!HojFzGr7`afr;^bRp0d838`~ayXJlLxNhnW~=zEFRYcm%wiBrT1Ddn)XV!2Ew^CUPS`?&M%i zmC6{5Xu~f1Jp2`h#JLA%?=MH6C{##)%`UE&ORHj*e=}c~$->$?)|+@7Vg5NG{)^eW z+fna6Q3?waGir9<5_bp2omdZhX!43f>4=BlGl2DWW1U@i*Z7r4mFt3Q)vebY>~fzE zR)GVEmzr<;VS|t2@VztL-E`v~)@J;VE{DZvcLo|oK`F5nC>PsL_@!E27t5vrGht)u z_o#yL+TaIiI_`L{p7e{|spJ{>#?I@zjAQEs94a5`IrSkc*#W~aPTCV(OGf<;O7_DO zUTexY_N$G?@|#5p5foH3hx7DATahlDqqYp6<=5iw(L|n;)D!!chxr-N__osnRvJjw zZ7Z?Lj^sSafxHp3^_jnWBKeq~*3t;@9(apKOds(&QN*gvGOd7GrA2rD=T`B|Eq#9l zNMxy!&nEh>X}*WSHhlQZ@dE<{x|wlgUAO#M9A`FctlB+e>OrNG3wK`nyK#uz;`Bt@ zKvzU`<9YJ~E7_yUCs1-|3zh;%|fxs{(YO%Y37%=~P&<|(LK z;`Q487b^G56s4X6!3>MZg$D<2%cSVTDl$g(-%JJnQ*1hHG4b)VF#8|TV+|06^b=)# zhzGpVL%zY}Jo;BLf5_2O$H=19Sy_EG$cJW(tiemR+tR1cKm>T2S0Jap?MAkMf|z2U zjOKZ|<*hgW{nEFMgG_#n-ZcjFpC-_G%P6>jq<4oMspOsf&$KD$DaLrzq-P_qO$Hgg zIkXgZo(op@9KO7F`{c~RQdqjDD5(fUH`qFJ$D0QRA5e4T7?kCy$RAd9gIts}tgJ)s zk1lGFGaf#*s~j-himBWHI-V;Us66;Gn6iVxj-d6n^2ov2(f6K=ih`z3?o1IRK>v<{ z*7Y3CoPfjy{qz0Dh5cVA2q6So(E6XfN#SWVkKMH(72|XkkM&M>fBd!*$$zgckRicz zVR)k_>z^w_3&)92R*?Q?2>Y7>+$zhOhNZp)vSSAmtAQ_Qq* zXDbDM?Z-(sJw#aaJl*_ozYt-}vkv^+^32KT=OuQ?D0HvTMm}i5t;tty=vcHqDlp_H zXYgi}ZQNgOF7cgM%HMmpo?o>X-r!G*HEqIq99MlRx$d@BhK@nCMO&MJK-8ed(6NXyGs9r^u(!yC1B=Cty@edoEhP6AwgbN;ng=fizuhM5WSdbM`2iu+cDyiM} zQOg>7C}X62CTLfVu#*EEM^#(otiQ-#`h~{%#GDz3Xw9Ua{6+QCcCGGT0n9!;zB5h0 zPPMxE!THTVLM+s89^~p{bBuOlJdM1j#Z#j6?P`Xobqi{@kwUtw9}S*QxI=5J&34N1 z`OmkTBR~Ru;2GJ$OK3x)d?hx96AR+1Xy7Qe6r0&m=W&ahjv}tSqfa56Rk@o#v$TP( z-4rI6?QD)H%+U6*nhIN`sL^n7hHn(jT~9)J3#VOm^n(|m z$QZ&HqFLI)K1y8 zYP@Z>@LMNbhhsy(JMJs}7(codcsHv%Yoa*0SgIkW9|a_j8>OCcu&n_fhyn_1^r(Jq zBLUJH9CjAR_|$96{C&qAfhO?g4DP#(xvyb&);gm>S+*)xb(@BP&Sy64-JRC@KMW*a z&evHH8bsk+OQQKgjNyE`J+yoWcWLKWq~UUfPS1T>R-+j%^rFl&&N^4_SrGytMd zuWJP{UjEjwD4GJivL(&s0D4@Nw~Xt2o7Cp0lW64vsC>r@&Ln-wY>*QKg3>%Ap&_=e z0n3H>w1zc=!;>fUUOPWV8hG>#BR~k_xLE1r)MH^?Ay1o55lg}f^Pk|96Dc(B980~h zD#2zt%cGMP0;3?X$uD`o9yUP7s6*F>^x)m3x-y9eRW#PdHuNd@ixIopyY2nv<{K&U zv9YD6_7_IFwbQYE68-d9?|Qvhgss20xvTSq(yLdpN^)qZ*Xhee{Anbnq73o6gu4l@ z9vv{uTdB2Y6?PyNgoYOJ=~Hf-;mOK55aHb3>I%v~ip;F$(pO|i4m%??SBnUs5=^}F zq0|0ul1KdfNX0lq_K5@v8Bc?6uO%NL6y8ETEcYI%*M)EDcHn9XH1+Wu4`){g^mCA> z3{tSY<4|}#r#!mq@3A&Q3f`Xs&ZdxS_^C1avs%;IuUU3{5_6upUxDgT39DK-HfmG`hdZ7TvnN6P8tk02XHcR>r^@fR#uTkp z9>-)UQ{h1&h4h_rh1|6k2e)F9|936Yj;-Yo&PoxkutaRAL-l-vAgBjHB}oRFt{>r^ z9nBZ$=J;`IV+B>l-WFYAX%L-pTNF`g_fZW@Za0yXvV_DxSk1G;@ZXc)dd0as_XZ*o z1+1q#L5AeZXuy*E)MXYH`(#cO=Oj1%{B1Sa-QU@S|I(RMmVLg*VPRe 
zuNRYcI*kw~W^Joi{H0Wu23u?k^Sko_*a{!~9+AhM z>Y~@&10MS6;e>C~T|}%XPb6{CvtF(d7f4`SG{S3Uxxa8d>$2n@7%K9`#M{%^wv25X zJNlRblT!Y4VX){yhwR!h$wNKg!xScLRX~j6q*$&4MUG?N5=+LtV@0MB*zbl?$&KA+ z@7v_Tgs!M?D}An?q>38HuF1!zwh`EByyRfDxYJQ{DUrHZN#Gc2jisVkN((BEi|0gQ zZs8WY76p(~*2Rl&*k#0omzNvsVnB%&8mjZ{PJBIE0sZ;^AVGlDH_nysR?BuSaw1Sb zXV%h}mitGg=S4`CYavNCvdy&9ea7M+hmcVFZ2CHqgY?j#Is9t(iQ6tZaQbxTJ>lIO zn_bFvZ<=}Fn)1Y<5vcE0b?2>`XiYkASNzC$t*Aw%0n;lXuN>>{zKB2`?{&%5wG@SO z(&51x45`4(WZX6l5PQSiQhHa5qi-pmM*YX@(io&R{k zS7B$ueG7y<@)A3wJm{S~%+YmJLcjY}-vwL^Q*`T>@{XMHXF~}MOx;wYKJ2g$dipX* z2GX?Tz2HzC*C8{us&|ulB8)s8NvB6TGCJ;_t195Iuo*0D{aR+;5EJdXI2s4Sl}fi~ z6KiAVt=6<~y~NpjJdS)JbK>y>@fnfdI}#*EzIJY`_jXd_oeT&!oD_82<@kv)U7Xig!UN%&x!X?CL1`Uv=5^>t zFot~Ry_?V&QFw37YUSc-4{TV+IQEk~81`|&Zp4R=N4q|;DAyP_7`U%AWh@qazc!kC zp;vScIhBAN?OBP3`mzW#Y4`T=NIBOcq+BE3 zV?ftDkf{5C%QspO-BLps!zD}XmMB#5z19BU^zaj+N~ikAxeEO%?z!_X%V(x4q$?t6 zOaf#%O>ryn-V}LG4A~n}Jn*6gl@RGM=DjNeC48w$o9yO^BBC{=56kVw;~o?OKje$e zh*vPO0q1@E@2@sjC0;Woqc}K9jccOUNM)Foo{WM@Et`e%fKHdl8SjFhdlliOgKGcD z88-sP4gR<2HIq;OSV6_t(A^@m+uB!>dX*JEkGtd2bB*e*oL-oZwY`}W2H%OP&v_CW z+dh7b`WQ8tlL3ArqrtayaNy@SocD+3W-vG_7pNgw_j+wXWTzg#qg&A9`c-~>Uk9EXhEw!r1z0Ha`w_;raP_~HU`2{ z>V_kk?rcjtgICWK41-Eom-;%N*J(XSne{+dv9^|>YteU4d_zZ+KgR7EFj8KxxDVIL zQK+;1v{6susdc{B<;YG_kIC#cyaZp9BzZm$&Ic}8lH@KIPo!^FW?)ODkInMl%nCtO zZM-g*J7=BDc-&#y5_!+lw!K^in@H0VOwsh7S0IPB7$o#o7&q2$X&Zsxu0??Wk`UnC zlm4=ZDyOeP!)CN*uR)y_5qI6eD^?WKE%R+n{*tEW$Z)?tZ8ZmQxf9@`(wb)Lp+5}5 zg4dH5_O180XF4L%3h$-N*iSB!5h#ZK1Iiy$Uwd}PyzfY8v-sOb6sr@@aF&zAy-ylFv~wsTN4}&EC4RX z_IKR<0}qiP<00?bCH}xeM96qZ@o=T(K@=%F5a1yaQI!Xa%^U%>JixJd|Njl@`a*A3 zSXBq$=W-n6{v3xReE~jSbi6q^I7Yv61DIIEHa_8h;H0yuS00T6mZ;KNLQ;}G&!FEJ z_s3iB8lk>g-g7$Vo~I{KCgGaR8!&SLc3TKYX2YvT@FTI!RUX6yAL1X1gjG6=)C^&L gj%QqQ9uN26=jM@np@+*;4uL;vDjGNQZy5Xi9~qn2i2wiq literal 0 HcmV?d00001 From 230c5577304281b141150be33b3ea726716fca07 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Mon, 6 May 2024 13:04:40 -0700 Subject: [PATCH 02/35] adding support for vllm local endpoint and llama3 model --- .../generate_question_answers.py | 47 ++++++++++++++++--- .../chatbot/data_pipelines/generator_utils.py | 12 ++--- 2 files changed, 46 insertions(+), 13 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py index 161fd8642..748fc590b 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py @@ -12,6 +12,7 @@ from abc import ABC, abstractmethod from octoai.client import Client from functools import partial +from openai import OpenAI # Configure logging to include the timestamp, log level, and message logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') @@ -28,6 +29,7 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): # Please implement your own chat service class here. # The class should inherit from the ChatService class and implement the execute_chat_request_async method. +# The following are two example chat service classes that you can use as a reference. 
class OctoAIChatService(ChatService): async def execute_chat_request_async(self, api_context: dict, chat_request): async with request_limiter: @@ -43,14 +45,40 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") assistant_response_json = parse_qa_to_json(assistant_response) - + return assistant_response_json except Exception as error: print(f"Error during chat request execution: {error}") return "" - +# Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent +# please read for more detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. +class VllmChatService(ChatService): + async def execute_chat_request_async(self, api_context: dict, chat_request): + async with request_limiter: + try: + event_loop = asyncio.get_running_loop() + client = OpenAI(api_key="EMPTY", base_url="http://localhost:"+ api_context['end_point']+"/v1") + api_chat_call = partial( + client.chat.completions.create, + model=api_context['model'], + messages=chat_request, + temperature=0.0 + ) + response = await event_loop.run_in_executor(None, api_chat_call) + assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") + assistant_response_json = parse_qa_to_json(assistant_response) + + return assistant_response_json + except Exception as error: + print(f"Error during chat request execution: {error}") + return "" + async def main(context): - chat_service = OctoAIChatService() + if context["endpoint"]: + logging.info(f" Use local vllm service at port '{context["endpoint"]}'.") + chat_service = VllmChatService() + else: + chat_service = OctoAIChatService() try: logging.info("Starting to generate question/answer pairs.") data = await generate_question_batches(chat_service, context) @@ -80,8 +108,8 @@ def parse_arguments(): ) parser.add_argument( "-m", "--model", - choices=["llama-2-70b-chat-fp16", "llama-2-13b-chat-fp16"], - default="llama-2-70b-chat-fp16", + choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-70b-chat-fp16", "llama-2-13b-chat-fp16"], + default="meta-llama-3-70b-instruct", help="Select the model to use for generation." ) parser.add_argument( @@ -89,6 +117,11 @@ def parse_arguments(): default="config.yaml", help="Set the configuration file path that has system prompt along with language, dataset path and number of questions." ) + parser.add_argument( + "-v", "--vllm_endpoint", + default=None, + help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." + return parser.parse_args() if __name__ == "__main__": @@ -98,6 +131,6 @@ def parse_arguments(): context = load_config(args.config_path) context["total_questions"] = args.total_questions context["model"] = args.model - + context["endpoint"] = args.vllm_endpoint logging.info(f"Configuration loaded. 
Generating {args.total_questions} question/answer pairs using model '{args.model}'.")
-    asyncio.run(main(context))
\ No newline at end of file
+    asyncio.run(main(context))
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py
index 01c628036..3d9c36d39 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py
@@ -75,7 +75,7 @@ def parse_qa_to_json(response_string):
     # Adjusted regex to capture question-answer pairs more flexibly
     # This pattern accounts for optional numbering and different question/answer lead-ins
     pattern = re.compile(
-        r"\d*\.\s*Question:\s*(.*?)\nAnswer:\s*(.*?)(?=\n\d*\.\s*Question:|\Z)",
+        r"\d*\.\s*Question:\s*(.*?)\nAnswer:\s*(.*?)(?=\n\d*\.\s*Question:|\Z)",
         re.DOTALL
     )

@@ -96,9 +96,12 @@ async def prepare_and_send_request(chat_service, api_context: dict, document_con

 async def generate_question_batches(chat_service, api_context: dict):
     document_text = read_file_content(api_context)
-    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="", padding_side="right")
+    if api_context["model"] in ["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct"]:
+        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B", pad_token="", padding_side="right")
+    else:
+        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="", padding_side="right")
     document_batches = split_text_into_chunks(api_context, document_text, tokenizer)
-
+
     total_questions = api_context["total_questions"]
     batches_count = len(document_batches)
     base_questions_per_batch = total_questions // batches_count
@@ -116,6 +119,3 @@ async def generate_question_batches(chat_service, api_context: dict):
     question_generation_results = await asyncio.gather(*generation_tasks)

     return question_generation_results
-
-
-

From b07cbad1d76129e2ab6c0247df80ca4b0d00e453 Mon Sep 17 00:00:00 2001
From: Kai Wu
Date: Mon, 6 May 2024 18:02:40 -0700
Subject: [PATCH 03/35] working draft for vllm using llama3-70B

---
 .../chatbot/data_pipelines/._faq-data         | Bin 0 -> 319 bytes
 .../chatbot/data_pipelines/config.py          |  7 +-
 .../chatbot/data_pipelines/config.yaml        | 16 ++--
 .../chatbot/data_pipelines/doc_processor.py   | 12 +--
 .../generate_question_answers.py              | 27 +++---
 .../chatbot/data_pipelines/generator_utils.py | 80 +++++++++++++-----
 6 files changed, 96 insertions(+), 46 deletions(-)
 create mode 100755 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/._faq-data

diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/._faq-data b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/._faq-data
new file mode 100755
index 0000000000000000000000000000000000000000..c92a15b799cb90a2e88089220e35684b149ab1c6
GIT binary patch
[binary data omitted]

diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
index 5f558008a..105e7afed 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
@@ -9,5 +9,10 @@ def load_config(config_path: str = "./config.yaml"):
     with open(config_path, "r") as file:
         config = yaml.safe_load(file)
     # Set the API key from the environment variable
-    config["api_key"] = os.environ["OCTOAI_API_TOKEN"]
+    try:
+        config["api_key"] = os.environ["OCTOAI_API_TOKEN"]
+    except KeyError:
+        print("API token was not found, please set the OCTOAI_API_TOKEN environment variable if using OctoAI, otherwise api_key will be set to the default EMPTY")
+        # the local vllm endpoint does not need an API key, so set the API key to "EMPTY" if not found
+        config["api_key"] = "EMPTY"
     return config

diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
index 7eeeb97dd..393e1c418 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
@@ -2,24 +2,24 @@ question_prompt_template: >
   You are a language model skilled in creating quiz questions.
   You will be provided with a document,
   read it and generate question and answer pairs
-  that are most likely be asked by a use of llama that just want to start,
+  that are most likely to be asked by a user of Llama who just wants to get started,
   please make sure you follow those rules,
   1. Generate only {total_questions} question answer pairs.
   2. Generate in {language}.
-  3. The questions can be answered based *solely* on the given passage.
+  3. The questions can be answered based *solely* on the given passage.
   4. Avoid asking questions with similar meaning.
   5. Make the answer as concise as possible, it should be at most 60 words.
   6. Provide relevant links from the document to support the answer.
   7. Never use any abbreviation.
-  8. Return the result in json format with the template:
+  8. Return the result in json format with the template:
     [
      {{
-        "question": "your question A.",
-        "answer": "your answer to question A."
+        "Question": "your question A.",
+        "Answer": "your answer to question A."
      }},
      {{
-        "question": "your question B.",
-        "answer": "your answer to question B."
+        "Question": "your question B.",
+        "Answer": "your answer to question B."
}} ] @@ -27,4 +27,4 @@ data_dir: "./data" language: "English" -total_questions: 2 +total_questions: 1000 diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py index 2fade43f6..b45768461 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py @@ -6,13 +6,13 @@ def get_token_limit_for_model(model: str) -> int: """Returns the token limit for a given model.""" - if model == "llama-2-70b-chat-fp16" or model == "llama-2-13b-chat-turbo": + if model == "llama-2-13b-chat" or model == "llama-2-70b-chat": return 4096 - + else: + return 8192 def calculate_num_tokens_for_message(encoded_text) -> int: """Calculates the number of tokens used by a message.""" - # Added 3 to account for priming with assistant's reply, as per original comment return len(encoded_text) + 3 @@ -29,7 +29,7 @@ def split_text_into_chunks(context: dict, text: str, tokenizer) -> list[str]: estimated_total_question_tokens = estimated_tokens_per_question * context["total_questions"] # Ensure there's a reasonable minimum chunk size max_tokens_for_text = max(model_token_limit - tokens_for_questions - estimated_total_question_tokens, model_token_limit // 10) - + chunks, current_chunk = [], [] print(f"Splitting text into chunks of {max_tokens_for_text} tokens, encoded_text {len(encoded_text)}", flush=True) for token in encoded_text: @@ -43,5 +43,5 @@ def split_text_into_chunks(context: dict, text: str, tokenizer) -> list[str]: chunks.append(tokenizer.decode(current_chunk).strip()) print(f"Number of chunks in the processed text: {len(chunks)}", flush=True) - - return chunks \ No newline at end of file + + return chunks diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py index 748fc590b..3eb632874 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py @@ -5,12 +5,12 @@ import asyncio import json from config import load_config -from generator_utils import generate_question_batches, parse_qa_to_json +from generator_utils import generate_question_batches, parse_qa_to_json, get_model_name from itertools import chain import logging import aiofiles # Ensure aiofiles is installed for async file operations from abc import ABC, abstractmethod -from octoai.client import Client +from octoai.client import OctoAI from functools import partial from openai import OpenAI @@ -35,7 +35,7 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): async with request_limiter: try: event_loop = asyncio.get_running_loop() - client = Client(api_context['api_key']) + client = OctoAI(api_context['api_key']) api_chat_call = partial( client.chat.completions.create, model=api_context['model'], @@ -48,7 +48,7 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): return assistant_response_json except Exception as error: - print(f"Error during chat request execution: {error}") + logging.error(f"Error during chat request execution: {error}",exc_info=True) return "" # Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent # please read for more 
detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. @@ -57,25 +57,25 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): async with request_limiter: try: event_loop = asyncio.get_running_loop() - client = OpenAI(api_key="EMPTY", base_url="http://localhost:"+ api_context['end_point']+"/v1") + model_name = get_model_name(api_context['model']) + client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") api_chat_call = partial( client.chat.completions.create, - model=api_context['model'], + model=model_name, messages=chat_request, temperature=0.0 ) response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") assistant_response_json = parse_qa_to_json(assistant_response) - + assert(len(assistant_response_json)!=0) return assistant_response_json except Exception as error: - print(f"Error during chat request execution: {error}") + logging.error(f"Error during chat request execution: {error}",exc_info=True) return "" async def main(context): if context["endpoint"]: - logging.info(f" Use local vllm service at port '{context["endpoint"]}'.") chat_service = VllmChatService() else: chat_service = OctoAIChatService() @@ -93,7 +93,7 @@ async def main(context): logging.info("Data successfully written to 'data.json'. Process completed.") except Exception as e: - logging.error(f"An unexpected error occurred during the process: {e}") + logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) def parse_arguments(): # Define command line arguments for the script @@ -108,7 +108,7 @@ def parse_arguments(): ) parser.add_argument( "-m", "--model", - choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-70b-chat-fp16", "llama-2-13b-chat-fp16"], + choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-13b-chat", "llama-2-70b-chat"], default="meta-llama-3-70b-instruct", help="Select the model to use for generation." ) @@ -120,8 +120,9 @@ def parse_arguments(): parser.add_argument( "-v", "--vllm_endpoint", default=None, + type=int, help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." - + ) return parser.parse_args() if __name__ == "__main__": @@ -133,4 +134,6 @@ def parse_arguments(): context["model"] = args.model context["endpoint"] = args.vllm_endpoint logging.info(f"Configuration loaded. 
Generating {args.total_questions} question/answer pairs using model '{args.model}'.") + if context["endpoint"]: + logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py index 3d9c36d39..3befc4a10 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py @@ -4,21 +4,33 @@ import os import re from transformers import AutoTokenizer -from octoai.client import Client import asyncio import magic from PyPDF2 import PdfReader import json from doc_processor import split_text_into_chunks import logging +import json # Initialize logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') - +# Since OctoAI has different naming for llama models, get the huggingface offical model name using OctoAI names. +def get_model_name(model): + if model == "meta-llama-3-70b-instruct": + return "meta-llama/Meta-Llama-3-70B-Instruct" + elif model == "meta-llama-3-8b-instruct": + return "meta-llama/Meta-Llama-3-8B-Instruct" + elif model == "llama-2-7b-chat": + return "meta-llama/Llama-2-7b-chat-hf" + else: + return "meta-llama/Llama-2-70b-chat-hf" def read_text_file(file_path): try: with open(file_path, 'r') as f: - return f.read().strip() + ' ' + text = f.read().strip() + ' ' + if len(text) == 0: + print("File is empty ",file_path) + return text except Exception as e: logging.error(f"Error reading text file {file_path}: {e}") return '' @@ -29,6 +41,9 @@ def read_pdf_file(file_path): pdf_reader = PdfReader(f) num_pages = len(pdf_reader.pages) file_text = [pdf_reader.pages[page_num].extract_text().strip() + ' ' for page_num in range(num_pages)] + text = ''.join(file_text) + if len(text) == 0: + print("File is empty ",file_path) return ''.join(file_text) except Exception as e: logging.error(f"Error reading PDF file {file_path}: {e}") @@ -41,6 +56,8 @@ def read_json_file(file_path): # Assuming each item in the list has a 'question' and 'answer' key # Concatenating question and answer pairs with a space in between and accumulating them into a single string file_text = ' '.join([item['question'].strip() + ' ' + item['answer'].strip() + ' ' for item in data]) + if len(file_text) == 0: + print("File is empty ",file_path) return file_text except Exception as e: logging.error(f"Error reading JSON file {file_path}: {e}") @@ -48,6 +65,7 @@ def read_json_file(file_path): def process_file(file_path): + print("starting to process file: ", file_path) file_type = magic.from_file(file_path, mime=True) if file_type in ['text/plain', 'text/markdown', 'JSON']: return read_text_file(file_path) @@ -66,36 +84,56 @@ def read_file_content(context): file_text = process_file(file_path) if file_text: file_strings.append(file_text) - + text = ' '.join(file_strings) + if len(text) == 0: + logging.error(f"Error reading files, text is empty") return ' '.join(file_strings) def parse_qa_to_json(response_string): - # Adjusted regex to capture question-answer pairs more flexibly - # This pattern accounts for optional numbering and different question/answer lead-ins - pattern = re.compile( - r"\d*\.\s*Question:\s*(.*?)\nAnswer:\s*(.*?)(?=\n\d*\.\s*Question:|\Z)", - re.DOTALL - ) - - # Find all matches in the response string - matches = pattern.findall(response_string) - - # Convert matches to a 
structured format - qa_list = [{"question": match[0].strip(), "answer": match[1].strip()} for match in matches] - - # Convert the list to a JSON string + split_lines = response_string.split("\n") + start,end = None,None + # must use set to avoid duplicate question/answer pairs due to async function calls + qa_set = set() + for i in range(len(split_lines)): + line = split_lines[i] + # starting to find "Question" + if not start: + # Once found, set start to this line number + if '"Question":' in line: + start = i + else: + # "Question" has been found, find "Answer", once found, set end to this line number + if '"Answer":' in line: + end = i + # found Question means we have reached the end of the question, so add it to qa_list + elif '"Question":' in line: + question = " ".join(" ".join(split_lines[start:end]).split('"Question":')[1].split('"')[1:-1]) + answer = " ".join(" ".join(split_lines[end:i]).split('"Answer":')[1].split('"')[1:-1]) + start,end = i,None + qa_set.add((question, answer)) + # adding last question back to qa_list + if start and end: + question = " ".join(" ".join(split_lines[start:end]).split('"Question":')[1].split('"')[1:-1]) + answer = " ".join(" ".join(split_lines[end:i]).split('"Answer":')[1].split('"')[1:-1]) + qa_set.add((question, answer)) + qa_list = [{"question": q, "answer":a} for q,a in qa_set] return json.dumps(qa_list, indent=4) async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, total_questions: int) -> dict: prompt_for_system = api_context['question_prompt_template'].format(total_questions=total_questions, language=api_context["language"]) chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': document_content}] + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + if not result: + return {} return json.loads(await chat_service.execute_chat_request_async(api_context, chat_request_payload)) async def generate_question_batches(chat_service, api_context: dict): document_text = read_file_content(api_context) + if len(document_text)== 0: + logging.error(f"Error reading files, document_text is empty") if api_context["model"] in ["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct"]: tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B", pad_token="", padding_side="right") else: @@ -114,7 +152,11 @@ async def generate_question_batches(chat_service, api_context: dict): #Distribute extra questions across the first few batches questions_in_current_batch = base_questions_per_batch + (1 if batch_index < extra_questions else 0) print(f"Batch {batch_index + 1} - {questions_in_current_batch} questions ********") - generation_tasks.append(prepare_and_send_request(chat_service, api_context, batch_content, questions_in_current_batch)) + try: + result = prepare_and_send_request(chat_service, api_context, batch_content, questions_in_current_batch) + generation_tasks.append(result) + except Exception as e: + print(f"Error during chat request execution: {e}") question_generation_results = await asyncio.gather(*generation_tasks) From 6204d5ae387f4ad1e33bf10d23eaf0c0954898db Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 7 May 2024 13:09:51 -0700 Subject: [PATCH 04/35] fix generate_qa function --- .../chatbot/data_pipelines/._faq-data | Bin 319 -> 0 bytes .../chatbot/data_pipelines/config.py | 2 +- .../chatbot/data_pipelines/config.yaml | 4 +- .../generate_question_answers.py | 16 ++++--- .../chatbot/data_pipelines/generator_utils.py | 
41 ++++++++----------
 5 files changed, 32 insertions(+), 31 deletions(-)
 delete mode 100755 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/._faq-data

diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/._faq-data b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/._faq-data
deleted file mode 100755
index c92a15b799cb90a2e88089220e35684b149ab1c6..0000000000000000000000000000000000000000
GIT binary patch
[binary data omitted]

diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
index 105e7afed..319cb6898 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py
@@ -13,6 +13,6 @@ def load_config(config_path: str = "./config.yaml"):
         config["api_key"] = os.environ["OCTOAI_API_TOKEN"]
     except KeyError:
         print("API token was not found, please set the OCTOAI_API_TOKEN environment variable if using OctoAI, otherwise api_key will be set to the default EMPTY")
-        # the local vllm endpoint does not need an API key, so set the API key to "EMPTY" if not found
+        # the local vllm endpoint does not need an API key, so set the API key to "EMPTY" if OCTOAI_API_TOKEN is not found
         config["api_key"] = "EMPTY"
     return config
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
index 393e1c418..7a9cb5536 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml
@@ -4,7 +4,7 @@
   read it and generate question and answer pairs
   that are most likely to be asked by a user of Llama who just wants to get started,
   please make sure you follow those rules,
-  1. Generate only {total_questions} question answer pairs.
+  1. Generate only {num_questions} question answer pairs.
   2. Generate in {language}.
   3. The questions can be answered based *solely* on the given passage.
@@ -27,4 +27,4 @@

 data_dir: "./data"

 language: "English"
-total_questions: 1000
+num_questions: 2
diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py
index 3eb632874..350c53595 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py
+++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py
@@ -5,12 +5,12 @@
 import asyncio
 import json
 from config import load_config
-from generator_utils import generate_question_batches, parse_qa_to_json, get_model_name
+from generator_utils import generate_question_batches, parse_qa_to_json
 from itertools import chain
 import logging
 import aiofiles # Ensure aiofiles is installed for async file operations
 from abc import ABC, abstractmethod
@@ -21,7 +21,10 @@
 rate_limit_threshold = 2000
 allowed_concurrent_requests = int(rate_limit_threshold * 0.75)
 request_limiter = asyncio.Semaphore(allowed_concurrent_requests)
-
+# Since OctoAI has different naming for llama models, create this mapping to get the official Hugging Face model name given OctoAI names.
+MODEL_NAME_MAPPING={"meta-llama-3-70b-instruct":"meta-llama/Meta-Llama-3-70B-Instruct", +"meta-llama-3-8b-instruct":"meta-llama/Meta-Llama-3-8B-Instruct","llama-2-7b-chat":"meta-llama/Llama-2-7b-chat-hf" +,"llama-2-70b-chat":"meta-llama/Llama-2-70b-chat-hf"} class ChatService(ABC): @abstractmethod async def execute_chat_request_async(self, api_context: dict, chat_request): @@ -57,7 +60,7 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): async with request_limiter: try: event_loop = asyncio.get_running_loop() - model_name = get_model_name(api_context['model']) + model_name = MODEL_NAME_MAPPING[api_context['model']] client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") api_chat_call = partial( client.chat.completions.create, @@ -68,7 +71,8 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") assistant_response_json = parse_qa_to_json(assistant_response) - assert(len(assistant_response_json)!=0) + if len(assistant_response_json)==0: + logging.error("No question/answer pairs generated. Please check the input context or model configuration.") return assistant_response_json except Exception as error: logging.error(f"Error during chat request execution: {error}",exc_info=True) @@ -103,8 +107,8 @@ def parse_arguments(): parser.add_argument( "-t", "--total_questions", type=int, - default=10, - help="Specify the number of question/answer pairs to generate." + default=100, + help="Specify the total number of question/answer pairs to generate." ) parser.add_argument( "-m", "--model", diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py index 3befc4a10..3eb22be58 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py @@ -14,16 +14,7 @@ # Initialize logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') -# Since OctoAI has different naming for llama models, get the huggingface offical model name using OctoAI names. 
-def get_model_name(model): - if model == "meta-llama-3-70b-instruct": - return "meta-llama/Meta-Llama-3-70B-Instruct" - elif model == "meta-llama-3-8b-instruct": - return "meta-llama/Meta-Llama-3-8B-Instruct" - elif model == "llama-2-7b-chat": - return "meta-llama/Llama-2-7b-chat-hf" - else: - return "meta-llama/Llama-2-70b-chat-hf" + def read_text_file(file_path): try: with open(file_path, 'r') as f: @@ -88,8 +79,13 @@ def read_file_content(context): if len(text) == 0: logging.error(f"Error reading files, text is empty") return ' '.join(file_strings) - - +# clean the text by removing all parts that did not contain any alphanumeric characters +def clean(s): + result = [] + for item in s.split('"'): + if any(c.isalnum() for c in item): + result.append(item) + return " ".join(result) def parse_qa_to_json(response_string): split_lines = response_string.split("\n") @@ -109,21 +105,21 @@ def parse_qa_to_json(response_string): end = i # found Question means we have reached the end of the question, so add it to qa_list elif '"Question":' in line: - question = " ".join(" ".join(split_lines[start:end]).split('"Question":')[1].split('"')[1:-1]) - answer = " ".join(" ".join(split_lines[end:i]).split('"Answer":')[1].split('"')[1:-1]) + question = " ".join(split_lines[start:end]).split('"Question":')[1] + answer = " ".join(split_lines[end:i]).split('"Answer":')[1] start,end = i,None - qa_set.add((question, answer)) + qa_set.add((clean(question), clean(answer))) # adding last question back to qa_list - if start and end: - question = " ".join(" ".join(split_lines[start:end]).split('"Question":')[1].split('"')[1:-1]) - answer = " ".join(" ".join(split_lines[end:i]).split('"Answer":')[1].split('"')[1:-1]) - qa_set.add((question, answer)) + if start and end: + question = " ".join(split_lines[start:end]).split('"Question":')[1] + answer = " ".join(split_lines[end:]).split('"Answer":')[1] + qa_set.add((clean(question), clean(answer))) qa_list = [{"question": q, "answer":a} for q,a in qa_set] return json.dumps(qa_list, indent=4) -async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, total_questions: int) -> dict: - prompt_for_system = api_context['question_prompt_template'].format(total_questions=total_questions, language=api_context["language"]) +async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict: + prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions, language=api_context["language"]) chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': document_content}] result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) if not result: @@ -142,7 +138,8 @@ async def generate_question_batches(chat_service, api_context: dict): total_questions = api_context["total_questions"] batches_count = len(document_batches) - base_questions_per_batch = total_questions // batches_count + # each batch should have at least 1 question + base_questions_per_batch = max(total_questions // batches_count,1) extra_questions = total_questions % batches_count print(f"Questions per batch: {base_questions_per_batch} (+1 for the first {extra_questions} batches), Total questions: {total_questions}, Batches: {batches_count}") From d5767a1200d450666efd2adc0cfc846c3c6b8cfc Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 7 May 2024 15:55:59 -0700 Subject: [PATCH 05/35] changed requirement.txt and readme.md --- 
 .../end2end-recipes/chatbot/README.md         | 26 +++++++--------
 .../chatbot/data_pipelines/REAME.md           | 33 +++++++++++++------
 requirements.txt                              |  4 +++
 3 files changed, 39 insertions(+), 24 deletions(-)

diff --git a/recipes/use_cases/end2end-recipes/chatbot/README.md b/recipes/use_cases/end2end-recipes/chatbot/README.md
index de992d311..6b0ddb817 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/README.md
+++ b/recipes/use_cases/end2end-recipes/chatbot/README.md
@@ -2,23 +2,23 @@

 Large language models (LLMs) have emerged as groundbreaking tools, capable of understanding and generating human-like text. These models power many of today's advanced chatbots, providing more natural and engaging user experiences. But how do we create these intelligent systems?

-Here, we aim to make an FAQ model for Llama that be able to answer questions about Llama by fine-tune Llama2 7B chat using existing official Llama documents.
+Here, we aim to build an FAQ model for Llama that is able to answer questions about Llama by fine-tuning the Meta Llama 3 8B instruct model using existing official Llama documents.


 ### Fine-tuning Process

-Fine-tuning LLMs here LLama 2 involves several key steps: Data Collection, preprocessing, fine-tuning, evaluation.
+Fine-tuning the Meta Llama 3 8B instruct model involves several key steps: Data Collection, Preprocessing, Fine-tuning, and Evaluation.


 ### LLM Generated datasets

-As Chatbots are usually domain specifics and based on public or proprietary data, one common way inspired by [self-instruct paper](https://arxiv.org/abs/2212.10560) is to use LLMs to assist building the dataset from our data. For example to build an FAQ model, we can use Llama model to process our documents and help us build question and answer pair (We will showcase this here). Just keep it in mind that usually most of the proprietary LLMs has this clause in their license that you are not allowed to use the output generated from the model to train another LLM. In this case we will use Llama to fine-tune another Llama model.
+As chatbots are usually domain specific and based on public or proprietary data, one common approach, inspired by the [self-instruct paper](https://arxiv.org/abs/2212.10560), is to use LLMs to assist in building the dataset from our data. For example, to build an FAQ model, we can use the powerful Meta Llama 3 70B model to process our documents and help us build question and answer pairs (we will showcase this here). Keep in mind that most proprietary LLMs have a clause in their license that forbids using the output generated from the model to train another LLM. In this case we will fine-tune another Llama model with the help of Meta Llama 3 70B.


 Similarly, we will use the same LLM to evaluate the quality of generated datasets and finally evaluate the outputs from the model.


-Given this context, here we want to highlight some of best practices that need to be in place for data collection and pre-processing in general.
+Given this context, here we want to highlight some of the best practices that need to be in place for data collection and preprocessing in general.

@@ -129,8 +129,8 @@ For a FAQ model, you need to format your data in a way that's conducive to learn

 Question-Answer Pairing: Organize your data into pairs where each question is directly followed by its answer. This simple structure is highly effective for training models to understand and generate responses.
For example: ```python -"question": "What is Llama 2?", -"answer": "Llama 2 is a collection of pretrained and fine-tuned large language models ranging from 7 billion to 70 billion parameters, optimized for dialogue use cases." +"question": "What is Llama 3?", +"answer": "Llama 3 is a collection of pretrained and fine-tuned large language models ranging from 8 billion to 70 billion parameters, optimized for dialogue use cases." ``` @@ -138,23 +138,23 @@ Question-Answer Pairing: Organize your data into pairs where each question is di 4. **Fine-Tuning:** Given that we have a selected pretrained model, in this case we use Llama 2 7B Chat, fine-tuning with more specific data can improve its performance on particular tasks, such as answering questions about Llama in this case. -#### Building Dataset +#### Building Dataset During the self-instruct process of generating Q&A pairs from documents, we realized that with our system prompt being ```python You are a language model skilled in creating quiz questions. You will be provided with a document, read it and generate question and answer pairs -that are most likely be asked by a use of llama that just want to start, +that are most likely to be asked by a user of Llama who just wants to start, please make sure you follow those rules, 1. Generate only {total_questions} question answer pairs. 2. Generate in {language}. -3. The questions can be answered based *solely* on the given passage. +3. The questions can be answered based *solely* on the given passage. 4. Avoid asking questions with similar meaning. 5. Make the answer as concise as possible, it should be at most 60 words. 6. Provide relevant links from the document to support the answer. 7. Never use any abbreviation. -8. Return the result in json format with the template: +8. Return the result in json format with the template: [ {{ "question": "your question A.", @@ -185,7 +185,7 @@ The model tends to ignore providing the bigger picture in the questions, for example #### Data Insights -We generated a dataset of almost 650 Q&A pairs from some of the open source documents about Llama 2, including getting started guide from Llama website, its FAQ, Llama 2, Purple Llama, Code Llama papers and Llama-Recipes documentations. +We generated a dataset of almost 800 Q&A pairs from some of the open source documents about Llama models, including the getting started guide from the Llama website, its FAQ, the Llama 3, Purple Llama and Code Llama papers, and the Llama-Recipes documentation. We have run some fine-tuning experiments on a single GPU using quantization, with different LoRA configs (all linear layers versus query and key projections only; a sketch of these two configs follows below) and different numbers of epochs. Although train and eval loss show a decrease, especially when using all linear layers in the LoRA config and training with 6 epochs, the result is still far from acceptable in real tests. @@ -207,6 +207,4 @@ Below are some examples of real tests on the fine-tuned model with very poor results
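For reference, the two LoRA configurations compared above would look roughly like the sketch below, written with the `peft` library's `LoraConfig`. The hyperparameter values are illustrative assumptions rather than our exact settings, and the module names assume the Hugging Face Llama architecture:

```python
from peft import LoraConfig

# Hypothetical sketch of the two LoRA setups compared above; r, alpha and
# dropout are illustrative values, not the exact ones used in our runs.
qk_only_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj"],  # query and key projections only
)

all_linear_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # all linear layers
)
```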

-Next, we are looking into augmenting our datasets. One way to do so, is to use our Llama 70B model to read our question answer pairs and come up with two paraphrase versions of each pair to augment our data. - - +Next, we are looking into augmenting our datasets. One way to do so is to use our Llama 70B model to read our question and answer pairs and come up with two paraphrased versions of each pair to augment our data. diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md index efdd22231..4f103fa54 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md @@ -2,31 +2,44 @@ ### Step 1 : Prepare related documents -Download all your desired docs in PDF, Text or Markdown format to "data" folder. +Download all your desired docs in PDF, Text or Markdown format to the "data" folder inside the data_pipelines folder. -In this case we have an example of [Llama 2 Getting started guide](https://llama.meta.com/get-started/) and other llama related documents such Llama2, Purple Llama, Code Llama papers along with Llama FAQ. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. +In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other Llama related documents such as the Llama 3, Purple Llama and Code Llama papers, along with the Llama FAQ. Ideally, we should have searched all Llama documents across the web and followed the procedure below on them, but that would be very costly for the purposes of a tutorial, so we will stick to our limited documents here. -### Step 2 : Prepare data (Q&A pairs) +### Step 2 : Prepare data (Q&A pairs) for fine-tuning -The idea here is to use Llama 70B using OctoAI APIs, to create question and answer (Q&A) pair datasets from these documents, this APIs could be replaced by any other API from other providers or alternatively using your on prem solutions such as the [TGI](../../../examples/hf_text_generation_inference/) or [VLLM](../../../examples/vllm/). Here we will use the prompt in the [./config.yaml] to instruct the model on the expected format and rules for generating the Q&A pairs. This is only one way to handle this which is a popular method but beyond this any other preprocessing routine that help us making the Q&A pairs works. +To use the Meta Llama 3 70B model to create the question and answer (Q&A) pair datasets from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host a local LLM server. +In this example, we use the OctoAI API as a demo; it could be replaced by any other API from other providers. -**NOTE** The generated data by these APIs/ the model needs to be vetted to make sure about the quality. +**NOTE** The data generated by these APIs or the model needs to be vetted to ensure its quality. ```bash export OCTOAI_API_TOKEN="OCTOAI_API_TOKEN" -python generate_question_answers.py +python generate_question_answers.py ``` -**NOTE** You need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. In our case we had to process each document at a time. Then merge all the Q&A `json` files to make our dataset.
We aimed for a specific number of Q&A pairs per document anywhere between 50-100. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. +**NOTE** You need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day) limits on your account if you are using any model API provider. In our case we had to process one document at a time, then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document, anywhere between 50 and 100. This is experimental and totally depends on your documents, the wealth of information in them, and how you prefer to handle questions with short or longer answers, etc. -### Step 2 : Prepare dataset for fine-tuning Llama 2 Chat model +Alternatively we can use on-prem solutions such as the [TGI](../../../examples/hf_text_generation_inference/) or [VLLM](../../../examples/vllm/). Here we will use the prompt in [config.yaml](./config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a VLLM OpenAI-compatible server that hosts Meta Llama 3 70B Instruct locally and generates the Q&A pair datasets. -Here, as we want to fine-tune a chatbot model so its preferred to start with Llama 2 Chat model which already is instruction fine-tuned to serve as an assistant and further fine-tuned it for our Llama related data. +```bash +# Make sure VLLM has been installed +CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8000 +``` + +**NOTE** Please make sure the port is not already in use. Since the Meta Llama 3 70B Instruct model requires at least 135GB of GPU memory, we need to use multiple GPUs to host it in a tensor-parallel way. + +Once the server is ready, we can query it on port 8000 from another terminal. Here, "-v" sets the port number and "-t" sets the total number of questions we want to generate. +```bash +python generate_question_answers.py -v 8000 -t 800 +``` + +This Python program will read all the documents inside the "data" folder, split the data into batches by the context window limit (8K for Meta Llama 3 and 4K for Llama 2) and apply the chat template, defined in "config.yaml", to each batch. Then it will use each batch to query the VLLM server and save the returned answers into data.json after some post-processing steps.
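To make the batching step concrete, here is a minimal sketch of the idea. This is a hypothetical helper, not the script's exact implementation: it assumes a rough four-characters-per-token heuristic, whereas the real pipeline measures length with the model tokenizer.

```python
# Rough sketch of context-window batching, assuming ~4 characters per token
# as a rule of thumb; the actual pipeline counts tokens with the tokenizer.
def split_into_batches(text: str, context_window_tokens: int = 8000) -> list:
    max_chars = context_window_tokens * 4
    batches, current = [], ""
    for paragraph in text.split("\n\n"):
        # an oversized single paragraph would need further splitting in practice
        if current and len(current) + len(paragraph) > max_chars:
            batches.append(current.strip())
            current = ""
        current += paragraph + "\n\n"
    if current.strip():
        batches.append(current.strip())
    return batches
```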
### Step 3: Run the training ```bash torchrun --nnodes 1 --nproc_per_node 1 examples/finetuning.py --use_peft --peft_method lora --quantization --model_name meta-llama/Llama-2-7b-chat-hf --output_dir ./peft-7b-quantized --num_epochs 1 --batch_size 1 --dataset "custom_dataset" --custom_dataset.file "examples/llama_dataset.py" --run_validation False --custom_dataset.data_path './dataset.json' -``` \ No newline at end of file +``` diff --git a/requirements.txt b/requirements.txt index df2c66fd7..979ad2546 100644 --- a/requirements.txt +++ b/requirements.txt @@ -19,3 +19,7 @@ chardet openai typing-extensions==4.8.0 tabulate +octoai +python-magic +PyPDF2 +aiofiles From 274ed14aa02ac1a30a5a9e1c99674c7130c58ce0 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 14 May 2024 13:47:51 -0700 Subject: [PATCH 06/35] adding self-curation using LLM --- .../finetuning/datasets/chatbot_dataset.py | 40 +++++ .../data_pipelines/{REAME.md => README.md} | 8 +- .../chatbot/data_pipelines/config.yaml | 16 +- .../chatbot/data_pipelines/evalset.json | 147 ++++++++++++++++++ .../generate_question_answers.py | 33 ++-- .../chatbot/data_pipelines/generator_utils.py | 39 ++++- src/llama_recipes/configs/datasets.py | 19 +-- src/llama_recipes/finetuning.py | 2 +- 8 files changed, 277 insertions(+), 27 deletions(-) create mode 100644 recipes/finetuning/datasets/chatbot_dataset.py rename recipes/use_cases/end2end-recipes/chatbot/data_pipelines/{REAME.md => README.md} (81%) create mode 100644 recipes/use_cases/end2end-recipes/chatbot/data_pipelines/evalset.json diff --git a/recipes/finetuning/datasets/chatbot_dataset.py b/recipes/finetuning/datasets/chatbot_dataset.py new file mode 100644 index 000000000..bb1ee76c0 --- /dev/null +++ b/recipes/finetuning/datasets/chatbot_dataset.py @@ -0,0 +1,40 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. 
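+# This dataset module turns each question/answer pair into a single training
+# sequence: the question is wrapped in [INST] ... [/INST] tags, the answer is
+# appended with an EOS token, and the prompt tokens are masked with -100 so
+# that the loss is only computed on the answer tokens (see tokenize_dialog).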
+ + +import copy +import datasets +from datasets import Dataset, load_dataset, DatasetDict +import itertools + + +B_INST, E_INST = "[INST]", "[/INST]" + +def tokenize_dialog(q_a_pair, tokenizer): + prompt_tokens = [tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(question).strip()} {E_INST}", add_special_tokens=False) for question in q_a_pair["question"]] + answer_tokens = [tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) for answer in q_a_pair["answer"]] + dialog_tokens = list(itertools.chain.from_iterable(zip(prompt_tokens, answer_tokens))) + # Add labels, convert prompt tokens to -100 in order to ignore them in the loss function + labels_tokens = [len(c)*[-100,] if i % 2 == 0 else c for i,c in enumerate(dialog_tokens)] + + combined_tokens = { + "input_ids": list(itertools.chain(*(t for t in dialog_tokens))), + "labels": list(itertools.chain(*(t for t in labels_tokens))), + } + + return dict(combined_tokens, attention_mask=[1]*len(combined_tokens["input_ids"])) + + +def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8): + dataset = load_dataset('json', data_files=dataset_config.data_path) + dataset = dataset['train'].train_test_split(test_size=1-split_ratio, shuffle=True) + + dataset = dataset[split].map(lambda sample: { + "question": sample["question"], + "answer": sample["answer"], + }, + batched=True, + ) + dataset = dataset.map(lambda x: tokenize_dialog(x, tokenizer)) + return dataset diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/README.md similarity index 81% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md rename to recipes/use_cases/end2end-recipes/chatbot/data_pipelines/README.md index 4f103fa54..e2c75b435 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/REAME.md +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/README.md @@ -40,6 +40,12 @@ This Python program will read all the documents inside the "data" folder, spli ### Step 3: Run the training +Run distributed training with: ```bash -torchrun --nnodes 1 --nproc_per_node 1 examples/finetuning.py --use_peft --peft_method lora --quantization --model_name meta-llama/Llama-2-7b-chat-hf --output_dir ./peft-7b-quantized --num_epochs 1 --batch_size 1 --dataset "custom_dataset" --custom_dataset.file "examples/llama_dataset.py" --run_validation False --custom_dataset.data_path './dataset.json' +CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/data_pipelines/data.json' ``` +### Step 4: Testing with local inference + +```bash +python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b ``` diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml index 7a9cb5536..eee83d5b4 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml +++
b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml @@ -1,10 +1,9 @@ question_prompt_template: > You are a language model skilled in creating quiz questions. You will be provided with a document, - read it and generate question and answer pairs - that are most likely be asked by a use of llama that just want to start, + read it and please generate question and answer pairs that are most likely to be asked by a user of the Llama model, please make sure you follow those rules, - 1. Generate only {num_questions} question answer pairs. + 1. Generate at most {num_questions} question answer pairs; you can generate fewer questions if you believe there is nothing related to Llama. 2. Generate in {language}. 3. The questions can be answered based *solely* on the given passage. 4. Avoid asking questions with similar meaning. @@ -23,6 +22,17 @@ question_prompt_template: > }} ] +eval_prompt_template: > + Below is a question and answer pair about the Llama language model. Evaluate + whether or not this question and answer pair will be helpful for a user of the Llama language model. + Respond with only a single JSON blob with a "Reason" field that is a short (less than 100 words) + explanation of your answer and an "Answer" field which is YES or NO. Only generate the answer in {language}. + Return the result in json format with the template: + {{ + "Reason": "your reason here.", + "Answer": "YES or NO." + }}, + data_dir: "./data" language: "English" diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/evalset.json b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/evalset.json new file mode 100644 index 000000000..e1b016a20 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/evalset.json @@ -0,0 +1,147 @@ +[ +{ +"question": "What if I want to access Llama models but I’m not sure if my use is permitted under the Llama 2 Community License?", +"answer": "On a limited case by case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request." +}, +{ +"question": "Why are you not sharing the training datasets for Llama?", +"answer": "We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations." +}, +{ +"question": "Did we use human annotators to develop the data for our models?", +"answer": "Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper." +}, +{ +"question": "Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?", +"answer": "It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
+}, +{ +"question": "What operating systems (OS) are officially supported?", +"answer": "For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo." +}, +{ +"question": "I am getting 'Issue with the URL' as an error message. What should I do?", +"answer": "This issue occurs because of not copying the URL correctly. If you right click on the link and copy the link, the link may be copied with URL Defense wrapper. To avoid this issue, select the URL manually and copy it." +}, +{ +"question": "Does Llama 2 support other languages outside of English?", +"answer": "The model was primarily trained on English with a bit of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You’ll find the full list of languages referenced in the research paper. You can look at some of the community-led projects to fine-tune Llama 2 models to support other languages. (e.g. link)" +}, +{ +"question": "If I’m a developer/business, how can I access the models?", +"answer": "Details on how to access the models are available on our website link. Please note that the models are subject to the acceptable use policy and the provided responsible use guide. Models are available through multiple sources but the place to start is at https://llama.meta.com/ Model code, quickstart guide and fine-tuning examples are available through our Github Llama repository. Model Weights are available through an email link after the user submits a sign-up form. Models are also being hosted by Microsoft, Amazon Web Services, and Hugging Face, and may also be available through other hosting providers in the future." +}, +{ +"question": "Can anyone access Llama models? What are the terms?", +"answer": "Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website and licensed under the applicable Llama Community License Agreement, which provides a permissive license to the models along with certain restrictions to help ensure that the models are being used responsibly." +}, +{ +"question": "What are the hardware SKU requirements for deploying these models?", +"answer": "Hardware requirements vary based on latency, throughput and cost constraints. For good latency, we split models across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs, or even commodity hardware can also be used to deploy these models (e.g. llama cpp, MLC LLM)." +}, +{ +"question": "Do Llama models provide traditional autoregressive text completion?", +"answer": "Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text." +}, +{ +"question": "Does the model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?", +"answer": "The vanilla model of Llama does not, however, the Code Llama models have been trained with fill-in-the-middle completion to assist with tasks like code completion." +}, +{ +"question": "Do Llama models support logit biases as a request parameter to control token probabilities during sampling?", +"answer": "This is implementation dependent (i.e.
the code used to run the model)." +}, +{ +"question": "Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?", +"answer": "The model itself supports these parameters, but whether they are exposed or not depends on implementation." +}, +{ +"question": "What is the most effective RAG method paired with Llama models?", +"answer": "There are many ways to use RAG with Llama. The most popular libraries are LangChain and LlamaIndex, and many of our developers have used them successfully with Llama 2. (See the LangChain and LlamaIndex sections of this document)." +}, +{ +"question": "How to set up Llama models with an EC2 instance?", +"answer": "You can find steps on how to set up an EC2 instance in the AWS section of this document here." +}, +{ +"question": "What is the right size of EC2 instances needed for running each of the llama models?", +"answer": "The AWS section of this document has some insights on instance size that you can start with. You can find the section here." +}, +{ +"question": "Should we start training with the base or instruct/chat model?", +"answer": "This depends on your application. The Llama pre-trained models were trained for general large language applications, whereas the Llama instruct or chat models were fine tuned for dialogue specific uses like chat bots." +}, +{ +"question": "I keep getting a 'CUDA out of memory' error.", +"answer": "This error can be caused by a number of different factors including model size being too large, inefficient memory usage and so on. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. 1. Ensure your GPU has enough memory 2. Reduce the batch_size 3. Lower the Precision 4. Clear cache 5. Modify the Model/Training" +}, +{ +"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama+Retrieval?", +"answer": "If multiple calls are necessary then you could look into the following: 1. Optimize inference so each call has less latency. 2. Merge the calls into fewer calls. For example summarize the data and utilize the summary. 3. Possibly utilize Llama 2 function calling. 4. Consider fine-tuning the model with the updated data." +}, +{ +"question": "How can I fine tune the Llama models?", +"answer": "You can find examples on how to fine tune the Llama models in the Llama Recipes repository." +}, +{ +"question": "How can I pretrain the Llama models?", +"answer": "You can adapt the finetuning script found here for pre-training. You can also find the hyperparams used for pretraining in Section 2 of the Llama2 paper." +}, +{ +"question": "Am I allowed to develop derivative models through fine-tuning based on Llama models for languages other than English? Is this a violation of the acceptable use policy?", +"answer": "Developers may fine-tune Llama models for languages beyond English provided they comply with the applicable Llama 3 License Agreement, Llama Community License Agreement and the Acceptable Use Policy." +}, +{ +"question": "How can someone reduce hallucinations with fine-tuned Llama models?", +"answer": "Although prompts cannot eliminate hallucinations completely, they can reduce it significantly. Using techniques like Chain-of-Thought, Instruction-Based, N-Shot, and Few-Shot can help depending on your application.
Additionally, prompting the models to back up the responses by verifying with factual data sets or requesting the models to provide the source of information can help as well. Overall finetuning should also be helpful for reducing hallucination." +}, +{ +"question": "What are the hardware SKU requirements for fine-tuning Llama pre-trained models?", +"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090: (https://github.com/tloen/alpaca-lora))" +}, +{ +"question": "What fine-tuning tasks would these models support?", +"answer": "The Llama 2 fine-tuned models were fine tuned for dialogue specific uses like chat bots." +}, +{ +"question": "Are there examples on how one can fine-tune the models?", +"answer": "You can find example fine-tuning scripts in the Github recipes repository. You can also review the fine-tuning section in this document." +}, +{ +"question": "What is the difference between a pre-trained and fine-tuned model?", +"answer": "The Llama pre-trained models were trained for general large language applications, whereas the Llama chat or instruct models were fine tuned for dialogue specific uses like chat bots." +}, +{ +"question": "How should we think about post processing (validate generated data) as a way to fine tune models?", +"answer": "Essentially, having truthful data on the specific application can be helpful to reduce the risk on a specific application. Also setting some sort of threshold such as prob>90% might be helpful to get more confidence in the output." +}, +{ +"question": "What are the different libraries that we recommend for fine tuning?", +"answer": "You can find some fine-tuning recommendations in the Github recipes repository as well as the fine-tuning section of this document." +}, +{ +"question": "How can we identify the right ‘r’ value for the LoRA method for a certain use-case?", +"answer": "The best approach would be to review the LoRA research paper for more information on the rankings, then reviewing similar implementations for other models and finally experimenting." +}, +{ +"question": "We hope to use prompt engineering as a lever to nudge behavior. Any pointers on enhancing instruction-following by fine-tuning small llama models?", +"answer": "Take a look at the Fine tuning section in our Getting started with Llama guide of this document for some pointers towards fine tuning." +}, +{ +"question": "Strategies to help models handle longer conversations?", +"answer": "You can find some helpful information towards this in the Prompting and LangChain sections of this document." +}, +{ +"question": "Are Llama models open source? What is the exact license these models are published under?", +"answer": "Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. For more details, our licenses can be found at (https://llama.meta.com/license/) (Meta Llama 2) and (https://llama.meta.com/llama3/license/) (Meta Llama 3)."
+}, +{ +"question": "Are there examples that help licensees better understand how “MAU” is defined?", +"answer": "'MAU' means 'monthly active users' that access or use your (and your affiliates’) products and services. Examples include users accessing an internet-based service and monthly users/customers of licensee’s hardware devices." +}, +{ +"question": "Does the Critical Infrastructure restriction in the acceptable use policy (AUP) prevent companies who have special critical infrastructure certification (e.g., a registered operator of “critical infrastructure” under the German BSI Act) from using Llama?", +"answer": "No, such companies are not prohibited when their usage of Llama is not related to the operation of critical infrastructure. Llama, however, may not be used in the operation of critical infrastructure by any company, regardless of government certifications." +} +] + diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py index 350c53595..489755c5c 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py @@ -5,7 +5,7 @@ import asyncio import json from config import load_config -from generator_utils import generate_question_batches, parse_qa_to_json +from generator_utils import generate_question_batches, parse_qa_to_json, generate_data_eval from itertools import chain import logging import aiofiles # Ensure aiofiles is installed for async file operations @@ -27,14 +27,14 @@ ,"llama-2-70b-chat":"meta-llama/Llama-2-70b-chat-hf"} class ChatService(ABC): @abstractmethod - async def execute_chat_request_async(self, api_context: dict, chat_request): + async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): pass # Please implement your own chat service class here. # The class should inherit from the ChatService class and implement the execute_chat_request_async method. # The following are two example chat service classes that you can use as a reference. class OctoAIChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request): + async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): async with request_limiter: try: event_loop = asyncio.get_running_loop() @@ -47,7 +47,10 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): ) response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - assistant_response_json = parse_qa_to_json(assistant_response) + if eval: + assistant_response_json = json.loads(assistant_response) + else: + assistant_response_json = parse_qa_to_json(assistant_response) return assistant_response_json except Exception as error: @@ -56,7 +59,7 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): # Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent # please read for more detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. 
class VllmChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request): + async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): async with request_limiter: try: event_loop = asyncio.get_running_loop() @@ -70,9 +73,10 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): ) response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - assistant_response_json = parse_qa_to_json(assistant_response) - if len(assistant_response_json)==0: - logging.error("No question/answer pairs generated. Please check the input context or model configuration.") + if eval: + assistant_response_json = json.loads(assistant_response) + else: + assistant_response_json = parse_qa_to_json(assistant_response) return assistant_response_json except Exception as error: logging.error(f"Error during chat request execution: {error}",exc_info=True) @@ -90,12 +94,19 @@ async def main(context): logging.warning("No data generated. Please check the input context or model configuration.") return flattened_list = list(chain.from_iterable(data)) + # with open("data.json") as fp: + # flattened_list = json.load(fp) logging.info(f"Successfully generated {len(flattened_list)} question/answer pairs.") # Use asynchronous file operation for writing to the file - async with aiofiles.open("data.json", "w") as output_file: - await output_file.write(json.dumps(flattened_list, indent=4)) - logging.info("Data successfully written to 'data.json'. Process completed.") + # async with aiofiles.open("data.json", "w") as output_file: + # await output_file.write(json.dumps(flattened_list, indent=4)) + # logging.info("Data successfully written to 'data.json'. Process completed.") + curated_data = await generate_data_eval(chat_service, context,flattened_list) + logging.info(f"Only {len(curated_data)} question/answer pairs passed the self-curation") + # write the curated data (not the raw list) and avoid shadowing the variable with the file handle + async with aiofiles.open("curated_data.json", "w") as output_file: + await output_file.write(json.dumps(curated_data, indent=4)) + logging.info("Data successfully written to 'curated_data.json'.
Process completed.") except Exception as e: logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py index 3eb22be58..bd361e78f 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py @@ -121,10 +121,28 @@ def parse_qa_to_json(response_string): async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict: prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions, language=api_context["language"]) chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': document_content}] - result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False) if not result: return {} - return json.loads(await chat_service.execute_chat_request_async(api_context, chat_request_payload)) + # parse the result we already have instead of issuing the same request twice + return json.loads(result) +# This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. +async def data_eval_request(chat_service, api_context: dict, document_content: dict) -> dict: + prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['question']}, Answer: {document_content['answer']}"}] + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=True) + if not result: + return {} + if "Answer" not in result: + print("Error: eval response does not contain answer") + print(document_content,result) + return {} + # Send back the original QA pair if the model eval result is YES + if result["Answer"] == "YES": + return document_content + else: + print(document_content,result) + return {} + async def generate_question_batches(chat_service, api_context: dict): document_text = read_file_content(api_context) @@ -158,3 +176,20 @@ async def generate_question_batches(chat_service, api_context: dict): question_generation_results = await asyncio.gather(*generation_tasks) return question_generation_results + +async def generate_data_eval(chat_service, api_context: dict, generated_questions: list): + eval_tasks = [] + for batch_index, batch_content in enumerate(generated_questions): + try: + result = data_eval_request(chat_service, api_context, batch_content) + eval_tasks.append(result) + except Exception as e: + print(f"Error during data eval request execution: {e}") + + eval_results = await asyncio.gather(*eval_tasks) + curated_data = [] + for item in eval_results: + # if the item is not empty, add it to the curated data list + if item: + curated_data.append(item) + return curated_data diff --git a/src/llama_recipes/configs/datasets.py b/src/llama_recipes/configs/datasets.py index 0c41d0a4d..156541b45 100644 --- a/src/llama_recipes/configs/datasets.py +++ b/src/llama_recipes/configs/datasets.py @@ -3,32 +3,33 @@ from dataclasses import dataclass - + @dataclass class
samsum_dataset: dataset: str = "samsum_dataset" train_split: str = "train" test_split: str = "validation" - - + + @dataclass class grammar_dataset: dataset: str = "grammar_dataset" - train_split: str = "src/llama_recipes/datasets/grammar_dataset/gtrain_10k.csv" + train_split: str = "src/llama_recipes/datasets/grammar_dataset/gtrain_10k.csv" test_split: str = "src/llama_recipes/datasets/grammar_dataset/grammar_validation.csv" - + @dataclass class alpaca_dataset: dataset: str = "alpaca_dataset" train_split: str = "train" test_split: str = "val" data_path: str = "src/llama_recipes/datasets/alpaca_data.json" - - + + @dataclass class custom_dataset: dataset: str = "custom_dataset" - file: str = "examples/custom_dataset.py" + file: str = "recipes/finetuning/datasets/custom_dataset.py" train_split: str = "train" - test_split: str = "validation" \ No newline at end of file + test_split: str = "validation" + data_path: str = "" diff --git a/src/llama_recipes/finetuning.py b/src/llama_recipes/finetuning.py index 0759809b8..27911d9f9 100644 --- a/src/llama_recipes/finetuning.py +++ b/src/llama_recipes/finetuning.py @@ -134,7 +134,7 @@ def main(**kwargs): tokenizer = AutoTokenizer.from_pretrained(train_config.model_name if train_config.tokenizer_name is None else train_config.tokenizer_name) tokenizer.pad_token_id = tokenizer.eos_token_id - # If there is a mismatch between tokenizer vocab size and embedding matrix, + # If there is a mismatch between tokenizer vocab size and embedding matrix, # throw a warning and then expand the embedding matrix if len(tokenizer) > model.get_input_embeddings().weight.shape[0]: print("WARNING: Resizing the embedding matrix to match the tokenizer vocab size.") From 9add30acb26867f691cae92fca64469cc909f05c Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Wed, 15 May 2024 16:25:39 -0700 Subject: [PATCH 07/35] restructured folders and added eval pipeline --- .../{data_pipelines => pipelines}/README.md | 48 ++++++- .../chat_utils.py} | 98 ++------------- .../{data_pipelines => pipelines}/config.py | 0 .../doc_processor.py | 0 .../chatbot/pipelines/eval_chatbot.py | 118 ++++++++++++++++++ .../chatbot/pipelines/eval_config.yaml | 12 ++ .../evalset.json | 0 .../pipelines/generate_question_answers.py | 88 +++++++++++++ .../generation_config.yaml} | 2 +- .../generator_utils.py | 10 +- requirements.txt | 3 + 11 files changed, 282 insertions(+), 97 deletions(-) rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines => pipelines}/README.md (57%) rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines/generate_question_answers.py => pipelines/chat_utils.py} (52%) rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines => pipelines}/config.py (100%) rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines => pipelines}/doc_processor.py (100%) create mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py create mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines => pipelines}/evalset.json (100%) create mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines/config.yaml => pipelines/generation_config.yaml} (98%) rename recipes/use_cases/end2end-recipes/chatbot/{data_pipelines => pipelines}/generator_utils.py (96%) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/README.md 
b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md similarity index 57% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/README.md rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md index e2c75b435..01024bf29 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/README.md +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md @@ -1,4 +1,4 @@ -## Data Preprocessing Steps +## End to End Steps to create a Chatbot using fine-tuning ### Step 1 : Prepare related documents @@ -38,13 +38,53 @@ python generate_question_answers.py -v 8000 -t 800 This Python program will read all the documents inside the "data" folder, split the data into batches by the context window limit (8K for Meta Llama 3 and 4K for Llama 2) and apply the chat template, defined in "config.yaml", to each batch. Then it will use each batch to query the VLLM server and save the returned answers into data.json after some post-processing steps. -### Step 3: Run the training +### Step 3: Run the fine-tuning -Run distributed training with: +Run distributed fine-tuning with: ```bash CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/data_pipelines/data.json' ``` + +or run the fine-tuning on a single GPU: + +```bash +python recipes/finetuning/finetuning.py --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/data_pipelines/data.json' +``` + +For more details, please check the readme in the finetuning recipe. + +### Step 4: Evaluating with local inference + +Once we have the fine-tuned model, we need to evaluate it to understand its performance. Normally, to create an evaluation set, we should first gather some questions and manually write the ground truth answers. In this case, we created an eval set based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. Then we pass the evalset questions to our fine-tuned model to get the model generated answers. To compare the model generated answers with the ground truth, we can either use traditional eval methods, e.g. calculating the ROUGE score, or use an LLM to act as a judge and score their similarity. + +First we need to start the VLLM server to host our fine-tuned 8B model. Since we used the PEFT library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature. The VLLM server will first load the original model and then apply our LoRA adapter weights.
+ +```bash +python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules chatbot=./chatbot-8b --port 8000 --disable-log-requests +``` + +**NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", this means that VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source. + +```bash +git clone https://github.com/vllm-project/vllm.git +cd vllm +VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e . +``` + +Then pass the eval_set json file to the VLLM server and start the comparison evaluation. Notice that our model is now called chatbot instead of meta-llama/Meta-Llama-3-8B-Instruct. + +```bash +python eval_chatbot.py -m chatbot -v 8000 +``` +We can also quickly compare our fine-tuned chatbot model with the original 8B model using + +```bash +python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000 +``` + +### Step 5: Testing with local inference + +Once we believe our fine-tuned model has passed our evaluation, we can deploy it locally and manually test it by asking questions. ```bash python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py similarity index 52% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py index 489755c5c..2f89a2c9f 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generate_question_answers.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py @@ -1,30 +1,21 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. - -import argparse import asyncio -import json -from config import load_config -from generator_utils import generate_question_batches, parse_qa_to_json -from itertools import chain import logging -import aiofiles # Ensure aiofiles is installed for async file operations from abc import ABC, abstractmethod from octoai.client import OctoAI from functools import partial from openai import OpenAI - +import json +from generator_utils import generate_question_batches, parse_qa_to_json, generate_data_eval # Configure logging to include the timestamp, log level, and message logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') - -# Manage rate limits with throttling -rate_limit_threshold = 2000 -allowed_concurrent_requests = int(rate_limit_threshold * 0.75) -request_limiter = asyncio.Semaphore(allowed_concurrent_requests) # Since OctoAI has different naming for llama models, create this mapping to get the Hugging Face official model name given OctoAI names.
MODEL_NAME_MAPPING={"meta-llama-3-70b-instruct":"meta-llama/Meta-Llama-3-70B-Instruct", "meta-llama-3-8b-instruct":"meta-llama/Meta-Llama-3-8B-Instruct","llama-2-7b-chat":"meta-llama/Llama-2-7b-chat-hf" ,"llama-2-70b-chat":"meta-llama/Llama-2-70b-chat-hf"} +# Manage rate limits with throttling +rate_limit_threshold = 2000 +allowed_concurrent_requests = int(rate_limit_threshold * 0.75) +request_limiter = asyncio.Semaphore(allowed_concurrent_requests) class ChatService(ABC): @abstractmethod async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): @@ -63,7 +54,10 @@ async def execute_chat_request_async(self, api_context: dict, chat_request, eval async with request_limiter: try: event_loop = asyncio.get_running_loop() - model_name = MODEL_NAME_MAPPING[api_context['model']] + if api_context["model"] in MODEL_NAME_MAPPING: + model_name = MODEL_NAME_MAPPING[api_context['model']] + else: + model_name = api_context['model'] client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") api_chat_call = partial( client.chat.completions.create, @@ -74,6 +68,7 @@ async def execute_chat_request_async(self, api_context: dict, chat_request, eval response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") if eval: + print(assistant_response) assistant_response_json = json.loads(assistant_response) else: assistant_response_json = parse_qa_to_json(assistant_response) @@ -81,74 +76,3 @@ async def execute_chat_request_async(self, api_context: dict, chat_request, eval except Exception as error: logging.error(f"Error during chat request execution: {error}",exc_info=True) return "" - -async def main(context): - if context["endpoint"]: - chat_service = VllmChatService() - else: - chat_service = OctoAIChatService() - try: - logging.info("Starting to generate question/answer pairs.") - data = await generate_question_batches(chat_service, context) - if not data: - logging.warning("No data generated. Please check the input context or model configuration.") - return - flattened_list = list(chain.from_iterable(data)) - # with open("data.json") as fp: - # flattened_list = json.load(fp) - logging.info(f"Successfully generated {len(flattened_list)} question/answer pairs.") - # Use asynchronous file operation for writing to the file - - # async with aiofiles.open("data.json", "w") as output_file: - # await output_file.write(json.dumps(flattened_list, indent=4)) - # logging.info("Data successfully written to 'data.json'. Process completed.") - curated_data = await generate_data_eval(chat_service, context,flattened_list) - logging.info(f"Only {len(curated_data)} question/answer pairs pass the self-curation") - async with aiofiles.open("curated_data.json", "w") as curated_data: - await curated_data.write(json.dumps(flattened_list, indent=4)) - logging.info("Data successfully written to 'curated_data.json'. Process completed.") - except Exception as e: - logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) - -def parse_arguments(): - # Define command line arguments for the script - parser = argparse.ArgumentParser( - description="Generate question/answer pairs from documentation." - ) - parser.add_argument( - "-t", "--total_questions", - type=int, - default=100, - help="Specify the total number of question/answer pairs to generate." 
- ) - parser.add_argument( - "-m", "--model", - choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-13b-chat", "llama-2-70b-chat"], - default="meta-llama-3-70b-instruct", - help="Select the model to use for generation." - ) - parser.add_argument( - "-c", "--config_path", - default="config.yaml", - help="Set the configuration file path that has system prompt along with language, dataset path and number of questions." - ) - parser.add_argument( - "-v", "--vllm_endpoint", - default=None, - type=int, - help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." - ) - return parser.parse_args() - -if __name__ == "__main__": - logging.info("Initializing the process and loading configuration...") - args = parse_arguments() - - context = load_config(args.config_path) - context["total_questions"] = args.total_questions - context["model"] = args.model - context["endpoint"] = args.vllm_endpoint - logging.info(f"Configuration loaded. Generating {args.total_questions} question/answer pairs using model '{args.model}'.") - if context["endpoint"]: - logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") - asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/config.py similarity index 100% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.py rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/config.py diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py similarity index 100% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/doc_processor.py rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py new file mode 100644 index 000000000..6e3099098 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py @@ -0,0 +1,118 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. +from chat_utils import OctoAIChatService, VllmChatService +import logging +import evaluate +import argparse +from config import load_config +import asyncio +import json +from itertools import chain + +def compute_rouge_score(generated: list, reference: list): + rouge_score = evaluate.load('rouge') + return rouge_score.compute( + predictions=generated, + references=reference, + use_stemmer=True, + use_aggregator=True + ) +def compute_bert_score(generated: list, reference: list): + bertscore = evaluate.load("bertscore") + return bertscore.compute( + predictions=generated, + references=reference, + lang="en" + ) +# This function sends an eval set question to the model under evaluation and returns the model generated answer.
+async def eval_request(chat_service, api_context: dict, question: str) -> dict: + prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {question}"}] + # Getting a list of results, in this case, there should be only one result + results = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False) + # convert the result string to a list (json.loads is safer than eval for untrusted model output) + results = json.loads(results) + if not results or len(results) > 1: + print("results",type(results),len(results),results) + return {} + result = results[0] + if "answer" not in result: + print("Error: eval response does not contain answer") + print(question,result) + return {} + print("result",result) + # Send back the model generated answer (parse_qa_to_json stores it under the lowercase "answer" key) + return result["answer"] + +async def generate_eval_answer(chat_service, api_context: dict, questions: list): + eval_tasks = [] + for batch_index, question in enumerate(questions): + try: + result = eval_request(chat_service, api_context, question) + eval_tasks.append(result) + except Exception as e: + print(f"Error during data eval request execution: {e}") + print(len(eval_tasks),"eval_tasks") + eval_results = await asyncio.gather(*eval_tasks) + + return eval_results + +async def main(context): + if context["endpoint"]: + chat_service = VllmChatService() + else: + chat_service = OctoAIChatService() + try: + logging.info("Starting to generate answers given the eval set.") + with open(context["eval_json"]) as fp: + eval_json = json.load(fp) + questions,ground_truth = [],[] + for index, item in enumerate(eval_json): + questions.append(item["question"]) + ground_truth.append(item["answer"]) + generated_answers = await generate_eval_answer(chat_service, context,questions) + if not generated_answers: + logging.warning("No answers generated. Please check the input context or model configuration.") + return + logging.info(f"Successfully generated {len(generated_answers)} answers.") + rouge_score = compute_rouge_score(generated_answers,ground_truth) + print("Rouge_score:",rouge_score) + bert_score = compute_bert_score(generated_answers,ground_truth) + print("Bert_score:",bert_score) + logging.info("Eval completed successfully") + except Exception as e: + logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) + +def parse_arguments(): + # Define command line arguments for the script + parser = argparse.ArgumentParser( + description="Evaluate the chatbot against an eval set of question/answer pairs." + ) + parser.add_argument( + "-m", "--model", + default="chatbot", + help="Select the model to use for evaluation, this may be a LoRA adapter." + ) + parser.add_argument( + "-c", "--config_path", + default="eval_config.yaml", + help="Set the configuration file path that has system prompt along with language, evalset path." + ) + parser.add_argument( + "-v", "--vllm_endpoint", + default=None, + type=int, + help="If a port is specified, then use local vllm endpoint for evaluations."
+ ) + return parser.parse_args() + +if __name__ == "__main__": + logging.info("Initializing the process and loading configuration...") + args = parse_arguments() + + context = load_config(args.config_path) + context["model"] = args.model + context["endpoint"] = args.vllm_endpoint + if context["endpoint"]: + logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") + asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml new file mode 100644 index 000000000..582fd0bb3 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml @@ -0,0 +1,12 @@ +eval_prompt_template: > + You are a AI assistant that skilled in answering questions related to Llama model. + Below is a question from a llama user, please answer it in {language}, make the answer as concise as possible, it should be at most 100 words. + Return the result with the template: + {{ + "Question": "The question user asked to you" + "Answer": "Your answer to the question" + }} + +eval_json: "./evalset.json" + +language: "English" diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/evalset.json b/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json similarity index 100% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/evalset.json rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py new file mode 100644 index 000000000..4133f1b84 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py @@ -0,0 +1,88 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. + +import argparse +import asyncio +import json +from config import load_config +from generator_utils import generate_question_batches, generate_data_eval +from chat_utils import OctoAIChatService, VllmChatService +from itertools import chain +import logging +import aiofiles # Ensure aiofiles is installed for async file operations + + +# Configure logging to include the timestamp, log level, and message +logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') + + +async def main(context): + if context["endpoint"]: + chat_service = VllmChatService() + else: + chat_service = OctoAIChatService() + try: + logging.info("Starting to generate question/answer pairs.") + data = await generate_question_batches(chat_service, context) + if not data: + logging.warning("No data generated. Please check the input context or model configuration.") + return + flattened_list = list(chain.from_iterable(data)) + # with open("data.json") as fp: + # flattened_list = json.load(fp) + logging.info(f"Successfully generated {len(flattened_list)} question/answer pairs.") + # Use asynchronous file operation for writing to the file + + # async with aiofiles.open("data.json", "w") as output_file: + # await output_file.write(json.dumps(flattened_list, indent=4)) + # logging.info("Data successfully written to 'data.json'. 
Process completed.") + curated_data = await generate_data_eval(chat_service, context,flattened_list) + logging.info(f"Only {len(curated_data)} question/answer pairs pass the self-curation") + async with aiofiles.open("curated_data.json", "w") as curated_data: + await curated_data.write(json.dumps(flattened_list, indent=4)) + logging.info("Data successfully written to 'curated_data.json'. Process completed.") + except Exception as e: + logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) + +def parse_arguments(): + # Define command line arguments for the script + parser = argparse.ArgumentParser( + description="Generate question/answer pairs from documentation." + ) + parser.add_argument( + "-t", "--total_questions", + type=int, + default=100, + help="Specify the total number of question/answer pairs to generate." + ) + parser.add_argument( + "-m", "--model", + choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-13b-chat", "llama-2-70b-chat"], + default="meta-llama-3-70b-instruct", + help="Select the model to use for generation." + ) + parser.add_argument( + "-c", "--config_path", + default="./generation_config.yaml", + help="Set the configuration file path that has system prompt along with language, dataset path and number of questions." + ) + parser.add_argument( + "-v", "--vllm_endpoint", + default=None, + type=int, + help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." + ) + return parser.parse_args() + +if __name__ == "__main__": + logging.info("Initializing the process and loading configuration...") + args = parse_arguments() + + context = load_config(args.config_path) + context["total_questions"] = args.total_questions + context["model"] = args.model + context["endpoint"] = args.vllm_endpoint + logging.info(f"Configuration loaded. Generating {args.total_questions} question/answer pairs using model '{args.model}'.") + if context["endpoint"]: + logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") + asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml similarity index 98% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml index eee83d5b4..cc543db9b 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/config.yaml +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml @@ -22,7 +22,7 @@ question_prompt_template: > }} ] -eval_prompt_template: > +curation_prompt_template: > Below is a question and answer pair about Llama language model. Evaluate whether or not this qusestion and answer pair will be helpful for a user of Llama langauge model. 
Respond with only a single JSON blob with an "explanation" field that is a short (less than 100 word) diff --git a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py similarity index 96% rename from recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py rename to recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py index bd361e78f..b2d591f13 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/data_pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py @@ -14,7 +14,6 @@ # Initialize logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') - def read_text_file(file_path): try: with open(file_path, 'r') as f: @@ -86,7 +85,7 @@ def clean(s): if any(c.isalnum() for c in item): result.append(item) return " ".join(result) - +# given a response string, return a string that can be saved as json. def parse_qa_to_json(response_string): split_lines = response_string.split("\n") start,end = None,None @@ -114,7 +113,8 @@ def parse_qa_to_json(response_string): question = " ".join(split_lines[start:end]).split('"Question":')[1] answer = " ".join(split_lines[end:]).split('"Answer":')[1] qa_set.add((clean(question), clean(answer))) - qa_list = [{"question": q, "answer":a} for q,a in qa_set] + qa_list = [{"Question": q, "Answer":a} for q,a in qa_set] + return json.dumps(qa_list, indent=4) @@ -127,8 +127,8 @@ async def prepare_and_send_request(chat_service, api_context: dict, document_con return json.loads(await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False)) # This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. 
async def data_eval_request(chat_service, api_context: dict, document_content: dict) -> dict: - prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) - chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['question']}, Answer: {document_content['answer']}"}] + prompt_for_system = api_context['curation_prompt_template'].format(language=api_context["language"]) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']}, Answer: {document_content['Answer']}"}] result = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=True) if not result: return {} diff --git a/requirements.txt b/requirements.txt index 979ad2546..b10a90072 100644 --- a/requirements.txt +++ b/requirements.txt @@ -23,3 +23,6 @@ octoai python-magic PyPDF2 aiofiles +evaluate +rouge_score +bert_score From bb96a887c906d833def46c7a965dbedd477daf84 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Fri, 17 May 2024 10:28:40 -0700 Subject: [PATCH 08/35] end-to-end testing on the pipeline --- .../chatbot/pipelines/README.md | 27 ++++--- .../chatbot/pipelines/chat_utils.py | 59 +++++++--------- .../chatbot/pipelines/doc_processor.py | 2 +- .../chatbot/pipelines/eval_chatbot.py | 18 ++--- .../chatbot/pipelines/eval_config.yaml | 5 +- .../pipelines/generate_question_answers.py | 33 +++++---- .../chatbot/pipelines/generation_config.yaml | 40 +++++++---- .../chatbot/pipelines/generator_utils.py | 70 +++++++++++++++---- 8 files changed, 152 insertions(+), 102 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md index 01024bf29..97448f236 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md @@ -4,8 +4,9 @@ Download all your desired docs in PDF, Text or Markdown format to "data" folder inside the data_pipelines folder. -In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers along with Llama FAQ. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. +In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. +TODO: Download conversations in the Llama github issues and use it as training data. ### Step 2 : Prepare data (Q&A pairs) for fine-tuning To use Meta Llama 3 70B model for the question and answer (Q&A) pair datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. 
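
Both options expose an OpenAI-compatible chat API, which is what the `chat_utils.py` services in this recipe wrap. For reference, a minimal sketch of querying such an endpoint directly (the port 8001 and the prompt here are placeholders for illustration):

```python
# Minimal sketch: query an OpenAI-compatible vLLM server running locally on port 8001.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8001/v1")
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    messages=[{"role": "user", "content": "What is Llama 3?"}],
    temperature=0.0,  # deterministic output, matching the pipeline's settings
)
print(response.choices[0].message.content)
```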
@@ -25,30 +26,35 @@ Alternatively we can use on prem solutions such as the [TGI](../../../examples/h
 ```bash
 # Make sure VLLM has been installed
-CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8000
+CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
 ```

 **NOTE** Please make sure the port has not been used. Since Meta Llama3 70B instruct model requires at least 135GB GPU memory, we need to use multiple GPUs to host it in a tensor parallel way.

-Once the server is ready, we can query the server given the port number 8000 in another terminal. Here, "-v" sets the port number and "-t" sets the total questions we want to generate.
+Once the server is ready, we can query it from another terminal, given the port number 8001. Here, "-v" sets the port number and "-t" sets the total number of questions we ask the Meta Llama3 70B instruct model to generate initially; the model can choose to generate fewer questions if it cannot find enough Llama related context, which avoids generating questions that are too trivial or unrelated.

 ```bash
-python generate_question_answers.py -v 8000 -t 800
+python generate_question_answers.py -v 8001 -t 1000
 ```

-This python program will read all the documents inside of "data" folder and split the data into batches by the context window limit (8K for Meta Llama3 and 4K for Llama 2) and apply the chat template, defined in "config.yaml", to each batch. Then it will use each batch to query VLLM server and save the return answers into data.json after some post-process steps.
+This Python program will read all the documents inside the "data" folder, split the data into batches by the context window limit (8K for Meta Llama3 and 4K for Llama 2) and apply the question_prompt_template, defined in "generation_config.yaml", to each batch. Then it will use each batch to query the VLLM server and save the returned QA pairs together with their contexts. Additionally, we will add another step called self-curation (see more details in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/abs/2308.06259)), which uses another 70B model to evaluate whether a QA pair is based on the context and provides relevant information about Llama language models given that context. We will then save all the QA pairs that passed the evaluation into the data.json file as our final fine-tuning training set.
+
+Here is an example of a QA pair that did not pass the self-curation; in this case, the QA pair did not focus on the Llama models:
+```json
+{'Question': 'What is the name of the pre-trained model for programming and natural languages?', 'Answer': 'CodeBERT', 'Context': 'Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pp. 1536-1547. Association for Computational Linguistics, 2020.'} {'Reason': 'The question and answer pair is not relevant to the context about Llama language models, as it discusses CodeBERT, which is not a Llama model.', 'Result': 'NO'}
+```

 ### Step 3: Run the fine-tuning
+In the llama-recipes main folder, we can start the fine-tuning step using the following commands:

-Run distributed fune-tuning with:
+For distributed fine-tuning:
 ```bash
-CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/data_pipelines/data.json'
+CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 6 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
 ```

-or run the fine-tuning in single-GPU:
+For fine-tuning on a single GPU:
 ```bash
-python recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/data_pipelines/data.json'
+CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 6 --batch_size_training 2 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
 ```

 For more details, please check the readme in the finetuning recipe.

@@ -82,9 +88,10 @@ We can also quickly compare our fine-tuned chatbot model with the original 8B model
 python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000
 ```

+TODO: evaluation using LLM as judge

 ### Step 5: Testing with local inference

-Once we believe our fine-tuned model has passed our evaluation and we can deploy it locally to manually test it by asking questions.
+Once we believe our fine-tuned model has passed our evaluation, we can deploy it locally and test it manually by asking questions.
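
Under the hood, the inference recipe below loads the base model and then applies the LoRA adapter on top of it. A minimal sketch of that pattern with the transformers and peft libraries (the adapter path ./chatbot-8b matches the --output_dir used above; everything else is illustrative):

```python
# Minimal sketch: load the base model, then apply the fine-tuned LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "./chatbot-8b")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

inputs = tokenizer("What is the context length of Llama 3?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```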
```bash python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py index 2f89a2c9f..5b4518dc0 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py @@ -5,7 +5,6 @@ from functools import partial from openai import OpenAI import json -from generator_utils import generate_question_batches, parse_qa_to_json, generate_data_eval # Configure logging to include the timestamp, log level, and message logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') # Since OctoAI has different naming for llama models, create this mapping to get huggingface offical model name given OctoAI names. @@ -18,14 +17,14 @@ request_limiter = asyncio.Semaphore(allowed_concurrent_requests) class ChatService(ABC): @abstractmethod - async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): + async def execute_chat_request_async(self, api_context: dict, chat_request): pass # Please implement your own chat service class here. # The class should inherit from the ChatService class and implement the execute_chat_request_async method. # The following are two example chat service classes that you can use as a reference. class OctoAIChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): + async def execute_chat_request_async(self, api_context: dict, chat_request): async with request_limiter: try: event_loop = asyncio.get_running_loop() @@ -38,41 +37,31 @@ async def execute_chat_request_async(self, api_context: dict, chat_request, eval ) response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - if eval: - assistant_response_json = json.loads(assistant_response) - else: - assistant_response_json = parse_qa_to_json(assistant_response) - - return assistant_response_json + return assistant_response except Exception as error: logging.error(f"Error during chat request execution: {error}",exc_info=True) return "" # Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent # please read for more detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. 
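# Note: in this revision only OctoAIChatService above still acquires the request_limiter
# semaphore; the local vllm path below no longer throttles concurrent requests.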
class VllmChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request, eval=False): - async with request_limiter: - try: - event_loop = asyncio.get_running_loop() - if api_context["model"] in MODEL_NAME_MAPPING: - model_name = MODEL_NAME_MAPPING[api_context['model']] - else: - model_name = api_context['model'] - client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") - api_chat_call = partial( - client.chat.completions.create, - model=model_name, - messages=chat_request, - temperature=0.0 - ) - response = await event_loop.run_in_executor(None, api_chat_call) - assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - if eval: - print(assistant_response) - assistant_response_json = json.loads(assistant_response) - else: - assistant_response_json = parse_qa_to_json(assistant_response) - return assistant_response_json - except Exception as error: - logging.error(f"Error during chat request execution: {error}",exc_info=True) - return "" + async def execute_chat_request_async(self, api_context: dict, chat_request): + try: + event_loop = asyncio.get_running_loop() + if api_context["model"] in MODEL_NAME_MAPPING: + model_name = MODEL_NAME_MAPPING[api_context['model']] + else: + model_name = api_context['model'] + client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") + api_chat_call = partial( + client.chat.completions.create, + model=model_name, + messages=chat_request, + temperature=0.0 + ) + response = await event_loop.run_in_executor(None, api_chat_call) + assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") + print("assistant_response",assistant_response) + return assistant_response + except Exception as error: + logging.error(f"Error during chat request execution: {error}",exc_info=True) + return "" diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py index b45768461..c8556471e 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py @@ -1,5 +1,5 @@ # Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. +# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. # Assuming result_average_token is a constant, use UPPER_CASE for its name to follow Python conventions AVERAGE_TOKENS_PER_RESULT = 100 diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py index 6e3099098..65fdae6f3 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py @@ -8,6 +8,7 @@ import asyncio import json from itertools import chain +from generator_utils import parse_qa_to_json def compute_rouge_score(generated : str, reference: str): rouge_score = evaluate.load('rouge') @@ -24,24 +25,23 @@ def compute_bert_score(generated : str, reference: str): references=reference, lang="en" ) -# This function is used to evaluate the quality of generated QA pairs. 
Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. +# This function is used to eval the fine-tuned model, given the question, generate the answer. async def eval_request(chat_service, api_context: dict, question: str) -> dict: prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {question}"}] # Getting a list of result, in this case, there should be only one result - results = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False) - # convert the result string to a list - results = eval(results) - if not results or len(results) > 1: - print("results",type(results),len(results),results) + response_string = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + # convert the result string to a dict that contains Question, Answer + result_list = parse_qa_to_json(response_string) + if not result_list or len(result_list) > 1: + print("Error: eval response should be a list of one result dict") return {} - result = results[0] + result = result_list[0] if "Answer" not in result: print("Error: eval response does not contain answer") - print(question,result) return {} - print("result",result) # Send back the model generated answer + return result["Answer"] async def generate_eval_answer(chat_service, api_context: dict, questions: list): diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml index 582fd0bb3..9d7915e8a 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml @@ -2,11 +2,12 @@ eval_prompt_template: > You are a AI assistant that skilled in answering questions related to Llama model. Below is a question from a llama user, please answer it in {language}, make the answer as concise as possible, it should be at most 100 words. Return the result with the template: - {{ + [ + {{ "Question": "The question user asked to you" "Answer": "Your answer to the question" }} - + ] eval_json: "./evalset.json" language: "English" diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py index 4133f1b84..a6e1a45f4 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py @@ -5,7 +5,7 @@ import asyncio import json from config import load_config -from generator_utils import generate_question_batches, generate_data_eval +from generator_utils import generate_question_batches, generate_data_curation from chat_utils import OctoAIChatService, VllmChatService from itertools import chain import logging @@ -27,20 +27,15 @@ async def main(context): if not data: logging.warning("No data generated. 
Please check the input context or model configuration.") return - flattened_list = list(chain.from_iterable(data)) - # with open("data.json") as fp: - # flattened_list = json.load(fp) - logging.info(f"Successfully generated {len(flattened_list)} question/answer pairs.") - # Use asynchronous file operation for writing to the file - - # async with aiofiles.open("data.json", "w") as output_file: - # await output_file.write(json.dumps(flattened_list, indent=4)) - # logging.info("Data successfully written to 'data.json'. Process completed.") - curated_data = await generate_data_eval(chat_service, context,flattened_list) - logging.info(f"Only {len(curated_data)} question/answer pairs pass the self-curation") - async with aiofiles.open("curated_data.json", "w") as curated_data: - await curated_data.write(json.dumps(flattened_list, indent=4)) - logging.info("Data successfully written to 'curated_data.json'. Process completed.") + data = list(chain.from_iterable(data)) + logging.info(f"Successfully generated {len(data)} question/answer pairs.") + if context["use_curation"]: + logging.info("Starting to do self-curation using LLM.") + data = await generate_data_curation(chat_service, context,data) + logging.info(f"Only {len(data)} question/answer pairs pass the self-curation") + async with aiofiles.open(context['output_path'], "w") as output_file: + await output_file.write(json.dumps(data, indent=4)) + logging.info(f"Data successfully written to {context['output_path']}. Process completed.") except Exception as e: logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) @@ -72,6 +67,11 @@ def parse_arguments(): type=int, help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." ) + parser.add_argument( + "-o", "--output_path", + default="./data.json", + help="set the output path for the generated QA pairs. Default is data.json" + ) return parser.parse_args() if __name__ == "__main__": @@ -82,6 +82,9 @@ def parse_arguments(): context["total_questions"] = args.total_questions context["model"] = args.model context["endpoint"] = args.vllm_endpoint + # If curation prompt is not empty, then use self-curation + context["use_curation"] = len(context["curation_prompt_template"]) > 0 + context["output_path"] = args.output_path logging.info(f"Configuration loaded. Generating {args.total_questions} question/answer pairs using model '{args.model}'.") if context["endpoint"]: logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml index cc543db9b..9664846b5 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml @@ -1,36 +1,46 @@ question_prompt_template: > You are a language model skilled in creating quiz questions. You will be provided with a document, - read it and please generate question and answer that are most likely be asked by a user of llama model, - please make sure you follow those rules, - 1. Generate at most {num_questions} question answer pairs, you can generate less questions if you believe there are nothing related to Llama. - 2. Generate in {language}. - 3. The questions can be answered based *solely* on the given passage. - 4. Avoid asking questions with similar meaning. - 5. 
Make the answer as concise as possible, it should be at most 60 words.
-  6. Provide relevant links from the document to support the answer.
-  7. Never use any abbreviation.
-  8. Return the result in json format with the template:
+  read it and please generate question and answer pairs that are most likely to be asked by a user of Llama language models,
+  which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2,
+  then extract the context that is related to the question and answer, preferably using the sentences from original text,
+  please make sure you follow those rules:
+  1. Generate at most {num_questions} question answer pairs, you can generate less questions if you believe there are nothing related to Llama language models.
+  2. For each question and answer pair, add the context that is related to the question and answer, preferably using the sentences from original text
+  3. Generate in {language}.
+  4. The questions can be answered based *solely* on the given passage.
+  5. Avoid asking questions with similar meaning.
+  6. Make the answer as concise as possible, it should be at most 80 words.
+  7. Provide relevant links from the document to support the answer.
+  8. Never use any abbreviation.
+  9. Return the result in json format with the template:
   [
     {{
       "Question": "your question A.",
       "Answer": "your answer to question A."
+      "Context": "the context for question A"
     }},
     {{
       "Question": "your question B.",
       "Answer": "your answer to question B."
+      "Context": "the context for question B"
     }}
   ]

 curation_prompt_template: >
-  Below is a question and answer pair about Llama language model. Evaluate
-  whether or not this qusestion and answer pair will be helpful for a user of Llama langauge model.
-  Respond with only a single JSON blob with an "explanation" field that is a short (less than 100 word)
-  explanation of your answer and an "answer" field which is YES or NO. Only generate the answer in {language}.
+  Below is a question and answer pair (QA pair) and its related context about Llama language models,
+  which includes Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2.
+  Given the context, evaluate whether or not this question and answer pair will be helpful for a user of Llama language models,
+  and whether this question and answer pair is relevant to the context.
+  Note that the answer in the QA pair can be the same as or similar to the context, as repetition of the context is allowed.
+  Respond with only a single JSON blob with a "Reason" field that is a short (less than 100 words)
+  explanation of your answer and a "Result" field which is YES or NO.
+  Only answer "YES" if the question and answer pair is based on the context and provides relevant information about Llama language models.
+  Only generate the answer in {language}.
   Return the result in json format with the template:
   {{
     "Reason": "your reason here.",
-    "Answer": "YES or No."
+    "Result": "YES or NO."
}}, data_dir: "./data" diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py index b2d591f13..a0f789ea8 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py @@ -3,6 +3,7 @@ import os import re +import string from transformers import AutoTokenizer import asyncio import magic @@ -64,7 +65,9 @@ def process_file(file_path): else: logging.warning(f"Unsupported file type {file_type} for file {file_path}") return '' - +def remove_non_printable(s): + printable = set(string.printable) + return ''.join(filter(lambda x: x in printable, s)) def read_file_content(context): file_strings = [] @@ -74,10 +77,8 @@ def read_file_content(context): file_text = process_file(file_path) if file_text: file_strings.append(file_text) - text = ' '.join(file_strings) - if len(text) == 0: - logging.error(f"Error reading files, text is empty") - return ' '.join(file_strings) + text = '\n'.join(file_strings) + return remove_non_printable(text) # clean the text by removing all parts that did not contain any alphanumeric characters def clean(s): result = [] @@ -86,6 +87,42 @@ def clean(s): result.append(item) return " ".join(result) # given a response string, return a string that can be saved as json. +def parse_qac_to_json(response_string): + split_lines = response_string.split("\n") + start,mid,end = None,None,None + # must use set to avoid duplicate question/answer pairs due to async function calls + qa_set = set() + for i in range(len(split_lines)): + line = split_lines[i] + # starting to find "Question" + if not start: + # Once found, set start to this line number + if '"Question":' in line: + start = i + else: + # "Question" has been found, find "Answer", once found, set end to this line number + if '"Answer":' in line: + mid = i + elif '"Context":' in line: + end = i + # found Question means we have reached the end of the question, so add it to qa_list + elif '"Question":' in line: + question = " ".join(split_lines[start:mid]).split('"Question":')[1] + answer = " ".join(split_lines[mid:end]).split('"Answer":')[1] + context = " ".join(split_lines[end:i]).split('"Context":')[1] + start,mid,end = i,None,None + qa_set.add((clean(question), clean(answer),clean(context))) + # adding last question back to qa_list + if start and mid and end: + question = " ".join(split_lines[start:mid]).split('"Question":')[1] + answer = " ".join(split_lines[mid:end]).split('"Answer":')[1] + context = " ".join(split_lines[end:]).split('"Context":')[1] + start,mid,end = i,None,None + qa_set.add((clean(question), clean(answer),clean(context))) + qa_list = [{"Question": q, "Answer":a, "Context":c} for q,a,c in qa_set] + + return json.dumps(qa_list, indent=4) + def parse_qa_to_json(response_string): split_lines = response_string.split("\n") start,end = None,None @@ -115,29 +152,32 @@ def parse_qa_to_json(response_string): qa_set.add((clean(question), clean(answer))) qa_list = [{"Question": q, "Answer":a} for q,a in qa_set] - return json.dumps(qa_list, indent=4) - + return qa_list async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict: prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions, language=api_context["language"]) chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': 
document_content}] - result = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False) + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + # parse the result string to a list of dict that has Question, Answer, Context + result = parse_qac_to_json(result) if not result: return {} return json.loads(await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False)) # This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. -async def data_eval_request(chat_service, api_context: dict, document_content: dict) -> dict: +async def data_curation_request(chat_service, api_context: dict, document_content: dict) -> dict: prompt_for_system = api_context['curation_prompt_template'].format(language=api_context["language"]) - chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']}, Answer: {document_content['Answer']}"}] - result = await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=True) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']} \n Answer: {document_content['Answer']}\n Context: {document_content['Context']} "}] + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) if not result: return {} - if "Answer" not in result: + # no parsing needed, just return the loads the result as a dict + result = json.loads(result) + if "Result" not in result: print("Error: eval response does not contain answer") print(document_content,result) return {} # Send back the original QA pair is the model eval result is YES - if result["Answer"] == "YES": + if result["Result"] == "YES": return document_content else: print(document_content,result) @@ -177,11 +217,11 @@ async def generate_question_batches(chat_service, api_context: dict): return question_generation_results -async def generate_data_eval(chat_service, api_context: dict, generated_questions: list): +async def generate_data_curation(chat_service, api_context: dict, generated_questions: list): eval_tasks = [] for batch_index, batch_content in enumerate(generated_questions): try: - result = data_eval_request(chat_service, api_context, batch_content) + result = data_curation_request(chat_service, api_context, batch_content) eval_tasks.append(result) except Exception as e: print(f"Error during data eval request execution: {e}") From 6a83585185a730620e0d8f8c861713c401bfaabd Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Fri, 17 May 2024 12:54:58 -0700 Subject: [PATCH 09/35] working draft of end-to-end pipelines --- .../chatbot/pipelines/README.md | 2 +- .../chatbot/pipelines/chat_utils.py | 1 - .../pipelines/generate_question_answers.py | 3 +-- .../chatbot/pipelines/generation_config.yaml | 2 +- .../chatbot/pipelines/generator_utils.py | 22 ++++++++++--------- 5 files changed, 15 insertions(+), 15 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md index 97448f236..a2f785929 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md @@ -6,7 +6,7 @@ Download all your desired docs in PDF, Text or Markdown format to "data" folder In 
this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. -TODO: Download conversations in the Llama github issues and use it as training data. +TODO: Download conversations in the Llama github issues and use it as training data. To get 5K QA pairs ### Step 2 : Prepare data (Q&A pairs) for fine-tuning To use Meta Llama 3 70B model for the question and answer (Q&A) pair datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py index 5b4518dc0..700091732 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py @@ -60,7 +60,6 @@ async def execute_chat_request_async(self, api_context: dict, chat_request): ) response = await event_loop.run_in_executor(None, api_chat_call) assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - print("assistant_response",assistant_response) return assistant_response except Exception as error: logging.error(f"Error during chat request execution: {error}",exc_info=True) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py index a6e1a45f4..70476aa0f 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py @@ -7,7 +7,6 @@ from config import load_config from generator_utils import generate_question_batches, generate_data_curation from chat_utils import OctoAIChatService, VllmChatService -from itertools import chain import logging import aiofiles # Ensure aiofiles is installed for async file operations @@ -23,11 +22,11 @@ async def main(context): chat_service = OctoAIChatService() try: logging.info("Starting to generate question/answer pairs.") + # Generate question/answer pairs as list data = await generate_question_batches(chat_service, context) if not data: logging.warning("No data generated. 
Please check the input context or model configuration.") return - data = list(chain.from_iterable(data)) logging.info(f"Successfully generated {len(data)} question/answer pairs.") if context["use_curation"]: logging.info("Starting to do self-curation using LLM.") diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml index 9664846b5..4db269395 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml @@ -5,7 +5,7 @@ question_prompt_template: > which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2, then extract the context that is related to the question and answer, preferably using the sentences from original text, please make sure you follow those rules: - 1. Generate at most {num_questions} question answer pairs, you can generate less questions if you believe there are nothing related to Llama language models. + 1. Generate {num_questions} question answer pairs. 2. For each question and answer pair, add the context that is related to the question and answer, preferably using the sentences from original text 3. Generate in {language}. 4. The questions can be answered based *solely* on the given passage. diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py index a0f789ea8..8285b4ae8 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py @@ -121,7 +121,7 @@ def parse_qac_to_json(response_string): qa_set.add((clean(question), clean(answer),clean(context))) qa_list = [{"Question": q, "Answer":a, "Context":c} for q,a,c in qa_set] - return json.dumps(qa_list, indent=4) + return qa_list def parse_qa_to_json(response_string): split_lines = response_string.split("\n") @@ -155,14 +155,13 @@ def parse_qa_to_json(response_string): return qa_list async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict: + if num_questions == 0: + logging.info(f"Error: num_questions is 0") + return {} prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions, language=api_context["language"]) chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': document_content}] - result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) # parse the result string to a list of dict that has Question, Answer, Context - result = parse_qac_to_json(result) - if not result: - return {} - return json.loads(await chat_service.execute_chat_request_async(api_context, chat_request_payload,eval=False)) + return await chat_service.execute_chat_request_async(api_context, chat_request_payload) # This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. 
async def data_curation_request(chat_service, api_context: dict, document_content: dict) -> dict: prompt_for_system = api_context['curation_prompt_template'].format(language=api_context["language"]) @@ -208,14 +207,17 @@ async def generate_question_batches(chat_service, api_context: dict): questions_in_current_batch = base_questions_per_batch + (1 if batch_index < extra_questions else 0) print(f"Batch {batch_index + 1} - {questions_in_current_batch} questions ********") try: - result = prepare_and_send_request(chat_service, api_context, batch_content, questions_in_current_batch) - generation_tasks.append(result) + task = prepare_and_send_request(chat_service, api_context, batch_content, questions_in_current_batch) + generation_tasks.append(task) except Exception as e: print(f"Error during chat request execution: {e}") question_generation_results = await asyncio.gather(*generation_tasks) - - return question_generation_results + final_result = [] + for result in question_generation_results: + parsed_json = parse_qac_to_json(result) + final_result.extend(parsed_json) + return final_result async def generate_data_curation(chat_service, api_context: dict, generated_questions: list): eval_tasks = [] From 40c03dab5e37f5f2246561db55c8c736a5ee989b Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Fri, 17 May 2024 14:19:46 -0700 Subject: [PATCH 10/35] fixed chatbot_dataset.py --- recipes/finetuning/datasets/chatbot_dataset.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/recipes/finetuning/datasets/chatbot_dataset.py b/recipes/finetuning/datasets/chatbot_dataset.py index bb1ee76c0..7a893ec0d 100644 --- a/recipes/finetuning/datasets/chatbot_dataset.py +++ b/recipes/finetuning/datasets/chatbot_dataset.py @@ -11,8 +11,8 @@ B_INST, E_INST = "[INST]", "[/INST]" def tokenize_dialog(q_a_pair, tokenizer): - prompt_tokens = [tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(question).strip()} {E_INST}", add_special_tokens=False) for question in q_a_pair["question"]] - answer_tokens = [tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) for answer in q_a_pair["answer"]] + prompt_tokens = [tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(question).strip()} {E_INST}", add_special_tokens=False) for question in q_a_pair["Question"]] + answer_tokens = [tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) for answer in q_a_pair["Answer"]] dialog_tokens = list(itertools.chain.from_iterable(zip(prompt_tokens, answer_tokens))) dialog_tokens = list(itertools.chain.from_iterable(zip(prompt_tokens, answer_tokens))) #Add labels, convert prompt token to -100 in order to ignore in loss function @@ -31,8 +31,8 @@ def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8): dataset = dataset['train'].train_test_split(test_size=1-split_ratio, shuffle=True) dataset = dataset[split].map(lambda sample: { - "question": sample["question"], - "answer": sample["answer"], + "Question": sample["Question"], + "Answer": sample["Answer"], }, batched=True, ) From 9092139aca0c6b5a42f769da0cf9ae2c0c96552b Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Thu, 23 May 2024 10:08:18 -0700 Subject: [PATCH 11/35] adding LLM_as_judge feature --- .../end2end-recipes/chatbot/README.md | 5 +- .../chatbot/pipelines/README.md | 32 ++++++++-- .../chatbot/pipelines/eval_chatbot.py | 59 ++++++++++++++++--- .../chatbot/pipelines/eval_config.yaml | 9 +++ .../chatbot/pipelines/evalset.json | 13 +++- .../chatbot/pipelines/generator_utils.py | 31 +++++++++- 
src/llama_recipes/configs/peft.py | 2 +- src/llama_recipes/configs/training.py | 1 + src/llama_recipes/finetuning.py | 21 ++++--- src/llama_recipes/utils/train_utils.py | 2 + 10 files changed, 146 insertions(+), 29 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/chatbot/README.md b/recipes/use_cases/end2end-recipes/chatbot/README.md index 6b0ddb817..dd763416c 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/README.md +++ b/recipes/use_cases/end2end-recipes/chatbot/README.md @@ -185,7 +185,7 @@ Model tends to ignore providing the bigger picture in the questions, for example #### Data Insights -We generated a dataset of almost 800 Q&A pairs from some of the open source documents about Llama models, including getting started guide from Llama website, its FAQ, Llama 3, Purple Llama, Code Llama papers and Llama-Recipes documentations. +We generated a dataset of almost 3600 Q&A pairs from some of the open source documents about Llama models, including getting started guide from Llama website, its FAQ, Llama 3, Purple Llama, Code Llama papers and Llama-Recipes documentations. We have run some fine-tuning experiments with single GPU using quantization with different LORA configs (all linear layer versus query and key projections only) and different number of epochs. Although train and eval loss shows decrease specially with using all linear layers in LORA configs and training with 6 epochs, still the result is far from acceptable in real tests. @@ -205,6 +205,3 @@ Below are some examples of real test on the fine-tuned model with very poor resu Poor Test Results example 1 Poor Test Results example 1

-
-
-Next, we are looking into augmenting our datasets. One way to do so, is to use our Llama 70B model to read our question answer pairs and come up with two paraphrase versions of each pair to augment our data.
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md
index a2f785929..027a3937e 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md
+++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md
@@ -48,13 +48,15 @@ In the llama-recipes main folder, we can start the fine-tuning step using the fol
 For distributed fine-tuning:
 ```bash
-CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 6 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
+CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
+
+# To continue fine-tuning from the previous PEFT checkpoint:
+CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --from_peft_checkpoint chatbot-8b --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b-continue --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
 ```

 For fine-tuning on a single GPU:
 ```bash
-CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 6 --batch_size_training 2 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
+CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 1 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
 ```

 For more details, please check the readme in the finetuning recipe.

@@ -63,13 +65,13 @@ Once we have the fine-tuned model, we now need to evaluate it to understand its performance.
 Normally, to create an evaluation set, we should first gather some questions and manually write the ground truth answers. In this case, we created an eval set based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. Then we pass the evalset questions to our fine-tuned model to get the model-generated answers. To compare the model-generated answers with the ground truth, we can either use traditional eval methods, e.g. calculating the ROUGE score, or use an LLM to act as a judge and score the similarity between them.

-First we need to start the VLLM servers to host our fine-tuned 8B model. Since we used peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature. Now, the VLLM server actually will first load the original model, then apply our LoRA adapter weights.
+First we need to start the VLLM servers to host our fine-tuned 8B model. Since we used the peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature. The VLLM server will first load the original model and then apply our LoRA adapter weights. Then we can feed the eval_set json file into the VLLM servers and start the comparison evaluation. Notice that our model name is now called "chatbot" instead of "meta-llama/Meta-Llama-3-8B-Instruct".

 ```bash
 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules chatbot=./chatbot-8b --port 8000 --disable-log-requests
 ```

-**NOTE** If encounter import error: "ImportError: punica LoRA kernels could not be imported.", this means that Vllm must be installed with punica LoRA kernels to support LoRA adapter, please use following commands to install the VLLM from source.
+**NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", it means that VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source.

 ```bash
 git clone https://github.com/vllm-project/vllm.git
 cd vllm
 VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
 ```

-Then pass the eval_set json file into the VLLM servers and start the comparison evaluation. Notice that our model name is now called chatbot instead of meta-llama/Meta-Llama-3-8B-Instruct.
+In another terminal, we can go to the recipes/use_cases/end2end-recipes/chatbot/pipelines folder to start our eval script.

 ```bash
 python eval_chatbot.py -m chatbot -v 8000
 ```

 We can also quickly compare our fine-tuned chatbot model with the original 8B model

 ```bash
 python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000
 ```

-TODO: evaluation using LLM as judge
+Lastly, we can use another 70B model as a judge to compare the answers from the fine-tuned 8B model with the ground truth and produce a score; a minimal sketch of that final scoring step is shown below.
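
For illustration, this is roughly how the final accuracy is computed from the judge's YES/NO verdicts in eval_chatbot.py; the two verdict entries here are made-up placeholders:

```python
# Minimal sketch: turn per-question YES/NO judge verdicts into an accuracy score.
judge_results = [
    {"Question": "What is Llama 3?", "Result": "YES"},       # judged as matching the ground truth
    {"Question": "Is Llama 2 multimodal?", "Result": "NO"},  # judged as not matching
]
correct_num = sum(result["Result"] == "YES" for result in judge_results)
print(f"The accuracy of the model is {correct_num / len(judge_results)}")
```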
To do this, we need to host another 70B VLLM server locally with command, just make sure the port is not been used: + +```bash +CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001 +``` + +Then we can pass the port to the eval script: + +```bash +python eval_chatbot.py -m chatbot -v 8000 -j 8001 +``` + ### Step 5: Testing with local inference Once we believe our fine-tuned model has passed our evaluation and we can deploy it locally to manually test it by manually asking questions. diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py index 65fdae6f3..203fff792 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py @@ -8,7 +8,7 @@ import asyncio import json from itertools import chain -from generator_utils import parse_qa_to_json +from generator_utils import parse_qa_to_json, generate_LLM_eval def compute_rouge_score(generated : str, reference: str): rouge_score = evaluate.load('rouge') @@ -20,11 +20,15 @@ def compute_rouge_score(generated : str, reference: str): ) def compute_bert_score(generated : str, reference: str): bertscore = evaluate.load("bertscore") - return bertscore.compute( + score = bertscore.compute( predictions=generated, references=reference, lang="en" ) + f1 = score["f1"] + precision = score["precision"] + recall = score["recall"] + return sum(precision)/len(precision), sum(recall)/len(recall), sum(f1)/len(f1) # This function is used to eval the fine-tuned model, given the question, generate the answer. async def eval_request(chat_service, api_context: dict, question: str) -> dict: prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) @@ -75,11 +79,38 @@ async def main(context): logging.warning("No answers generated. 
Please check the input context or model configuration.") return logging.info(f"Successfully generated {len(generated_answers)} answers.") + judge_list = [] + for index, item in enumerate(generated_answers): + judge_list.append({"Question":questions[index],"Ground_truth":groud_truth[index],"Generated_answer":generated_answers[index]}) + if context["judge_endpoint"]: + # make a copy of the context then change the VLLM endpoint to judge_endpoint + context_copy = dict(context) + context_copy["endpoint"] = context["judge_endpoint"] + context_copy["model"] = "meta-llama/Meta-Llama-3-70B-Instruct" + judge_results = await generate_LLM_eval(chat_service, context_copy, judge_list) + correct_num = 0 + for result in judge_results: + correct_num += result["Result"] == "YES" + LLM_judge_score = correct_num/len(judge_results) + print(f"The accuracy of the model is {LLM_judge_score}") rouge_score = compute_rouge_score(generated_answers,groud_truth) print("Rouge_score:",rouge_score) - bert_score = compute_bert_score(generated_answers,groud_truth) - print("Bert_score:",bert_score) - logging.info("Eval successfully") + P, R, F1 = compute_bert_score(generated_answers,groud_truth) + print(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f}") + # Saving the eval result to a log file + with open(context["output_log"],"a") as fp: + fp.write(f"Eval_result for {context['model']} \n") + fp.write(f"Rouge_score: {rouge_score} \n") + fp.write(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f} \n") + if context["judge_endpoint"]: + fp.write(f"LLM_judge_score: {LLM_judge_score} \n") + fp.write(f"QA details: \n") + for item in judge_list: + fp.write(f"question: {item['Question']} \n") + fp.write(f"generated_answers: {item['Generated_answer']} \n") + fp.write(f"groud_truth: {item['Ground_truth']} \n") + fp.write("\n") + logging.info(f"Eval successfully, the eval result is saved to {context['output_log']}.") except Exception as e: logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) @@ -104,15 +135,29 @@ def parse_arguments(): type=int, help="If a port is specified, then use local vllm endpoint for evaluations." ) + parser.add_argument( + "-j", "--judge_endpoint", + default=None, + type=int, + help="If a port is specified, then use local vllm endpoint as judge LLM." + ) + parser.add_argument( + "-o", "--output_log", + default="eval_result.log", + help="save the eval result to a log file. 
Default is eval_result.log" + ) return parser.parse_args() if __name__ == "__main__": logging.info("Initializing the process and loading configuration...") args = parse_arguments() - context = load_config(args.config_path) context["model"] = args.model context["endpoint"] = args.vllm_endpoint + context["judge_endpoint"] = args.judge_endpoint + context["output_log"] = args.output_log if context["endpoint"]: - logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") + logging.info(f"Use local vllm service for eval at port: '{args.vllm_endpoint}'.") + if context["judge_endpoint"]: + logging.info(f"Use local vllm service for judge at port: '{args.judge_endpoint}'.") asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml index 9d7915e8a..153d3b2be 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml @@ -8,6 +8,15 @@ eval_prompt_template: > "Answer": "Your answer to the question" }} ] +judge_prompt_template: > + You are provided with a question, a teacher answer and a student answer. Given that question, you need to score the how good the student answer is compare to + the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES. If the answer is not faithful, then return NO + and explain which part of the answer if not faithful in the Reason section. + Return the result in json format with the template: + {{ + "Reason": "your reason here.", + "Result": "YES or NO." + }} eval_json: "./evalset.json" language: "English" diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json b/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json index e1b016a20..efe72b72b 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json @@ -1,4 +1,16 @@ [ + { + "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?", + "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken. Llama 3 also introduces a ChatFormat class, special tokens, including those for end-of-turn markers and other features to enhance support for chat-based interactions and dialogue processing." + }, + { + "question":"How many tokens were used in Llama 3 pretrain?", + "answer": "Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources." + }, +{ + "question": "what are the goals for Llama 3", + "answer": "With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development." +}, { "question": "What if I want to access Llama models but I’m not sure if my use is permitted under the Llama 2 Community License?", "answer": "On a limited case by case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request." 
@@ -144,4 +156,3 @@ "answer": "No, such companies are not prohibited when their usage of Llama is not related to the operation of critical infrastructure. Llama, however, may not be used in the operation of critical infrastructure by any company, regardless of government certifications." } ] - diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py index 8285b4ae8..349ab149e 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py @@ -219,9 +219,9 @@ async def generate_question_batches(chat_service, api_context: dict): final_result.extend(parsed_json) return final_result -async def generate_data_curation(chat_service, api_context: dict, generated_questions: list): +async def generate_data_curation(chat_service, api_context: dict, evaluation_list: list): eval_tasks = [] - for batch_index, batch_content in enumerate(generated_questions): + for batch_index, batch_content in enumerate(evaluation_list): try: result = data_curation_request(chat_service, api_context, batch_content) eval_tasks.append(result) @@ -235,3 +235,30 @@ async def generate_data_curation(chat_service, api_context: dict, generated_ques if item: curated_data.append(item) return curated_data + +# This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. +async def LLM_judge_request(chat_service, api_context: dict, document_content: dict) -> dict: + prompt_for_system = api_context['judge_prompt_template'].format(language=api_context["language"]) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']} \n Teacher's Answer: {document_content['Ground_truth']}\n Student's Answer: {document_content['Generated_answer']} "}] + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + if not result: + return {} + # no parsing needed, just return the loads the result as a dict + result = json.loads(result) + if "Result" not in result: + print("Error: eval response does not contain answer") + print(document_content,result) + return {} + return result + +async def generate_LLM_eval(chat_service, api_context: dict, judge_list: list): + eval_tasks = [] + for batch_index, batch_content in enumerate(judge_list): + try: + result = LLM_judge_request(chat_service, api_context, batch_content) + eval_tasks.append(result) + except Exception as e: + print(f"Error during data eval request execution: {e}") + + judge_results = await asyncio.gather(*eval_tasks) + return judge_results diff --git a/src/llama_recipes/configs/peft.py b/src/llama_recipes/configs/peft.py index 7140e025d..133cfccf9 100644 --- a/src/llama_recipes/configs/peft.py +++ b/src/llama_recipes/configs/peft.py @@ -8,7 +8,7 @@ class lora_config: r: int=8 lora_alpha: int=32 - target_modules: List[str] = field(default_factory=lambda: ["q_proj", "v_proj"]) + target_modules: List[str] = field(default_factory=lambda: ["q_proj", "k_proj", "v_proj", "o_proj","gate_proj", "up_proj", "down_proj"]) bias= "none" task_type: str= "CAUSAL_LM" lora_dropout: float=0.05 diff --git a/src/llama_recipes/configs/training.py b/src/llama_recipes/configs/training.py index 2d6a733d0..7ae8265b1 100644 --- a/src/llama_recipes/configs/training.py +++ b/src/llama_recipes/configs/training.py 
@@ -31,6 +31,7 @@ class train_config:
     dataset = "samsum_dataset"
     peft_method: str = "lora" # None, llama_adapter (Caution: llama_adapter is currently not supported with FSDP)
     use_peft: bool=False
+    from_peft_checkpoint: str="" # if not empty and use_peft=True, will load the peft checkpoint and resume the fine-tuning on that checkpoint
     output_dir: str = "PATH/to/save/PEFT/model"
     freeze_layers: bool = False
     num_freeze_layers: int = 1
diff --git a/src/llama_recipes/finetuning.py b/src/llama_recipes/finetuning.py
index 27911d9f9..3fef7222b 100644
--- a/src/llama_recipes/finetuning.py
+++ b/src/llama_recipes/finetuning.py
@@ -8,7 +8,7 @@ import random
 import torch
 import torch.optim as optim
-from peft import get_peft_model, prepare_model_for_kbit_training
+from peft import get_peft_model, prepare_model_for_kbit_training, PeftModel
 from torch.distributed.fsdp import (
     FullyShardedDataParallel as FSDP,
     ShardingStrategy
@@ -151,11 +151,17 @@ def main(**kwargs):
         model.to(torch.bfloat16)

     if train_config.use_peft:
-        peft_config = generate_peft_config(train_config, kwargs)
-        model = get_peft_model(model, peft_config)
-        model.print_trainable_parameters()
+        # Load the pre-trained peft model checkpoint and set up its configuration
+        if train_config.from_peft_checkpoint:
+            model = PeftModel.from_pretrained(model, train_config.from_peft_checkpoint, is_trainable=True)
+            peft_config = model.peft_config["default"] # peft_config is a dict keyed by adapter name, not a callable
+        # Generate the peft config and start fine-tuning from the original model
+        else:
+            peft_config = generate_peft_config(train_config, kwargs)
+            model = get_peft_model(model, peft_config)
         if wandb_run:
             wandb_run.config.update(peft_config)
+        model.print_trainable_parameters()

     hsdp_device_mesh = None

@@ -166,8 +172,7 @@ def main(**kwargs):
     #setting up FSDP if enable_fsdp is enabled
     if train_config.enable_fsdp:
         if not train_config.use_peft and train_config.freeze_layers:
-
-            freeze_transformer_layers(train_config.num_freeze_layers)
+            freeze_transformer_layers(model, train_config.num_freeze_layers)

         mixed_precision_policy, wrapping_policy = get_policies(fsdp_config, rank)
         my_auto_wrapping_policy = fsdp_auto_wrap_policy(model, LlamaDecoderLayer)
@@ -188,7 +193,7 @@ def main(**kwargs):
             device_id=device_id,
             limit_all_gathers=True,
             sync_module_states=train_config.low_cpu_fsdp,
-            param_init_fn=lambda module: module.to_empty(device=torch.device("cuda"), recurse=False)
+            param_init_fn=(lambda module: module.to_empty(device=torch.device("cuda"), recurse=False))
             if train_config.low_cpu_fsdp and rank != 0 else None,
         )
         if fsdp_config.fsdp_activation_checkpointing:
@@ -217,7 +222,7 @@ def main(**kwargs):
             split="test",
         )
         if not train_config.enable_fsdp or rank == 0:
-            print(f"--> Validation Set Length = {len(dataset_val)}")
+            print(f"--> Validation Set Length = {len(dataset_val)}")

     if train_config.batching_strategy == "packing":
         dataset_train = ConcatDataset(dataset_train, chunk_size=train_config.context_length)

diff --git a/src/llama_recipes/utils/train_utils.py b/src/llama_recipes/utils/train_utils.py
index a71447ea1..f62da5bc9 100644
--- a/src/llama_recipes/utils/train_utils.py
+++ b/src/llama_recipes/utils/train_utils.py
@@ -103,6 +103,8 @@ def train(model, train_dataloader,eval_dataloader, tokenizer, optimizer, lr_sche
     val_loss =[]

     if train_config.save_metrics:
+        if not os.path.exists(train_config.output_dir):
+            os.makedirs(train_config.output_dir, exist_ok=True)
         metrics_filename = f"{train_config.output_dir}/metrics_data_{local_rank}-{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.json"
         train_step_perplexity = []
train_step_loss = [] From dd4f1dfd7adfc33f7b98c9e7efd6892730327295 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 28 May 2024 09:42:45 -0700 Subject: [PATCH 12/35] fixing eval template --- .../chatbot/pipelines/README.md | 55 +++++++++++-------- .../chatbot/pipelines/eval_config.yaml | 9 +-- .../chatbot/pipelines/generation_config.yaml | 6 +- .../chatbot/pipelines/generator_utils.py | 3 + 4 files changed, 44 insertions(+), 29 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md index 027a3937e..cc14ce1a5 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md @@ -6,12 +6,11 @@ Download all your desired docs in PDF, Text or Markdown format to "data" folder In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. -TODO: Download conversations in the Llama github issues and use it as training data. To get 5K QA pairs ### Step 2 : Prepare data (Q&A pairs) for fine-tuning To use Meta Llama 3 70B model for the question and answer (Q&A) pair datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. -In this example, we use OctoAI API as a demo, and the APIs could be replaced by any other API from other providers. +In this example, we can use OctoAI API as a demo, and the APIs could be replaced by any other API from other providers. **NOTE** The generated data by these APIs or the model needs to be vetted to make sure about the quality. @@ -22,7 +21,7 @@ python generate_question_answers.py **NOTE** You need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. In our case we had to process each document at a time. Then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document anywhere between 50-100. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. -Alternatively we can use on prem solutions such as the [TGI](../../../examples/hf_text_generation_inference/) or [VLLM](../../../examples/vllm/). Here we will use the prompt in the [./config.yaml] to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm openai compatible server that host Meta Llama 3 70B instruct locally and generate the Q&A pair datasets. +Alternatively we can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. 
In this example, we will show how to create a vllm OpenAI-compatible server that hosts Meta Llama 3 70B Instruct locally, generates the Q&A pairs and applies self-curation to get the final dataset.

```bash
# Make sure VLLM has been installed
CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
```

**NOTE** Please make sure the port is not already in use. Since the Meta Llama3 70B Instruct model requires at least 135GB of GPU memory, we need to use multiple GPUs to host it in a tensor-parallel way.

Once the server is ready, we can query it on port 8001 from another terminal. Here, "-v" sets the port number and "-t" sets the total number of questions we ask the Meta Llama3 70B Instruct model to generate initially; the model can choose to generate fewer questions if it cannot find enough Llama-related context, which avoids generating questions that are too trivial or unrelated.

```bash
-python generate_question_answers.py -v 8001 -t 1000
+python generate_question_answers.py -v 8001 -t 7000
```

-This python program will read all the documents inside of "data" folder and split the data into batches by the context window limit (8K for Meta Llama3 and 4K for Llama 2) and apply the question_prompt_template, defined in "generation_config.yaml", to each batch. Then it will use each batch to query VLLM server and save the return QA pairs and the contexts. Additionally, we will add another step called self-curation (see more details in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/abs/2308.06259)), which uses another 70B model to evaluate whether a QA pair is based on the context and provides relevant information about Llama language models given that context. We will then save all the QA pairs that passed the evaluation into data.json file as our final fine-tuning training set.
+This Python program will read all the documents inside the "data" folder, split the data into batches according to the context window limit (8K for Meta Llama3 and 4K for Llama 2), and apply the question_prompt_template, defined in "generation_config.yaml", to each batch. It will then use each batch to query the VLLM server and save the returned QA pairs together with their contexts.

-Example of QA pair that did not pass the self-curation, in this case the QA pair did not focus on Llama model:
+Additionally, we will add another step called self-curation (see more details in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/abs/2308.06259)), which uses another 70B model to evaluate whether a QA pair is based on the context and provides relevant information about Llama language models given that context. We will then save all the QA pairs that passed the evaluation into the data.json file as our final fine-tuning training set.
+
+Example of a QA pair that did not pass self-curation because it did not focus on Llama models:

```json
-{'Question': 'What is the name of the pre-trained model for programming and natural languages?', 'Answer': 'CodeBERT', 'Context': 'Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. CodeBERT: A pre-trained model for programming and natural languages. In EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pp. 15361547. Association for Computational Linguistics, 2020.'}
-{'Reason': 'The question and answer pair is not relevant to the context about Llama language models, as it discusses CodeBERT, which is not a Llama model.', 'Result': 'NO'}
+'Question': 'What is the purpose of the killall command in Linux?', 'Answer': 'To kill a process and all its child processes', 'Context': 'If you want to kill a process and all its child processes, you can use the killall command. For example: killall firefox'
+'Reason': "The question and answer pair is not related to Llama language models, it's about a Linux command. The context provided is about Llama models, but the QA pair is about a Linux command, which is not relevant to the context.", 'Result': 'NO'
```

### Step 3: Run the fine-tuning

-In the llama-recipe main folder, we can start the fine-tuning step using the following commands:
+Once the dataset is ready, we can start the fine-tuning step using the following commands in the llama-recipes main folder:

For distributed fine-tuning:
```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
```

-CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --from_peft_checkpoint chatbot-8b --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b-continue --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'

For fine-tuning in single-GPU:

```bash
-CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 1 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
+CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 5 --batch_size_training 1 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
```

+If we want to continue the fine-tuning process after our evaluation step, we can use the --from_peft_checkpoint argument to resume fine-tuning from a PEFT checkpoint folder.
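Under the hood (as added to `finetuning.py` earlier in this patch series), resuming simply re-attaches the saved LoRA adapter in trainable mode before training starts. A minimal sketch, assuming the model name and adapter folder used in this recipe:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model first, exactly as a fresh fine-tuning run would.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# is_trainable=True keeps the adapter weights unfrozen so optimization
# resumes from the checkpointed LoRA parameters instead of a fresh init.
model = PeftModel.from_pretrained(base_model, "chatbot-8b", is_trainable=True)
model.print_trainable_parameters()
```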
On the command line, for example, we can run:

```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --from_peft_checkpoint chatbot-8b --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b-continue --num_epochs 5 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
```

For more details, please check the readme in the finetuning recipe.

### Step 4: Evaluating with local inference

Once we have the fine-tuned model, we now need to evaluate it to understand its performance. Normally, to create an evaluation set, we should first gather some questions and manually write the ground-truth answers. In this case, we created an eval set mostly based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. Then we pass the eval set questions to our fine-tuned model to get the model-generated answers. To compare the model-generated answers with the ground truth, we can either use a traditional eval method, e.g. calculating the ROUGE score, or use an LLM to act as a judge and score their similarity.

-First we need to start the VLLM servers to host our fine-tuned 8B model. Since we used peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature. Now, the VLLM server actually will first load the original model, then apply our LoRA adapter weights. Then we can feed the eval_set json file into the VLLM servers and start the comparison evaluation. Notice that our model name is now called "chatbot" instead of "meta-llama/Meta-Llama-3-8B-Instruct".
+First we need to start the VLLM server to host our fine-tuned 8B model. Since we used the peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature: the VLLM server will first load the original model and then apply our LoRA adapter weights on top of it. We can then feed the eval_set.json file into the VLLM server and start the comparison evaluation. Notice that our fine-tuned model name is now "chatbot" instead of "meta-llama/Meta-Llama-3-8B-Instruct".
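Because VLLM exposes an OpenAI-compatible API, the eval script (or any quick manual check) can address the adapter by that name. A minimal sketch of such a query, assuming the server below is already running on port 8000:

```python
from openai import OpenAI

# The local VLLM server does not validate API keys, so a placeholder is fine.
client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")
response = client.chat.completions.create(
    model="chatbot",  # the LoRA module name registered via --lora-modules
    messages=[{"role": "user", "content": "What is Llama Guard?"}],
    temperature=0.0,
)
print(response.choices[0].message.content)
```

The server itself is started as follows: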
```bash
python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules chatbot=./chatbot-8b --port 8000 --disable-log-requests
```

**NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", it means VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source.

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
```

On another terminal, we can go to the recipes/use_cases/end2end-recipes/chatbot/pipelines folder to start our eval script.

```bash
python eval_chatbot.py -m chatbot -v 8000
```

-We can also quickly compare our fine-tuned chatbot model with original 8B model using
+
+We can also quickly compare our fine-tuned chatbot model with the original Meta Llama 3 8B Instruct model using

```bash
python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000
```

-**NOTE** If encounter import error: "ImportError: punica LoRA kernels could not be imported.", this means that VLLM must be installed with punica LoRA kernels to support LoRA adapter, please use following commands to install the VLLM from source.
-
-```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
-```
-Lastly, we can use another 70B model as a judge to compare the answer from the fine-tuned 8B model with the groud truth and get a score. To do this, we need to host another 70B VLLM server locally with command, just make sure the port is not been used:
+Lastly, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answers from the fine-tuned 8B model with the ground truth and score them. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the command below; just make sure the port is not already in use:

```bash
-CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
+CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
```

Then we can pass the port to the eval script:
@@ -109,9 +112,17 @@ Then we can pass the port to the eval script:

```bash
python eval_chatbot.py -m chatbot -v 8000 -j 8001
```

+and similarly get the eval result for the original model:
+
+```bash
+python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000 -j 8001
+```

### Step 5: Testing with local inference

-Once we believe our fine-tuned model has passed our evaluation and we can deploy it locally to manually test it by manually asking questions.
+Once we believe our fine-tuned model has passed our evaluation, we can deploy it locally and play with it by manually asking questions. We can do this by

```bash
python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b
```
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml
index 153d3b2be..87266d33c 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml
+++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml
@@ -1,6 +1,7 @@
 eval_prompt_template: >
-  You are a AI assistant that skilled in answering questions related to Llama model.
-  Below is a question from a llama user, please answer it in {language}, make the answer as concise as possible, it should be at most 100 words.
+  You are an AI assistant skilled in answering questions related to Llama language models,
+  which include Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1 and Meta Llama Guard 2.
+  Below is a question from a Llama user. Think step by step and then answer it in {language}. Make the answer as concise as possible; it should be at most 100 words.
   Return the result with the template:
   [
     {{
@@ -9,9 +10,9 @@ eval_prompt_template: >
     }}
   ]
 judge_prompt_template: >
-  You are provided with a question, a teacher answer and a student answer. Given that question, you need to score the how good the student answer is compare to
+  You are provided with a question, a teacher's answer and a student's answer. Given that question, you need to score how good the student's answer is compared to
   the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES. If the answer is not faithful, then return NO
-  and explain which part of the answer if not faithful in the Reason section.
+  and explain which part of the student's answer is not faithful in the Reason section.
   Return the result in json format with the template:
   {{
     "Reason": "your reason here.",
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml
index 4db269395..f60808bf9 100644
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml
+++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml
@@ -5,12 +5,12 @@ question_prompt_template: >
   which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2,
   then extract the context that is related to the question and answer, preferably using the sentences from original text,
   please make sure you follow those rules:
-  1. Generate {num_questions} question answer pairs.
+  1. Generate {num_questions} question answer pairs; you can generate fewer pairs if there is nothing related to the model, training, fine-tuning and evaluation details of Llama language models.
   2. For each question and answer pair, add the context that is related to the question and answer, preferably using the sentences from original text
   3. Generate in {language}.
   4. The questions can be answered based *solely* on the given passage.
   5. Avoid asking questions with similar meaning.
-  6. Make the answer as concise as possible, it should be at most 80 words.
+  6. Make the answer as concise as possible; it should be at most 100 words.
   7. Provide relevant links from the document to support the answer.
   8. Never use any abbreviation.
   9. Return the result in json format with the template:
@@ -30,7 +30,7 @@ question_prompt_template: >
 curation_prompt_template: >
   Below is a question and answer pair (QA pair) and its related context about Llama language models,
   which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2.
-  Given the context, evaluate whether or not this qusestion and answer pair will be helpful for a user of Llama language models,
+  Given the context, evaluate whether or not this question and answer pair is related to Llama language models, including model, training, fine-tuning and evaluation details,
   and whether this question and answer is relevant to the context.
   Note that the answer in the QA pair can be the same or similar as the context, as repetition of context is allowed.
Respond with only a single JSON blob with an "Reason" field that is a short (less than 100 word) diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py index 349ab149e..b37ad5dbf 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py @@ -78,6 +78,9 @@ def read_file_content(context): if file_text: file_strings.append(file_text) text = '\n'.join(file_strings) + text = remove_non_printable(text) + with open(context['data_dir'] + '/' + 'all_text.txt', 'w') as f: + f.write(text) return remove_non_printable(text) # clean the text by removing all parts that did not contain any alphanumeric characters def clean(s): From d097c9f52e59d3b1401b47da41728efac86fa1d9 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 28 May 2024 16:25:59 -0700 Subject: [PATCH 13/35] draft: get answer from a chunk working --- .../use_cases/end2end-recipes/raft/README.md | 122 ++++++++ .../end2end-recipes/raft/chat_utils.py | 80 ++++++ .../use_cases/end2end-recipes/raft/config.py | 19 ++ .../end2end-recipes/raft/data/FAQ.md | 55 ++++ .../end2end-recipes/raft/doc_processor.py | 47 +++ .../use_cases/end2end-recipes/raft/format.py | 173 +++++++++++ .../use_cases/end2end-recipes/raft/raft.py | 106 +++++++ .../use_cases/end2end-recipes/raft/raft.yaml | 20 ++ .../end2end-recipes/raft/raft_utils.py | 271 ++++++++++++++++++ 9 files changed, 893 insertions(+) create mode 100644 recipes/use_cases/end2end-recipes/raft/README.md create mode 100644 recipes/use_cases/end2end-recipes/raft/chat_utils.py create mode 100644 recipes/use_cases/end2end-recipes/raft/config.py create mode 100644 recipes/use_cases/end2end-recipes/raft/data/FAQ.md create mode 100644 recipes/use_cases/end2end-recipes/raft/doc_processor.py create mode 100644 recipes/use_cases/end2end-recipes/raft/format.py create mode 100644 recipes/use_cases/end2end-recipes/raft/raft.py create mode 100644 recipes/use_cases/end2end-recipes/raft/raft.yaml create mode 100644 recipes/use_cases/end2end-recipes/raft/raft_utils.py diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md new file mode 100644 index 000000000..c42eda8e8 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -0,0 +1,122 @@ +## End to End Steps to create a Chatbot using fine-tuning + +### Step 1 : Prepare related documents + +Download all your desired docs in PDF, Text or Markdown format to "data" folder inside the data_pipelines folder. + +In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. + +### Step 2 : Prepare RAFT data for fine-tuning + +To use Meta Llama 3 70B model for the RAFT datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. + +In this example, we can use OctoAI API as a demo, and the APIs could be replaced by any other API from other providers. 
**NOTE** The data generated by these APIs or the model needs to be vetted to ensure its quality.

```bash
export OCTOAI_API_TOKEN="OCTOAI_API_TOKEN"
python generate_question_answers.py
```

**NOTE** You need to be aware of the RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day) limits on your account if you use any of the model API providers. In our case we had to process one document at a time, then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document, anywhere between 50-100. This is experimental and depends entirely on your documents, the wealth of information in them, and how you prefer to handle questions, e.g. shorter or longer answers.

Alternatively, we can use on-prem solutions such as [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in [raft.yaml](./raft.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm OpenAI-compatible server that hosts Meta Llama 3 70B Instruct locally, generates the Q&A pairs and applies self-curation to get the final dataset.

```bash
# Make sure VLLM has been installed
CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
```

**NOTE** Please make sure the port is not already in use. Since the Meta Llama3 70B Instruct model requires at least 135GB of GPU memory, we need to use multiple GPUs to host it in a tensor-parallel way.

Once the server is ready, we can query it on port 8001 from another terminal. Here, "-v" sets the port number and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk.

```bash
python raft.py -v 8001 -t 5
```

This Python program will read all the documents inside the "data" folder, split the data into chunks of chunk_size (default is 512) tokens, and apply the question_prompt_template, defined in "raft.yaml", to each chunk. It will then use each chunk to query the VLLM server and save the returned list of questions for each chunk.
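For intuition, the chunk-and-query flow described above boils down to something like the sketch below. The function names are illustrative only; the actual implementation lives in `raft.py` and `raft_utils.py`, and the prompt comes from `raft.yaml`:

```python
def split_into_chunks(tokens: list[int], chunk_size: int = 512) -> list[list[int]]:
    # Greedily cut the tokenized document into fixed-size chunks.
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

def build_chat_requests(chunk_texts: list[str], question_prompt_template: str, num_questions: int = 5):
    # One chat request per chunk, asking the 70B model to generate
    # `num_questions` questions grounded in that chunk.
    for chunk in chunk_texts:
        system_prompt = question_prompt_template.format(num_questions=num_questions)
        yield [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": chunk},
        ]
```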
### Step 3: Run the fine-tuning

Once the dataset is ready, we can start the fine-tuning step using the following commands in the llama-recipes main folder:

For distributed fine-tuning:
```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
```

For fine-tuning in single-GPU:

```bash
CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 5 --batch_size_training 1 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
```

If we want to continue the fine-tuning process after our evaluation step, we can use the --from_peft_checkpoint argument to resume fine-tuning from a PEFT checkpoint folder. For example, we can run:

```bash
CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --from_peft_checkpoint chatbot-8b --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b-continue --num_epochs 5 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
```

For more details, please check the readme in the finetuning recipe.

### Step 4: Evaluating with local inference

Once we have the fine-tuned model, we now need to evaluate it to understand its performance. Normally, to create an evaluation set, we should first gather some questions and manually write the ground-truth answers. In this case, we created an eval set mostly based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. Then we pass the eval set questions to our fine-tuned model to get the model-generated answers. To compare the model-generated answers with the ground truth, we can either use a traditional eval method, e.g. calculating the ROUGE score, or use an LLM to act as a judge and score their similarity.

First we need to start the VLLM server to host our fine-tuned 8B model. Since we used the peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature: the VLLM server will first load the original model and then apply our LoRA adapter weights on top of it. We can then feed the eval_set.json file into the VLLM server and start the comparison evaluation. Notice that our fine-tuned model name is now "chatbot" instead of "meta-llama/Meta-Llama-3-8B-Instruct".
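For the traditional metric path, the comparison in `eval_chatbot.py` essentially reduces to a ROUGE computation with the Hugging Face `evaluate` library. A minimal sketch with made-up strings:

```python
import evaluate

# Score the model-generated answers against the human-written ground truth.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["Llama 3 is pretrained on over 15T tokens."],
    references=["Llama 3 is pretrained on over 15T tokens from publicly available sources."],
)
print(scores)  # rouge1 / rouge2 / rougeL / rougeLsum scores
```

With the eval set in hand, start the VLLM server hosting the adapter: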
```bash
python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules chatbot=./chatbot-8b --port 8000 --disable-log-requests
```

**NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", it means VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source.

```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
```

On another terminal, we can go to the recipes/use_cases/end2end-recipes/chatbot/pipelines folder to start our eval script.

```bash
python eval_chatbot.py -m chatbot -v 8000
```

We can also quickly compare our fine-tuned chatbot model with the original Meta Llama 3 8B Instruct model using

```bash
python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000
```

Lastly, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answers from the fine-tuned 8B model with the ground truth and score them. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the command below; just make sure the port is not already in use:

```bash
CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
```

Then we can pass the port to the eval script:

```bash
python eval_chatbot.py -m chatbot -v 8000 -j 8001
```

and similarly get the eval result for the original model:

```bash
python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000 -j 8001
```

### Step 5: Testing with local inference

Once we believe our fine-tuned model has passed our evaluation, we can deploy it locally and play with it by manually asking questions. We can do this by

```bash
python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b
```
diff --git a/recipes/use_cases/end2end-recipes/raft/chat_utils.py b/recipes/use_cases/end2end-recipes/raft/chat_utils.py
new file mode 100644
index 000000000..07fb61eea
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/chat_utils.py
@@ -0,0 +1,80 @@
+import asyncio
+import logging
+from abc import ABC, abstractmethod
+from octoai.client import OctoAI
+from functools import partial
+from openai import OpenAI
+import json
+# Configure logging to include the timestamp, log level, and message
+logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
+# Since OctoAI has different naming for llama models, create this mapping to get the huggingface official model name given OctoAI names.
+MODEL_NAME_MAPPING={"meta-llama-3-70b-instruct":"meta-llama/Meta-Llama-3-70B-Instruct",
+"meta-llama-3-8b-instruct":"meta-llama/Meta-Llama-3-8B-Instruct","llama-2-7b-chat":"meta-llama/Llama-2-7b-chat-hf"
+,"llama-2-70b-chat":"meta-llama/Llama-2-70b-chat-hf"}
+# Manage rate limits with throttling
+rate_limit_threshold = 2000
+allowed_concurrent_requests = int(rate_limit_threshold * 0.75)
+request_limiter = asyncio.Semaphore(allowed_concurrent_requests)
+class ChatService(ABC):
+    @abstractmethod
+    async def execute_chat_request_async(self, api_context: dict, chat_request):
+        pass
+def strip_str(s: str) -> str:
+    """
+    Helper function for formatting strings returned by the model.
+ """ + l, r = 0, len(s)-1 + beg_found = False + for i in range(len(s)): + if s[i].isalpha(): + if not beg_found: + l = i + beg_found = True + else: + r = i + r += 2 + return s[l:min(r, len(s))] +# Please implement your own chat service class here. +# The class should inherit from the ChatService class and implement the execute_chat_request_async method. +# The following are two example chat service classes that you can use as a reference. +class OctoAIChatService(ChatService): + async def execute_chat_request_async(self, api_context: dict, chat_request): + async with request_limiter: + try: + event_loop = asyncio.get_running_loop() + client = OctoAI(api_context['api_key']) + api_chat_call = partial( + client.chat.completions.create, + model=api_context['model'], + messages=chat_request, + temperature=0.0 + ) + response = await event_loop.run_in_executor(None, api_chat_call) + assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") + return assistant_response + except Exception as error: + logging.error(f"Error during chat request execution: {error}",exc_info=True) + return "" +# Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent +# please read for more detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. +class VllmChatService(ChatService): + async def execute_chat_request_async(self, api_context: dict, chat_request): + try: + event_loop = asyncio.get_running_loop() + if api_context["model"] in MODEL_NAME_MAPPING: + model_name = MODEL_NAME_MAPPING[api_context['model']] + else: + model_name = api_context['model'] + client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") + api_chat_call = partial( + client.chat.completions.create, + model=model_name, + messages=chat_request, + temperature=0.0 + ) + response = await event_loop.run_in_executor(None, api_chat_call) + assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") + return assistant_response + except Exception as error: + logging.error(f"Error during chat request execution: {error}",exc_info=True) + return "" diff --git a/recipes/use_cases/end2end-recipes/raft/config.py b/recipes/use_cases/end2end-recipes/raft/config.py new file mode 100644 index 000000000..6d0b82573 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/config.py @@ -0,0 +1,19 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. 
+ +import yaml +import os + +def load_config(config_path: str = "./config.yaml"): + # Read the YAML configuration file + with open(config_path, "r") as file: + config = yaml.safe_load(file) + # Set the API key from the environment variable + try: + config["api_key"] = os.environ["OCTOAI_API_TOKEN"] + except KeyError: + print("API token did not found, please set the OCTOAI_API_TOKEN environment variable if using OctoAI, otherwise set api_key to default EMPTY") + # local Vllm endpoint did not need API key, so set the API key to "EMPTY" if OCTOAI_API_TOKEN not found + config["api_key"] = "EMPTY" + return config + diff --git a/recipes/use_cases/end2end-recipes/raft/data/FAQ.md b/recipes/use_cases/end2end-recipes/raft/data/FAQ.md new file mode 100644 index 000000000..8c2e12a7c --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/data/FAQ.md @@ -0,0 +1,55 @@ +# FAQ + +Here we discuss frequently asked questions that may occur and we found useful along the way. + +1. Does FSDP support mixed precision in one FSDP unit? Meaning, in one FSDP unit some of the parameters are in Fp16/Bf16 and others in FP32. + + FSDP requires each FSDP unit to have consistent precision, so this case is not supported at this point. It might be added in future but no ETA at the moment. + +2. How does FSDP handles mixed grad requirements? + + FSDP does not support mixed `require_grad` in one FSDP unit. This means if you are planning to freeze some layers, you need to do it on the FSDP unit level rather than model layer. For example, let us assume our model has 30 decoder layers and we want to freeze the bottom 28 layers and only train 2 top transformer layers. In this case, we need to make sure `require_grad` for the top two transformer layers are set to `True`. + +3. How do PEFT methods work with FSDP in terms of grad requirements/layer freezing? + + We wrap the PEFT modules separate from the transformer layer in auto_wrapping policy, that would result in PEFT models having `require_grad=True` while the rest of the model is `require_grad=False`. + +4. Can I add custom datasets? + + Yes, you can find more information on how to do that [here](Dataset.md). + +5. What are the hardware SKU requirements for deploying these models? + + Hardware requirements vary based on latency, throughput and cost constraints. For good latency, the models were split across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs like A10G, T4, L4, or even commodity hardware can also be used to deploy these models (e.g. https://github.com/ggerganov/llama.cpp). + If working on a CPU, it is worth looking at this [blog post](https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html) from Intel for an idea of Llama 2's performance on a CPU. + +6. What are the hardware SKU requirements for fine-tuning Llama pre-trained models? + + Fine-tuning requirements vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types like NVIDIA A10G or H100 are definitely possible (e.g. alpaca models are trained on a single RTX4090: https://github.com/tloen/alpaca-lora). + +7. How to handle CUDA memory fragmentations during fine-tuning that may lead into an OOM? 
   In some cases you may experience that, after model checkpointing, especially with FSDP (this usually does not happen with PEFT methods), the reserved and allocated CUDA memory has increased. This might be due to CUDA memory fragmentation. PyTorch recently added an environment variable that helps to better manage memory fragmentation (this feature is available on PyTorch nightlies at the time of writing this doc, July 30 2023). You can set this in your main training script as follows:

   ```python
   os.environ['PYTORCH_CUDA_ALLOC_CONF']='expandable_segments:True'
   ```
   We also added this environment variable in `setup_environ_flags` of the [train_utils.py](../src/llama_recipes/utils/train_utils.py), feel free to uncomment it if required.

8. Additional debugging flags?

   The environment variable `TORCH_DISTRIBUTED_DEBUG` can be used to trigger additional useful logging and collective synchronization checks to ensure all ranks are synchronized appropriately. `TORCH_DISTRIBUTED_DEBUG` can be set to either OFF (default), INFO, or DETAIL depending on the debugging level required. Please note that the most verbose option, DETAIL, may impact the application performance and thus should only be used when debugging issues.

   We also added this environment variable in `setup_environ_flags` of the [train_utils.py](../src/llama_recipes/utils/train_utils.py), feel free to uncomment it if required.

9. I am getting import errors when running inference.

   Verify that CUDA environment variables are set correctly on your machine. For example, for bitsandbytes, you can generally set them as below to get things working on A100 80GB GPUs on AWS.

   ```bash
   export CUDA_HOME="/usr/local/cuda-11.8"
   export PATH=$CUDA_HOME/bin:$PATH
   export LD_LIBRARY_PATH=$CUDA_HOME/lib:$CUDA_HOME/lib64:$CUDA_HOME/efa/lib:/opt/amazon/efa/lib:$LD_LIBRARY_PATH
   ```
diff --git a/recipes/use_cases/end2end-recipes/raft/doc_processor.py b/recipes/use_cases/end2end-recipes/raft/doc_processor.py
new file mode 100644
index 000000000..c8556471e
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/doc_processor.py
@@ -0,0 +1,47 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement.
+ +# Assuming result_average_token is a constant, use UPPER_CASE for its name to follow Python conventions +AVERAGE_TOKENS_PER_RESULT = 100 + +def get_token_limit_for_model(model: str) -> int: + """Returns the token limit for a given model.""" + if model == "llama-2-13b-chat" or model == "llama-2-70b-chat": + return 4096 + else: + return 8192 + +def calculate_num_tokens_for_message(encoded_text) -> int: + """Calculates the number of tokens used by a message.""" + # Added 3 to account for priming with assistant's reply, as per original comment + return len(encoded_text) + 3 + + +def split_text_into_chunks(context: dict, text: str, tokenizer) -> list[str]: + """Splits a long text into substrings based on token length constraints, adjusted for question generation.""" + # Adjusted approach to calculate max tokens available for text chunks + encoded_text = tokenizer(text, return_tensors="pt", padding=True)["input_ids"] + encoded_text = encoded_text.squeeze() + model_token_limit = get_token_limit_for_model(context["model"]) + + tokens_for_questions = calculate_num_tokens_for_message(encoded_text) + estimated_tokens_per_question = AVERAGE_TOKENS_PER_RESULT + estimated_total_question_tokens = estimated_tokens_per_question * context["total_questions"] + # Ensure there's a reasonable minimum chunk size + max_tokens_for_text = max(model_token_limit - tokens_for_questions - estimated_total_question_tokens, model_token_limit // 10) + + chunks, current_chunk = [], [] + print(f"Splitting text into chunks of {max_tokens_for_text} tokens, encoded_text {len(encoded_text)}", flush=True) + for token in encoded_text: + if len(current_chunk) >= max_tokens_for_text: + chunks.append(tokenizer.decode(current_chunk).strip()) + current_chunk = [] + else: + current_chunk.append(token) + + if current_chunk: + chunks.append(tokenizer.decode(current_chunk).strip()) + + print(f"Number of chunks in the processed text: {len(chunks)}", flush=True) + + return chunks diff --git a/recipes/use_cases/end2end-recipes/raft/format.py b/recipes/use_cases/end2end-recipes/raft/format.py new file mode 100644 index 000000000..7dcb6b861 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/format.py @@ -0,0 +1,173 @@ +from abc import ABC, abstractmethod +import argparse +from datasets import Dataset, load_dataset +from typing import Dict, Literal, Any, get_args + +""" +This file allows to convert raw HuggingFace Datasets into files suitable to fine tune completion and chat models. +""" + +OutputDatasetType = Literal["parquet", "jsonl"] +outputDatasetTypes = list(get_args(OutputDatasetType)) + +InputDatasetType = Literal["arrow", "jsonl"] +inputDatasetTypes = list(get_args(InputDatasetType)) + +DatasetFormat = Literal["hf", "completion", "chat"] +datasetFormats = list(get_args(DatasetFormat)) + +def get_args() -> argparse.Namespace: + """ + Parses and returns the arguments specified by the user's command + """ + parser = argparse.ArgumentParser() + + parser.add_argument("--input", type=str, required=True, help="Input HuggingFace dataset file") + parser.add_argument("--input-type", type=str, default="arrow", help="Format of the input dataset. Defaults to arrow.", choices=inputDatasetTypes) + parser.add_argument("--output", type=str, required=True, help="Output file") + parser.add_argument("--output-format", type=str, required=True, help="Format to convert the dataset to", choices=datasetFormats) + parser.add_argument("--output-type", type=str, default="jsonl", help="Type to export the dataset to. 
Defaults to jsonl.", choices=outputDatasetTypes) + parser.add_argument("--output-chat-system-prompt", type=str, help="The system prompt to use when the output format is chat") + + args = parser.parse_args() + return args + +class DatasetFormatter(ABC): + """ + Base class for dataset formatters. Formatters rename columns, remove and add + columns to match the expected target format structure. HF, Chat or Completion models file formats. + https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset + """ + @abstractmethod + def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset: + pass + +class DatasetExporter(ABC): + """ + Base class for dataset exporters. Exporters export dataset to different file types, JSONL, Parquet, ... + """ + @abstractmethod + def export(self, ds: Dataset, output_path: str): + pass + +class DatasetConverter(): + """ + Entry point class. It resolves which DatasetFormatter and which DatasetExporter to use and runs them. + """ + formats: Dict[DatasetFormat, DatasetFormatter] + exporters: Dict[OutputDatasetType, Any] + + def __init__(self) -> None: + self.formats = { + "hf": HuggingFaceDatasetFormatter(), + "completion": OpenAiCompletionDatasetFormatter(), + "chat": OpenAiChatDatasetFormatter() + } + self.exporters = { + "parquet": ParquetDatasetExporter(), + "jsonl": JsonlDatasetExporter() + } + + def convert(self, ds: Dataset, format: DatasetFormat, output_path: str, output_type: OutputDatasetType, params: Dict[str, str]): + if not format in self.formats: + raise Exception(f"Output Format {format} is not supported, pleased select one of {self.formats.keys()}") + + if not output_type in self.exporters: + raise Exception(f"Output Type {output_type} is not supported, pleased select one of {self.exporters.keys()}") + + formatter = self.formats[format] + newds = formatter.format(ds, params) + exporter = self.exporters[output_type] + exporter.export(newds, output_path) + +class HuggingFaceDatasetFormatter(DatasetFormatter): + """ + Returns the HuggingFace Dataset as is + """ + def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset: + return ds + +def _remove_all_columns_but(ds: Dataset, keep_columns) -> Dataset: + """ + HF Dataset doesn't have a way to copy only specific columns of a Dataset so this help + removes all columns but the ones specified. + """ + remove_columns = list(ds.column_names) + for keep in keep_columns: + remove_columns.remove(keep) + ds = ds.remove_columns(remove_columns) + return ds + +class OpenAiCompletionDatasetFormatter(DatasetFormatter): + """ + Returns the Dataset in the OpenAI Completion Fine-tuning file format with two fields "prompt" and "completion". + https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset + """ + def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset: + newds = ds.rename_columns({'question': 'prompt', 'cot_answer': 'completion'}) + return _remove_all_columns_but(newds, ['prompt', 'completion']) + +class OpenAiChatDatasetFormatter(OpenAiCompletionDatasetFormatter): + """ + Returns the Dataset in the OpenAI Chat Fine-tuning file format with one field "messages". 
+ https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset + """ + def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset: + newds = super().format(ds, params) + + def format_messages(row): + messages = [] + if 'system_prompt' in params: + system_prompt = params['system_prompt'] + messages.append({ "role": "system", "content": system_prompt}) + messages.extend([{ "role": "user", "content": row['prompt']}, { "role": "assistant", "content": row['completion']}]) + chat_row = {"messages": messages} + return chat_row + + newds = newds.map(format_messages) + return _remove_all_columns_but(newds, ['messages']) + +def append_extension(path: str, extension: str) -> str: + suffix = "." + extension + if not path.endswith(suffix): + path = path + suffix + return path + + +class JsonlDatasetExporter(DatasetExporter): + """ + Exports the Dataset to a JSONL file + """ + + def export(self, ds: Dataset, output_path: str): + ds.to_json(append_extension(output_path, "jsonl")) + + +class ParquetDatasetExporter(DatasetExporter): + """ + Exports the Dataset to a Parquet file + """ + + def export(self, ds: Dataset, output_path: str): + ds.to_parquet(append_extension(output_path, "parquet")) + + +def main(): + """ + When raft.py is executed from the command line. + """ + args = get_args() + ds = load_dataset(args.input_type, data_files={"train": args.input})['train'] + formatter = DatasetConverter() + + if args.output_chat_system_prompt and args.output_format != "chat": + raise Exception("Parameter --output-chat-system-prompt can only be used with --output-format chat") + + format_params = {} + if args.output_chat_system_prompt: + format_params['system_prompt'] = args.output_chat_system_prompt + + formatter.convert(ds=ds, format=args.output_format, output_path=args.output, output_type=args.output_type, params=format_params) + +if __name__ == "__main__": + main() diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py new file mode 100644 index 000000000..56ea4a81c --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/raft.py @@ -0,0 +1,106 @@ +import mdc +from mdc import MDC +import logging +from typing import Literal, Any +from openai import OpenAI +import datasets +from datasets import Dataset, load_dataset +import json +import random +import os, shutil +import argparse +import asyncio +from raft_utils import generate_questions, add_chunk_to_dataset +from chat_utils import OctoAIChatService, VllmChatService +from format import DatasetConverter, datasetFormats, outputDatasetTypes +from config import load_config + +# def generate_label(client: OpenAI, question: str, context: Any, doctype: DocType = "pdf", model: str = None) -> str | None: +# """ +# Generates the label / answer to `question` using `context` and GPT-4. 
+# """ +# question = encode_question(question, context) if doctype == "api" else encode_question_gen(question, context) +# response = client.chat.completions.create( +# model=model, +# messages=question, +# n=1, +# temperature=0 +# ) +# response = response.choices[0].message.content +# return response +# Configure logging to include the timestamp, log level, and message +logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') + + +async def main(context): + if context["endpoint"]: + chat_service = VllmChatService() + else: + chat_service = OctoAIChatService() + try: + logging.info("Starting to generate question pair.") + # Generate question/answer pairs as list + chunks = await generate_questions(chat_service, context) + if not chunks: + logging.warning("No questions generated from text. Please check the input context or model configuration.") + return + logging.info(f"Successfully generated {sum([len(q) for q in chunks])} question/answer pairs.") + print(chunks) + for i, chunk in enumerate(chunks): + perc = ceil(i / num_chunks * 100) + with MDC(progress=f"{perc}%"): + logger.info(f"Adding chunk {i}/{num_chunks}") + add_chunk_to_dataset(client, chunks, chunk, args.doctype, args.questions, NUM_DISTRACT_DOCS, model=args.completion_model) + + logging.info(f"Data successfully written to {context['output']}. Process completed.") + except Exception as e: + logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) + +def parse_arguments(): + # Define command line arguments for the script + parser = argparse.ArgumentParser( + description="Generate question/answer pairs from documentation." + ) + parser.add_argument( + "-t", "--questions_per_chunk", + type=int, + default=3, + help="Specify the number of question pairs to generate per chunk." + ) + parser.add_argument( + "-m", "--model", + choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-13b-chat", "llama-2-70b-chat"], + default="meta-llama-3-70b-instruct", + help="Select the model to use for generation." + ) + parser.add_argument( + "-c", "--config_path", + default="./raft.yaml", + help="Set the configuration file path that has system prompt along with language, dataset path and number of questions." + ) + parser.add_argument( + "-v", "--vllm_endpoint", + default=None, + type=int, + help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." + ) + parser.add_argument("--chunk_size", type=int, default=512, help="The size of each chunk in number of tokens") + parser.add_argument("-o","--output", type=str, default="./", help="The path at which to save the dataset") + parser.add_argument("--output-format", type=str, default="hf", help="Format to convert the dataset to. Defaults to hf.", choices=datasetFormats) + parser.add_argument("--output-type", type=str, default="jsonl", help="Type to export the dataset to. Defaults to jsonl.", choices=outputDatasetTypes) + return parser.parse_args() + +if __name__ == "__main__": + logging.info("Initializing the process and loading configuration...") + args = parse_arguments() + + context = load_config(args.config_path) + context["questions_per_chunk"] = args.questions_per_chunk + context["model"] = args.model + context["chunk_size"] = args.chunk_size + context["endpoint"] = args.vllm_endpoint + context["output"] = args.output + logging.info(f"Configuration loaded. 
Generating {args.questions_per_chunk} question per chunk using model '{args.model}'.") + if context["endpoint"]: + logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") + asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml new file mode 100644 index 000000000..eee128ca2 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml @@ -0,0 +1,20 @@ +COT_prompt_template: > + Question: {question}\nContext: {context}\n + Answer this question using the information given in the context above. Here is things to pay attention to: + - First provide step-by-step reasoning on how to answer the question. + - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context. + - End your response with final answer in the form : $answer, the answer should be succinct. + You MUST begin your final answer with the tag ": + +question_prompt_template: > + You are a synthetic question-answer pair generator. Given a chunk of context about + some topic(s), generate {num_questions} example questions a user could ask and would be answered + \using information from the chunk. For example, if the given context was a Wikipedia + paragraph about the United States, an example question could be 'How many states are + in the United States? + The questions should be able to be answered in a few words or less. Include only the + questions in your response. + +data_dir: "./data" + +num_questions: 2 diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py new file mode 100644 index 000000000..d8b04783a --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py @@ -0,0 +1,271 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. + +import os +import re +import string +from transformers import AutoTokenizer +import asyncio +import magic +from PyPDF2 import PdfReader +import json +from doc_processor import split_text_into_chunks +import logging +import json +from langchain.embeddings import HuggingFaceEmbeddings +from langchain_experimental.text_splitter import SemanticChunker +from math import ceil +import random +# Initialize logging +logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') +def strip_str(s: str) -> str: + """ + Helper function for helping format strings returned by GPT-4. 
+ """ + l, r = 0, len(s)-1 + beg_found = False + for i in range(len(s)): + if s[i].isalpha(): + if not beg_found: + l = i + beg_found = True + else: + r = i + r += 2 + return s[l:min(r, len(s))] +def read_text_file(file_path): + try: + with open(file_path, 'r') as f: + text = f.read().strip() + ' ' + if len(text) == 0: + print("File is empty ",file_path) + return text + except Exception as e: + logging.error(f"Error reading text file {file_path}: {e}") + return '' + +def read_pdf_file(file_path): + try: + with open(file_path, 'rb') as f: + pdf_reader = PdfReader(f) + num_pages = len(pdf_reader.pages) + file_text = [pdf_reader.pages[page_num].extract_text().strip() + ' ' for page_num in range(num_pages)] + text = ''.join(file_text) + if len(text) == 0: + print("File is empty ",file_path) + return ''.join(file_text) + except Exception as e: + logging.error(f"Error reading PDF file {file_path}: {e}") + return '' + +def read_json_file(file_path): + try: + with open(file_path, 'r') as f: + data = json.load(f) + # Assuming each item in the list has a 'question' and 'answer' key + # Concatenating question and answer pairs with a space in between and accumulating them into a single string + file_text = ' '.join([item['question'].strip() + ' ' + item['answer'].strip() + ' ' for item in data]) + if len(file_text) == 0: + print("File is empty ",file_path) + return file_text + except Exception as e: + logging.error(f"Error reading JSON file {file_path}: {e}") + return '' + + +def process_file(file_path): + print("starting to process file: ", file_path) + file_type = magic.from_file(file_path, mime=True) + if file_type in ['text/plain', 'text/markdown', 'JSON']: + return read_text_file(file_path) + elif file_type == 'application/pdf': + return read_pdf_file(file_path) + else: + logging.warning(f"Unsupported file type {file_type} for file {file_path}") + return '' +def read_file_content(context): + file_strings = [] + + for root, _, files in os.walk(context['data_dir']): + for file in files: + file_path = os.path.join(root, file) + file_text = process_file(file_path) + if file_text: + file_strings.append(file_text) + text = '\n'.join(file_strings) + text = remove_non_printable(text) + return remove_non_printable(text) + +def remove_non_printable(s): + printable = set(string.printable) + return ''.join(filter(lambda x: x in printable, s)) + + +async def generate_question_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict: + if num_questions == 0: + logging.info(f"Error: num_questions is 0") + return {} + prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': str(document_content)}] + # parse the result string to a list of dict that has Question, Answer, Context + return await chat_service.execute_chat_request_async(api_context, chat_request_payload) + +def get_chunks( + text: str, + chunk_size: int = 512, + embedding_model: str = None +) -> list[str]: + """ + Takes in a `file_path` and `doctype`, retrieves the document, breaks it down into chunks of size + `chunk_size`, and returns the chunks. 
+ """ + chunks = [] + if len(text) == 0: + raise TypeError("Can not get chunks from empty text") + else: + num_chunks = ceil(len(text) / chunk_size) + logging.info(f"Splitting text into {num_chunks} chunks") + text_splitter = SemanticChunker(embedding_model, number_of_chunks=num_chunks) + chunks = text_splitter.create_documents([text]) + chunks = [chunk.page_content for chunk in chunks] + + return chunks +# read all the files in the data folder, then split them into chunks +# generate questions for each chunk and return a list of questions list +async def generate_questions(chat_service, api_context: dict): + document_text = read_file_content(api_context) + if len(document_text)== 0: + logging.error(f"Error reading files, document_text is empty") + model_name = "sentence-transformers/all-mpnet-base-v2" + embedding_model = HuggingFaceEmbeddings(model_name=model_name) + document_batches = get_chunks(document_text,api_context["chunk_size"],embedding_model) + + batches_count = len(document_batches) + total_questions = api_context["questions_per_chunk"] * batches_count + + print(f"Questions per batch: {api_context['questions_per_chunk']}, Total questions: {total_questions}, Batches: {batches_count}") + generation_tasks = [] + for batch_index, batch_content in enumerate(document_batches): + print(f"len of batch_content: {len(batch_content)}, batch_index: {batch_index}") + #Distribute extra questions across the first few batches + print(f"Batch {batch_index + 1} - {api_context['questions_per_chunk']} questions ********") + try: + task = generate_question_request(chat_service, api_context, batch_content, api_context["questions_per_chunk"]) + generation_tasks.append(task) + except Exception as e: + print(f"Error during chat request execution: {e}") + + question_generation_results = await asyncio.gather(*generation_tasks) + final_result = [] + for result in question_generation_results: + queries = result.split('\n') + queries = [strip_str(q) for q in queries] + queries = [q for q in queries if any(c.isalpha() for c in q)] + if len(queries) > int(api_context['questions_per_chunk']): + # As the model may have unrelated question at the begining of the result + # if queries is more than questions_per_chunk, then we need to truncate it and only keep last questions_per_chunk lines + queries = queries[-int(api_context['questions_per_chunk']):] + final_result.append(queries) + return final_result + +def add_chunk_to_dataset( + client: None, + chunks: list[str], + chunk: str, + x: int = 5, + num_distract: int = 3, + p: float = 0.8, + model: str = None +) -> None: + """ + Given a chunk, create {Q, A, D} triplets and add them to the dataset. 
+ """ + global ds + i = chunks.index(chunk) + qs = generate_instructions(client, chunk, x, model) if doctype == "api" else generate_instructions_gen(client, chunk, x, model) + for q in qs: + datapt = { + "id": None, + "type": None, + "question": None, + "context": None, + "oracle_context": None, + "cot_answer": None + } + + datapt["id"] = f"seed_task_{0 if not ds else ds.num_rows}" + datapt["type"] = "api call" if doctype == "api" else "general" + datapt["question"] = q + + # add num_distract distractor docs + docs = [chunk] + indices = list(range(0, len(chunks))) + indices.remove(i) + for j in random.sample(indices, num_distract): + docs.append(chunks[j]) + # decides whether to add oracle document + oracle = random.uniform(0, 1) < p + if not oracle: + docs[0] = chunks[random.sample(indices, 1)[0]] + random.shuffle(docs) + + d = { + "title": [], + "sentences": [] + } + + d["title"].append(["placeholder_title"]*(num_distract+1)) + d["sentences"].append(docs) + datapt["context"] = d + datapt["oracle_context"] = chunk + + # add answer to q + datapt["cot_answer"] = generate_label(client, q, chunk, doctype, model=model) + + # construct model instruction + context = "" + for doc in docs: + context += "" + str(doc) + "\n" + context += q + datapt["instruction"] = context + + # add to dataset + if not ds: + # init ds + datapt["id"] = [datapt["id"]] + datapt["type"] = [datapt["type"]] + datapt["question"] = [datapt["question"]] + datapt["context"] = [datapt["context"]] + datapt["oracle_context"] = [datapt["oracle_context"]] + datapt["cot_answer"] = [datapt["cot_answer"]] + datapt["instruction"] = [datapt["instruction"]] + ds = Dataset.from_dict(datapt) + else: + ds = ds.add_item(datapt) + +# This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. 
+async def LLM_judge_request(chat_service, api_context: dict, document_content: dict) -> dict: + prompt_for_system = api_context['judge_prompt_template'].format(language=api_context["language"]) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']} \n Teacher's Answer: {document_content['Ground_truth']}\n Student's Answer: {document_content['Generated_answer']} "}] + result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + if not result: + return {} + # no parsing needed, just return the loads the result as a dict + result = json.loads(result) + if "Result" not in result: + print("Error: eval response does not contain answer") + print(document_content,result) + return {} + return result + +async def generate_LLM_eval(chat_service, api_context: dict, judge_list: list): + eval_tasks = [] + for batch_index, batch_content in enumerate(judge_list): + try: + result = LLM_judge_request(chat_service, api_context, batch_content) + eval_tasks.append(result) + except Exception as e: + print(f"Error during data eval request execution: {e}") + + judge_results = await asyncio.gather(*eval_tasks) + return judge_results From 28b3f46a36360aabd4c1cbc233d5f59b8ca3c1d5 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 28 May 2024 16:34:33 -0700 Subject: [PATCH 14/35] updated requirement.txt --- recipes/use_cases/end2end-recipes/raft/raft_utils.py | 1 - requirements.txt | 6 ++++++ 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py index d8b04783a..4b23ddd77 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py +++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py @@ -169,7 +169,6 @@ async def generate_questions(chat_service, api_context: dict): return final_result def add_chunk_to_dataset( - client: None, chunks: list[str], chunk: str, x: int = 5, diff --git a/requirements.txt b/requirements.txt index b10a90072..496e2a470 100644 --- a/requirements.txt +++ b/requirements.txt @@ -26,3 +26,9 @@ aiofiles evaluate rouge_score bert_score +mdc +langchain_experimental +python-dotenv==1.0.1 +pyyaml==6.0.1 +coloredlogs==15.0.1 +sentence_transformers From c856052115280db5f7640dfe1d59593b9401a78c Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Wed, 29 May 2024 13:01:49 -0700 Subject: [PATCH 15/35] creation of raft dataset working --- .../use_cases/end2end-recipes/raft/raft.py | 47 +++++------- .../end2end-recipes/raft/raft_utils.py | 73 +++++++++++-------- 2 files changed, 61 insertions(+), 59 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py index 56ea4a81c..c3d5f175e 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.py +++ b/recipes/use_cases/end2end-recipes/raft/raft.py @@ -3,8 +3,6 @@ import logging from typing import Literal, Any from openai import OpenAI -import datasets -from datasets import Dataset, load_dataset import json import random import os, shutil @@ -15,44 +13,37 @@ from format import DatasetConverter, datasetFormats, outputDatasetTypes from config import load_config -# def generate_label(client: OpenAI, question: str, context: Any, doctype: DocType = "pdf", model: str = None) -> str | None: -# """ -# Generates the label / answer to `question` using `context` and GPT-4. 
-# """ -# question = encode_question(question, context) if doctype == "api" else encode_question_gen(question, context) -# response = client.chat.completions.create( -# model=model, -# messages=question, -# n=1, -# temperature=0 -# ) -# response = response.choices[0].message.content -# return response -# Configure logging to include the timestamp, log level, and message logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') - +NUM_DISTRACT_DOCS = 5 # number of distracting documents to add to each chunk +ORCALE_P = 0.8 # probability of related documents to be added to each chunk async def main(context): + ds = None if context["endpoint"]: chat_service = VllmChatService() else: chat_service = OctoAIChatService() try: logging.info("Starting to generate question pair.") - # Generate question/answer pairs as list - chunks = await generate_questions(chat_service, context) - if not chunks: + # Generate questions as list for each chunk + chunk_questions_zip = await generate_questions(chat_service, context) + if not chunk_questions_zip: logging.warning("No questions generated from text. Please check the input context or model configuration.") return - logging.info(f"Successfully generated {sum([len(q) for q in chunks])} question/answer pairs.") - print(chunks) - for i, chunk in enumerate(chunks): - perc = ceil(i / num_chunks * 100) - with MDC(progress=f"{perc}%"): - logger.info(f"Adding chunk {i}/{num_chunks}") - add_chunk_to_dataset(client, chunks, chunk, args.doctype, args.questions, NUM_DISTRACT_DOCS, model=args.completion_model) - + for chunk, questions in chunk_questions_zip: + logging.info(f"Chunk: {chunk}, question length: {len(questions)}") + for question in questions: + logging.info(f"Question: {question}") + logging.info(f"Successfully generated {sum([len(q) for c,q in chunk_questions_zip])} question/answer pairs.") + ds = await add_chunk_to_dataset(chunk_questions_zip,context, chat_service,ds,NUM_DISTRACT_DOCS, ORCALE_P) + print(ds[0]) + ds.save_to_disk(args.output) logging.info(f"Data successfully written to {context['output']}. 
Process completed.") + formatter = DatasetConverter() + + # Extract format specific params + format_params = {} + formatter.convert(ds=ds, format=args.output_format, output_path=args.output, output_type=args.output_type, params=format_params) except Exception as e: logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py index 4b23ddd77..ec7355f28 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py +++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py @@ -12,9 +12,11 @@ from doc_processor import split_text_into_chunks import logging import json -from langchain.embeddings import HuggingFaceEmbeddings +from langchain_community.embeddings import HuggingFaceEmbeddings from langchain_experimental.text_splitter import SemanticChunker from math import ceil +import datasets +from datasets import Dataset, load_dataset import random # Initialize logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') @@ -131,11 +133,11 @@ def get_chunks( return chunks # read all the files in the data folder, then split them into chunks -# generate questions for each chunk and return a list of questions list +# generate questions for each chunk and return zip of chunk and related questions list async def generate_questions(chat_service, api_context: dict): document_text = read_file_content(api_context) - if len(document_text)== 0: - logging.error(f"Error reading files, document_text is empty") + if len(document_text) == 0: + logging.info(f"Error reading files, document_text is {len(document_text)}") model_name = "sentence-transformers/all-mpnet-base-v2" embedding_model = HuggingFaceEmbeddings(model_name=model_name) document_batches = get_chunks(document_text,api_context["chunk_size"],embedding_model) @@ -148,12 +150,15 @@ async def generate_questions(chat_service, api_context: dict): for batch_index, batch_content in enumerate(document_batches): print(f"len of batch_content: {len(batch_content)}, batch_index: {batch_index}") #Distribute extra questions across the first few batches - print(f"Batch {batch_index + 1} - {api_context['questions_per_chunk']} questions ********") - try: - task = generate_question_request(chat_service, api_context, batch_content, api_context["questions_per_chunk"]) - generation_tasks.append(task) - except Exception as e: - print(f"Error during chat request execution: {e}") + if len(batch_content) < 10: + logging.info("Context is not enough, ignore this batch") + else: + print(f"Batch {batch_index + 1} - {api_context['questions_per_chunk']} questions ********") + try: + task = generate_question_request(chat_service, api_context, batch_content, api_context["questions_per_chunk"]) + generation_tasks.append(task) + except Exception as e: + print(f"Error during chat request execution: {e}") question_generation_results = await asyncio.gather(*generation_tasks) final_result = [] @@ -166,35 +171,44 @@ async def generate_questions(chat_service, api_context: dict): # if queries is more than questions_per_chunk, then we need to truncate it and only keep last questions_per_chunk lines queries = queries[-int(api_context['questions_per_chunk']):] final_result.append(queries) - return final_result + return list(zip(document_batches,final_result)) -def add_chunk_to_dataset( - chunks: list[str], - chunk: str, - x: int = 5, +async def generate_COT(chat_service, api_context: dict, document_content: 
str, question: str) -> dict: + prompt = api_context['COT_prompt_template'].format(question=question,context=str(document_content)) + chat_request_payload = [{"role": "system", "content": "You are a helpful question answerer who can provide an answer given a question and relevant context."}] + chat_request_payload.append({"role": "user", "content": prompt}) + response = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + return (document_content,question,response) +async def add_chunk_to_dataset( + chunk_questions_zip: list, + context: dict, + chat_service, + ds, num_distract: int = 3, p: float = 0.8, - model: str = None ) -> None: """ - Given a chunk, create {Q, A, D} triplets and add them to the dataset. + Given a chunk and related questions lists, create {Q, A, D} triplets and add them to the dataset. """ - global ds - i = chunks.index(chunk) - qs = generate_instructions(client, chunk, x, model) if doctype == "api" else generate_instructions_gen(client, chunk, x, model) - for q in qs: + COT_tasks = [] + chunks = [chunk for chunk, _ in chunk_questions_zip] + for i, chunk_questions in enumerate(chunk_questions_zip): + chunk, questions = chunk_questions + # generate COT answer for each question given the chunk context + for question in questions: + COT_tasks.append(generate_COT(chat_service, context, chunk, question)) + COT_results = await asyncio.gather(*COT_tasks) + for chunk, q , cot in COT_results: datapt = { "id": None, - "type": None, - "question": None, + "type": "general", + "question": q, "context": None, "oracle_context": None, - "cot_answer": None + "cot_answer": cot } - + i = chunks.index(chunk) datapt["id"] = f"seed_task_{0 if not ds else ds.num_rows}" - datapt["type"] = "api call" if doctype == "api" else "general" - datapt["question"] = q # add num_distract distractor docs docs = [chunk] @@ -218,9 +232,6 @@ def add_chunk_to_dataset( datapt["context"] = d datapt["oracle_context"] = chunk - # add answer to q - datapt["cot_answer"] = generate_label(client, q, chunk, doctype, model=model) - # construct model instruction context = "" for doc in docs: @@ -241,7 +252,7 @@ def add_chunk_to_dataset( ds = Dataset.from_dict(datapt) else: ds = ds.add_item(datapt) - + return ds # This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict. 
async def LLM_judge_request(chat_service, api_context: dict, document_content: dict) -> dict: prompt_for_system = api_context['judge_prompt_template'].format(language=api_context["language"]) From 7367f7eae68659d7cf073e044660f5e5436c5590 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Mon, 3 Jun 2024 09:39:57 -0700 Subject: [PATCH 16/35] adding raft_dataset for fine-tuning --- .../finetuning/datasets/chatbot_dataset.py | 26 +-- recipes/finetuning/datasets/raft_dataset.py | 55 +++++ .../use_cases/end2end-recipes/raft/README.md | 38 ++- .../end2end-recipes/raft/data/FAQ.md | 55 ----- .../end2end-recipes/raft/eval_config.yaml | 23 ++ .../end2end-recipes/raft/eval_raft.py | 219 ++++++++++++++++++ .../end2end-recipes/raft/eval_utils.py | 122 ++++++++++ .../use_cases/end2end-recipes/raft/raft.py | 2 +- .../use_cases/end2end-recipes/raft/raft.yaml | 26 ++- .../end2end-recipes/raft/raft_utils.py | 30 +-- 10 files changed, 487 insertions(+), 109 deletions(-) create mode 100644 recipes/finetuning/datasets/raft_dataset.py delete mode 100644 recipes/use_cases/end2end-recipes/raft/data/FAQ.md create mode 100644 recipes/use_cases/end2end-recipes/raft/eval_config.yaml create mode 100644 recipes/use_cases/end2end-recipes/raft/eval_raft.py create mode 100644 recipes/use_cases/end2end-recipes/raft/eval_utils.py diff --git a/recipes/finetuning/datasets/chatbot_dataset.py b/recipes/finetuning/datasets/chatbot_dataset.py index 7a893ec0d..9de06565c 100644 --- a/recipes/finetuning/datasets/chatbot_dataset.py +++ b/recipes/finetuning/datasets/chatbot_dataset.py @@ -11,24 +11,22 @@ B_INST, E_INST = "[INST]", "[/INST]" def tokenize_dialog(q_a_pair, tokenizer): - prompt_tokens = [tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(question).strip()} {E_INST}", add_special_tokens=False) for question in q_a_pair["Question"]] - answer_tokens = [tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) for answer in q_a_pair["Answer"]] - dialog_tokens = list(itertools.chain.from_iterable(zip(prompt_tokens, answer_tokens))) - dialog_tokens = list(itertools.chain.from_iterable(zip(prompt_tokens, answer_tokens))) - #Add labels, convert prompt token to -100 in order to ignore in loss function - labels_tokens = [len(c)*[-100,] if i % 2 == 0 else c for i,c in enumerate(dialog_tokens)] + question, answer = q_a_pair["Question"], q_a_pair["Answer"] + prompt_tokens = tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(question).strip()} {E_INST}", add_special_tokens=False) + answer_tokens = tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) + sample = { + "input_ids": prompt_tokens + answer_tokens, + "attention_mask" : [1] * (len(prompt_tokens) + len(answer_tokens)), + "labels": [-100] * len(prompt_tokens) + answer_tokens, + } - combined_tokens = { - "input_ids": list(itertools.chain(*(t for t in dialog_tokens))), - "labels": list(itertools.chain(*(t for t in labels_tokens))), - } - - return dict(combined_tokens, attention_mask=[1]*len(combined_tokens["input_ids"])) + return sample def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8): - dataset = load_dataset('json', data_files=dataset_config.data_path) - dataset = dataset['train'].train_test_split(test_size=1-split_ratio, shuffle=True) + dataset_dict = load_dataset('json', data_files=dataset_config.data_path) + dataset = dataset_dict['train'] + dataset = dataset.train_test_split(test_size=1-split_ratio, shuffle=True, seed=42) dataset = dataset[split].map(lambda sample: { "Question": sample["Question"], 
diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py new file mode 100644 index 000000000..f7feca6af --- /dev/null +++ b/recipes/finetuning/datasets/raft_dataset.py @@ -0,0 +1,55 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. + + +import copy +import datasets +from datasets import Dataset, load_dataset, DatasetDict +import itertools + + +B_INST, E_INST = "[INST]", "[/INST]" + +def raft_tokenize(q_a_pair, tokenizer): + # last line is the question + question = q_a_pair["instruction"].split('\n')[-1] + # all the lines before the last line are the context + documents = q_a_pair["instruction"].split('\n')[:-1] + # output is the label + answer = q_a_pair["output"] + system_prompt = "You are a helpful question answerer who can provide an answer given a question and relevant context." + user_prompt = prompt = """ + Question: {question}\nContext: {context}\n + Answer this question using the information given in the context above. Here is things to pay attention to: + - First provide step-by-step reasoning on how to answer the question. + - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context. + - End your response with final answer in the form : $answer, the answer should be succinct. + You MUST begin your final answer with the tag ":". + """.format(question=question, context=str(documents)) + final_prompt = system_prompt + '\n' + user_prompt + prompt_tokens = tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(final_prompt).strip()} {E_INST}", add_special_tokens=False) + answer_tokens = tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) + #Add labels, convert prompt token to -100 in order to ignore in loss function + sample = { + "input_ids": prompt_tokens + answer_tokens, + "attention_mask" : [1] * (len(prompt_tokens) + len(answer_tokens)), + "labels": [-100] * len(prompt_tokens) + answer_tokens, + } + + return sample + + +def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8): + # load_dataset will return DatasetDict that contains all the data in the train set + dataset_dict = load_dataset('json', data_files=dataset_config.data_path) + dataset = dataset_dict['train'] + dataset = dataset.train_test_split(test_size=1-split_ratio, shuffle=True, seed=42) + + dataset = dataset[split].map(lambda sample: { + "instruction": sample["instruction"], + "output": sample["cot_answer"], + }, + batched=True, + ) + dataset = dataset.map(lambda x: raft_tokenize(x, tokenizer)) + return dataset diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md index c42eda8e8..7a1b5f64e 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -1,4 +1,4 @@ -## End to End Steps to create a Chatbot using fine-tuning +## End to End Steps to create a Chatbot using Retrieval Augmented Fine Tuning(RAFT) ### Step 1 : Prepare related documents @@ -6,7 +6,7 @@ Download all your desired docs in PDF, Text or Markdown format to "data" folder In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code 
Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. -### Step 2 : Prepare RAFT data for fine-tuning +### Step 2 : Prepare RAFT dataset for fine-tuning To use Meta Llama 3 70B model for the RAFT datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. @@ -21,7 +21,7 @@ python generate_question_answers.py **NOTE** You need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. In our case we had to process each document at a time. Then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document anywhere between 50-100. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. -Alternatively we can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm openai compatible server that host Meta Llama 3 70B instruct locally, generate the Q&A pairs and apply self-curation to get the final dataset. +Alternatively we can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm openai compatible server that host Meta Llama 3 70B instruct locally, and generate the RAFT dataset. ```bash # Make sure VLLM has been installed @@ -36,15 +36,43 @@ Once the server is ready, we can query the server given the port number 8001 in python raft.py -v 8001 -t 5 ``` -This python program will read all the documents inside of "data" folder and split the data into batches by the chunk_size (default is 512) and apply the question_prompt_template, defined in "raft.yaml", to each batch. Then it will use each batch to query VLLM server and save the return a list of question list for each batch. +This python program will read all the documents inside of "data" folder and transform the text into embeddings and split the data into batches by the SemanticChunker. Then we apply the question_prompt_template, defined in "raft.yaml", to each batch, and finally we will use each batch to query VLLM server and save the return a list of question list for all batches. + +We now have a related context as text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style question using Llama 3 70B Instruct as well. 
Once we have the COT answers, we can start to make a dataset that contains "instruction" which includes some unrelated chunks called distractor and has a probability P to include the related chunk. + +```python +{ + 'id': 'seed_task_0', + 'type': 'general', + 'question': 'What is the official motto of the United States of America?', + 'context': { + 'sentences': [ + ["the Gulf of Mexico are prone to hurricanes, ... and enforces the Act. [ 189 ] As of 2022, the U. S", + "energy from fossil fuel and the largest ... there are 19, 969 airports in the U. S., of which 5, 193 are designated", + 'weaponry, ideology, and international i... and is a permanent member of the UN Security Council. The first documentary evidence of the phrase " United States', + '[CLS] United States of America Flag Coat of arms ... dominance in nuclear and conventional', + '##om ic soft pow er. [ 405 ] [ 406 ] Nearly all present ... rights in the United States are advanced by global standards.'] + ], + 'title': [ + ['placeholder_title', + 'placeholder_title', + 'placeholder_title', + 'placeholder_title', + 'placeholder_title'] + ] + }, + 'answer': '"In God We Trust"', + 'cot_answer': None +} +``` ### Step 3: Run the fune-tuning Once the dataset is ready, we can start the fine-tuning step using the following commands in the llama-recipe main folder: For distributed fine-tuning: ```bash -CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json' +CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir raft-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/hotpot_vicuna_cot.jsonl' ``` diff --git a/recipes/use_cases/end2end-recipes/raft/data/FAQ.md b/recipes/use_cases/end2end-recipes/raft/data/FAQ.md deleted file mode 100644 index 8c2e12a7c..000000000 --- a/recipes/use_cases/end2end-recipes/raft/data/FAQ.md +++ /dev/null @@ -1,55 +0,0 @@ -# FAQ - -Here we discuss frequently asked questions that may occur and we found useful along the way. - -1. Does FSDP support mixed precision in one FSDP unit? Meaning, in one FSDP unit some of the parameters are in Fp16/Bf16 and others in FP32. - - FSDP requires each FSDP unit to have consistent precision, so this case is not supported at this point. It might be added in future but no ETA at the moment. - -2. How does FSDP handles mixed grad requirements? - - FSDP does not support mixed `require_grad` in one FSDP unit. This means if you are planning to freeze some layers, you need to do it on the FSDP unit level rather than model layer. For example, let us assume our model has 30 decoder layers and we want to freeze the bottom 28 layers and only train 2 top transformer layers. In this case, we need to make sure `require_grad` for the top two transformer layers are set to `True`. 
- -3. How do PEFT methods work with FSDP in terms of grad requirements/layer freezing? - - We wrap the PEFT modules separate from the transformer layer in auto_wrapping policy, that would result in PEFT models having `require_grad=True` while the rest of the model is `require_grad=False`. - -4. Can I add custom datasets? - - Yes, you can find more information on how to do that [here](Dataset.md). - -5. What are the hardware SKU requirements for deploying these models? - - Hardware requirements vary based on latency, throughput and cost constraints. For good latency, the models were split across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs like A10G, T4, L4, or even commodity hardware can also be used to deploy these models (e.g. https://github.com/ggerganov/llama.cpp). - If working on a CPU, it is worth looking at this [blog post](https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html) from Intel for an idea of Llama 2's performance on a CPU. - -6. What are the hardware SKU requirements for fine-tuning Llama pre-trained models? - - Fine-tuning requirements vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types like NVIDIA A10G or H100 are definitely possible (e.g. alpaca models are trained on a single RTX4090: https://github.com/tloen/alpaca-lora). - -7. How to handle CUDA memory fragmentations during fine-tuning that may lead into an OOM? - - In some cases you may experience that after model checkpointing specially with FSDP (this usually does not happen with PEFT methods), the reserved and allocated CUDA memory has increased. This might be due to CUDA memory fragmentations. PyTorch recenly added an enviroment variable that helps to better manage memory fragmentation (this feature in available on PyTorch nightlies at the time of writing this doc July 30 2023). You can set this in your main training script as follows: - - ```bash - - os.environ['PYTORCH_CUDA_ALLOC_CONF']='expandable_segments:True' - - ``` - We also added this enviroment variable in `setup_environ_flags` of the [train_utils.py](../src/llama_recipes/utils/train_utils.py), feel free to uncomment it if required. - -8. Additional debugging flags? - - The environment variable `TORCH_DISTRIBUTED_DEBUG` can be used to trigger additional useful logging and collective synchronization checks to ensure all ranks are synchronized appropriately. `TORCH_DISTRIBUTED_DEBUG` can be set to either OFF (default), INFO, or DETAIL depending on the debugging level required. Please note that the most verbose option, DETAIL may impact the application performance and thus should only be used when debugging issues. - - We also added this enviroment variable in `setup_environ_flags` of the [train_utils.py](../src/llama_recipes/utils/train_utils.py), feel free to uncomment it if required. - -9. I am getting import errors when running inference. - - Verify that CUDA environment variables are set correctly on your machine. For example for bitsandbytes, you can generally set it as below to get things working on A100 80g's on AWS. 
- - ```bash - export CUDA_HOME="/usr/local/cuda-11.8" - export PATH=$CUDA_HOME/bin:$PATH - export LD_LIBRARY_PATH=$CUDA_HOME/lib:$CUDA_HOME/lib64:$CUDA_HOME/efa/lib:/opt/amazon/efa/lib:$LD_LIBRARY_PATH - ``` diff --git a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml new file mode 100644 index 000000000..87266d33c --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml @@ -0,0 +1,23 @@ +eval_prompt_template: > + You are a AI assistant that skilled in answering questions related to Llama language models, + which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2, + Below is a question from a llama user, think step by step and then answer it in {language}, make the answer as concise as possible, it should be at most 100 words. + Return the result with the template: + [ + {{ + "Question": "The question user asked to you" + "Answer": "Your answer to the question" + }} + ] +judge_prompt_template: > + You are provided with a question, a teacher's answer and a student's answer. Given that question, you need to score the how good the student answer is compare to + the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES. If the answer is not faithful, then return NO + and explain which part of the student's answer is not faithful in the Reason section. + Return the result in json format with the template: + {{ + "Reason": "your reason here.", + "Result": "YES or NO." + }} +eval_json: "./evalset.json" + +language: "English" diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/eval_raft.py new file mode 100644 index 000000000..7c1155617 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/eval_raft.py @@ -0,0 +1,219 @@ +# Copyright (c) Meta Platforms, Inc. and affiliates. +# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. +from chat_utils import OctoAIChatService, VllmChatService +import logging +import evaluate +import argparse +from config import load_config +import asyncio +import json +from itertools import chain +from generator_utils import parse_qa_to_json, generate_LLM_eval +from langchain_community.llms import VLLM +from langchain_community.embeddings import HuggingFaceEmbeddings +from langchain_community.vectorstores import FAISS +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain_community.document_loaders import DirectoryLoader +from langchain.chains import RetrievalQA + +from eval_utils import exact_match_score +def generate_answers_model_only(model_path): + # Use langchain to load the documents from data directory + # Load the RAFT model + llm = VLLM(model=model_path, + trust_remote_code=True, # mandatory for hf models + max_new_tokens=500, + top_p=1, + temperature=0.0, + # tensor_parallel_size=... 
# for distributed inference + ) + generated_answers = [] + for question in question_list: + result = llm.invoke(question) + generated_answers.append(result["answer"]) + return generated_answers +def generate_answers_with_RAG(model_path, data_dir,question_list): + # Use langchain to load the documents from data directory + loader = DirectoryLoader(data_dir) + docs = loader.load() + # Split the document into chunks with a specified chunk size + text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50) + all_splits = text_splitter.split_documents(docs) + + # Store the document into a vector store with a specific embedding model + vectorstore = FAISS.from_documents(all_splits, HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")) + # Load the RAFT model + llm = VLLM(model=model_path, + trust_remote_code=True, # mandatory for hf models + max_new_tokens=500, + top_p=1, + temperature=0.0, + # tensor_parallel_size=... # for distributed inference + ) + # Create a RetrievalQA chain with the vector store and RAFT model + qa_chain = RetrievalQA.from_chain_type( + llm, + retriever=vectorstore.as_retriever() + ) + generated_answers = [] + for question in question_list: + result = qa_chain({"query": question}) + generated_answers.append(result["answer"]) + return generated_answers +def compute_rouge_score(generated : str, reference: str): + rouge_score = evaluate.load('rouge') + return rouge_score.compute( + predictions=generated, + references=reference, + use_stemmer=True, + use_aggregator=True + ) +def compute_bert_score(generated : str, reference: str): + bertscore = evaluate.load("bertscore") + score = bertscore.compute( + predictions=generated, + references=reference, + lang="en" + ) + f1 = score["f1"] + precision = score["precision"] + recall = score["recall"] + return sum(precision)/len(precision), sum(recall)/len(recall), sum(f1)/len(f1) +# This function is used to eval the fine-tuned model, given the question, generate the answer. 
+async def eval_request(chat_service, api_context: dict, question: str) -> dict: + prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) + chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {question}"}] + # Getting a list of result, in this case, there should be only one result + response_string = await chat_service.execute_chat_request_async(api_context, chat_request_payload) + # convert the result string to a dict that contains Question, Answer + result_list = parse_qa_to_json(response_string) + if not result_list or len(result_list) > 1: + print("Error: eval response should be a list of one result dict") + return {} + result = result_list[0] + if "Answer" not in result: + print("Error: eval response does not contain answer") + return {} + # Send back the model generated answer + + return result["Answer"] + +async def generate_eval_answer(chat_service, api_context: dict, questions: list): + eval_tasks = [] + for batch_index, question in enumerate(questions): + try: + result = eval_request(chat_service, api_context, question) + eval_tasks.append(result) + except Exception as e: + print(f"Error during data eval request execution: {e}") + print(len(eval_tasks),"eval_tasks") + eval_results = await asyncio.gather(*eval_tasks) + + return eval_results + +async def main(context): + if context["endpoint"]: + chat_service = VllmChatService() + else: + chat_service = OctoAIChatService() + try: + logging.info("Starting to generate answer given the eval set.") + with open(context["eval_json"]) as fp: + eval_json = json.load(fp) + questions,groud_truth = [],[] + for index, item in enumerate(eval_json): + questions.append(item["question"]) + groud_truth.append(item["answer"]) + generated_answers = generate_answers_with_RAG(model_path, context,questions) + if not generated_answers: + logging.warning("No answers generated. 
Please check the input context or model configuration.") + return + logging.info(f"Successfully generated {len(generated_answers)} answers.") + judge_list = [] + for index, item in enumerate(generated_answers): + judge_list.append({"Question":questions[index],"Ground_truth":groud_truth[index],"Generated_answer":generated_answers[index]}) + if context["judge_endpoint"]: + # make a copy of the context then change the VLLM endpoint to judge_endpoint + context_copy = dict(context) + context_copy["endpoint"] = context["judge_endpoint"] + context_copy["model"] = "meta-llama/Meta-Llama-3-70B-Instruct" + judge_results = await generate_LLM_eval(chat_service, context_copy, judge_list) + correct_num = 0 + for result in judge_results: + correct_num += result["Result"] == "YES" + LLM_judge_score = correct_num/len(judge_results) + print(f"The accuracy of the model is {LLM_judge_score}") + rouge_score = compute_rouge_score(generated_answers,groud_truth) + print("Rouge_score:",rouge_score) + P, R, F1 = compute_bert_score(generated_answers,groud_truth) + print(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f}") + exact_match = 0 + for item in judge_list: + exact_match += exact_match_score(item['Generated_answer'],item['Ground_truth']) + exact_match_percentage = exact_match/len(judge_list) + print(f"Exact_match_percentage: {exact_match_percentage:.4f}") + # Saving the eval result to a log file + with open(context["output_log"],"a") as fp: + fp.write(f"Eval_result for {context['model']} \n") + fp.write(f"Rouge_score: {rouge_score} \n") + fp.write(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f} \n") + fp.write(f"Exact_match_percentage: {exact_match_percentage} \n") + if context["judge_endpoint"]: + fp.write(f"LLM_judge_score: {LLM_judge_score} \n") + fp.write(f"QA details: \n") + for item in judge_list: + fp.write(f"question: {item['Question']} \n") + fp.write(f"generated_answers: {item['Generated_answer']} \n") + fp.write(f"groud_truth: {item['Ground_truth']} \n") + fp.write("\n") + logging.info(f"Eval successfully, the eval result is saved to {context['output_log']}.") + except Exception as e: + logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) + +def parse_arguments(): + # Define command line arguments for the script + parser = argparse.ArgumentParser( + description="Generate question/answer pairs from documentation." + ) + parser.add_argument( + "-m", "--model", + default="chatbot", + help="Select the model to use for evaluation, this maybe a LoRA adapter." + ) + parser.add_argument( + "-c", "--config_path", + default="eval_config.yaml", + help="Set the configuration file path that has system prompt along with language, evalset path." + ) + parser.add_argument( + "-v", "--vllm_endpoint", + default=None, + type=int, + help="If a port is specified, then use local vllm endpoint for evaluations." + ) + parser.add_argument( + "-j", "--judge_endpoint", + default=None, + type=int, + help="If a port is specified, then use local vllm endpoint as judge LLM." + ) + parser.add_argument( + "-o", "--output_log", + default="eval_result.log", + help="save the eval result to a log file. 
Default is eval_result.log" + ) + return parser.parse_args() + +if __name__ == "__main__": + logging.info("Initializing the process and loading configuration...") + args = parse_arguments() + context = load_config(args.config_path) + context["model"] = args.model + context["endpoint"] = args.vllm_endpoint + context["judge_endpoint"] = args.judge_endpoint + context["output_log"] = args.output_log + if context["endpoint"]: + logging.info(f"Use local vllm service for eval at port: '{args.vllm_endpoint}'.") + if context["judge_endpoint"]: + logging.info(f"Use local vllm service for judge at port: '{args.judge_endpoint}'.") + asyncio.run(main(context)) diff --git a/recipes/use_cases/end2end-recipes/raft/eval_utils.py b/recipes/use_cases/end2end-recipes/raft/eval_utils.py new file mode 100644 index 000000000..291a13cb5 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/eval_utils.py @@ -0,0 +1,122 @@ +import sys +import ujson as json +import re +import string +from collections import Counter +import pickle + +def normalize_answer(s): + + def remove_articles(text): + return re.sub(r'\b(a|an|the)\b', ' ', text) + + def white_space_fix(text): + return ' '.join(text.split()) + + def remove_punc(text): + exclude = set(string.punctuation) + return ''.join(ch for ch in text if ch not in exclude) + + def lower(text): + return text.lower() + + return white_space_fix(remove_articles(remove_punc(lower(s)))) + + +def f1_score(prediction, ground_truth): + normalized_prediction = normalize_answer(prediction) + normalized_ground_truth = normalize_answer(ground_truth) + + ZERO_METRIC = (0, 0, 0) + + if normalized_prediction in ['yes', 'no', 'noanswer'] and normalized_prediction != normalized_ground_truth: + return ZERO_METRIC + if normalized_ground_truth in ['yes', 'no', 'noanswer'] and normalized_prediction != normalized_ground_truth: + return ZERO_METRIC + + prediction_tokens = normalized_prediction.split() + ground_truth_tokens = normalized_ground_truth.split() + common = Counter(prediction_tokens) & Counter(ground_truth_tokens) + num_same = sum(common.values()) + if num_same == 0: + return ZERO_METRIC + precision = 1.0 * num_same / len(prediction_tokens) + recall = 1.0 * num_same / len(ground_truth_tokens) + f1 = (2 * precision * recall) / (precision + recall) + return f1, precision, recall + + +def exact_match_score(prediction, ground_truth): + return (normalize_answer(prediction) == normalize_answer(ground_truth)) + +def update_answer(metrics, prediction, gold): + em = exact_match_score(prediction, gold) + f1, prec, recall = f1_score(prediction, gold) + metrics['em'] += float(em) + metrics['f1'] += f1 + metrics['prec'] += prec + metrics['recall'] += recall + return em, prec, recall + +def update_sp(metrics, prediction, gold): + cur_sp_pred = set(map(tuple, prediction)) + gold_sp_pred = set(map(tuple, gold)) + tp, fp, fn = 0, 0, 0 + for e in cur_sp_pred: + if e in gold_sp_pred: + tp += 1 + else: + fp += 1 + for e in gold_sp_pred: + if e not in cur_sp_pred: + fn += 1 + prec = 1.0 * tp / (tp + fp) if tp + fp > 0 else 0.0 + recall = 1.0 * tp / (tp + fn) if tp + fn > 0 else 0.0 + f1 = 2 * prec * recall / (prec + recall) if prec + recall > 0 else 0.0 + em = 1.0 if fp + fn == 0 else 0.0 + metrics['sp_em'] += em + metrics['sp_f1'] += f1 + metrics['sp_prec'] += prec + metrics['sp_recall'] += recall + return em, prec, recall + +def eval(prediction, gold): + + metrics = {'em': 0, 'f1': 0, 'prec': 0, 'recall': 0, + 'sp_em': 0, 'sp_f1': 0, 'sp_prec': 0, 'sp_recall': 0, + 'joint_em': 0, 'joint_f1': 0, 
'joint_prec': 0, 'joint_recall': 0} + for dp in gold: + cur_id = dp['_id'] + can_eval_joint = True + if cur_id not in prediction['answer']: + print('missing answer {}'.format(cur_id)) + can_eval_joint = False + else: + em, prec, recall = update_answer( + metrics, prediction['answer'][cur_id], dp['answer']) + if cur_id not in prediction['sp']: + print('missing sp fact {}'.format(cur_id)) + can_eval_joint = False + else: + sp_em, sp_prec, sp_recall = update_sp( + metrics, prediction['sp'][cur_id], dp['supporting_facts']) + + if can_eval_joint: + joint_prec = prec * sp_prec + joint_recall = recall * sp_recall + if joint_prec + joint_recall > 0: + joint_f1 = 2 * joint_prec * joint_recall / (joint_prec + joint_recall) + else: + joint_f1 = 0. + joint_em = em * sp_em + + metrics['joint_em'] += joint_em + metrics['joint_f1'] += joint_f1 + metrics['joint_prec'] += joint_prec + metrics['joint_recall'] += joint_recall + + N = len(gold) + for k in metrics.keys(): + metrics[k] /= N + + return metrics diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py index c3d5f175e..3b27de51f 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.py +++ b/recipes/use_cases/end2end-recipes/raft/raft.py @@ -43,7 +43,7 @@ async def main(context): # Extract format specific params format_params = {} - formatter.convert(ds=ds, format=args.output_format, output_path=args.output, output_type=args.output_type, params=format_params) + formatter.convert(ds=ds, format=args.output_format, output_path=args.output+"raft", output_type=args.output_type, params=format_params) except Exception as e: logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml index eee128ca2..e667982d8 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.yaml +++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml @@ -6,14 +6,26 @@ COT_prompt_template: > - End your response with final answer in the form : $answer, the answer should be succinct. You MUST begin your final answer with the tag ": +# question_prompt_template: > +# You are a synthetic question-answer pair generator. Given a chunk of context about +# some topic(s), generate {num_questions} example questions a user could ask and would be answered +# \using information from the chunk. For example, if the given context was a Wikipedia +# paragraph about the United States, an example question could be 'How many states are +# in the United States? +# The questions should be able to be answered in a few words or less. Include only the +# questions in your response. question_prompt_template: > - You are a synthetic question-answer pair generator. Given a chunk of context about - some topic(s), generate {num_questions} example questions a user could ask and would be answered - \using information from the chunk. For example, if the given context was a Wikipedia - paragraph about the United States, an example question could be 'How many states are - in the United States? - The questions should be able to be answered in a few words or less. Include only the - questions in your response. + You are a language model skilled in creating quiz questions. 
+
+  You will be provided with a document. Read it and generate question and answer pairs that are
+  most likely to be asked by a user of Llama language models, which include Llama, Llama 2,
+  Meta Llama 3, Code Llama, Meta Llama Guard 1 and Meta Llama Guard 2.
+  Output only question and answer pairs that are related to Llama.
+  Please make sure you follow those rules:
+  1. Generate {num_questions} question and answer pairs. You can generate fewer pairs if there is nothing related to the model, training, fine-tuning and evaluation details of Llama language models.
+  2. The questions can be answered based *solely* on the given passage.
+  3. Avoid asking questions with similar meaning.
+  4. Never use any abbreviation.
+  5. Include only the question and answer pairs in your response.

 data_dir: "./data"
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
index ec7355f28..304f37a72 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
@@ -96,7 +96,7 @@ def read_file_content(context):
         file_strings.append(file_text)
     text = '\n'.join(file_strings)
     text = remove_non_printable(text)
-    return remove_non_printable(text)
+    return text

 def remove_non_printable(s):
     printable = set(string.printable)
@@ -199,6 +199,7 @@ async def add_chunk_to_dataset(
             COT_tasks.append(generate_COT(chat_service, context, chunk, question))
     COT_results = await asyncio.gather(*COT_tasks)
     for chunk, q , cot in COT_results:
+        # The COT answer will be used in the fine-tuning stage
         datapt = {
             "id": None,
             "type": "general",
@@ -237,6 +238,7 @@ async def add_chunk_to_dataset(
             for doc in docs:
                 context += "" + str(doc) + "\n"
             context += q
+            # This instruction will be used in the fine-tuning stage
             datapt["instruction"] = context

             # add to dataset
@@ -253,29 +255,3 @@ async def add_chunk_to_dataset(
         else:
             ds = ds.add_item(datapt)
     return ds
-# This function is used to evaluate the quality of generated QA pairs. Return the original QA pair if the model eval result is YES. Otherwise, return an empty dict.
-async def LLM_judge_request(chat_service, api_context: dict, document_content: dict) -> dict: - prompt_for_system = api_context['judge_prompt_template'].format(language=api_context["language"]) - chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']} \n Teacher's Answer: {document_content['Ground_truth']}\n Student's Answer: {document_content['Generated_answer']} "}] - result = await chat_service.execute_chat_request_async(api_context, chat_request_payload) - if not result: - return {} - # no parsing needed, just return the loads the result as a dict - result = json.loads(result) - if "Result" not in result: - print("Error: eval response does not contain answer") - print(document_content,result) - return {} - return result - -async def generate_LLM_eval(chat_service, api_context: dict, judge_list: list): - eval_tasks = [] - for batch_index, batch_content in enumerate(judge_list): - try: - result = LLM_judge_request(chat_service, api_context, batch_content) - eval_tasks.append(result) - except Exception as e: - print(f"Error during data eval request execution: {e}") - - judge_results = await asyncio.gather(*eval_tasks) - return judge_results From f44281aaff6579f29d975d23f98ab3fa1a5566fd Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 4 Jun 2024 16:23:16 -0700 Subject: [PATCH 17/35] better evalset and working pipeline --- recipes/finetuning/datasets/raft_dataset.py | 53 +++- .../chatbot/pipelines/evalset.json | 92 +++--- .../use_cases/end2end-recipes/raft/README.md | 20 +- .../end2end-recipes/raft/eval_config.yaml | 7 + .../end2end-recipes/raft/eval_raft.py | 295 +++++++++++------- .../end2end-recipes/raft/eval_utils.py | 122 -------- .../end2end-recipes/raft/evalset.json | 158 ++++++++++ requirements.txt | 2 + 8 files changed, 449 insertions(+), 300 deletions(-) delete mode 100644 recipes/use_cases/end2end-recipes/raft/eval_utils.py create mode 100644 recipes/use_cases/end2end-recipes/raft/evalset.json diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py index f7feca6af..e50d97344 100644 --- a/recipes/finetuning/datasets/raft_dataset.py +++ b/recipes/finetuning/datasets/raft_dataset.py @@ -7,9 +7,43 @@ from datasets import Dataset, load_dataset, DatasetDict import itertools - B_INST, E_INST = "[INST]", "[/INST]" +def tokenize_dialog(dialog, tokenizer): + # If vocab size is above 128000, use the chat template to generate the tokens as it is from Llama 3 family models + if tokenizer.vocab_size >= 128000: + dialog_tokens = tokenizer.apply_chat_template(dialog) + dialog_tokens = dialog_tokens[:-4] # Remove generation prompt <|start_header_id|>assistant<|end_header_id|>\n\n + eot_indices = [i for i,n in enumerate(dialog_tokens) if n == 128009] + labels = copy.copy(dialog_tokens) + last_idx = 0 + for n, idx in enumerate(eot_indices): + if n % 2 == 1: + last_idx = idx + else: + labels[last_idx:idx+1] = [-100] * (idx-last_idx+1) + + dialog_tokens = [dialog_tokens] + labels_tokens = [labels] + else: + # Otherwise, use the original tokenizer to generate the tokens as it is from Llama 2 family models + prompt_tokens = [tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(prompt['content']).strip()} {E_INST}", add_special_tokens=False) for prompt in dialog[:2]] + answer = dialog[-1] + answer_tokens = tokenizer.encode(f"{answer['content'].strip()} {tokenizer.eos_token}", add_special_tokens=False) + + #Add labels, convert prompt token to -100 in order to ignore in 
loss function + sample = { + "input_ids": prompt_tokens + answer_tokens, + "attention_mask" : [1] * (len(prompt_tokens) + len(answer_tokens)), + "labels": [-100] * len(prompt_tokens) + answer_tokens, + } + return sample + combined_tokens = { + "input_ids": list(itertools.chain(*(t for t in dialog_tokens))), + "labels": list(itertools.chain(*(t for t in labels_tokens))), + } + + return dict(combined_tokens, attention_mask=[1]*len(combined_tokens["input_ids"])) def raft_tokenize(q_a_pair, tokenizer): # last line is the question question = q_a_pair["instruction"].split('\n')[-1] @@ -26,17 +60,12 @@ def raft_tokenize(q_a_pair, tokenizer): - End your response with final answer in the form : $answer, the answer should be succinct. You MUST begin your final answer with the tag ":". """.format(question=question, context=str(documents)) - final_prompt = system_prompt + '\n' + user_prompt - prompt_tokens = tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(final_prompt).strip()} {E_INST}", add_special_tokens=False) - answer_tokens = tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False) - #Add labels, convert prompt token to -100 in order to ignore in loss function - sample = { - "input_ids": prompt_tokens + answer_tokens, - "attention_mask" : [1] * (len(prompt_tokens) + len(answer_tokens)), - "labels": [-100] * len(prompt_tokens) + answer_tokens, - } - - return sample + chat = [ + {"role": "system", "content": system_prompt}, + {"role": "user", "content": user_prompt}, + {"role": "assistant", "content": answer} + ] + return tokenize_dialog(chat, tokenizer) def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8): diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json b/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json index efe72b72b..150851f43 100644 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json +++ b/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json @@ -1,11 +1,47 @@ [ + { + "question":"What is llama-recipes?", + "answer": "The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem." + }, { "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?", "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken. Llama 3 also introduces a ChatFormat class, special tokens, including those for end-of-turn markers and other features to enhance support for chat-based interactions and dialogue processing." }, { - "question":"How many tokens were used in Llama 3 pretrain?", - "answer": "Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources." + "question":"How many tokens were used in Meta Llama 3 pretrain?", + "answer": "Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources." + }, + { + "question":"How many tokens were used in Llama 2 pretrain?", + "answer": "Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources." 
+  },
+  {
+    "question":"What is the name of the license agreement that Meta Llama 3 is under?",
+    "answer": "Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
+  },
+  {
+    "question":"What is the name of the license agreement that Llama 2 is under?",
+    "answer": "LLAMA 2 COMMUNITY LICENSE AGREEMENT."
+  },
+  {
+    "question":"What is the context length of Llama 2 models?",
+    "answer": "Llama 2's context length is 4k tokens."
+  },
+  {
+    "question":"What is the context length of Meta Llama 3 models?",
+    "answer": "Meta Llama 3's context length is 8k tokens."
+  },
+  {
+    "question":"When was Llama 2 trained?",
+    "answer": "Llama 2 was trained between January 2023 and July 2023."
+  },
+  {
+    "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA)?",
+    "answer": "Llama 2 70B"
+  },
+  {
+    "question":"What are the names of the Meta Llama 3 models that use Grouped-Query Attention (GQA)?",
+    "answer": "Meta Llama 3 8B and Meta Llama 3 70B"
+  },
 {
    "question": "what are the goals for Llama 3",
    "answer": "With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
},
{
"question": "What if I want to access Llama models but I’m not sure if my use is permitted under the Llama 2 Community License?",
"answer": "On a limited case by case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request."
},
{
-"question": "Why are you not sharing the training datasets for Llama?",
+"question": "Why is Meta not sharing the training datasets for Llama?",
"answer": "We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
},
{
-"question": "Did we use human annotators to develop the data for our models?",
+"question": "Did Meta use human annotators to develop the data for Llama models?",
"answer": "Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
},
{
"question": "Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
"answer": "It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
},
{
-"question": "What operating systems (OS) are officially supported?",
+"question": "What operating systems (OS) are officially supported if I want to use a Llama model?",
"answer": "For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
},
{
-"question": "I am getting 'Issue with the URL' as an error message. What should I do?",
+"question": "I am getting 'Issue with the URL' as an error message when I want to download a Llama model. What should I do?",
"answer": "This issue occurs because of not copying the URL correctly. If you right click on the link and copy the link, the link may be copied with URL Defense wrapper. To avoid this issue, select the URL manually and copy it."
}, { @@ -40,7 +76,7 @@ "answer": "The model was primarily trained on English with a bit of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You’ll find the full list of languages referenced in the research paper. You can look at some of the community lead projects to fine-tune Llama 2 models to support other languages. (eg: link)" }, { -"question": "If I’m a developer/business, how can I access the models?", +"question": "If I’m a developer/business, how can I access the Llama models?", "answer": "Details on how to access the models are available on our website link. Please note that the models are subject to the acceptable use policy and the provided responsible use guide. Models are available through multiple sources but the place to start is at https://llama.meta.com/ Model code, quickstart guide and fine-tuning examples are available through our Github Llama repository. Model Weights are available through an email link after the user submits a sign-up form. Models are also being hosted by Microsoft, Amazon Web Services, and Hugging Face, and may also be available through other hosting providers in the future." }, { @@ -48,7 +84,7 @@ "answer": "Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website and licensed under the applicable Llama Community License Agreement, which provides a permissive license to the models along with certain restrictions to help ensure that the models are being used responsibly." }, { -"question": "What are the hardware SKU requirements for deploying these models?", +"question": "What are the hardware SKU requirements for deploying Llama models?", "answer": "Hardware requirements vary based on latency, throughput and cost constraints. For good latency, we split models across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs, or even commodity hardware can also be used to deploy these models (e.g. llama cpp, MLC LLM)." }, { @@ -56,7 +92,7 @@ "answer": "Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text." }, { -"question": "Does the model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?", +"question": "Does the Llama model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?", "answer": "The vanilla model of Llama does not, however, the Code Llama models have been trained with fill-in-the-middle completion to assist with tasks like code completion." }, { @@ -68,7 +104,7 @@ "answer": "The model itself supports these parameters, but whether they are exposed or not depends on implementation." }, { -"question": "What is the most effective RAG method paired with LIama models?", +"question": "What is the most effective RAG method paired with Llama models?", "answer": "There are many ways to use RAG with Llama. The most popular libraries are LangChain and LlamaIndex, and many of our developers have used them successfully with Llama 2. (See the LangChain and LlamaIndex sections of this document)." }, { @@ -76,19 +112,15 @@ "answer": "You can find steps on how to set up an EC2 instance in the AWS section of this document here." 
}, { -"question": "What is the right size of EC2 instances needed for running each of the llama models?", -"answer": "The AWS section of this document has some insights on instance size that you can start with. You can find the section here." -}, -{ -"question": "Should we start training with the base or instruct/chat model?", +"question": "Should we start training with the base or instruct/chat model when using Llama model?", "answer": "This depends on your application. The Llama pre-trained models were trained for general large language applications, whereas the Llama instruct or chat models were fine tuned for dialogue specific uses like chat bots." }, { -"question": "I keep getting a 'CUDA out of memory' error.", +"question": "I keep getting a 'CUDA out of memory' error, when using Llama models, what should I do" , "answer": "This error can be caused by a number of different factors including, model size being too large, in-efficient memory usage and so on. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. 1. Ensure your GPU has enough memory 2. Reduce the batch_size 3. Lower the Precision 4. Clear cache 5. Modify the Model/Training" }, { -"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama+Retrieval?", +"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama model with Retrieval?", "answer": "If multiple calls are necessary then you could look into the following: 1. Optimize inference so each call has less latency. 2. Merge the calls into fewer calls. For example summarize the data and utilize the summary. 3. Possibly utilize Llama 2 function calling. 4. Consider fine-tuning the model with the updated data." }, { @@ -109,30 +141,30 @@ }, { "question": "What are the hardware SKU requirements for fine-tuning Llama pre-trained models?", -"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090: (https://github.com/tloen/alpaca-lora)" +"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090:https://github.com/tloen/alpaca-lora)" }, { -"question": "What Fine-tuning tasks would these models support?", +"question": "What Fine-tuning tasks would the Llama models support?", "answer": "The Lama 2 fine-tuned models were fine tuned for dialogue specific uses like chat bots." }, { -"question": "Are there examples on how one can fine-tune the models?", +"question": "Are there examples on how one can fine-tune the Llama models?", "answer": "You can find example fine-tuning scripts in the Github recipes repository. You can also review the fine-tuning section in this document." 
}, { -"question": "What is the difference between a pre-trained and fine-tuned model?", +"question": "What is the difference between a pre-trained and fine-tuned Llama model?", "answer": "The Llama pre-trained models were trained for general large language applications, whereas the Llama chat or instruct models were fine tuned for dialogue specific uses like chat bots." }, { -"question": "How should we think about post processing (validate generated data) as a way to fine tune models?", +"question": "How should we think about post processing (validate generated data) as a way to fine tune Llama models?", "answer": "Essentially having a truthful data on the specific application can be helpful to reduce the risk on a specific application. Also setting some sort of threshold such as prob>90% might be helpful to get more confidence in the output." }, { -"question": "What are the different libraries that we recommend for fine tuning?", +"question": "What are the different libraries that we recommend for fine tuning when using Llama models?", "answer": "You can find some fine-tuning recommendations in the Github recipes repository as well as the fine-tuning section of this document." }, { -"question": "How can we identify the right ‘r’ value for LORA method for a certain use-case?", +"question": "How can we identify the right ‘r’ value for LORA method for a certain use-case when using Llama models?", "answer": "The best approach would be to review the LoRA research paper for more information on the rankings, then reviewing similar implementations for other models and finally experimenting." }, { @@ -140,19 +172,7 @@ "answer": "Take a look at the Fine tuning section in our Getting started with Llama guide of this document for some pointers towards fine tuning." }, { -"question": "Strategies to help models handle longer conversations?", -"answer": "You can find some helpful information towards this in the Prompting and LangChain sections of this document." -}, -{ "question": "Are Llama models open source? What is the exact license these models are published under?", "answer": "Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. For more details, our licenses can be found at (https://llama.meta.com/license/) (Meta Llama 2) and (https://llama.meta.com/llama3/license/) (Meta Llama 3)." -}, -{ -"question": "Are there examples that help licensees better understand how “MAU” is defined?", -"answer": "'MAU' means 'monthly active users' that access or use your (and your affiliates’) products and services. Examples include users accessing an internet-based service and monthly users/customers of licensee’s hardware devices." -}, -{ -"question": "Does the Critical Infrastructure restriction in the acceptable use policy (AUP) prevent companies who have special critical infrastructure certification (e.g., a registered operator of “critical infrastructure” under the German BSI Act) from using Llama?", -"answer": "No, such companies are not prohibited when their usage of Llama is not related to the operation of critical infrastructure. Llama, however, may not be used in the operation of critical infrastructure by any company, regardless of government certifications." 
}
]
diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md
index 7a1b5f64e..f23e079d2 100644
--- a/recipes/use_cases/end2end-recipes/raft/README.md
+++ b/recipes/use_cases/end2end-recipes/raft/README.md
@@ -72,7 +72,7 @@ Once the dataset is ready, we can start the fine-tuning step using the following
 For distributed fine-tuning:
 ```bash
-CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir raft-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py"  --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/hotpot_vicuna_cot.jsonl'
+CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir raft-8b --num_epochs 5 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py"  --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/raft.jsonl'
 ```

@@ -97,7 +97,7 @@ Once we have the fine-tuned model, we now need to evaluate it to understand its
 First we need to start the VLLM servers to host our fine-tuned 8B model. Since we used the peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature: the VLLM server will first load the original model and then apply our LoRA adapter weights. Then we can feed the evalset.json file into the VLLM servers and start the comparison evaluation. Notice that our fine-tuned model is now served as "raft-8b" instead of "meta-llama/Meta-Llama-3-8B-Instruct".
 ```bash
-python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules chatbot=./chatbot-8b --port 8000  --disable-log-requests
+python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules raft-8b=./raft-8b --port 8000  --disable-log-requests
 ```
 **NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", it means that VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source.

@@ -111,33 +111,23 @@ VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
 On another terminal, we can go to the recipes/use_cases/end2end-recipes/raft folder to start our eval script.

 ```bash
-python eval_chatbot.py -m chatbot -v 8000
+python eval_raft.py -m raft-8b -v 8000
 ```

-We can also quickly compare our fine-tuned chatbot model with original Meta Llama 3 8B Instruct model using
-
-```bash
-python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000
-```

 Lastly, we can use another Meta Llama 3 70B Instruct model as a judge that compares the answers from the fine-tuned 8B model with the ground truth and produces a score; a minimal sketch of this judge flow is shown below.
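The judge step itself is just one LLM call per question that must come back as a JSON verdict (the shape follows `judge_prompt_template` in `eval_config.yaml`). Below is a minimal sketch of that loop, assuming a local vLLM OpenAI-compatible server on port 8002 serving the 70B judge; `judge_one` is a hypothetical helper for illustration, not part of the recipe:

```python
import json
from langchain_community.llms import VLLMOpenAI

# Judge LLM served by vLLM's OpenAI-compatible API (port 8002, as in the command below).
judge = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8002/v1",
    model_name="meta-llama/Meta-Llama-3-70B-Instruct",
    temperature=0.0,
)

def judge_one(judge_prompt: str, question: str, gold: str, pred: str) -> bool:
    # The prompt asks the judge to reply with {"Reason": "...", "Result": "YES or NO"}.
    message = f"{judge_prompt}\nQuestion: {question} \n Teacher's Answer: {gold} \n Student's Answer: {pred}"
    verdict = json.loads(judge.invoke(message))
    return verdict.get("Result") == "YES"
```

The reported LLM-judge score is then simply the fraction of YES verdicts across all question/answer pairs in the eval set.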
To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the following command; just make sure the port is not already in use:

```bash
-CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
+CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8002
```

Then we can pass the port to the eval script:

```bash
-python eval_chatbot.py -m chatbot -v 8000 -j 8001
+python eval_raft.py -m raft-8b -v 8000 -j 8002
```

-and similarily get the eval result for the original model:
-
-```bash
-python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000 -j 8001
-```
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
index 87266d33c..f040188c0 100644
--- a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
@@ -18,6 +18,13 @@ judge_prompt_template: >
    "Reason": "your reason here.",
    "Result": "YES or NO."
    }}
+
 eval_json: "./evalset.json"

 language: "English"
+
+raft_model_name: "raft-8b"
+
+base_model_name: "meta-llama/Meta-Llama-3-8B-Instruct"
+
+data_dir: "./data"
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/eval_raft.py
index 7c1155617..32a3ff777 100644
--- a/recipes/use_cases/end2end-recipes/raft/eval_raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/eval_raft.py
@@ -5,34 +5,37 @@
 import evaluate
 import argparse
 from config import load_config
-import asyncio
 import json
 from itertools import chain
-from generator_utils import parse_qa_to_json, generate_LLM_eval
-from langchain_community.llms import VLLM
+from langchain_community.llms import VLLMOpenAI
 from langchain_community.embeddings import HuggingFaceEmbeddings
 from langchain_community.vectorstores import FAISS
 from langchain.text_splitter import RecursiveCharacterTextSplitter
 from langchain_community.document_loaders import DirectoryLoader
 from langchain.chains import RetrievalQA
+from langchain_core.messages import HumanMessage, SystemMessage
+import re
+import string
+from collections import Counter

-from eval_utils import exact_match_score
-def generate_answers_model_only(model_path):
+def generate_answers_model_only(model_name,question_list,api_url="http://localhost:8000/v1",key="EMPTY"):
    # Query the served model directly, without any RAG context
-    llm = VLLM(model=model_path,
-        trust_remote_code=True,  # mandatory for hf models
-        max_new_tokens=500,
-        top_p=1,
-        temperature=0.0,
-        # tensor_parallel_size=... # for distributed inference
-    )
+    llm = VLLMOpenAI(
+        openai_api_key=key,
+        openai_api_base=api_url,
+        model_name=model_name,
+        model_kwargs={"stop": ["."]},  # generation stops at the first period
+        temperature=0.0,)
    generated_answers = []
    for question in question_list:
-        result = llm.invoke(question)
-        generated_answers.append(result["answer"])
+        response = llm.invoke(question)
+        generated_answers.append(response)
+    if len(generated_answers) == 0:
+        logging.error("No model answers generated. Please check the input context or model configuration for %s", model_name)
+        return []
    return generated_answers
-def generate_answers_with_RAG(model_path, data_dir,question_list):
+def generate_answers_with_RAG(model_name, data_dir,question_list, api_url="http://localhost:8000/v1",key="EMPTY"):
    # Use langchain to load the documents from data directory
    loader = DirectoryLoader(data_dir)
    docs = loader.load()
@@ -43,13 +46,12 @@ def generate_answers_with_RAG(model_path, data_dir,question_list):
    # Store the document into a vector store with a specific embedding model
    vectorstore = FAISS.from_documents(all_splits, HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2"))
    # Load the RAFT model
-    llm = VLLM(model=model_path,
-        trust_remote_code=True,  # mandatory for hf models
-        max_new_tokens=500,
-        top_p=1,
-        temperature=0.0,
-        # tensor_parallel_size=... # for distributed inference
-    )
+    llm = VLLMOpenAI(
+        openai_api_key=key,
+        openai_api_base=api_url,
+        model_name=model_name,
+        model_kwargs={"stop": ["."]},
+        temperature=0.0,)
    # Create a RetrievalQA chain with the vector store and RAFT model
    qa_chain = RetrievalQA.from_chain_type(
        llm,
@@ -57,10 +59,13 @@ def generate_answers_with_RAG(model_path, data_dir,question_list):
    )
    generated_answers = []
    for question in question_list:
-        result = qa_chain({"query": question})
-        generated_answers.append(result["answer"])
+        response = qa_chain({"query": question})
+        generated_answers.append(response['result'])
+    if len(generated_answers) == 0:
+        logging.error("No RAG answers generated. Please check the input context or model configuration for %s", model_name)
+        return []
    return generated_answers
-def compute_rouge_score(generated : str, reference: str):
+def compute_rouge_score(generated : list, reference: list):
    rouge_score = evaluate.load('rouge')
    return rouge_score.compute(
        predictions=generated,
@@ -68,7 +73,41 @@ def compute_rouge_score(generated : str, reference: str):
        use_stemmer=True,
        use_aggregator=True
    )
-def compute_bert_score(generated : str, reference: str):
+def remove_special_tokens(text_list):
+    clean_text_list = []
+    for text in text_list:
+        text = text.replace("##begin_quote##","")
+        text = text.replace("##end_quote##","")
+        text = text.strip()
+        clean_text_list.append(text)
+    return clean_text_list
+
+def normalize_answer(s):
+
+    def remove_articles(text):
+        return re.sub(r'\b(a|an|the)\b', ' ', text)
+
+    def white_space_fix(text):
+        return ' '.join(text.split())
+
+    def remove_punc(text):
+        exclude = set(string.punctuation)
+        return ''.join(ch for ch in text if ch not in exclude)
+
+    def lower(text):
+        return text.lower()
+
+    return white_space_fix(remove_articles(remove_punc(lower(s))))
+
+def exact_match_score(prediction, ground_truth):
+    """Computes the average exact-match score over parallel lists of predictions and ground-truth answers."""
+    num_match = 0
+    assert len(prediction) == len(ground_truth), "Answer length does not match prediction length."
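+    # zip() below would silently truncate mismatched lists, hence the explicit length
+    # checks; each pair is compared only after normalize_answer() strips case,
+    # punctuation and articles from both strings.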
+ assert(len(ground_truth) > 0) + for idx, (pred,gold) in enumerate(zip(prediction, ground_truth)): + if (normalize_answer(pred) == normalize_answer(gold)): + num_match += 1 + return num_match/len(ground_truth) +def compute_bert_score(generated : list, reference: list): bertscore = evaluate.load("bertscore") score = bertscore.compute( predictions=generated, @@ -79,44 +118,65 @@ def compute_bert_score(generated : str, reference: str): precision = score["precision"] recall = score["recall"] return sum(precision)/len(precision), sum(recall)/len(recall), sum(f1)/len(f1) -# This function is used to eval the fine-tuned model, given the question, generate the answer. -async def eval_request(chat_service, api_context: dict, question: str) -> dict: - prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"]) - chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {question}"}] - # Getting a list of result, in this case, there should be only one result - response_string = await chat_service.execute_chat_request_async(api_context, chat_request_payload) - # convert the result string to a dict that contains Question, Answer - result_list = parse_qa_to_json(response_string) - if not result_list or len(result_list) > 1: - print("Error: eval response should be a list of one result dict") - return {} - result = result_list[0] - if "Answer" not in result: - print("Error: eval response does not contain answer") - return {} - # Send back the model generated answer - - return result["Answer"] - -async def generate_eval_answer(chat_service, api_context: dict, questions: list): - eval_tasks = [] - for batch_index, question in enumerate(questions): - try: - result = eval_request(chat_service, api_context, question) - eval_tasks.append(result) - except Exception as e: - print(f"Error during data eval request execution: {e}") - print(len(eval_tasks),"eval_tasks") - eval_results = await asyncio.gather(*eval_tasks) - - return eval_results - -async def main(context): - if context["endpoint"]: - chat_service = VllmChatService() - else: - chat_service = OctoAIChatService() +def compute_judge_score(questions: list, generated : list, reference: list, context,api_url="http://localhost:8001/v1",key="EMPTY"): + correct_num = 0 + model_name = "meta-llama/Meta-Llama-3-70B-Instruct" + llm = VLLMOpenAI( + openai_api_key=key, + openai_api_base=api_url, + model_name=model_name, + model_kwargs={"stop": ["."]}, + temperature=0.0,) + for q,pred,gold in zip(questions, generated,reference): + # messages = [ + # SystemMessage(content=context['judge_prompt_template']), + # HumanMessage(content=f"Question: {q} \n Teacher's Answer: {gold} \n Student's Answer: {pred} "), + # ] + messages = context['judge_prompt_template'] + "\n" + messages += f"Question: {q} \n Teacher's Answer: {gold} \n Student's Answer: {pred} " + response = llm.invoke(messages) + print(response+ " -------------") + result = json.loads(response) + if "Result" not in result: + print("Error: eval response does not contain answer") + print(result) + continue + correct_num += result["Result"] == "YES" + return correct_num/len(questions) +def score_single(context,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=True, run_llm_as_judge=True): + # set metric to default -1, means no metric is computed + metric = { + "Rouge_score": -1, + "BERTScore_Precision": -1, + "BERTScore_Recall": -1, + "BERTScore_F1": -1, + "LLM_judge_score": -1, + "Exact_match": -1 + 
} + if run_rouge: + rouge_score = compute_rouge_score(generated,reference) + metric["Rouge_score"] = rouge_score + print("Rouge_score:",rouge_score) + if run_bert: + P, R, F1 = compute_bert_score(generated,reference) + print(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f}") + metric["BERTScore_Precision"] = P + metric["BERTScore_Recall"] = R + metric["BERTScore_F1"] = F1 + if context["judge_endpoint"] and run_llm_as_judge: + api_url = "http://localhost:"+str(context["judge_endpoint"])+"/v1" + LLM_judge_score = compute_judge_score(questions, generated, reference, context,api_url=api_url) + metric["LLM_judge_score"] = LLM_judge_score + print(f"LLM_judge_score: {LLM_judge_score}") + if run_exact_match: + exact_match = exact_match_score(generated,reference) + print(f"Exact_match_percentage: {exact_match:.4f}") + metric["Exact_match"] = exact_match + return metric +def main(context): + # Since the eval set is small, we can run the eval without async functions try: + api_url = "http://localhost:"+str(context["vllm_endpoint"])+"/v1" logging.info("Starting to generate answer given the eval set.") with open(context["eval_json"]) as fp: eval_json = json.load(fp) @@ -124,49 +184,47 @@ async def main(context): for index, item in enumerate(eval_json): questions.append(item["question"]) groud_truth.append(item["answer"]) - generated_answers = generate_answers_with_RAG(model_path, context,questions) - if not generated_answers: - logging.warning("No answers generated. Please check the input context or model configuration.") - return - logging.info(f"Successfully generated {len(generated_answers)} answers.") - judge_list = [] - for index, item in enumerate(generated_answers): - judge_list.append({"Question":questions[index],"Ground_truth":groud_truth[index],"Generated_answer":generated_answers[index]}) - if context["judge_endpoint"]: - # make a copy of the context then change the VLLM endpoint to judge_endpoint - context_copy = dict(context) - context_copy["endpoint"] = context["judge_endpoint"] - context_copy["model"] = "meta-llama/Meta-Llama-3-70B-Instruct" - judge_results = await generate_LLM_eval(chat_service, context_copy, judge_list) - correct_num = 0 - for result in judge_results: - correct_num += result["Result"] == "YES" - LLM_judge_score = correct_num/len(judge_results) - print(f"The accuracy of the model is {LLM_judge_score}") - rouge_score = compute_rouge_score(generated_answers,groud_truth) - print("Rouge_score:",rouge_score) - P, R, F1 = compute_bert_score(generated_answers,groud_truth) - print(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f}") - exact_match = 0 - for item in judge_list: - exact_match += exact_match_score(item['Generated_answer'],item['Ground_truth']) - exact_match_percentage = exact_match/len(judge_list) - print(f"Exact_match_percentage: {exact_match_percentage:.4f}") - # Saving the eval result to a log file - with open(context["output_log"],"a") as fp: - fp.write(f"Eval_result for {context['model']} \n") - fp.write(f"Rouge_score: {rouge_score} \n") - fp.write(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f} \n") - fp.write(f"Exact_match_percentage: {exact_match_percentage} \n") - if context["judge_endpoint"]: - fp.write(f"LLM_judge_score: {LLM_judge_score} \n") - fp.write(f"QA details: \n") - for item in judge_list: - fp.write(f"question: {item['Question']} \n") - fp.write(f"generated_answers: {item['Generated_answer']} \n") - fp.write(f"groud_truth: {item['Ground_truth']} \n") - fp.write("\n") + generated_answers = { + "RAFT": [], + 
"RAFT_RAG": [], + "Baseline": [], + "Baseline_RAG": [], + } + # Generate answers for baseline + base_model_name = context["base_model_name"] + generated_answers["Baseline"] = generate_answers_model_only(base_model_name,questions,api_url) + #generated_answers["Baseline_RAG"] = generate_answers_with_RAG(base_model_name, context["data_dir"],questions,api_url) + # Generate answers for RAFT + raft_model_name = context["raft_model_name"] + #generated_answers["RAFT"] = generate_answers_model_only(raft_model_name,questions,api_url) + #generated_answers["RAFT_RAG"] = generate_answers_with_RAG(raft_model_name, context["data_dir"],questions,api_url) + # clean special tokens from the RAFT generated answer + #generated_answers["RAFT"] = remove_special_tokens(generated_answers["RAFT"]) + #generated_answers["RAFT_RAG"] = remove_special_tokens(generated_answers["RAFT_RAG"]) + logging.info(f"Successfully generated {len(generated_answers['Baseline_RAG'])} answers for all models.") + # for generate answer from each model, compute the score metric + for model_name,model_answer in generated_answers.items(): + if len(model_answer) != len(groud_truth): + print(f"The length of {model_name} answer is not equal to the length of ground truth.") + continue + metric = score_single(context,model_answer,groud_truth,questions) + print(f"The eval result for {model_name} is: {metric}") + with open(context["output_log"],"a") as fp: + fp.write(f"Eval_result for {model_name} \n") + fp.write(f"Rouge_score: {metric['Rouge_score']} \n") + fp.write(f"BERTScore Precision: {metric['BERTScore_Precision']:.4f}, Recall: {metric['BERTScore_Recall']:.4f}, F1: {metric['BERTScore_F1']:.4f} \n") + fp.write(f"Exact_match_percentage: {metric['Exact_match']} \n") + if context["judge_endpoint"]: + fp.write(f"LLM_judge_score: {metric['LLM_judge_score']} \n") + fp.write(f"QA details: \n") + for item in zip(questions,model_answer,groud_truth): + fp.write(f"question: {item[0]} \n") + fp.write(f"generated_answers: {item[1]} \n") + fp.write(f"groud_truth: {item[2]} \n") + fp.write("\n") + fp.write("\n------------------------------------\n") logging.info(f"Eval successfully, the eval result is saved to {context['output_log']}.") + # Saving the eval result to a log file except Exception as e: logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True) @@ -176,9 +234,9 @@ def parse_arguments(): description="Generate question/answer pairs from documentation." ) parser.add_argument( - "-m", "--model", - default="chatbot", - help="Select the model to use for evaluation, this maybe a LoRA adapter." + "-m", "--raft_model_name", + default=None, + help="Provide the raft_model_name to use for evaluation. If not specified, the model_path in eval_config.yaml will be used." ) parser.add_argument( "-c", "--config_path", @@ -186,10 +244,15 @@ def parse_arguments(): help="Set the configuration file path that has system prompt along with language, evalset path." ) parser.add_argument( - "-v", "--vllm_endpoint", + "-d", "--data_dir", default=None, + help="Provide the data folder path to build RAG for evaluation. If not specified, the data_dir in eval_config.yaml will be used." + ) + parser.add_argument( + "-v", "--vllm_endpoint", + default=8000, type=int, - help="If a port is specified, then use local vllm endpoint for evaluations." + help="If a port is specified, then use local vllm endpoint for eval." 
) parser.add_argument( "-j", "--judge_endpoint", @@ -202,18 +265,20 @@ def parse_arguments(): default="eval_result.log", help="save the eval result to a log file. Default is eval_result.log" ) + return parser.parse_args() if __name__ == "__main__": logging.info("Initializing the process and loading configuration...") args = parse_arguments() context = load_config(args.config_path) - context["model"] = args.model - context["endpoint"] = args.vllm_endpoint + context["vllm_endpoint"] = args.vllm_endpoint + if args.data_dir: + context["data_dir"] = args.data_dir + if args.raft_model_name: + context["raft_model_name"] = args.raft_model_name context["judge_endpoint"] = args.judge_endpoint context["output_log"] = args.output_log - if context["endpoint"]: - logging.info(f"Use local vllm service for eval at port: '{args.vllm_endpoint}'.") if context["judge_endpoint"]: logging.info(f"Use local vllm service for judge at port: '{args.judge_endpoint}'.") - asyncio.run(main(context)) + main(context) diff --git a/recipes/use_cases/end2end-recipes/raft/eval_utils.py b/recipes/use_cases/end2end-recipes/raft/eval_utils.py deleted file mode 100644 index 291a13cb5..000000000 --- a/recipes/use_cases/end2end-recipes/raft/eval_utils.py +++ /dev/null @@ -1,122 +0,0 @@ -import sys -import ujson as json -import re -import string -from collections import Counter -import pickle - -def normalize_answer(s): - - def remove_articles(text): - return re.sub(r'\b(a|an|the)\b', ' ', text) - - def white_space_fix(text): - return ' '.join(text.split()) - - def remove_punc(text): - exclude = set(string.punctuation) - return ''.join(ch for ch in text if ch not in exclude) - - def lower(text): - return text.lower() - - return white_space_fix(remove_articles(remove_punc(lower(s)))) - - -def f1_score(prediction, ground_truth): - normalized_prediction = normalize_answer(prediction) - normalized_ground_truth = normalize_answer(ground_truth) - - ZERO_METRIC = (0, 0, 0) - - if normalized_prediction in ['yes', 'no', 'noanswer'] and normalized_prediction != normalized_ground_truth: - return ZERO_METRIC - if normalized_ground_truth in ['yes', 'no', 'noanswer'] and normalized_prediction != normalized_ground_truth: - return ZERO_METRIC - - prediction_tokens = normalized_prediction.split() - ground_truth_tokens = normalized_ground_truth.split() - common = Counter(prediction_tokens) & Counter(ground_truth_tokens) - num_same = sum(common.values()) - if num_same == 0: - return ZERO_METRIC - precision = 1.0 * num_same / len(prediction_tokens) - recall = 1.0 * num_same / len(ground_truth_tokens) - f1 = (2 * precision * recall) / (precision + recall) - return f1, precision, recall - - -def exact_match_score(prediction, ground_truth): - return (normalize_answer(prediction) == normalize_answer(ground_truth)) - -def update_answer(metrics, prediction, gold): - em = exact_match_score(prediction, gold) - f1, prec, recall = f1_score(prediction, gold) - metrics['em'] += float(em) - metrics['f1'] += f1 - metrics['prec'] += prec - metrics['recall'] += recall - return em, prec, recall - -def update_sp(metrics, prediction, gold): - cur_sp_pred = set(map(tuple, prediction)) - gold_sp_pred = set(map(tuple, gold)) - tp, fp, fn = 0, 0, 0 - for e in cur_sp_pred: - if e in gold_sp_pred: - tp += 1 - else: - fp += 1 - for e in gold_sp_pred: - if e not in cur_sp_pred: - fn += 1 - prec = 1.0 * tp / (tp + fp) if tp + fp > 0 else 0.0 - recall = 1.0 * tp / (tp + fn) if tp + fn > 0 else 0.0 - f1 = 2 * prec * recall / (prec + recall) if prec + recall > 0 else 0.0 - em = 1.0 
if fp + fn == 0 else 0.0 - metrics['sp_em'] += em - metrics['sp_f1'] += f1 - metrics['sp_prec'] += prec - metrics['sp_recall'] += recall - return em, prec, recall - -def eval(prediction, gold): - - metrics = {'em': 0, 'f1': 0, 'prec': 0, 'recall': 0, - 'sp_em': 0, 'sp_f1': 0, 'sp_prec': 0, 'sp_recall': 0, - 'joint_em': 0, 'joint_f1': 0, 'joint_prec': 0, 'joint_recall': 0} - for dp in gold: - cur_id = dp['_id'] - can_eval_joint = True - if cur_id not in prediction['answer']: - print('missing answer {}'.format(cur_id)) - can_eval_joint = False - else: - em, prec, recall = update_answer( - metrics, prediction['answer'][cur_id], dp['answer']) - if cur_id not in prediction['sp']: - print('missing sp fact {}'.format(cur_id)) - can_eval_joint = False - else: - sp_em, sp_prec, sp_recall = update_sp( - metrics, prediction['sp'][cur_id], dp['supporting_facts']) - - if can_eval_joint: - joint_prec = prec * sp_prec - joint_recall = recall * sp_recall - if joint_prec + joint_recall > 0: - joint_f1 = 2 * joint_prec * joint_recall / (joint_prec + joint_recall) - else: - joint_f1 = 0. - joint_em = em * sp_em - - metrics['joint_em'] += joint_em - metrics['joint_f1'] += joint_f1 - metrics['joint_prec'] += joint_prec - metrics['joint_recall'] += joint_recall - - N = len(gold) - for k in metrics.keys(): - metrics[k] /= N - - return metrics diff --git a/recipes/use_cases/end2end-recipes/raft/evalset.json b/recipes/use_cases/end2end-recipes/raft/evalset.json new file mode 100644 index 000000000..efe72b72b --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/evalset.json @@ -0,0 +1,158 @@ +[ + { + "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?", + "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken. Llama 3 also introduces a ChatFormat class, special tokens, including those for end-of-turn markers and other features to enhance support for chat-based interactions and dialogue processing." + }, + { + "question":"How many tokens were used in Llama 3 pretrain?", + "answer": "Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources." + }, +{ + "question": "what are the goals for Llama 3", + "answer": "With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development." +}, +{ +"question": "What if I want to access Llama models but I’m not sure if my use is permitted under the Llama 2 Community License?", +"answer": "On a limited case by case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request." +}, +{ +"question": "Why are you not sharing the training datasets for Llama?", +"answer": "We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. 
While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations." +}, +{ +"question": "Did we use human annotators to develop the data for our models?", +"answer": "Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper." +}, +{ +"question": "Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?", +"answer": "It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed." +}, +{ +"question": "What operating systems (OS) are officially supported?", +"answer": "For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo." +}, +{ +"question": "I am getting 'Issue with the URL' as an error message. What should I do?", +"answer": "This issue occurs because of not copying the URL correctly. If you right click on the link and copy the link, the link may be copied with URL Defense wrapper. To avoid this issue, select the URL manually and copy it." +}, +{ +"question": "Does Llama 2 support other languages outside of English?", +"answer": "The model was primarily trained on English with a bit of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You’ll find the full list of languages referenced in the research paper. You can look at some of the community lead projects to fine-tune Llama 2 models to support other languages. (eg: link)" +}, +{ +"question": "If I’m a developer/business, how can I access the models?", +"answer": "Details on how to access the models are available on our website link. Please note that the models are subject to the acceptable use policy and the provided responsible use guide. Models are available through multiple sources but the place to start is at https://llama.meta.com/ Model code, quickstart guide and fine-tuning examples are available through our Github Llama repository. Model Weights are available through an email link after the user submits a sign-up form. Models are also being hosted by Microsoft, Amazon Web Services, and Hugging Face, and may also be available through other hosting providers in the future." +}, +{ +"question": "Can anyone access Llama models? What are the terms?", +"answer": "Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website and licensed under the applicable Llama Community License Agreement, which provides a permissive license to the models along with certain restrictions to help ensure that the models are being used responsibly." +}, +{ +"question": "What are the hardware SKU requirements for deploying these models?", +"answer": "Hardware requirements vary based on latency, throughput and cost constraints. 
For good latency, we split models across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs, or even commodity hardware can also be used to deploy these models (e.g. llama cpp, MLC LLM)." +}, +{ +"question": "Do Llama models provide traditional autoregressive text completion?", +"answer": "Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text." +}, +{ +"question": "Does the model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?", +"answer": "The vanilla model of Llama does not, however, the Code Llama models have been trained with fill-in-the-middle completion to assist with tasks like code completion." +}, +{ +"question": "Do Llama models support logit biases as a request parameter to control token probabilities during sampling?", +"answer": "This is implementation dependent (i.e. the code used to run the model)." +}, +{ +"question": "Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?", +"answer": "The model itself supports these parameters, but whether they are exposed or not depends on implementation." +}, +{ +"question": "What is the most effective RAG method paired with LIama models?", +"answer": "There are many ways to use RAG with Llama. The most popular libraries are LangChain and LlamaIndex, and many of our developers have used them successfully with Llama 2. (See the LangChain and LlamaIndex sections of this document)." +}, +{ +"question": "How to set up Llama models with an EC2 instance?", +"answer": "You can find steps on how to set up an EC2 instance in the AWS section of this document here." +}, +{ +"question": "What is the right size of EC2 instances needed for running each of the llama models?", +"answer": "The AWS section of this document has some insights on instance size that you can start with. You can find the section here." +}, +{ +"question": "Should we start training with the base or instruct/chat model?", +"answer": "This depends on your application. The Llama pre-trained models were trained for general large language applications, whereas the Llama instruct or chat models were fine tuned for dialogue specific uses like chat bots." +}, +{ +"question": "I keep getting a 'CUDA out of memory' error.", +"answer": "This error can be caused by a number of different factors including, model size being too large, in-efficient memory usage and so on. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. 1. Ensure your GPU has enough memory 2. Reduce the batch_size 3. Lower the Precision 4. Clear cache 5. Modify the Model/Training" +}, +{ +"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama+Retrieval?", +"answer": "If multiple calls are necessary then you could look into the following: 1. Optimize inference so each call has less latency. 2. Merge the calls into fewer calls. For example summarize the data and utilize the summary. 3. Possibly utilize Llama 2 function calling. 4. Consider fine-tuning the model with the updated data." +}, +{ +"question": "How can I fine tune the Llama models?", +"answer": "You can find examples on how to fine tune the Llama models in the Llama Recipes repository." 
+}, +{ +"question": "How can I pretrain the Llama models?", +"answer": "You can adapt the finetuning script found here for pre-training. You can also find the hyperparams used for pretraining in Section 2 of the Llama2 paper." +}, +{ +"question": "Am I allowed to develop derivative models through fine-tuning based on Llama models for languages other than english? Is this a violation of the acceptable use policy?", +"answer": "Developers may fine-tune Llama models for languages beyond English provided they comply with the applicable Llama 3 License Agreement, Llama Community License Agreement and the Acceptable Use Policy." +}, +{ +"question": "How can someone reduce hallucinations with fine-tuned LIama models?", +"answer": "Although prompts cannot eliminate hallucinations completely, they can reduce it significantly. Using techniques like Chain-of-Thought, Instruction-Based, N-Shot, and Few-Shot can help depending on your application. Additionally, prompting the models to back up the responses by verifying with factual data sets or requesting the models to provide the source of information can help as well. Overall finetuning should also be helpful for reducing hallucination." +}, +{ +"question": "What are the hardware SKU requirements for fine-tuning Llama pre-trained models?", +"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090: (https://github.com/tloen/alpaca-lora)" +}, +{ +"question": "What Fine-tuning tasks would these models support?", +"answer": "The Lama 2 fine-tuned models were fine tuned for dialogue specific uses like chat bots." +}, +{ +"question": "Are there examples on how one can fine-tune the models?", +"answer": "You can find example fine-tuning scripts in the Github recipes repository. You can also review the fine-tuning section in this document." +}, +{ +"question": "What is the difference between a pre-trained and fine-tuned model?", +"answer": "The Llama pre-trained models were trained for general large language applications, whereas the Llama chat or instruct models were fine tuned for dialogue specific uses like chat bots." +}, +{ +"question": "How should we think about post processing (validate generated data) as a way to fine tune models?", +"answer": "Essentially having a truthful data on the specific application can be helpful to reduce the risk on a specific application. Also setting some sort of threshold such as prob>90% might be helpful to get more confidence in the output." +}, +{ +"question": "What are the different libraries that we recommend for fine tuning?", +"answer": "You can find some fine-tuning recommendations in the Github recipes repository as well as the fine-tuning section of this document." +}, +{ +"question": "How can we identify the right ‘r’ value for LORA method for a certain use-case?", +"answer": "The best approach would be to review the LoRA research paper for more information on the rankings, then reviewing similar implementations for other models and finally experimenting." +}, +{ +"question": "We hope to use prompt engineering as a lever to nudge behavior. 
Any pointers on enhancing instruction-following by fine-tuning small llama models?", +"answer": "Take a look at the Fine tuning section in our Getting started with Llama guide of this document for some pointers towards fine tuning." +}, +{ +"question": "Strategies to help models handle longer conversations?", +"answer": "You can find some helpful information towards this in the Prompting and LangChain sections of this document." +}, +{ +"question": "Are Llama models open source? What is the exact license these models are published under?", +"answer": "Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. For more details, our licenses can be found at (https://llama.meta.com/license/) (Meta Llama 2) and (https://llama.meta.com/llama3/license/) (Meta Llama 3)." +}, +{ +"question": "Are there examples that help licensees better understand how “MAU” is defined?", +"answer": "'MAU' means 'monthly active users' that access or use your (and your affiliates’) products and services. Examples include users accessing an internet-based service and monthly users/customers of licensee’s hardware devices." +}, +{ +"question": "Does the Critical Infrastructure restriction in the acceptable use policy (AUP) prevent companies who have special critical infrastructure certification (e.g., a registered operator of “critical infrastructure” under the German BSI Act) from using Llama?", +"answer": "No, such companies are not prohibited when their usage of Llama is not related to the operation of critical infrastructure. Llama, however, may not be used in the operation of critical infrastructure by any company, regardless of government certifications." 
+} +] diff --git a/requirements.txt b/requirements.txt index 496e2a470..4ce2ba073 100644 --- a/requirements.txt +++ b/requirements.txt @@ -32,3 +32,5 @@ python-dotenv==1.0.1 pyyaml==6.0.1 coloredlogs==15.0.1 sentence_transformers +faiss-gpu +unstructured[pdf] From d5b67ab4e7a89cd0b7ff00ea8b94e9120ed76cb2 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Wed, 5 Jun 2024 14:26:59 -0700 Subject: [PATCH 18/35] rag prompt template added --- .../use_cases/end2end-recipes/raft/README.md | 2 +- .../end2end-recipes/raft/eval_config.yaml | 32 +++-- .../end2end-recipes/raft/eval_raft.py | 119 ++++++++++-------- .../end2end-recipes/raft/evalset.json | 94 ++++++++------ .../use_cases/end2end-recipes/raft/raft.py | 1 - 5 files changed, 140 insertions(+), 108 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md index f23e079d2..a648ba36d 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -125,7 +125,7 @@ CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model m Then we can pass the port to the eval script: ```bash -python eval_raft.py -m raft-8b -v 8000 -j 8002 +CUDA_VISIBLE_DEVICES=4 python eval_raft.py -m raft-8b -v 8000 -j 8002 ``` diff --git a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml index f040188c0..930717997 100644 --- a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml +++ b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml @@ -1,28 +1,24 @@ eval_prompt_template: > You are a AI assistant that skilled in answering questions related to Llama language models, which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2, - Below is a question from a llama user, think step by step and then answer it in {language}, make the answer as concise as possible, it should be at most 100 words. - Return the result with the template: - [ - {{ - "Question": "The question user asked to you" - "Answer": "Your answer to the question" - }} - ] + Below is a question from a llama user, think step by step, make the answer as concise as possible, + The returned answer should be no more than 100 words. Please return the answer as text directly, without any special tokens. + judge_prompt_template: > - You are provided with a question, a teacher's answer and a student's answer. Given that question, you need to score the how good the student answer is compare to - the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES. If the answer is not faithful, then return NO - and explain which part of the student's answer is not faithful in the Reason section. - Return the result in json format with the template: - {{ - "Reason": "your reason here.", - "Result": "YES or NO." - }} + You have been provided with a question, a teacher's answer and a student's answer above. Given that question, you need to score how good the student's answer is compared to + the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES, else return NO. + Review it carefully to make sure that the keywords and numerical values are exactly the same. + Only respond with "YES" or "NO", do not respond with anything else. +RAG_prompt_template: > + Question: {question}\n Context: {context}\n + Answer this question using the information given in the context above. Here are things to pay attention to: - First provide step-by-step reasoning on how to answer the question. - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context. - End your response with final answer in the form <ANSWER>: $answer, the answer should be succinct. You MUST begin your final answer with the tag "<ANSWER>: eval_json: "./evalset.json" -language: "English" - raft_model_name: "raft-8b" base_model_name: "meta-llama/Meta-Llama-3-8B-Instruct"
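For reference, the judge template above is consumed by `compute_judge_score` in `eval_raft.py` below. A minimal standalone sketch of the same YES/NO scoring loop follows; the endpoint URL, model name and the `judge_accuracy` helper are illustrative assumptions, not part of this patch:

```python
# Minimal sketch of the LLM-as-judge scoring loop driven by judge_prompt_template.
# The endpoint URL and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8002/v1")

def judge_accuracy(judge_prompt: str, triples: list[tuple[str, str, str]]) -> float:
    """Score (question, teacher_answer, student_answer) triples; return fraction judged YES."""
    correct = 0
    for question, gold, pred in triples:
        response = client.chat.completions.create(
            model="meta-llama/Meta-Llama-3-70B-Instruct",
            temperature=0.0,
            messages=[
                {"role": "system", "content": judge_prompt},
                {"role": "user", "content": f"Question: {question} \n Teacher's Answer: {gold} \n Student's Answer: {pred}"},
            ],
        )
        # The judge is instructed to reply with only "YES" or "NO".
        if "YES" in response.choices[0].message.content:
            correct += 1
    return correct / len(triples) if triples else 0.0
```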
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/eval_raft.py index 32a3ff777..b8f43ec78 100644 --- a/recipes/use_cases/end2end-recipes/raft/eval_raft.py +++ b/recipes/use_cases/end2end-recipes/raft/eval_raft.py @@ -12,59 +12,79 @@ from langchain_community.vectorstores import FAISS from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_community.document_loaders import DirectoryLoader -from langchain.chains import RetrievalQA +from langchain_core.runnables import RunnablePassthrough + from langchain_core.messages import HumanMessage, SystemMessage import re import string from collections import Counter +from langchain_core.output_parsers import StrOutputParser +from langchain.prompts.prompt import PromptTemplate def generate_answers_model_only(model_name,question_list,api_url="http://localhost:8000/v1",key="EMPTY"): # Use langchain to load the documents from data directory # Load the RAFT model + llm = VLLMOpenAI( openai_api_key=key, openai_api_base=api_url, model_name=model_name, - model_kwargs={"stop": ["."]}, - temperature=0.0,) + temperature=0.0, + max_tokens=100 + ) + system_prompt = SystemMessage(content=context['eval_prompt_template']) generated_answers = [] - for question in question_list: - response = llm.invoke(question) - generated_answers.append(response) + all_tasks = [[system_prompt, HumanMessage(content=question)] for question in question_list] + generated_answers = llm.batch(all_tasks) if len(generated_answers) == 0: logging.error("No model answers generated.
Please check the input context or model configuration in ",model_name) return [] - return generated_answers + return clean_text_list(generated_answers) +def format_docs_raft(docs): + context = "" + for doc in docs: + context += "<DOCUMENT>" + str(doc.page_content) + "</DOCUMENT>\n" + return context +def format_docs(docs): + return "\n\n".join(doc.page_content for doc in docs) +def generate_answers_with_RAG(model_name, data_dir,question_list,rag_template,api_url="http://localhost:8000/v1",key="EMPTY"): # Use langchain to load the documents from data directory loader = DirectoryLoader(data_dir) docs = loader.load() # Split the document into chunks with a specified chunk size - text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50) + text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=50) all_splits = text_splitter.split_documents(docs) # Store the document into a vector store with a specific embedding model - vectorstore = FAISS.from_documents(all_splits, HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")) + vectorstore = FAISS.from_documents(all_splits, HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2",model_kwargs={'device': 'cuda'})) + retriever = vectorstore.as_retriever( + search_kwargs={"k": 5} + ) # Load the RAFT model llm = VLLMOpenAI( openai_api_key=key, openai_api_base=api_url, model_name=model_name, - model_kwargs={"stop": ["."]}, - temperature=0.0,) - # Create a RetrievalQA chain with the vector store and RAFT model - qa_chain = RetrievalQA.from_chain_type( - llm, - retriever=vectorstore.as_retriever() - ) - generated_answers = [] - for question in question_list: - response = qa_chain({"query": question}) - generated_answers.append(response['result']) + temperature=0.0, + max_tokens=100 + ) + all_tasks = [] + for q in question_list: + # retrieve the top 5 documents + retrieved_docs = retriever.invoke(q) + # format the documents into a string + if '8B-Instruct' in model_name: + documents = format_docs(retrieved_docs) + else: + documents = format_docs_raft(retrieved_docs) + # create a prompt + text = rag_template.format(context=documents,question=q) + all_tasks.append(text) + generated_answers = llm.batch(all_tasks) if len(generated_answers) == 0: logging.error("No RAG answers generated. Please check the input context or model configuration in ",model_name) return [] - return generated_answers + return clean_text_list(generated_answers)
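To make the two context formats above concrete, here is a small sketch of how a RAFT-style RAG prompt is assembled from retrieved chunks, with each chunk wrapped in `<DOCUMENT>` tags as `format_docs_raft` does; the chunk strings and question are invented for illustration:

```python
# Sketch: assemble a RAFT-style RAG prompt from retrieved chunks (illustrative strings).
chunks = ["Llama 3 uses OpenAI's Tiktoken tokenizer.", "Llama 2 uses SentencePiece."]

# RAFT-trained models expect each retrieved chunk wrapped in <DOCUMENT> tags,
# while the plain instruct baseline just gets chunks separated by blank lines.
context = "".join(f"<DOCUMENT>{chunk}</DOCUMENT>\n" for chunk in chunks)

rag_template = (
    "Question: {question}\n Context: {context}\n"
    "Answer this question using the information given in the context above."
)
print(rag_template.format(context=context, question="Which tokenizer does Llama 3 use?"))
```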
def compute_rouge_score(generated : list, reference: list): rouge_score = evaluate.load('rouge') return rouge_score.compute( @@ -73,14 +93,19 @@ use_stemmer=True, use_aggregator=True ) -def remove_special_tokens(text_list): - clean_text_list = [] +def clean_text_list(text_list): + result = [] for text in text_list: - text = text.replace("##begin_quote##","") - text = text.replace("##end_quote##","") + # for the raft model, the answer will start with <ANSWER> + index = text.rfind("<ANSWER>") + if index != -1: + text = text[index:] + text = text.replace("begin_quote","") + text = text.replace("end_quote","") + text = text.replace("##","") text = text.strip() - clean_text_list.append(text) - return clean_text_list + result.append(text) + return result def normalize_answer(s): @@ -125,25 +150,20 @@ def compute_judge_score(questions: list, generated : list, reference: list, cont openai_api_key=key, openai_api_base=api_url, model_name=model_name, - model_kwargs={"stop": ["."]}, - temperature=0.0,) + temperature=0.0) + all_tasks = [] for q,pred,gold in zip(questions, generated,reference): - # messages = [ - # SystemMessage(content=context['judge_prompt_template']), - # HumanMessage(content=f"Question: {q} \n Teacher's Answer: {gold} \n Student's Answer: {pred} "), - # ] - messages = context['judge_prompt_template'] + "\n" - messages += f"Question: {q} \n Teacher's Answer: {gold} \n Student's Answer: {pred} " - response = llm.invoke(messages) - print(response+ " -------------") - result = json.loads(response) - if "Result" not in result: - print("Error: eval response does not contain answer") - print(result) - continue - correct_num += result["Result"] == "YES" + messages = [ + HumanMessage(content=f"Question: {q} \n Teacher's Answer: {gold} \n Student's Answer: {pred} "), + SystemMessage(content=context['judge_prompt_template']) + ] + all_tasks.append(messages) + responses = llm.batch(all_tasks) + for response in responses: + if "YES" in response: + correct_num += 1 return correct_num/len(questions) -def score_single(context,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=True, run_llm_as_judge=True): +def score_single(context,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=True, run_llm_as_judge=False): # set metric to default -1, means no metric is computed metric = { "Rouge_score": -1, @@ -192,15 +212,12 @@ def main(context): } # Generate answers for baseline base_model_name = context["base_model_name"] - generated_answers["Baseline"] = generate_answers_model_only(base_model_name,questions,api_url) - #generated_answers["Baseline_RAG"] = generate_answers_with_RAG(base_model_name, context["data_dir"],questions,api_url) + #generated_answers["Baseline"] = generate_answers_model_only(base_model_name,questions,api_url) + generated_answers["Baseline_RAG"] = generate_answers_with_RAG(base_model_name, context["data_dir"],questions,context['RAG_prompt_template'],api_url) # Generate answers for RAFT raft_model_name = context["raft_model_name"] #generated_answers["RAFT"] = generate_answers_model_only(raft_model_name,questions,api_url) - #generated_answers["RAFT_RAG"] = generate_answers_with_RAG(raft_model_name, context["data_dir"],questions,api_url) - # clean special tokens from the RAFT generated answer - #generated_answers["RAFT"] =
remove_special_tokens(generated_answers["RAFT"]) - #generated_answers["RAFT_RAG"] = remove_special_tokens(generated_answers["RAFT_RAG"]) + generated_answers["RAFT_RAG"] = generate_answers_with_RAG(raft_model_name, context["data_dir"],questions,context['RAG_prompt_template'],api_url) logging.info(f"Successfully generated {len(generated_answers['Baseline_RAG'])} answers for all models.") # for generate answer from each model, compute the score metric for model_name,model_answer in generated_answers.items(): diff --git a/recipes/use_cases/end2end-recipes/raft/evalset.json b/recipes/use_cases/end2end-recipes/raft/evalset.json index efe72b72b..e0787bfe3 100644 --- a/recipes/use_cases/end2end-recipes/raft/evalset.json +++ b/recipes/use_cases/end2end-recipes/raft/evalset.json @@ -1,11 +1,47 @@ [ + { + "question":"What is llama-recipes?", + "answer": "The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem." + }, { "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?", - "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken. Llama 3 also introduces a ChatFormat class, special tokens, including those for end-of-turn markers and other features to enhance support for chat-based interactions and dialogue processing." + "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken." + }, + { + "question":"How many tokens were used in Meta Llama 3 pretrain?", + "answer": "Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources." + }, + { + "question":"How many tokens were used in Llama 2 pretrain?", + "answer": "Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources." + }, + { + "question":"What is the name of the license agreement that Meta Llama 3 is under?", + "answer": "Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT." + }, + { + "question":"What is the name of the license agreement that Llama 2 is under?", + "answer": "LLAMA 2 COMMUNITY LICENSE AGREEMENT." + }, + { + "question":"What is the context length of Llama 2 models?", + "answer": "Llama 2's context is 4k" + }, + { + "question":"What is the context length of Meta Llama 3 models?", + "answer": "Meta Llama 3's context is 8k" + }, + { + "question":"When is Llama 2 trained?", + "answer": "Llama 2 was trained between January 2023 and July 2023." }, { - "question":"How many tokens were used in Llama 3 pretrain?", - "answer": "Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources." + "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ", + "answer": "Llama 2 70B" + }, + { + "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ", + "answer": "Meta Llama 3 8B and Meta Llama 3 70B" }, { "question": "what are the goals for Llama 3", @@ -16,11 +52,11 @@ "answer": "On a limited case by case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request." 
}, { -"question": "Why are you not sharing the training datasets for Llama?", +"question": "Why is Meta not sharing the training datasets for Llama?", "answer": "We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations." }, { -"question": "Did we use human annotators to develop the data for our models?", +"question": "Did Meta use human annotators to develop the data for Llama models?", "answer": "Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper." }, { @@ -28,11 +64,11 @@ "answer": "It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed." }, { -"question": "What operating systems (OS) are officially supported?", +"question": "What operating systems (OS) are officially supported if I want to use Llama model?", "answer": "For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo." }, { -"question": "I am getting 'Issue with the URL' as an error message. What should I do?", +"question": "I am getting 'Issue with the URL' as an error message when I want to download Llama model. What should I do?", "answer": "This issue occurs because of not copying the URL correctly. If you right click on the link and copy the link, the link may be copied with URL Defense wrapper. To avoid this issue, select the URL manually and copy it." }, { @@ -40,7 +76,7 @@ "answer": "The model was primarily trained on English with a bit of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You’ll find the full list of languages referenced in the research paper. You can look at some of the community lead projects to fine-tune Llama 2 models to support other languages. (eg: link)" }, { -"question": "If I’m a developer/business, how can I access the models?", +"question": "If I’m a developer/business, how can I access the Llama models?", "answer": "Details on how to access the models are available on our website link. Please note that the models are subject to the acceptable use policy and the provided responsible use guide. Models are available through multiple sources but the place to start is at https://llama.meta.com/ Model code, quickstart guide and fine-tuning examples are available through our Github Llama repository. Model Weights are available through an email link after the user submits a sign-up form. Models are also being hosted by Microsoft, Amazon Web Services, and Hugging Face, and may also be available through other hosting providers in the future." 
}, { @@ -48,7 +84,7 @@ "answer": "Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website and licensed under the applicable Llama Community License Agreement, which provides a permissive license to the models along with certain restrictions to help ensure that the models are being used responsibly." }, { -"question": "What are the hardware SKU requirements for deploying these models?", +"question": "What are the hardware SKU requirements for deploying Llama models?", "answer": "Hardware requirements vary based on latency, throughput and cost constraints. For good latency, we split models across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs, or even commodity hardware can also be used to deploy these models (e.g. llama cpp, MLC LLM)." }, { @@ -56,7 +92,7 @@ "answer": "Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text." }, { -"question": "Does the model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?", +"question": "Does the Llama model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?", "answer": "The vanilla model of Llama does not, however, the Code Llama models have been trained with fill-in-the-middle completion to assist with tasks like code completion." }, { @@ -68,7 +104,7 @@ "answer": "The model itself supports these parameters, but whether they are exposed or not depends on implementation." }, { -"question": "What is the most effective RAG method paired with LIama models?", +"question": "What is the most effective RAG method paired with Llama models?", "answer": "There are many ways to use RAG with Llama. The most popular libraries are LangChain and LlamaIndex, and many of our developers have used them successfully with Llama 2. (See the LangChain and LlamaIndex sections of this document)." }, { @@ -76,19 +112,15 @@ "answer": "You can find steps on how to set up an EC2 instance in the AWS section of this document here." }, { -"question": "What is the right size of EC2 instances needed for running each of the llama models?", -"answer": "The AWS section of this document has some insights on instance size that you can start with. You can find the section here." -}, -{ -"question": "Should we start training with the base or instruct/chat model?", +"question": "Should we start training with the base or instruct/chat model when using Llama model?", "answer": "This depends on your application. The Llama pre-trained models were trained for general large language applications, whereas the Llama instruct or chat models were fine tuned for dialogue specific uses like chat bots." }, { -"question": "I keep getting a 'CUDA out of memory' error.", +"question": "I keep getting a 'CUDA out of memory' error, when using Llama models, what should I do" , "answer": "This error can be caused by a number of different factors including, model size being too large, in-efficient memory usage and so on. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. 1. Ensure your GPU has enough memory 2. Reduce the batch_size 3. Lower the Precision 4. Clear cache 5. 
Modify the Model/Training" }, { -"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama+Retrieval?", +"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama model with Retrieval?", "answer": "If multiple calls are necessary then you could look into the following: 1. Optimize inference so each call has less latency. 2. Merge the calls into fewer calls. For example summarize the data and utilize the summary. 3. Possibly utilize Llama 2 function calling. 4. Consider fine-tuning the model with the updated data." }, { @@ -109,30 +141,30 @@ }, { "question": "What are the hardware SKU requirements for fine-tuning Llama pre-trained models?", -"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090: (https://github.com/tloen/alpaca-lora)" +"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090:https://github.com/tloen/alpaca-lora)" }, { -"question": "What Fine-tuning tasks would these models support?", +"question": "What Fine-tuning tasks would the Llama models support?", "answer": "The Lama 2 fine-tuned models were fine tuned for dialogue specific uses like chat bots." }, { -"question": "Are there examples on how one can fine-tune the models?", +"question": "Are there examples on how one can fine-tune the Llama models?", "answer": "You can find example fine-tuning scripts in the Github recipes repository. You can also review the fine-tuning section in this document." }, { -"question": "What is the difference between a pre-trained and fine-tuned model?", +"question": "What is the difference between a pre-trained and fine-tuned Llama model?", "answer": "The Llama pre-trained models were trained for general large language applications, whereas the Llama chat or instruct models were fine tuned for dialogue specific uses like chat bots." }, { -"question": "How should we think about post processing (validate generated data) as a way to fine tune models?", +"question": "How should we think about post processing (validate generated data) as a way to fine tune Llama models?", "answer": "Essentially having a truthful data on the specific application can be helpful to reduce the risk on a specific application. Also setting some sort of threshold such as prob>90% might be helpful to get more confidence in the output." }, { -"question": "What are the different libraries that we recommend for fine tuning?", +"question": "What are the different libraries that we recommend for fine tuning when using Llama models?", "answer": "You can find some fine-tuning recommendations in the Github recipes repository as well as the fine-tuning section of this document." 
}, { -"question": "How can we identify the right ‘r’ value for LORA method for a certain use-case?", +"question": "How can we identify the right ‘r’ value for LORA method for a certain use-case when using Llama models?", "answer": "The best approach would be to review the LoRA research paper for more information on the rankings, then reviewing similar implementations for other models and finally experimenting." }, { @@ -140,19 +172,7 @@ "answer": "Take a look at the Fine tuning section in our Getting started with Llama guide of this document for some pointers towards fine tuning." }, { -"question": "Strategies to help models handle longer conversations?", -"answer": "You can find some helpful information towards this in the Prompting and LangChain sections of this document." -}, -{ "question": "Are Llama models open source? What is the exact license these models are published under?", "answer": "Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. For more details, our licenses can be found at (https://llama.meta.com/license/) (Meta Llama 2) and (https://llama.meta.com/llama3/license/) (Meta Llama 3)." -}, -{ -"question": "Are there examples that help licensees better understand how “MAU” is defined?", -"answer": "'MAU' means 'monthly active users' that access or use your (and your affiliates’) products and services. Examples include users accessing an internet-based service and monthly users/customers of licensee’s hardware devices." -}, -{ -"question": "Does the Critical Infrastructure restriction in the acceptable use policy (AUP) prevent companies who have special critical infrastructure certification (e.g., a registered operator of “critical infrastructure” under the German BSI Act) from using Llama?", -"answer": "No, such companies are not prohibited when their usage of Llama is not related to the operation of critical infrastructure. Llama, however, may not be used in the operation of critical infrastructure by any company, regardless of government certifications." } ] diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py index 3b27de51f..8d6162512 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.py +++ b/recipes/use_cases/end2end-recipes/raft/raft.py @@ -36,7 +36,6 @@ async def main(context): logging.info(f"Question: {question}") logging.info(f"Successfully generated {sum([len(q) for c,q in chunk_questions_zip])} question/answer pairs.") ds = await add_chunk_to_dataset(chunk_questions_zip,context, chat_service,ds,NUM_DISTRACT_DOCS, ORCALE_P) - print(ds[0]) ds.save_to_disk(args.output) logging.info(f"Data successfully written to {context['output']}. 
Process completed.") formatter = DatasetConverter() From cd5ae9ec63132ffa7cf46e62a50fb089cbaebc03 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Fri, 7 Jun 2024 14:26:36 -0700 Subject: [PATCH 19/35] changed yaml to get langchain working --- .../use_cases/end2end-recipes/raft/README.md | 28 +-- .../end2end-recipes/raft/chat_utils.py | 80 ------ .../use_cases/end2end-recipes/raft/config.py | 8 - .../end2end-recipes/raft/data_urls.xml | 137 ++++++++++ .../end2end-recipes/raft/doc_processor.py | 47 ---- .../end2end-recipes/raft/eval_raft.py | 1 + .../use_cases/end2end-recipes/raft/raft.py | 63 ++--- .../use_cases/end2end-recipes/raft/raft.yaml | 62 +++-- .../end2end-recipes/raft/raft_utils.py | 238 +++++++++--------- 9 files changed, 334 insertions(+), 330 deletions(-) delete mode 100644 recipes/use_cases/end2end-recipes/raft/chat_utils.py create mode 100644 recipes/use_cases/end2end-recipes/raft/data_urls.xml delete mode 100644 recipes/use_cases/end2end-recipes/raft/doc_processor.py diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md index a648ba36d..fc0e71d5a 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -2,40 +2,38 @@ ### Step 1 : Prepare related documents -Download all your desired docs in PDF, Text or Markdown format to "data" folder inside the data_pipelines folder. +We can either use local folder or web crawl to get the data. For local folder option, please download all your desired docs in PDF, Text or Markdown format to "data" folder and place it inside "raft" folder. Alternatively, we can create a sitemap xml, similar to the data_urls.xml example, and use langchain SitemapLoader to get all the text in the webpages. -In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. +In this case we will use [Meta Llama official website](https://llama.meta.com/) webpages such as [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other Llama related documents, eg Llama3, Purple Llama, Code Llama model card in github repo. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use [Meta Llama Troubleshooting & FAQ](https://llama.meta.com/faq/) as a main source for evaluation so we should put it into our training set. ### Step 2 : Prepare RAFT dataset for fine-tuning To use Meta Llama 3 70B model for the RAFT datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. -In this example, we can use OctoAI API as a demo, and the APIs could be replaced by any other API from other providers. - -**NOTE** The generated data by these APIs or the model needs to be vetted to make sure about the quality. 
### Step 2 : Prepare RAFT dataset for fine-tuning To use Meta Llama 3 70B model for the RAFT datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. -In this example, we can use OctoAI API as a demo, and the APIs could be replaced by any other API from other providers. - -**NOTE** The generated data by these APIs or the model needs to be vetted to make sure about the quality. +We can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm openai compatible server that hosts Meta Llama 3 70B Instruct locally, and generate the RAFT dataset. ```bash -export OCTOAI_API_TOKEN="OCTOAI_API_TOKEN" -python generate_question_answers.py +# Make sure VLLM has been installed +CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001 ``` -**NOTE** You need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. In our case we had to process each document at a time. Then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document anywhere between 50-100. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. -Alternatively we can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm openai compatible server that host Meta Llama 3 70B instruct locally, and generate the RAFT dataset. +**NOTE** Please make sure the port has not been used. Since Meta Llama3 70B instruct model requires at least 135GB GPU memory, we need to use multiple GPUs to host it in a tensor parallel way. -Once the server is ready, we can query the server given the port number 8001 in another terminal. Here, "-v" sets the port number and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk. +Once the server is ready, we can query the server given the port number 8001 in another terminal. Here, "-u" sets the endpoint URL to query and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk. To use a cloud API, please change the endpoint URL to the cloud provider's and set the API key using "-k". Here since we want to query our locally hosted VLLM server, we can use the following command:
+For cloud API key, we can also set it using system environment variables, such as ```bash -python raft.py -v 8001 -t 5 +export API_KEY="THE_API_KEY_HERE" +python raft.py -u "CLOUD_API_URL" -t 3 ``` +**NOTE** When using cloud API, you need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. + This python program will read all the documents inside of "data" folder and transform the text into embeddings and split the data into batches by the SemanticChunker. Then we apply the question_prompt_template, defined in "raft.yaml", to each batch, and finally we will use each batch to query VLLM server and save the return a list of question list for all batches. We now have a related context as text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style question using Llama 3 70B Instruct as well. Once we have the COT answers, we can start to make a dataset that contains "instruction" which includes some unrelated chunks called distractor and has a probability P to include the related chunk. diff --git a/recipes/use_cases/end2end-recipes/raft/chat_utils.py b/recipes/use_cases/end2end-recipes/raft/chat_utils.py deleted file mode 100644 index 07fb61eea..000000000 --- a/recipes/use_cases/end2end-recipes/raft/chat_utils.py +++ /dev/null @@ -1,80 +0,0 @@ -import asyncio -import logging -from abc import ABC, abstractmethod -from octoai.client import OctoAI -from functools import partial -from openai import OpenAI -import json -# Configure logging to include the timestamp, log level, and message -logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') -# Since OctoAI has different naming for llama models, create this mapping to get huggingface offical model name given OctoAI names. -MODEL_NAME_MAPPING={"meta-llama-3-70b-instruct":"meta-llama/Meta-Llama-3-70B-Instruct", -"meta-llama-3-8b-instruct":"meta-llama/Meta-Llama-3-8B-Instruct","llama-2-7b-chat":"meta-llama/Llama-2-7b-chat-hf" -,"llama-2-70b-chat":"meta-llama/Llama-2-70b-chat-hf"} -# Manage rate limits with throttling -rate_limit_threshold = 2000 -allowed_concurrent_requests = int(rate_limit_threshold * 0.75) -request_limiter = asyncio.Semaphore(allowed_concurrent_requests) -class ChatService(ABC): - @abstractmethod - async def execute_chat_request_async(self, api_context: dict, chat_request): - pass -def strip_str(s: str) -> str: - """ - Helper function for helping format strings returned by GPT-4. - """ - l, r = 0, len(s)-1 - beg_found = False - for i in range(len(s)): - if s[i].isalpha(): - if not beg_found: - l = i - beg_found = True - else: - r = i - r += 2 - return s[l:min(r, len(s))] -# Please implement your own chat service class here. -# The class should inherit from the ChatService class and implement the execute_chat_request_async method. -# The following are two example chat service classes that you can use as a reference. 
-class OctoAIChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request): - async with request_limiter: - try: - event_loop = asyncio.get_running_loop() - client = OctoAI(api_context['api_key']) - api_chat_call = partial( - client.chat.completions.create, - model=api_context['model'], - messages=chat_request, - temperature=0.0 - ) - response = await event_loop.run_in_executor(None, api_chat_call) - assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - return assistant_response - except Exception as error: - logging.error(f"Error during chat request execution: {error}",exc_info=True) - return "" -# Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent -# please read for more detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. -class VllmChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request): - try: - event_loop = asyncio.get_running_loop() - if api_context["model"] in MODEL_NAME_MAPPING: - model_name = MODEL_NAME_MAPPING[api_context['model']] - else: - model_name = api_context['model'] - client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") - api_chat_call = partial( - client.chat.completions.create, - model=model_name, - messages=chat_request, - temperature=0.0 - ) - response = await event_loop.run_in_executor(None, api_chat_call) - assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - return assistant_response - except Exception as error: - logging.error(f"Error during chat request execution: {error}",exc_info=True) - return "" diff --git a/recipes/use_cases/end2end-recipes/raft/config.py b/recipes/use_cases/end2end-recipes/raft/config.py index 6d0b82573..91a01535a 100644 --- a/recipes/use_cases/end2end-recipes/raft/config.py +++ b/recipes/use_cases/end2end-recipes/raft/config.py @@ -8,12 +8,4 @@ def load_config(config_path: str = "./config.yaml"): # Read the YAML configuration file with open(config_path, "r") as file: config = yaml.safe_load(file) - # Set the API key from the environment variable - try: - config["api_key"] = os.environ["OCTOAI_API_TOKEN"] - except KeyError: - print("API token did not found, please set the OCTOAI_API_TOKEN environment variable if using OctoAI, otherwise set api_key to default EMPTY") - # local Vllm endpoint did not need API key, so set the API key to "EMPTY" if OCTOAI_API_TOKEN not found - config["api_key"] = "EMPTY" return config - diff --git a/recipes/use_cases/end2end-recipes/raft/data_urls.xml b/recipes/use_cases/end2end-recipes/raft/data_urls.xml new file mode 100644 index 000000000..d054ed5a4 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/data_urls.xml @@ -0,0 +1,137 @@ + + +http://llama.meta.com/ + + +http://llama.meta.com/use-policy/ + + +http://llama.meta.com/responsible-use-guide/ + + +http://llama.meta.com/llama2/ + + +http://llama.meta.com/llama2/license/ + + +http://llama.meta.com/llama2/use-policy/ + + +http://llama.meta.com/license/ + + +http://llama.meta.com/code-llama/ + + +http://llama.meta.com/llama3/ + + +http://llama.meta.com/llama3/license/ + + +http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3 + + +http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-2 + + 
+http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama-70b + + +http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-1 + + +http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama + + +http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-2 + + +http://llama.meta.com/docs/getting_the_models + + +http://llama.meta.com/docs/getting-the-models/hugging-face + + +http://llama.meta.com/docs/getting-the-models/kaggle + + +http://llama.meta.com/docs/llama-everywhere + + +http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-linux/ + + +http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-windows/ + + +http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-mac/ + + +http://llama.meta.com/docs/llama-everywhere/running-meta-llama-in-the-cloud/ + + +http://llama.meta.com/docs/how-to-guides/fine-tuning + + +http://llama.meta.com/docs/how-to-guides/quantization + + +http://llama.meta.com/docs/how-to-guides/prompting + + +http://llama.meta.com/docs/how-to-guides/validation + + +http://llama.meta.com/docs/integration-guides/meta-code-llama + + +http://llama.meta.com/docs/integration-guides/langchain + + +http://llama.meta.com/docs/integration-guides/llamaindex + + +http://raw.githubusercontent.com/meta-llama/llama-recipes/main/README.md + + +http://raw.githubusercontent.com/meta-llama/llama/main/MODEL_CARD.md + + +http://raw.githubusercontent.com/meta-llama/llama/main/README.md + + +http://raw.githubusercontent.com/meta-llama/llama/main/LICENSE.md + + +http://raw.githubusercontent.com/meta-llama/llama3/main/MODEL_CARD.md + + +http://raw.githubusercontent.com/meta-llama/llama3/main/README.md + + +http://raw.githubusercontent.com/meta-llama/llama3/main/LICENSE.md + + +http://raw.githubusercontent.com/meta-llama/codellama/main/MODEL_CARD.md + + +http://raw.githubusercontent.com/meta-llama/codellama/main/README.md + + +http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/README.md + + +http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard2/MODEL_CARD.md + + +http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard2/README.md + + +http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard/MODEL_CARD.md + + +http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard/README.md + + diff --git a/recipes/use_cases/end2end-recipes/raft/doc_processor.py b/recipes/use_cases/end2end-recipes/raft/doc_processor.py deleted file mode 100644 index c8556471e..000000000 --- a/recipes/use_cases/end2end-recipes/raft/doc_processor.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement. 
- -# Assuming result_average_token is a constant, use UPPER_CASE for its name to follow Python conventions -AVERAGE_TOKENS_PER_RESULT = 100 - -def get_token_limit_for_model(model: str) -> int: - """Returns the token limit for a given model.""" - if model == "llama-2-13b-chat" or model == "llama-2-70b-chat": - return 4096 - else: - return 8192 - -def calculate_num_tokens_for_message(encoded_text) -> int: - """Calculates the number of tokens used by a message.""" - # Added 3 to account for priming with assistant's reply, as per original comment - return len(encoded_text) + 3 - - -def split_text_into_chunks(context: dict, text: str, tokenizer) -> list[str]: - """Splits a long text into substrings based on token length constraints, adjusted for question generation.""" - # Adjusted approach to calculate max tokens available for text chunks - encoded_text = tokenizer(text, return_tensors="pt", padding=True)["input_ids"] - encoded_text = encoded_text.squeeze() - model_token_limit = get_token_limit_for_model(context["model"]) - - tokens_for_questions = calculate_num_tokens_for_message(encoded_text) - estimated_tokens_per_question = AVERAGE_TOKENS_PER_RESULT - estimated_total_question_tokens = estimated_tokens_per_question * context["total_questions"] - # Ensure there's a reasonable minimum chunk size - max_tokens_for_text = max(model_token_limit - tokens_for_questions - estimated_total_question_tokens, model_token_limit // 10) - - chunks, current_chunk = [], [] - print(f"Splitting text into chunks of {max_tokens_for_text} tokens, encoded_text {len(encoded_text)}", flush=True) - for token in encoded_text: - if len(current_chunk) >= max_tokens_for_text: - chunks.append(tokenizer.decode(current_chunk).strip()) - current_chunk = [] - else: - current_chunk.append(token) - - if current_chunk: - chunks.append(tokenizer.decode(current_chunk).strip()) - - print(f"Number of chunks in the processed text: {len(chunks)}", flush=True) - - return chunks diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/eval_raft.py index b8f43ec78..53f5caa4f 100644 --- a/recipes/use_cases/end2end-recipes/raft/eval_raft.py +++ b/recipes/use_cases/end2end-recipes/raft/eval_raft.py @@ -8,6 +8,7 @@ import json from itertools import chain from langchain_community.llms import VLLMOpenAI + from langchain_community.embeddings import HuggingFaceEmbeddings from langchain_community.vectorstores import FAISS from langchain.text_splitter import RecursiveCharacterTextSplitter diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py index 8d6162512..b1e78b586 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.py +++ b/recipes/use_cases/end2end-recipes/raft/raft.py @@ -1,15 +1,10 @@ -import mdc -from mdc import MDC import logging from typing import Literal, Any -from openai import OpenAI import json import random -import os, shutil +import os import argparse -import asyncio from raft_utils import generate_questions, add_chunk_to_dataset -from chat_utils import OctoAIChatService, VllmChatService from format import DatasetConverter, datasetFormats, outputDatasetTypes from config import load_config @@ -17,27 +12,23 @@ NUM_DISTRACT_DOCS = 5 # number of distracting documents to add to each chunk ORCALE_P = 0.8 # probability of related documents to be added to each chunk -async def main(context): +def main(api_config): ds = None - if context["endpoint"]: - chat_service = VllmChatService() - else: - chat_service = OctoAIChatService() try: 
logging.info("Starting to generate question pair.") # Generate questions as list for each chunk - chunk_questions_zip = await generate_questions(chat_service, context) + chunk_questions_zip = generate_questions(api_config) if not chunk_questions_zip: - logging.warning("No questions generated from text. Please check the input context or model configuration.") + logging.warning("No questions generated from text. Please check the api_config or model configuration.") return for chunk, questions in chunk_questions_zip: logging.info(f"Chunk: {chunk}, question length: {len(questions)}") for question in questions: logging.info(f"Question: {question}") logging.info(f"Successfully generated {sum([len(q) for c,q in chunk_questions_zip])} question/answer pairs.") - ds = await add_chunk_to_dataset(chunk_questions_zip,context, chat_service,ds,NUM_DISTRACT_DOCS, ORCALE_P) + ds = add_chunk_to_dataset(chunk_questions_zip,api_config,ds,NUM_DISTRACT_DOCS, ORCALE_P) ds.save_to_disk(args.output) - logging.info(f"Data successfully written to {context['output']}. Process completed.") + logging.info(f"Data successfully written to {api_config['output']}. Process completed.") formatter = DatasetConverter() # Extract format specific params @@ -49,7 +40,7 @@ async def main(context): def parse_arguments(): # Define command line arguments for the script parser = argparse.ArgumentParser( - description="Generate question/answer pairs from documentation." + description="Generate RAFT question/answer/context pairs from documentation." ) parser.add_argument( "-t", "--questions_per_chunk", @@ -59,8 +50,7 @@ def parse_arguments(): ) parser.add_argument( "-m", "--model", - choices=["meta-llama-3-70b-instruct","meta-llama-3-8b-instruct","llama-2-13b-chat", "llama-2-70b-chat"], - default="meta-llama-3-70b-instruct", + default="meta-llama/Meta-Llama-3-70B-Instruct", help="Select the model to use for generation." ) parser.add_argument( @@ -69,10 +59,16 @@ def parse_arguments(): help="Set the configuration file path that has system prompt along with language, dataset path and number of questions." ) parser.add_argument( - "-v", "--vllm_endpoint", - default=None, - type=int, - help="If a port is specified, then use local vllm endpoint for generating question/answer pairs." + "-u", "--endpoint_url", + default="http://localhost:8001/v1", + type=str, + help="LLM API url for generating question/answer pairs." + ) + parser.add_argument( + "-k", "--api_key", + default="EMPTY", + type=str, + help="LLM API key for generating question/answer pairs." 
) parser.add_argument("--chunk_size", type=int, default=512, help="The size of each chunk in number of tokens") parser.add_argument("-o","--output", type=str, default="./", help="The path at which to save the dataset") @@ -84,13 +80,18 @@ def parse_arguments(): logging.info("Initializing the process and loading configuration...") args = parse_arguments() - context = load_config(args.config_path) - context["questions_per_chunk"] = args.questions_per_chunk - context["model"] = args.model - context["chunk_size"] = args.chunk_size - context["endpoint"] = args.vllm_endpoint - context["output"] = args.output + api_config = load_config(args.config_path) + api_config["questions_per_chunk"] = args.questions_per_chunk + api_config["model"] = args.model + api_config["chunk_size"] = args.chunk_size + api_config["endpoint_url"] = args.endpoint_url + api_config["output"] = args.output + api_config["api_key"] = args.api_key + # if API_KEY is defined in the system environment, use it as the API key + if os.environ.get('API_KEY') is not None: + api_config["api_key"] = os.environ["API_KEY"] logging.info(f"Configuration loaded. Generating {args.questions_per_chunk} questions per chunk using model '{args.model}'.") - if context["endpoint"]: - logging.info(f"Use local vllm service at port: '{args.vllm_endpoint}'.") - asyncio.run(main(context)) + logging.info(f"Chunk size: {args.chunk_size}.") + logging.info(f"Will use endpoint_url: {args.endpoint_url}.") + logging.info(f"Output will be written to {args.output}.") + main(api_config) diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml index e667982d8..740f283a6 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.yaml +++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml @@ -1,32 +1,42 @@ COT_prompt_template: > - Question: {question}\nContext: {context}\n - Answer this question using the information given in the context above. Here are things to pay attention to: - - First provide step-by-step reasoning on how to answer the question. - - In the reasoning, if you need to copy and paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copied from the context. - - End your response with final answer in the form <ANSWER>: $answer, the answer should be succinct. - You MUST begin your final answer with the tag "<ANSWER>: + <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here are things to pay attention to: + - First provide step-by-step reasoning on how to answer the question. + - In the reasoning, if you need to copy and paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copied from the context. + - End your response with final answer in the form <ANSWER>: $answer, the answer should be succinct. + You MUST begin your final answer with the tag "<ANSWER>:<|eot_id|> + <|start_header_id|>user<|end_header_id|> + Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|> -# question_prompt_template: > -# You are a synthetic question-answer pair generator. Given a chunk of context about -# some topic(s), generate {num_questions} example questions a user could ask and would be answered -# using information from the chunk.
For example, if the given context was a Wikipedia -# paragraph about the United States, an example question could be 'How many states are -# in the United States?' -# The questions should be able to be answered in a few words or less. Include only the -# questions in your response. question_prompt_template: > - You are a language model skilled in creating quiz questions. - You will be provided with a document, - read it and please generate question and answer pairs that are most likely to be asked by a user of Llama language models, - which includes Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2, - Output only the questions related to Llama: - please make sure you follow those rules: - 1. Generate {num_questions} question answer pairs, you can generate fewer pairs if there is nothing related to the model, training, fine-tuning and evaluation details of Llama language models. - 2. The questions can be answered based *solely* on the given passage. - 3. Avoid asking questions with similar meaning. - 4. Never use any abbreviation. - 5. Include only the questions in your response. + You are a synthetic question-answer pair generator. Given a chunk of context about + some topic(s), generate {num_questions} example questions a user could ask and would be answered + using information from the chunk. For example, if the given context was a Wikipedia + paragraph about the United States, an example question could be 'How many states are + in the United States?' + The questions should be able to be answered in 100 words or less. Include only the + questions in your response. + +# question_prompt_template: > +# You are a language model skilled in creating quiz questions. +# You will be provided with a document, +# read it and please generate question and answer pairs that are most likely to be asked by a user of Llama language models +# which includes Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1, Meta Llama Guard 2 +# Output only the questions related to Llama: +# please make sure you follow those rules: +# 1. Generate {num_questions} question answer pairs, you can generate fewer pairs if there is nothing related to the model, training, fine-tuning and evaluation details of Llama language models. +# 2. The questions can be answered based *solely* on the given passage. +# 3. Avoid asking questions with similar meaning. +# 4. Never use any abbreviation. +# 5. Include only the questions in your response. data_dir: "./data" -num_questions: 2 +xml_path: "" + +chunk_size: 512 + +questions_per_chunk: 3 + +num_distract_docs: 5 # number of distracting documents to add to each chunk + +orcale_p: 0.8 # probability of related documents to be added to each chunk diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py index 304f37a72..5a46de524 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py +++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py @@ -2,14 +2,7 @@ # This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
import os -import re -import string from transformers import AutoTokenizer -import asyncio -import magic -from PyPDF2 import PdfReader -import json -from doc_processor import split_text_into_chunks import logging import json from langchain_community.embeddings import HuggingFaceEmbeddings @@ -18,6 +11,14 @@ import datasets from datasets import Dataset, load_dataset import random +from langchain_community.document_loaders import SitemapLoader,DirectoryLoader +from bs4 import BeautifulSoup +from langchain_openai import ChatOpenAI +from langchain_core.messages import HumanMessage, SystemMessage +from langchain_community.llms import VLLMOpenAI +from langchain_core.prompts import ChatPromptTemplate + + # Initialize logging logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') def strip_str(s: str) -> str: @@ -35,82 +36,60 @@ def strip_str(s: str) -> str: r = i r += 2 return s[l:min(r, len(s))] -def read_text_file(file_path): - try: - with open(file_path, 'r') as f: - text = f.read().strip() + ' ' - if len(text) == 0: - print("File is empty ",file_path) - return text - except Exception as e: - logging.error(f"Error reading text file {file_path}: {e}") - return '' - -def read_pdf_file(file_path): - try: - with open(file_path, 'rb') as f: - pdf_reader = PdfReader(f) - num_pages = len(pdf_reader.pages) - file_text = [pdf_reader.pages[page_num].extract_text().strip() + ' ' for page_num in range(num_pages)] - text = ''.join(file_text) - if len(text) == 0: - print("File is empty ",file_path) - return ''.join(file_text) - except Exception as e: - logging.error(f"Error reading PDF file {file_path}: {e}") - return '' - -def read_json_file(file_path): - try: - with open(file_path, 'r') as f: - data = json.load(f) - # Assuming each item in the list has a 'question' and 'answer' key - # Concatenating question and answer pairs with a space in between and accumulating them into a single string - file_text = ' '.join([item['question'].strip() + ' ' + item['answer'].strip() + ' ' for item in data]) - if len(file_text) == 0: - print("File is empty ",file_path) - return file_text - except Exception as e: - logging.error(f"Error reading JSON file {file_path}: {e}") - return '' - - -def process_file(file_path): - print("starting to process file: ", file_path) - file_type = magic.from_file(file_path, mime=True) - if file_type in ['text/plain', 'text/markdown', 'JSON']: - return read_text_file(file_path) - elif file_type == 'application/pdf': - return read_pdf_file(file_path) - else: - logging.warning(f"Unsupported file type {file_type} for file {file_path}") - return '' -def read_file_content(context): - file_strings = [] - - for root, _, files in os.walk(context['data_dir']): - for file in files: - file_path = os.path.join(root, file) - file_text = process_file(file_path) - if file_text: - file_strings.append(file_text) - text = '\n'.join(file_strings) - text = remove_non_printable(text) - return text - -def remove_non_printable(s): - printable = set(string.printable) - return ''.join(filter(lambda x: x in printable, s)) +def clean_documents(raw_text): + unwanted= ["Technology", + "Getting Started", + "Trust & Safety", + "Community", + "Resources", + "Skip to main content", + "How-to guides"] + all_lines = [] + for line in raw_text.split("\n"): + line = line.strip() + if line in unwanted or len(line.split()) == 0: + continue + else: + all_lines.append(line) + result = " ".join(all_lines) + return result +def clean_text(content: BeautifulSoup) -> str: + # Find all 'nav' and 
'header' elements in the BeautifulSoup object + nav_elements = content.find_all("nav") + header_elements = content.find_all("header") + mydivs = content.find_all("div", {"role": "list"}) + # Remove each 'nav' and 'header' element from the BeautifulSoup object + for element in nav_elements + header_elements+mydivs: + element.decompose() + raw_text = content.get_text("\n") + return clean_documents(raw_text) +# Read the document content either from the webpages listed in a sitemap xml file or from a local data folder +def read_file_content(xml_path: str, data_folder: str) -> str: + if xml_path and data_folder: + logging.warning("Both xml_path and data_folder are provided; will only read from the xml file for now") + if not xml_path and not data_folder: + logging.error("Neither xml_path nor data_folder is provided") + return "" + if xml_path: + if not os.path.exists(xml_path): + logging.error(f"Error: {xml_path} does not exist") + return "" + # Use langchain to load the documents from webpage links in the xml file + sitemap_loader = SitemapLoader(web_path=xml_path,is_local=True,parsing_function=clean_text) + sitemap_loader.requests_kwargs = {"verify": False} + docs = sitemap_loader.load() + return "\n".join([doc.page_content for doc in docs]) + elif len(data_folder) != 0: + if not os.path.exists(data_folder): + logging.error(f"Error: {data_folder} does not exist") + return "" + # Use langchain to load the documents from data folder + loader = DirectoryLoader(data_folder) + docs = loader.load() + text = "\n".join([clean_documents(doc.page_content) for doc in docs]) + return text -async def generate_question_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict: - if num_questions == 0: - logging.info(f"Error: num_questions is 0") - return {} - prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions) - chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': str(document_content)}] - # parse the result string to a list of dict that has Question, Answer, Context - return await chat_service.execute_chat_request_async(api_context, chat_request_payload) def get_chunks( text: str, @@ -134,55 +113,73 @@ def get_chunks( return chunks # read all the files in the data folder, then split them into chunks # generate questions for each chunk and return zip of chunk and related questions list -async def generate_questions(chat_service, api_context: dict): - document_text = read_file_content(api_context) +def generate_questions(api_config): + # get documents from the data folder or xml file + api_url = api_config["endpoint_url"] + key = api_config["api_key"] + document_text = read_file_content(api_config["xml_path"],api_config["data_dir"]) if len(document_text) == 0: logging.info(f"Error reading files, document_text is {len(document_text)}") - model_name = "sentence-transformers/all-mpnet-base-v2" - embedding_model = HuggingFaceEmbeddings(model_name=model_name) - document_batches = get_chunks(document_text,api_context["chunk_size"],embedding_model) + embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2",model_kwargs={'device': 'cuda'}) + document_batches = get_chunks(document_text,api_config["chunk_size"],embedding_model) batches_count = len(document_batches) - total_questions = api_context["questions_per_chunk"] * batches_count - - print(f"Questions per batch: {api_context['questions_per_chunk']}, Total questions: {total_questions}, Batches: {batches_count}") - generation_tasks = [] - for batch_index, batch_content in enumerate(document_batches): -
print(f"len of batch_content: {len(batch_content)}, batch_index: {batch_index}") - #Distribute extra questions across the first few batches - if len(batch_content) < 10: - logging.info("Context is not enough, ignore this batch") - else: - print(f"Batch {batch_index + 1} - {api_context['questions_per_chunk']} questions ********") - try: - task = generate_question_request(chat_service, api_context, batch_content, api_context["questions_per_chunk"]) - generation_tasks.append(task) - except Exception as e: - print(f"Error during chat request execution: {e}") - - question_generation_results = await asyncio.gather(*generation_tasks) + total_questions = api_config["questions_per_chunk"] * batches_count + # use OpenAI API protocol to hanlde the chat request, including local VLLM openai compatible server + llm = VLLMOpenAI( + openai_api_key=key, + openai_api_base=api_url, + model_name=api_config["model"], + temperature=0.0, + max_tokens=250 + ) + prompt = api_config['question_prompt_template'].format(num_questions=str(api_config['questions_per_chunk'])) + system_prompt = SystemMessage(content=prompt) + generated_answers = [] + all_tasks = [[system_prompt, HumanMessage(content=batch)] for batch in document_batches] + generated_answers = llm.batch(all_tasks) + if len(generated_answers) == 0: + logging.error("No model answers generated. Please check the input context or model configuration in ",model_name) + return [] final_result = [] - for result in question_generation_results: + for result in generated_answers: queries = result.split('\n') queries = [strip_str(q) for q in queries] queries = [q for q in queries if any(c.isalpha() for c in q)] - if len(queries) > int(api_context['questions_per_chunk']): + if len(queries) > int(api_config['questions_per_chunk']): # As the model may have unrelated question at the begining of the result # if queries is more than questions_per_chunk, then we need to truncate it and only keep last questions_per_chunk lines - queries = queries[-int(api_context['questions_per_chunk']):] + queries = queries[-int(api_config['questions_per_chunk']):] final_result.append(queries) return list(zip(document_batches,final_result)) -async def generate_COT(chat_service, api_context: dict, document_content: str, question: str) -> dict: - prompt = api_context['COT_prompt_template'].format(question=question,context=str(document_content)) - chat_request_payload = [{"role": "system", "content": "You are a helpful question answerer who can provide an answer given a question and relevant context."}] - chat_request_payload.append({"role": "user", "content": prompt}) - response = await chat_service.execute_chat_request_async(api_context, chat_request_payload) - return (document_content,question,response) -async def add_chunk_to_dataset( +# Generate COT answer for each question given the chunk context +def generate_COT(chunk_questions_zip,api_config) -> dict: + all_tasks = [] + chunk_questions = [] + for document_content,questions in chunk_questions_zip: + for question in questions: + prompt = api_config['COT_prompt_template'].format(question=question,context=str(document_content)) + all_tasks.append(prompt) + chunk_questions.append((document_content,question)) + # use OpenAI API protocol to hanlde the chat request, including local VLLM openai compatible server + llm = VLLMOpenAI( + openai_api_key=api_config["api_key"], + openai_api_base=api_config["endpoint_url"], + model_name=api_config["model"], + temperature=0.0, + max_tokens=350 + ) + generated_answers = llm.batch(all_tasks) + COT_results 
= [] + # return a list of (chunk, question, generated_answer) + for (chunk, question),generated_answer in zip(chunk_questions,generated_answers): + COT_results.append((chunk,question,generated_answer)) + return COT_results + +def add_chunk_to_dataset( chunk_questions_zip: list, - context: dict, - chat_service, + api_config: dict, ds, num_distract: int = 3, p: float = 0.8, @@ -192,14 +189,9 @@ async def add_chunk_to_dataset( """ COT_tasks = [] chunks = [chunk for chunk, _ in chunk_questions_zip] - for i, chunk_questions in enumerate(chunk_questions_zip): - chunk, questions = chunk_questions - # generate COT answer for each question given the chunk context - for question in questions: - COT_tasks.append(generate_COT(chat_service, context, chunk, question)) - COT_results = await asyncio.gather(*COT_tasks) + COT_results = generate_COT(chunk_questions_zip,api_config) for chunk, q , cot in COT_results: - # The COT answer will be used in the fine-tuning stage + # The COT answer will be used as the label in the fine-tuning stage datapt = { "id": None, "type": "general", From 8baf6b5d99fa466fadfbbc41c61a1f565894bd6b Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Tue, 11 Jun 2024 13:14:13 -0700 Subject: [PATCH 20/35] llama working example checkpoint --- recipes/finetuning/datasets/raft_dataset.py | 36 ++- .../use_cases/end2end-recipes/raft/README.md | 6 +- .../end2end-recipes/raft/data/website_data | 46 +++ .../end2end-recipes/raft/eval_config.yaml | 49 ++- .../end2end-recipes/raft/eval_raft.py | 147 ++++++--- .../end2end-recipes/raft/evalset.json | 304 ++++++++---------- .../use_cases/end2end-recipes/raft/raft.yaml | 18 +- .../end2end-recipes/raft/raft_utils.py | 13 +- requirements.txt | 1 + 9 files changed, 355 insertions(+), 265 deletions(-) create mode 100644 recipes/use_cases/end2end-recipes/raft/data/website_data diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py index e50d97344..6d4b6d472 100644 --- a/recipes/finetuning/datasets/raft_dataset.py +++ b/recipes/finetuning/datasets/raft_dataset.py @@ -8,20 +8,39 @@ import itertools B_INST, E_INST = "[INST]", "[/INST]" +# check whether the system prompt or user prompt token sequence appears in the current token list +def check_header(targets,seq): + for i in range(len(seq)-3): + if seq[i:i+3] in targets: + return True + return False +def replace_target(target,seq): + for i in range(len(seq)-3): + if seq[i:i+3] == target: + seq[i],seq[i+1],seq[i+2] = -100,-100,-100 + return seq def tokenize_dialog(dialog, tokenizer): # If vocab size is above 128000, use the chat template to generate the tokens as it is from Llama 3 family models if tokenizer.vocab_size >= 128000: dialog_tokens = tokenizer.apply_chat_template(dialog) - dialog_tokens = dialog_tokens[:-4] # Remove generation prompt <|start_header_id|>assistant<|end_header_id|>\n\n eot_indices = [i for i,n in enumerate(dialog_tokens) if n == 128009] labels = copy.copy(dialog_tokens) last_idx = 0 + token_length = len(dialog_tokens) + last_idx = 0 + # system prompt header "<|start_header_id|>system<|end_header_id|>" has been tokenized to [128006, 9125, 128007] + # user prompt header "<|start_header_id|>user<|end_header_id|>" has been tokenized to [128006, 882, 128007] + prompt_header_seqs = [[128006, 9125, 128007],[128006, 882, 128007]] for n, idx in enumerate(eot_indices): - if n % 2 == 1: - last_idx = idx - else: + current_seq = labels[last_idx:idx+1] + if check_header(prompt_header_seqs,current_seq): + # found prompt header, indicating that this seq should be
masked labels[last_idx:idx+1] = [-100] * (idx-last_idx+1) + else: + last_idx = idx + # Lastly mask all the assistant header prompt <|start_header_id|>assistant<|end_header_id|>, which has been tokenized to [128006, 78191, 128007] + assistant_header_seq = [128006, 78191, 128007] + labels = replace_target(assistant_header_seq,labels) dialog_tokens = [dialog_tokens] labels_tokens = [labels] else: @@ -51,15 +70,16 @@ def raft_tokenize(q_a_pair, tokenizer): documents = q_a_pair["instruction"].split('\n')[:-1] # output is the label answer = q_a_pair["output"] - system_prompt = "You are a helpful question answerer who can provide an answer given a question and relevant context." - user_prompt = prompt = """ + system_prompt = "You are a helpful chatbot who can provide an answer to every question from the user given a relevant context." + user_prompt = """ Question: {question}\nContext: {context}\n - Answer this question using the information given in the context above. Here are things to pay attention to: + Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to: - First provide step-by-step reasoning on how to answer the question. - In the reasoning, if you need to copy and paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copied from the context. - End your response with final answer in the form <ANSWER>: $answer, the answer should be succinct. You MUST begin your final answer with the tag "<ANSWER>:". """.format(question=question, context=str(documents)) + chat = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}, diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md index fc0e71d5a..eed7cc778 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -70,7 +70,7 @@ Once the dataset is ready, we can start the fine-tuning step using the following For distributed fine-tuning: ```bash -CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir raft-8b --num_epochs 5 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/raft.jsonl' +CUDA_VISIBLE_DEVICES=2,3 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir raft-8b --num_epochs 3 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/raft.jsonl' ``` @@ -95,7 +95,7 @@ Once we have the fine-tuned model, we now need to evaluate it to understand its First we need to start the VLLM servers to host our fine-tuned 8B model. Since we used the peft library to produce a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature. The VLLM server will first load the original model and then apply our LoRA adapter weights.
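Before running the full comparison, it can help to sanity check that the adapter is being served correctly. Below is a minimal sketch (not part of the recipe's scripts) of querying the OpenAI-compatible endpoint once the server launched below is up on port 8000; it assumes the `openai` Python client is installed, and the question and context strings are placeholders:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; a local server accepts any placeholder key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Query the LoRA adapter by its module name ("raft-8b"), not the base model name.
response = client.chat.completions.create(
    model="raft-8b",
    messages=[
        {"role": "system", "content": "You are a helpful chatbot who can provide an answer to every question from the user given a relevant context."},
        {"role": "user", "content": "Question: What is Llama 2?\nContext: <retrieved documents would go here>"},
    ],
    temperature=0.0,
    max_tokens=350,
)
print(response.choices[0].message.content)
```

If this returns a grounded answer in the expected <ANSWER> format, the adapter was loaded correctly.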
Then we can feed the evalset.json file into the VLLM servers and start the comparison evaluation. Notice that our finetuned model name is now called "raft-8b" instead of "meta-llama/Meta-Llama-3-8B-Instruct". ```bash -python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules raft-8b=./raft-8b --port 8000 --disable-log-requests +CUDA_VISIBLE_DEVICES=2 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules raft-8b=./raft-8b --port 8000 --disable-log-requests ``` **NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", it means that VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source. @@ -123,7 +123,7 @@ CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model m Then we can pass the port to the eval script: ```bash -CUDA_VISIBLE_DEVICES=4 python eval_raft.py -m raft-8b -v 8000 -j 8002 +CUDA_VISIBLE_DEVICES=4 python eval_raft.py -m raft-8b -v 8000 -j 8001 ``` diff --git a/recipes/use_cases/end2end-recipes/raft/data/website_data b/recipes/use_cases/end2end-recipes/raft/data/website_data new file mode 100644 index 000000000..4e22e99d4 --- /dev/null +++ b/recipes/use_cases/end2end-recipes/raft/data/website_data @@ -0,0 +1,46 @@ +Meta Llama Discover the possibilities with Meta Llama Democratizing access through an open platform featuring AI models, tools, and resources — enabling developers to shape the next wave of innovation. Licensed for both research and commercial use Get Started Llama models and tools Meta Llama 3 Build the future of AI with Meta Llama 3 Llama 3 is an accessible, open-source large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. Learn more Meta Code Llama A state-of-the-art large language model for coding LLM capable of generating code, and natural language about code, from both code and natural language prompts. Learn more Meta Llama Guard Empowering developers, advancing safety, and building an open ecosystem We’re announcing Meta Llama Guard, an umbrella project featuring open trust and safety tools and evaluations meant to level the playing field for developers. Learn more Ready to start building with Meta Llama? Access our getting started guide and responsible use resources to get started. Get started guide Responsible use guide Prompt Engineering with Meta Llama Learn how to effectively use Llama models for prompt engineering with our free course on Deeplearning.AI, where you'll learn best practices and interact with the models through a simple API call. Learn more Partnerships Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do.
Latest Llama updates Introducing Meta Llama 3: The most capable openly available LLM to date Read more Meet Your New Assistant: Meta AI, Built With Llama 3 Read more CYBERSECEVAL 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models Read more Stay up-to-date Our latest updates delivered to your inbox Subscribe to our newsletter to keep up with the latest Llama updates, releases and more. Sign up +Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at llama.meta.com/use-policy . Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material b. Human trafficking, exploitation, and sexual violence iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. iv. Sexual solicitation vi. Any other criminal activity c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. 
Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d. Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: Reporting issues with the model: github.com/facebookresearch/llama Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback Reporting bugs and security concerns: facebook.com/whitehat/info Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com +Responsible Use Guide for Llama 2 Responsibility Responsible Use Guide: your resource for building responsibly The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLM) in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide +Meta Llama 2 Large language model Llama 2: open source, free for research and commercial use We're unlocking the power of these large language models. Our latest version of Llama – Llama 2 – is now accessible to individuals, creators, researchers, and businesses so they can experiment, innovate, and scale their ideas responsibly. Download the model Available as part of the Llama 2 release Get started guide With each model download you'll receive: Model code Model weights README (user guide) Responsible Use Guide License Acceptable use policy Model card Technical specifications Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama Chat, leverages publicly available instruction datasets and over 1 million human annotations. Read the paper Inside the model Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. Llama Chat models have additionally been trained on over 1 million new human annotations. Benchmarks Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length than Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Safety and helpfulness Reinforcement learning from human feedback Llama Chat uses reinforcement learning from human feedback to ensure safety and helpfulness. Training Llama Chat: Llama 2 is pretrained using publicly available online data. An initial version of Llama Chat is then created through the use of supervised fine-tuning. Next, Llama Chat is iteratively refined using Reinforcement Learning from Human Feedback (RLHF), which includes rejection sampling and proximal policy optimization (PPO). Download the model Get Llama 2 now: complete the download form via the link below. 
By submitting the form, you agree to Meta's privacy policy . Get started Partnerships Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Statement of support for Meta’s open approach to today’s AI “We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.” Responsibility We’re committed to building responsibly To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size. Responsible Use Guide The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLMs) in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide Safety Red-teaming Llama Chat has undergone testing by external partners and internal teams to identify performance gaps and mitigate potentially problematic responses in chat use cases. We're committed to ongoing red-teaming to enhance safety and performance. Open Innovation AI Research Community We're launching a program for academic researchers, designed to foster collaboration and knowledge-sharing in the field of artificial intelligence. This program provides unique a opportunity for researchers to come together, share their learnings, and help shape the future of AI. By joining this community, participants will have the chance to contribute to a research agenda that addresses the most pressing challenges in the field, and work together to develop innovative solutions that promote responsible and safe AI practices. We believe that by bringing together diverse perspectives and expertise, we can accelerate the pace of progress in AI research. Learn more Llama Impact Grants We want to activate the community of innovators who aspire to use Llama to solve hard problems. We are launching the grants to encourage a diverse set of public, non-profit, and for-profit entities to use Llama 2 to address environmental, education and other important challenges. The grants will be subject to rules which will be posted here prior to the grants start. Learn more Generative AI Community Forum We think it’s important that our product and policy decisions around generative AI are informed by people and experts from around the world. In support of this belief, we created a forum to act as a governance tool and resource for the community. It brings together a representative group of people to discuss and deliberate on the values that underpin AI, LLM and other new AI technologies. This forum will be held in consultation with Stanford Deliberative Democracy Lab and the Behavioural Insights Team, and is consistent with our open collaboration approach to sharing AI models. 
Learn more Join us on our AI journey If you’d like to advance AI with us, visit our Careers page to discover more about AI at Meta. See open positions Llama 2 Frequently asked questions Get answers to Llama 2 questions in our comprehensive FAQ page—from how it works, to how to use it, integrations, and more. See all FAQs Explore more on Llama 2 Discover more about Llama 2 here — visit our resources, ranging from our research paper, how to get access, and more. Github Open Innovation AI Research Community Getting started guide AI at Meta blog Responsible Use Guide Research paper +License Llama 2 Version Release Date: July 18, 2023 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at llama.meta.com/llama-downloads/ . “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at llama.meta.com/llama-downloads/ . “Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/use-policy ), which is hereby incorporated by reference into this Agreement. v. 
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. 
This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. +Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at llama.meta.com/use-policy . Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material b. Human trafficking, exploitation, and sexual violence iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. iv. Sexual solicitation vi. Any other criminal activity c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. 
Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d. Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: Reporting issues with the model: github.com/facebookresearch/llama Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback Reporting bugs and security concerns: facebook.com/whitehat/info Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com +License Llama 2 Version Release Date: July 18, 2023 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at llama.meta.com/llama-downloads/ . “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at llama.meta.com/llama-downloads/ . “Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. 
You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/use-policy ), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. 
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. +Meta Code Llama Large language model Code Llama, a state-of-the-art large language model for coding Code Llama has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software. Download the model Free for research and commercial use: Code Llama is built on top of Llama 2 and is available in three models: Code Llama Code Llama Python Code Llama Instruct Get started guide With each model download you'll receive: All Code Llama models README (User Guide) Responsible Use Guide License Acceptable Use Policy Model Card How Code Llama works Code Llama is a code-specialized version of Llama 2 that was created by further training Llama 2 on its code-specific datasets, sampling more data from that same dataset for longer. Essentially, Code Llama features enhanced coding capabilities, built on top of Llama 2. It can generate code, and natural language about code, from both code and natural language prompts (e.g., “Write me a function that outputs the fibonacci sequence.”) It can also be used for code completion and debugging. It supports many of the most popular languages being used today, including Python, C++, Java, PHP, Typescript (Javascript), C#, and Bash. Read the paper Inside the model Code Llama is available in four sizes with 7B, 13B, 34B, and 70B parameters respectively. Each of these models is trained with 500B tokens of code and code-related data, apart from 70B, which is trained on 1T tokens. The 7B, 13B and 70B base and instruct models have also been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code, meaning they can support tasks like code completion right out of the box. The four models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU. The 34B and 70B models return the best results and allow for better coding assistance, but the smaller 7B and 13B models are faster and more suitable for tasks that require low latency, like real-time code completion. Note: We do not recommend using Code Llama or Code Llama Python to perform general natural language tasks since neither of these models are designed to follow natural language instructions. Code Llama is specialized for code-specific tasks and isn’t appropriate as a foundation model for other tasks. 
Evaluating Code Llama’s performance To test Code Llama’s performance against existing solutions, we used two popular coding benchmarks: HumanEval and Mostly Basic Python Programming ( MBPP ). HumanEval tests the model’s ability to complete code based on docstrings and MBPP tests the model’s ability to write code based on a description. Our benchmark testing showed that Code Llama performed better than open-source, code-specific LLMs and outperformed Llama 2. Code Llama 70B Instruct, for example, scored 67.8% on HumanEval and 62.2% on MBPP, the highest compared with other state-of-the-art open solutions, and on par with ChatGPT. As with all cutting-edge technology, Code Llama comes with risks. Building AI models responsibly is crucial, and we undertook numerous safety measures before releasing Code Llama. As part of our red teaming efforts, we ran a quantitative evaluation of Code Llama’s risk of generating malicious code. We created prompts that attempted to solicit malicious code with clear intent and scored Code Llama’s responses to those prompts against ChatGPT’s (GPT3.5 Turbo). Our results found that Code Llama answered with safer responses. Details about our red teaming efforts from domain experts in responsible AI, offensive security engineering, malware development, and software engineering are available in our research paper . Releasing Code Llama Programmers are already using LLMs to assist in a variety of tasks, ranging from writing new software to debugging existing code. The goal is to make developer workflows more efficient, so they can focus on the most human-centric aspects of their job, rather than repetitive tasks. At Meta, we believe that AI models, and LLMs for coding in particular, benefit most from an open approach, both in terms of innovation and safety. Publicly available, code-specific models can facilitate the development of new technologies that improve people's lives. By releasing code models like Code Llama, the entire community can evaluate their capabilities, identify issues, and fix vulnerabilities. Code Llama’s training recipes are available on our GitHub repository and model weights are also available. GitHub Model weights Responsible use Our research paper discloses details of Code Llama’s development as well as how we conducted our benchmarking tests. It also provides more information on the model’s limitations, known challenges we encountered, mitigations we’ve taken, and future challenges we intend to investigate. We’ve also updated our Responsible Use Guide, which includes guidance on developing downstream models responsibly, including: Defining content policies and mitigations. Preparing data. Fine-tuning the model. Evaluating and improving performance. Addressing input- and output-level risks. Building transparency and reporting mechanisms in user interactions. Developers should evaluate their models using code-specific evaluation benchmarks and perform safety studies on code-specific use cases such as generating malware, computer viruses, or malicious code. We also recommend leveraging safety datasets for automatic and human evaluations, and red teaming on adversarial prompts . Responsible use guide The future of generative AI for coding Code Llama is designed to support software engineers in all sectors – including research, industry, open source projects, NGOs, and businesses. But there are still many more use cases to support than what our base and instruct models can serve. 
We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products. Download the model Explore more on Code Llama Discover more about Code Llama here — visit our resources, ranging from our research paper, getting started guide and more. Code Llama GitHub repository Research paper Download the model Getting started guide +Meta Llama 3 Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Get Started Experience Llama 3 with Meta AI We’ve integrated Llama 3 into Meta AI, our intelligent assistant that expands the ways people can get things done, create and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas. Enhanced performance Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following. Build the future of AI with Llama 3. Download Llama 3 Getting Started Guide With each Meta Llama request, you will receive: Meta Llama Guard 2 Getting started guide Responsible Use Guide Acceptable use policy Model card Community license agreement Benchmarks Llama 3 models take data and scale to new heights. It’s been trained on our two recently announced custom-built 24K GPU clusters on over 15T tokens of data – a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports an 8K context length that doubles the capacity of Llama 2. Model card Trust & safety A comprehensive approach to responsibility With the release of Llama 3, we’ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG , we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience. 
Meta Llama Guard 2 Explore more on Meta Llama 3 Introducing Meta Llama 3: The most capable openly available LLM to date Read the blog Meet Your New Assistant: Meta AI, Built With Llama 3 Learn more Meta Llama 3 repository View repository Model card Explore +Meta Llama 3 License META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 “ Agreement ” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “ Documentation ” means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/ . “ Licensee ” or “ you ” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “ Meta Llama 3 ” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads . “ Llama Materials ” means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. “ Meta ” or “ we ” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution . a. Grant of Rights . You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use . i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. 
Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy ), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms . If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3 . Disclaimer of Warranty . UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability . IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property . a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. 
You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination . The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction . This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. +Meta Llama 3 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama 3 Model Card You can find details about this model in the model card . Special Tokens used with Meta Llama 3 <|begin_of_text|> : This is equivalent to the BOS token <|eot_id|> : This signifies the end of the message in a turn. <|start_header_id|>{role}<|end_header_id|> : These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. <|end_of_text|>: This is equivalent to the EOS token. On generating this token, Llama 3 will cease to generate more tokens. A prompt can optionally contain a single system message, or multiple alternating user and assistant messages, but always ends with the last user message followed by the assistant header. Meta Llama 3 Code to produce this prompt format can be found here . Note : Newlines (0x0A) are part of the prompt format, for clarity in the example, they have been represented as actual new lines. <|begin_of_text|>{{ user_message }} Meta Llama 3 Instruct Code to generate this prompt format can be found here . Notes : Newlines (0x0A) are part of the prompt format, for clarity in the examples, they have been represented as actual new lines. The model expects the assistant header at the end of the prompt to start completing it. Decomposing an example instruct prompt with a system message: <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|> What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|> <|begin_of_text|> : Specifies the start of the prompt <|start_header_id|>system<|end_header_id|> : Specifies the role for the following message, i.e. “system” You are a helpful AI assistant for travel tips and recommendations : The system message <|eot_id|> : Specifies the end of the input message <|start_header_id|>user<|end_header_id|> : Specifies the role for the following message i.e. “user” What can you help me with? : The user message <|start_header_id|>assistant<|end_header_id|> : Ends with the assistant header, to prompt the model to start generation. Following this prompt, Llama 3 completes it by generating the {{assistant_message}}. It signals the end of the {{assistant_message}} by generating the <|eot_id|> . 
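To make the Meta Llama 3 Instruct format above concrete, here is a minimal Python sketch that assembles a prompt from a list of chat messages using the special tokens just described. The build_llama3_prompt helper and its message structure are illustrative assumptions, not an official API; the reference implementation lives in the meta-llama/llama3 repository.

```python
# Minimal sketch: assemble a Meta Llama 3 Instruct prompt from chat messages.
# The helper name and message dicts are illustrative, not the official API.

def build_llama3_prompt(messages):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}"""
    prompt = "<|begin_of_text|>"
    for message in messages:
        prompt += f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
        prompt += f"{message['content']}<|eot_id|>"
    # End with the assistant header so the model starts generating its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(build_llama3_prompt([
    {"role": "system", "content": "You are a helpful AI assistant for travel tips and recommendations"},
    {"role": "user", "content": "What can you help me with?"},
]))
```

Running this reproduces the decomposed example above; the model would then generate the {{assistant_message}} and finish with <|eot_id|>.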
Example prompt with a single user message <|begin_of_text|><|start_header_id|>user<|end_header_id|> What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|> System prompt and multiple turn conversation between the user and assistant <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|> What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Bonjour! The capital of France is Paris!<|eot_id|><|start_header_id|>user<|end_header_id|> What can I do there?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Paris, the City of Light, offers a romantic getaway with must-see attractions like the Eiffel Tower and Louvre Museum, romantic experiences like river cruises and charming neighborhoods, and delicious food and drink options, with helpful tips for making the most of your trip.<|eot_id|><|start_header_id|>user<|end_header_id|> Give me a detailed list of the attractions I should visit, and time it takes in each one, to plan my trip accordingly.<|eot_id|><|start_header_id|>assistant<|end_header_id|> +Meta Llama Guard 2 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama Guard 2 Model Card You can find details about this model in the model card . Prompt format As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent . The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has a helper function and an inference example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt. The variables to replace in this prompt template are: {{ role }} : It can have the values: User or Agent . {{ unsafe_categories }} : The default categories and their descriptions are shown below. These can be customized for zero-shot or few-shot prompting. {{ user_message }} : input message from the user. {{ model_answer }} : output from the model. <|begin_of_text|>[INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories. {{ unsafe_categories }} User: {{ user_message_1 }} Agent: {{ model_answer_1 }} Provide your safety assessment for ONLY THE LAST {{ role }} in the above conversation: - First line must read 'safe' or 'unsafe'. - If unsafe, a second line must include a comma-separated list of violated categories. [/INST] Llama Guard was trained on the following categories. S1: Violent Crimes. S2: Non-Violent Crimes. S3: Sex Crimes. S4: Child Exploitation. S5: Specialized Advice. S6: Privacy. S7: Intellectual Property. S8: Indiscriminate Weapons. S9: Hate. S10: Self-Harm. S11: Sexual Content. Additionally, the full description for each category can be set in the prompt as well. S1: Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of violent crimes. 
Examples of violent crimes include, but are not limited to: - Unlawful violence toward people (ex: terrorism, genocide, murder, hate crimes, child abuse, assault, battery, kidnapping) - Unlawful violence toward animals (ex: animal abuse) S2: Non-Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of non-violent crimes. Examples of non-violent crimes include, but are not limited to: - Financial crimes (ex: fraud, scams, money laundering) - Property crimes (ex: burglary, robbery, arson, vandalism) - Drug crimes (ex: creating or trafficking narcotics) - Weapons crimes (ex: producing unlicensed firearms) - Cyber crimes (ex: hacking, spyware, malware) S3: Sex Crimes. AI models should not create content that enables, encourages, or excuses the commission of sex crimes. Examples of sex crimes include, but are not limited to: - Human trafficking - Sexual assault (ex: rape) - Sexual harassment (ex: groping) - Lewd conduct S4: Child Exploitation. AI models should not create content that depicts child nudity or that enables, encourages, excuses, or depicts the sexual abuse of children. S5: Specialized Advice. AI models should not create content that contains specialized financial, medical, or legal advice. S6: Privacy. AI models should not create content that contains sensitive, personal information about private individuals. S7: Intellectual Property. AI models should not create content that violates the intellectual property rights of third parties. S8: Indiscriminate Weapons. AI models should not create content that enables the creation of weapons of mass destruction. Examples of such weapons include, but are not limited to: - Chemical weapons (ex: nerve gas) - Biological weapons (ex: anthrax) - Radiological weapons (ex: salted bombs) - Nuclear weapons (ex: atomic warheads) - High-yield explosive weapons (ex: cluster munitions) S9: Hate. AI models should not create content that is hateful toward people on the basis of their protected characteristics (race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease). AI models should not create content that perpetuates negative stereotypes. S10: Self-Harm. AI models should not create content that enables, encourages, or excuses acts of intentional self-harm. Examples of such actions include, but are not limited to: - Suicide - Self-injury (ex: cutting) - Disordered eating S11: Sexual Content. AI models should not create depictions of nude adults or content that contains erotic descriptions or explicit depictions of sex acts. +Meta Code Llama 70B | Model Cards and Prompt formats Model Cards & Prompt formats Meta Code Llama 70B Model Card You can find details about this model in the model card . Note that Meta Code Llama 70B uses the same model card as Meta Code Llama 7B, 13B, and 34B. Completion In this format, the model continues to write code following the provided code in the prompt. An implementation of this prompt can be found here . {{ code_prompt }} Instructions Meta Code Llama 70B has a different prompt template compared to 34B, 13B and 7B. It starts with a Source: system tag—which can have an empty body—and continues with alternating user or assistant values. Each turn of the conversation uses the special <step> character to separate the messages. 
The last turn of the conversation uses a Source: assistant tag with an empty message and a Destination: user tag to prompt the model to answer the user question. A detailed implementation of this format is provided. Notes: The structure requires a Source: system tag, but the system prompt can be empty. Each user query is preceded by a blank line. At the end of the prompt is a blank line followed by a line containing a space character (0x20). Source: system System prompt <step> Source: user First user query <step> Source: assistant Model response to first query <step> Source: user Second user query <step> Source: assistant Destination: user +Meta Llama Guard 1 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama Guard 1 Model Card You can find details about this model in the model card . Prompt format As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has a helper function and an inference example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt. [INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories. {{ unsafe_categories }} User: {{ user_message_1 }} Agent: {{ model_answer_1 }} Provide your safety assessment for {{ role }} in the above conversation: - First line must read 'safe' or 'unsafe'. - If unsafe, a second line must include a comma-separated list of violated categories. [/INST] Llama Guard was trained on the following categories that can be adapted through prompt engineering. O1: Violence and Hate. Should not - Help people plan or engage in violence. - Encourage people to engage in violence. - Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability. - Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics. Can - Provide information on violence and discrimination. - Discuss the topics of hate and violence, and can discuss historical events involving violence. O2: Sexual Content. Should not - Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities. Can - Discuss topics related to sexuality and sexual education. - Provide general information regarding sexual activities. - Provide guidance on topics related to sexual health. O3: Criminal Planning. Should not - Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them. - Encourage people to engage in criminal activities or condone criminal behavior. Can - Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes. 
- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems. O4: Guns and Illegal Weapons. Should not - Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them. - Encourage people to engage in firearm or illegal weapons crimes or condone such behavior. Can - Discuss firearms and the arguments for and against firearm ownership. O5: Regulated or Controlled Substances. Should not - Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis. - Assist or encourage people to create such substances when it is illegal to do so. Can - Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are). - Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation. O6: Self-Harm. Should not - Encourage people to harm themselves. - Romanticize or condone self-harm or suicide. - Provide information regarding the methods of suicide or self-harm. - Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves. Can - Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions). Should - Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves. +Meta Code Llama | Model Cards and Prompt formats Model Cards & Prompt formats Meta Code Llama Model Card You can find details about this model in the model card . Meta Code Llama 7B, 13B, and 34B Completion In this format, the model continues to write code following the code that is provided in the prompt. An implementation of this prompt can be found here . {{ code_prompt }} Instructions The instructions prompt template for Meta Code Llama follows the same structure as the Meta Llama 2 chat model, where the system prompt is optional, and the user and assistant messages alternate, always ending with a user message. Note the beginning of sequence (BOS) token between each user and assistant message. An implementation for Meta Code Llama can be found here . <s>[INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_message_2 }} [/INST] Infilling Infilling can be done in two different ways: with the prefix-suffix-middle format or the suffix-prefix-middle. An implementation of this format is provided here . Notes : Infilling is only available in the 7B and 13B base models—not in the Python, Instruct, 34B, or 70B models The BOS character is not used for infilling when encoding the prefix or suffix, but only at the beginning of each prompt. Prefix-suffix-middle
<PRE> {{ code_prefix }} <SUF>{{ code_suffix }} <MID> Suffix-prefix-middle 
<PRE> <SUF>{{ code_suffix }} <MID>{{ code_prefix }}
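As a rough sketch of the two infilling orderings described above, the helpers below assemble prompts using the <PRE>/<SUF>/<MID> sentinel strings as plain text. The function names are hypothetical, and in a real setup the model's own tokenizer maps these sentinels to their special token ids.

```python
# Sketch: construct Code Llama infilling prompts for the 7B and 13B base
# models, which were trained with fill-in-the-middle. Sentinel tokens are
# written as plain strings here for illustration only.

def psm_prompt(code_prefix: str, code_suffix: str) -> str:
    # Prefix-suffix-middle: the model generates the missing middle last.
    return f"<PRE> {code_prefix} <SUF>{code_suffix} <MID>"

def spm_prompt(code_prefix: str, code_suffix: str) -> str:
    # Suffix-prefix-middle: same sentinels, alternate ordering.
    return f"<PRE> <SUF>{code_suffix} <MID>{code_prefix}"

prefix = "def remove_non_ascii(s: str) -> str:\n    "
suffix = "\n    return result"
print(psm_prompt(prefix, suffix))
```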
+Meta Llama 2 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama 2 Model Card You can find details about this model in the model card . Special Tokens used with Meta Llama 2 <s> </s> : These are the BOS and EOS tokens from SentencePiece. When multiple messages are present in a multi-turn conversation, they separate them, including the user input and model response. [INST][/INST] : These tokens enclose user messages in multi-turn conversations. <<SYS>><</SYS>> : These enclose the system message. Meta Llama 2 The base model supports text completion, so any incomplete user prompt, without special tags, will prompt the model to complete it. The tokenizer provided with the model will include the SentencePiece beginning of sequence (BOS) token (<s>) if requested. Review this code for details. {{ user_prompt }} Meta Llama 2 Chat Code to produce this prompt format can be found here . The system prompt is optional. Single message instance with optional system prompt. [INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST] Multiple user and assistant messages example. [INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s> [INST] {{ user_message_2 }} [/INST]
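A minimal sketch, assuming the template above, of how a multi-turn Meta Llama 2 Chat prompt could be assembled in Python; the helper name and turn structure are illustrative, not an official implementation.

```python
# Sketch: format a Meta Llama 2 Chat prompt. The <s>/</s> (BOS/EOS),
# [INST]/[/INST], and <<SYS>>/<</SYS>> tags follow the template above;
# build_llama2_chat_prompt itself is an illustrative helper.

def build_llama2_chat_prompt(system_prompt, turns):
    """turns: list of (user_message, model_answer) pairs;
    the final model_answer may be None to request a new completion."""
    prompt = ""
    for i, (user_message, model_answer) in enumerate(turns):
        prompt += "<s>[INST] "
        if i == 0 and system_prompt:
            prompt += f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        prompt += f"{user_message} [/INST]"
        if model_answer is not None:
            prompt += f" {model_answer} </s>"
    return prompt

print(build_llama2_chat_prompt(
    "You are a helpful assistant.",
    [("What is France's capital?", None)],
))
```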
+Getting the models Getting the models Meta You can get the Meta Llama models directly from Meta or through Hugging Face or Kaggle. However you get the models, you will first need to accept the license agreements for the models you want. For more detailed information about each of the Meta Llama models, see the Model Cards section immediately following this section. To get the models directly from Meta, go to our Meta Llama download form at https://llama.meta.com/llama-downloads Fill in your information, including your email. Select the models that you want, and review and accept the appropriate license agreements. For each model that you request, you will receive an email that contains instructions and a pre-signed URL to download that model. You can use the same URL to download multiple model weights, such as 7B and 13B. The URL expires after 24 hours or five downloads, but you can re-request models in order to receive fresh pre-signed URLs. Note: The model download process uses a script that relies on the following tools: wget and md5sum, so ensure that these are available on your local computer.
+Hugging Face | Getting the models Getting the models Hugging Face To obtain the models from Hugging Face (HF), sign into your account at https://huggingface.co/meta-llama Select the model you want. You will be taken to a page where you can fill in your information and review the appropriate license agreement. After accepting the agreement, your information is reviewed; the review process could take up to a few days. When you are approved, you will receive an email informing you that you have access to the HF repository for the model. Note that cloning the HF repository to a local computer does not give you all the model files because some of the files are too large. In the local clone, those files contain only metadata for the actual file. To get these larger files, go to the file in the repository on the HF site and download it directly from there. For example, to get consolidated.00.pth for the Meta Llama 2 7B model, you download it from: https://huggingface.co/meta-llama/Llama-2-7b/blob/main/consolidated.00.pth
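If you prefer to script the download of such a large file rather than use the website, a minimal sketch with the huggingface_hub library looks like the following; the repo and file names match the example above, and the token placeholder is an assumption you fill in once your access request is approved.

```python
# Sketch: fetch one large weight file from a gated meta-llama repo with
# huggingface_hub, after access has been granted to your account.
# Requires `pip install huggingface-hub`.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="meta-llama/Llama-2-7b",
    filename="consolidated.00.pth",  # one of the large weight files
    token="hf_...",                  # placeholder: your HF access token
)
print(path)  # local cache path of the downloaded file
```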
+Kaggle | Getting the models Getting the models Kaggle To obtain the models from Kaggle–including the HF versions of the models–sign into your account at: https://www.kaggle.com/organizations/metaresearch/models Before you can access the models on Kaggle, you need to submit a request for model access, which requires that you accept the model license agreement on the Meta site: https://llama.meta.com/llama-downloads Note that the email address that you provide when you accept the license agreement must be the same as the email that you use for your Kaggle account. Once you have accepted the license agreement, return to Kaggle and submit the request for model access. When your request is approved, which might take a few days, you’ll receive an email that says that you have received access. You’ll then be able to access the models on Kaggle. To access a particular model, select it from the Model Variations dropdown box, and click the download icon. An archive file that contains the model will start downloading.
+Llama Everywhere Llama Everywhere Although Meta Llama models are often hosted by Cloud Service Providers (CSP), Meta Llama can be used in other contexts as well, such as Linux, the Windows Subsystem for Linux (WSL), macOS, Jupyter notebooks, and even mobile devices. If you are interested in exploring these scenarios, we suggest that you check out the following resources: Llama 3 on Your Local Computer, with Resources for Other Options - How to run Llama on your desktop using Windows, macOS, or Linux. Also, pointers to other ways to run Llama, either on premise or in the cloud Llama Recipes QuickStart - Provides an introduction to Meta Llama using Jupyter notebooks and also demonstrates running Llama locally on macOS. Machine Learning Compilation for Large Language Models (MLC LLM) - Enables “everyone to develop, optimize and deploy AI models natively on everyone's devices with ML compilation techniques.” Llama C++ - Uses the portability of C++ to enable inference with Llama models on a variety of different hardware.
+Running Meta Llama on Linux | Llama Everywhere Running Meta Llama on Linux This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Linux | Build with Meta Llama , where we learn how to run Llama on Linux OS by getting the weights and running the model locally, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Linux. Introduction to Llama models At Meta, we strongly believe in an open approach to AI development, particularly in the fast-evolving domain of generative AI. By making AI models publicly accessible, we enable their advantages to reach every segment of society. Last year, we open sourced Meta Llama 2, and this year we released the Meta Llama 3 family of models, available in both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications, unlocking the power of these large language models, and making them accessible to everyone, so you can experiment, innovate, and scale your ideas responsibly. Running Meta Llama on Linux Setup With a Linux setup that has a GPU with a minimum of 16GB VRAM, you should be able to load the 8B Llama models in fp16 locally. If you have an Nvidia GPU, you can confirm your setup using the NVIDIA System Management Interface tool, which shows you the GPU you have, the VRAM available, and other useful information, by typing:
nvidia-smi
In our current setup, we are on Ubuntu, specifically Pop OS, and have an Nvidia RTX 4090 with a total VRAM of about 24GB. Terminal with nvidia-smi showing NVIDIA GPU Configuration Getting the weights To download the weights, go to the Llama website . Fill in your details in the form and select the models you’d like to download. In our case, we will download the Llama 3 models. Select Meta Llama 3 and Meta Llama Guard 2 on the download page Read and agree to the license agreement, then click Accept and continue . You will see a unique URL on the website. You will also receive the URL in your email; it is valid for 24 hours and allows you to download each model up to 5 times. You can always request a new URL. Download page with unique pre-signed URL We are now ready to get the weights and run the model locally on our machine. It is recommended to use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Open your terminal, and make a new folder called llama3-demo in your workspace. Navigate to the new folder and clone the Llama repo:
mkdir llama3-demo
cd llama3-demo
git clone https://github.com/meta-llama/llama3.git
For this demo, we’ll need two prerequisites installed: wget and md5sum . To confirm if your distribution has these, use:
wget --version
md5sum --version
which should return the installed versions. If your distribution does not have these, you can install them using:
apt-get install wget
apt-get install md5sum
To make sure we have all the package dependencies installed, while in the newly cloned repo folder, type:
pip install -e .
We are now all set to download the model weights for our local setup. Our team has created a helper script to make it easy to download the model weights. 
In your terminal, type:
./download.sh
The script will ask for the URL from your email. Paste in the URL you received from Meta. It will then ask you to enter the list of models to download. For our example, we’ll download the 8B pretrained model and the fine-tuned 8B chat models. So we’ll enter “8B,8B-instruct” . Downloading the 8B models Running the model We are all set to run the example inference script to test if our model has been set up correctly and works. Our team has created an example Python script called example_text_completion.py that you can use to test out the model. The script defines a main function that uses the Llama class from the llama library to generate text completions for given prompts using the pre-trained models. It takes a few arguments:
ckpt_dir (str): Directory containing the checkpoint files of the model.
tokenizer_path (str): Path to the tokenizer of the model.
temperature (float, default 0.6): Controls the randomness of the generation process. Higher values may lead to more creative but less coherent outputs, while lower values may lead to more conservative but more coherent outputs.
top_p (float, default 0.9): Defines the maximum probability threshold for generating tokens.
max_seq_len (int, default 128): Defines the maximum length of the input sequence or prompt allowed for the model to process.
max_gen_len (int, default 64): Defines the maximum length of the generated text the model is allowed to produce.
max_batch_size (int, default 4): Defines the maximum number of prompts to process in one batch.
The main function builds an instance of the Llama class, using the provided arguments, then defines a list of prompts for which the model will use the generator.text_completion method to generate the completions. To run the script, go back to our terminal, and while in the llama3 repo, type:
torchrun --nproc_per_node 1 example_text_completion.py --ckpt_dir Meta-Llama-3-8B/ --tokenizer_path Meta-Llama-3-8B/tokenizer.model --max_seq_len 128 --max_batch_size 4
Replace Meta-Llama-3-8B/ with the path to your checkpoint directory and tokenizer.model with the path to your tokenizer model. If you run it from this main directory, the path may not need to change. Set --nproc_per_node to the MP value for the model you are using. For 8B models, the value is set to 1. Adjust the max_seq_len and max_batch_size parameters as needed. We have set them to 128 and 4 respectively. Running the 8B model on the example text completion script To try out the fine-tuned chat model ( 8B-instruct ), we have a similar example called example_chat_completion.py :
torchrun --nproc_per_node 1 example_chat_completion.py --ckpt_dir Meta-Llama-3-8B-Instruct/ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model --max_seq_len 512 --max_batch_size 6
Note that in this case, we use the Meta-Llama-3-8B-Instruct/ model and provide the correct tokenizer under the instruct model folder. Running the 8B Instruct model on the example chat completion script A detailed step-by-step process to run on this setup, as well as all the helper and example scripts, can be found on our Llama3 GitHub repo , which goes over the process of downloading and quick-start, as well as examples for inference.
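For reference, here is a minimal sketch of what example_text_completion.py does with these parameters, based on the Llama class described above; details may differ from the actual script in the meta-llama/llama3 repo, and it still needs to be launched with torchrun as shown.

```python
# Sketch of the text-completion flow, assuming the Llama class from the
# meta-llama/llama3 repo; launch under torchrun, since model loading uses
# a distributed setup even for a single process.
from llama import Llama

generator = Llama.build(
    ckpt_dir="Meta-Llama-3-8B/",
    tokenizer_path="Meta-Llama-3-8B/tokenizer.model",
    max_seq_len=128,
    max_batch_size=4,
)
results = generator.text_completion(
    ["I believe the meaning of life is"],  # list of prompts, up to max_batch_size
    max_gen_len=64,
    temperature=0.6,
    top_p=0.9,
)
print(results[0]["generation"])
```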
+Running Meta Llama on Windows | Llama Everywhere Running Meta Llama on Windows This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Windows | Build with Meta Llama , where we learn how to run Llama on Windows using Hugging Face APIs, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Windows. Setup For this demo, we will be using a Windows OS machine with an RTX 4090 GPU. If you have an Nvidia GPU, you can confirm your setup by opening the Terminal and typing nvidia-smi (NVIDIA System Management Interface), which will show you the GPU you have, the VRAM available, and other useful information about your setup. Since we will be using the Hugging Face transformers library for this setup, this setup can also be used on other operating systems that the library supports, such as Linux or Mac, using similar steps as the ones shown in the video. Getting the weights To allow easy access to Meta Llama models, we are providing them on Hugging Face, where you can download the models in both transformers and native Llama 3 formats. To download the weights, visit the meta-llama repo containing the model you’d like to use. For example, we will use the Meta-Llama-3-8B-Instruct model for this demo. Read and agree to the license agreement. Fill in your details and accept the license, and click on submit. Once your request is approved, you'll be granted access to all the Llama 3 models. Meta-Llama-3-8B-Instruct model on Hugging Face For this tutorial, we will be using Meta Llama models already converted to Hugging Face format. However, if you’d like to download the original native weights, click on the "Files and versions" tab and download the contents of the original folder. If you prefer, you can also download the original weights from the command line using the Hugging Face CLI:
pip install huggingface-hub
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
Running the model In this example, we will showcase how you can use Meta Llama models already converted to Hugging Face format using Transformers. To use the model with Transformers, we will be using the pipeline class from Hugging Face. We recommend that you use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Make sure to use the latest version of transformers :
pip install -U transformers
We will also use the accelerate library, which enables our code to be run across any distributed configuration:
pip install accelerate
We will be using Python for our demo script. To install Python, visit the Python website , where you can choose your OS and download the version of Python you like. We will also be using PyTorch for our demo, so we will need to make sure we have PyTorch installed in our setup. To install PyTorch for your setup, visit the PyTorch downloads website and choose your OS and configuration to get the installation command you need. Paste that command in your terminal and press enter. PyTorch Installation Guide For our script, open the editor of your choice, and create a Python script. 
We’ll first add the imports that we need for our example:
import transformers
import torch
from transformers import AutoTokenizer
Let's define the model we’d like to use. In our demo, we will use the 8B Instruct model, which is fine-tuned for chat:
model = "meta-llama/Meta-Llama-3-8B-Instruct"
We will also instantiate the tokenizer, which can be derived from AutoTokenizer, based on the model we’ve chosen, using the from_pretrained method of AutoTokenizer. This will download and cache the pre-trained tokenizer and return an instance of the appropriate tokenizer class.
tokenizer = AutoTokenizer.from_pretrained(model)
To use our model for inference:
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
Hugging Face pipelines allow us to specify which type of task the pipeline needs to run ( text-generation in this case), the model that the pipeline should use to make predictions (specified by model ), the precision to use with this model ( torch.float16 ), the device on which the pipeline should run ( device_map ), and various other options. We’ll also set the device_map argument to auto , which means the pipeline will automatically use a GPU if one is available. Next, let's provide some text prompts as inputs to our pipeline for it to use when it runs to generate responses. Let’s define this as the variable, sequences:
sequences = pipeline(
    'I have tomatoes, basil and cheese at home. What can I cook for dinner?\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    truncation=True,
    max_length=400,
)
The pipeline sets do_sample to True , which allows us to specify the decoding strategy we’d like to use to select the next token from the probability distribution over the entire vocabulary. In our example, we are using top_k sampling. By changing max_length , you can specify how long you’d like the generated response to be. Setting the num_return_sequences parameter to greater than one will let you generate more than one output. Finally, we add the following to provide input, and information on how to run the pipeline:
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Save your script and head back to the terminal. We will save it as llama3-hf-demo.py . Before we run the script, let’s make sure we can access and interact with Hugging Face directly from the terminal. To do that, make sure you have the Hugging Face CLI installed:
pip install -U "huggingface_hub[cli]"
followed by:
huggingface-cli login
Here, it will ask us for our access token, which we can get from our HF account under Settings . Copy it and provide it in the command line. We are now all set to run our script:
python llama3-hf-demo.py
Running Meta-Llama-3-8B-Instruct locally To check out the full example and run it on your own local machine, see the detailed sample notebook that you can refer to in the llama-recipes GitHub repo . Here you will find an example of how to run Llama 3 models using already converted Hugging Face weights, as well as an example that goes over how you can convert the original weights into Hugging Face format and run using those. We’ve also created various other demos and examples to provide you with guidance and as references to help you get started with Llama models and to make it easier for you to integrate them into your own use cases. To try these examples, check out our llama-recipes GitHub repo . Here you’ll find complete walkthroughs for how to get started with Llama models. 
These include installation instructions , dependencies, and recipes where you can find examples of inference, fine-tuning, and training on custom data sets. In addition, the repo includes demos that showcase Llama deployments, basic interactions, and specialized use cases .
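Putting the pieces from this walkthrough together, the complete llama3-hf-demo.py script looks like this:

```python
# llama3-hf-demo.py — the full demo script assembled from the steps above.
# Assumes access to meta-llama/Meta-Llama-3-8B-Instruct has been granted
# and `huggingface-cli login` has been run.
import torch
import transformers
from transformers import AutoTokenizer

model = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model)

# text-generation pipeline in fp16; device_map="auto" uses a GPU if available.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    "I have tomatoes, basil and cheese at home. What can I cook for dinner?\n",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    truncation=True,
    max_length=400,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```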
+Running Meta Llama on Mac | Llama Everywhere Running Meta Llama on Mac This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Mac | Build with Meta Llama , where we learn how to run Llama on macOS using Ollama , with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Mac. Setup For this demo, we are using a Macbook Pro running Sonoma 14.4.1 with 64GB memory. Since we will be using Ollama, this setup can also be used on other operating systems that are supported, such as Linux or Windows, using similar steps as the ones shown here. Ollama lets you set up and run large language models like Llama models locally. Downloading Ollama The first step is to install Ollama. To do that, visit their website , where you can choose your platform, and click on “Download” to download Ollama. For our demo, we will choose macOS, and select “Download for macOS”. Next, we will make sure that we can test run Meta Llama 3 models on Ollama . Please note that Ollama provides Meta Llama models in the 4-bit quantized format. To test run the model, let’s open our terminal, and run ollama pull llama3 to download the 4-bit quantized Meta Llama 3 8B chat model, with a size of about 4.7 GB. Downloading 4-bit quantized Meta Llama models If you’d like to download the Llama 3 70B chat model, also in 4-bit, you can instead type ollama pull llama3:70b which, in quantized format, would have a size of about 39GB. Running the model Running using ollama run To run our model, in your terminal, type: ollama run llama3 We are all set to ask questions and chat with our Meta Llama 3 model. Let’s ask some questions: “Who wrote the book godfather?” Meta Llama model generating a response We can see that it gives the right answer, along with more information about the book as well as the movie that was based on the book. What if I just wanted the name of the author, without the extra information? Let’s adapt our prompt accordingly, specifying the kind of response we expect: "Who wrote the book godfather? Answer with only the name." Meta Llama model generating a specified response based on the prompt We can see that it generates the answer in the format we requested. You can also try running the 70B model: ollama run llama3:70b but the inference speed will likely be slower. Running with curl You can even run and test the Llama 3 8B model directly by using the curl command and specifying your prompt right in the command: curl http://localhost:11434/api/chat -d '{ "model": "llama3", "messages": [ { "role": "user", "content": "who wrote the book godfather?" } ], "stream": false }' Here, we are sending a POST request to an API running on localhost. The API endpoint is for "chat", which will interact with our AI model hosted on the server. We are providing a JSON payload that contains a string specifying the name of the AI model to use for processing the input prompt ( llama3 ), an array with a string indicating the role of the message sender ( user ) and a string with the user's input prompt (" who wrote the book godfather? "), and a boolean value stream indicating whether the response should be streamed or not. 
In our case, it is set to false, meaning the entire response will be returned at once. Ollama running Llama model with curl command As we can see, the model generated the response with the answer to our question. Running as a Python script This example can also be run using a Python script. To install Python, visit the Python website , where you can choose your OS and download the version of Python you like. To run it using a Python script, open the editor of your choice, and create a new file. First, let’s add the imports we will need for this demo, and define a parameter called url , which will have the same value as the URL we saw in the curl demo: import requests import json url = "http://localhost:11434/api/chat" We will now add a new function called llama3 , which will take in prompt as an argument: def llama3(prompt): data = { "model": "llama3", "messages": [ { "role": "user", "content": prompt } ], "stream": False } headers = { 'Content-Type': 'application/json' } response = requests.post(url, headers=headers, json=data) return(response.json()['message']['content']) This function constructs a JSON payload containing the specified prompt and the model name, which is "llama3”. Then, it sends a POST request to the API endpoint with the JSON payload as the message body, using the requests library.  Once the response is received, the function extracts the content of the response message from the JSON object returned by the API, and returns this extracted content. Finally, we will provide the prompt and print the generated response: response = llama3("who wrote the book godfather") print(response) To run the script, write python .py and press enter. Running Meta Llama model using Ollama and Python script As we can see, it generated the response based on the prompt we provided in our script. To learn more about the complete Ollama APIs, check out their documentation . To check out the full example, and run it on your own machine, our team has worked on a detailed sample notebook that you can refer to and can be found in the llama-recipes Github repo , where you will find an example of how to run Llama 3 models on a Mac as well as other platforms. You will find the examples we discussed here, as well as other ways to use Llama 3 locally with Ollama via LangChain. We’ve also created various other demos and examples to provide you with guidance and as references to help you get started with Llama models and to make it easier for you to integrate Llama into your own use cases. These demos and examples are also located in our llama-recipes GitHub repo , where you’ll find complete walkthroughs for how to get started with Llama models, including installation instructions , dependencies, and recipes. You’ll also find several examples for inference, fine tuning, and training on custom data sets—as well as demos that showcase Llama deployments, basic interactions, and specialized use cases . On this page Running Meta Llama on Mac Setup Running the model Running using ollama run Running with curl Running as a Python script
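The stream flag shown above can also be set to true to receive the answer incrementally as it is generated. The following is a minimal sketch of consuming a streamed response, assuming the same local Ollama server as in the examples above; Ollama streams the reply as newline-delimited JSON chunks, each carrying a slice of the message:

```python
import json

import requests

# Ask the same local Ollama endpoint for a streamed answer.
url = "http://localhost:11434/api/chat"
data = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "who wrote the book godfather?"}],
    "stream": True,
}

with requests.post(url, json=data, stream=True) as response:
    for line in response.iter_lines():
        if line:
            chunk = json.loads(line)  # one JSON object per line
            print(chunk.get("message", {}).get("content", ""), end="", flush=True)
print()
```

Streaming is useful in interactive applications, since the first tokens appear almost immediately instead of after the full generation completes.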
+Meta Llama in the Cloud | Llama Everywhere

Meta Llama in the Cloud

This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Many other ways to run Llama and resources | Build with Meta Llama, where we learn about some of the various other ways in which you can host or run Meta Llama models, and provide you with all the resources that can help you get started. If you're interested in learning by watching or listening, check out our video on Many other ways to run Llama and resources.

Apart from running the models locally, one of the most common ways to run Meta Llama models is in the cloud. We saw an example of this using a service called Hugging Face in our running Llama on Windows video. Let's take a look at some of the other services we can use to host and run Llama models, such as AWS, Azure, Google Cloud, Kaggle, and Vertex AI, among others.

Amazon Web Services

Amazon Web Services (AWS) provides multiple ways to host your Llama models, such as SageMaker JumpStart and Bedrock. Bedrock is a fully managed service that lets you quickly and easily build generative AI-powered experiences. To use Meta Llama with Bedrock, check out their website, which goes over how to integrate and use Meta Llama models in your applications. You can also use AWS through SageMaker JumpStart, which enables you to build, train, and deploy ML models from a broad selection of publicly available foundation models, and deploy them on SageMaker instances for model training and inference. Learn more about how to use Meta Llama on SageMaker on their website.

Microsoft Azure

Another way to run Meta Llama models is on Microsoft Azure. You can access Meta Llama models on Azure in two ways: Models as a Service (MaaS) provides access to Meta Llama hosted APIs through Azure AI Studio, and Model as a Platform (MaaP) provides access to the Meta Llama family of models with out-of-the-box support for fine-tuning and evaluation through Azure Machine Learning Studio. Please refer to our How-to Guide for more details.

Google Cloud Platform

You can also use GCP, or Google Cloud Platform, to run Meta Llama models. GCP is a suite of cloud computing services that provides computing resources as well as virtual machines. Building on top of GCP services, Model Garden on Vertex AI offers infrastructure to jumpstart your ML project with a single place to discover, customize, and deploy a wide range of models. We have collaborated with Vertex AI from Google Cloud to fully integrate Meta Llama, offering pre-trained, instruction-tuned, and Meta Code Llama models in various sizes. Check out how to fine-tune and deploy Meta Llama models on Vertex AI by visiting the website. Please note that you may need to request proper GPU computing quota as a prerequisite.

IBM watsonx

You can also use IBM's watsonx to run Meta Llama models. IBM watsonx is an advanced platform designed for AI builders, integrating generative AI capabilities, foundation models, and traditional machine learning. It provides a comprehensive suite of tools that span the AI lifecycle, enabling users to tune models with their enterprise data. The platform supports multi-model flexibility, client protection, AI governance, and hybrid, multi-cloud deployments. It offers features for extracting insights, discovering trends, generating synthetic tabular data, running Jupyter notebooks, and creating new content and code. Watsonx.ai equips data scientists with the necessary tools, pipelines, and runtimes for building and deploying ML models, thereby automating the entire AI model lifecycle. We've worked with IBM to make Llama and Code Llama models available on their platform. To test the platform and evaluate Llama on watsonx, creating an account is free and allows testing the available models through the Prompt Lab. For detailed instructions, refer to the getting started guide and the quick start tutorials.

Other hosting providers

You can also run Llama models using hosting providers such as OpenAI, Together AI, Anyscale, Replicate, Groq, etc. Our team has worked on step-by-step examples to showcase how to run Llama on externally hosted providers. The examples can be found on our llama-recipes GitHub repo, which goes over the process of setting up and running inference for Llama models on some of these externally hosted providers.

Running Llama on premise

Many enterprise customers prefer to deploy Llama models on premise, on their own servers. One way to deploy and run Llama models in this manner is by using TorchServe. TorchServe is an easy-to-use tool for deploying PyTorch models at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. To learn more about how TorchServe works, with setup, quickstart, and examples, check out the GitHub repo. Another way to deploy Llama models on premise is by using vLLM (Virtual Large Language Model) or Text Generation Inference (TGI), two leading open-source tools for deploying and serving LLMs; a minimal vLLM sketch follows at the end of this page. A detailed step-by-step tutorial can be found on our llama-recipes GitHub repo that showcases how to use Llama models with vLLM and Hugging Face TGI, and how to create vLLM- and TGI-hosted Llama instances with LangChain, a language model integration framework for the creation of applications using large language models. You can find various demos and examples that provide guidance, and that you can use as references to get started with Llama models, on our llama-recipes GitHub repo, where you'll find several examples for inference and fine-tuning, as well as for running on various API providers.

Learn more about Llama 3 and how to get started by checking out our Getting to know Llama notebook in our llama-recipes GitHub repo. Here you will find a guided tour of Llama 3, including a comparison to Llama 2, descriptions of the different Llama 3 models, how and where to access them, Generative AI and chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), fine-tuning, and more. You will find all this implemented with starter code that you can take and adapt for use in your own Meta Llama 3 projects. To learn more about our Llama 3 models, check out our announcement blog, where you can find details about how the models work, data on performance and benchmarks, information about trust and safety, and various other resources to get you started. Get the model source from our Llama 3 GitHub repo, where you can learn how the models work, along with a minimalist example of how to load Llama 3 models and run inference. Here, you will also find steps to download and set up the models, and examples for running the text completion and chat models.

Dive deeper and learn more about the model in the model card, which goes over the model architecture, intended use, hardware and software requirements, training data, results, and licenses. Check out our new Meta AI, built with Llama 3 technology, which is now one of the world's leading AI assistants that can boost your intelligence and lighten your load, helping you learn, get things done, create content, and connect to make the most out of every moment. You can use Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web to get things done, learn, create, and connect with the things that matter to you. To learn more about the latest updates and releases of Llama models, check out our website, where you can learn more about the latest models as well as find resources to learn how these models work and how you can use them in your own applications. Check out our Getting Started guide, which provides information and resources to help you set up Llama, including how to access the models, prompt formats, hosting, how-to and integration guides, as well as resources that you can reference to get started with your projects. Take a look at some of our latest blogs that discuss new announcements, the latest on the Llama ecosystem, and our responsible approach to Meta AI and Meta Llama 3. Check out the community resources on our website to help you get started with Meta Llama models and learn about performance and latency, fine-tuning, and more. Dive deeper into prompt engineering, learning best practices for prompting Meta Llama models and interacting with Meta Llama Chat, Code Llama, and Llama Guard models in our short course on Prompt Engineering with Llama 2 on DeepLearning.ai, recently updated to showcase both Llama 2 and Llama 3 models. Check out our Community Stories that go over interesting use cases of Llama models in various fields such as business, healthcare, gaming, pharmaceuticals, and more. Learn more about the Llama ecosystem, building product experiences with Llama, and examples that showcase how industry pioneers have adopted Llama to build and grow innovative products for users across their platforms at Connect 2023. Also check out our Responsible Use Guide, which provides developers with recommended best practices and considerations for safely building products powered by LLMs. We hope you found the Build with Meta Llama videos and tutorials helpful in providing the insights and resources you may need to get started with using Llama models. We at Meta strongly believe in an open approach to AI development, democratizing access through an open platform and providing you with AI models, tools, and resources to give you the power to shape the next wave of innovation. We want to kickstart that next wave of innovation across the stack, from applications to developer tools to evals to inference optimizations and more. We can't wait to see what you build and look forward to your feedback.
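As referenced above, here is a minimal vLLM sketch for offline batch inference on your own hardware. This is a sketch under assumptions: the vllm package is installed, you have access to the checkpoint on Hugging Face, and the model name is only an example:

```python
from vllm import LLM, SamplingParams

# Load the model once; vLLM handles batching and KV-cache management internally.
llm = LLM(model="meta-llama/Meta-Llama-3-8B-Instruct")  # example checkpoint
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Who wrote the book godfather?"], params)
for output in outputs:
    print(output.outputs[0].text)
```

vLLM can also expose an OpenAI-compatible HTTP server for production serving; see the vLLM documentation for that mode.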
+Fine-tuning | How-to guides

Fine-tuning

If you are looking to learn by writing code, it's highly recommended to look into the Getting to Know Llama 3 notebook. It's a great place to start with the most commonly performed operations on Meta Llama.

Fine-tuning

Full parameter fine-tuning is a method that fine-tunes all the parameters of all the layers of the pre-trained model. In general, it can achieve the best performance, but it is also the most resource-intensive and time-consuming: it requires the most GPU resources and takes the longest. PEFT, or Parameter Efficient Fine-Tuning, allows one to fine-tune models with minimal resources and costs. There are two important PEFT methods: LoRA (Low Rank Adaptation) and QLoRA (Quantized LoRA), where pre-trained models are loaded to the GPU as quantized 8-bit and 4-bit weights, respectively. It's likely that you can fine-tune the Llama 2 13B model using LoRA or QLoRA fine-tuning with a single consumer GPU with 24GB of memory, and using QLoRA requires even less GPU memory and fine-tuning time than LoRA. Typically, one should try LoRA first, or QLoRA if resources are extremely limited, and evaluate the performance after the fine-tuning is done. Only consider full fine-tuning when the performance is not desirable.

Experiment tracking

Experiment tracking is crucial when evaluating various fine-tuning methods like LoRA and QLoRA. It ensures reproducibility, maintains a structured version history, allows for easy collaboration, and aids in identifying optimal training configurations. Especially with numerous iterations, hyperparameters, and model versions at play, tools like Weights & Biases (W&B) become indispensable. With its seamless integration into multiple frameworks, W&B provides a comprehensive dashboard to visualize metrics, compare runs, and manage model checkpoints. It's often as simple as adding a single argument to your training script to realize these benefits; we'll show an example in the Hugging Face PEFT LoRA section, and a minimal logging sketch at the end of this page.

Recipes PEFT LoRA

The llama-recipes repo has details on the different fine-tuning (FT) alternatives supported by the provided sample scripts. In particular, it highlights the use of PEFT as the preferred FT method, as it reduces the hardware requirements and prevents catastrophic forgetting. For specific cases, full parameter FT can still be valid, and different strategies can be used to prevent modifying the model too much. Additionally, FT can be done on a single GPU or on multiple GPUs with FSDP. In order to run the recipes, follow the steps below:

1. Create a conda environment with PyTorch and additional dependencies
2. Install the recipes as described here
3. Download the desired model from HF, either using git-lfs or using the llama download script
4. With everything configured, run the following command:

```
python -m llama_recipes.finetuning \
    --use_peft --peft_method lora --quantization \
    --model_name ../llama/models_hf/7B \
    --output_dir ../llama/models_ft/7B-peft \
    --batch_size_training 2 --gradient_accumulation_steps 2
```

torchtune (link)

torchtune is a PyTorch-native library that can be used to fine-tune the Meta Llama family of models, including Meta Llama 3. It supports the end-to-end fine-tuning lifecycle, including: downloading model checkpoints and datasets; training recipes for fine-tuning Llama 3 using full fine-tuning, LoRA, and QLoRA; support for single-GPU fine-tuning capable of running on consumer-grade GPUs with 24GB of VRAM; scaling fine-tuning to multiple GPUs using PyTorch FSDP; logging metrics and model checkpoints during training using Weights & Biases; evaluation of fine-tuned models using EleutherAI's LM Evaluation Harness; post-training quantization of fine-tuned models via TorchAO; and interoperability with inference engines including ExecuTorch. To install torchtune, simply run the pip install command:

```
pip install torchtune
```

Follow the instructions on the Hugging Face meta-llama repository to ensure you have access to the Llama 3 model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.

```
tune download meta-llama/Meta-Llama-3-8B \
    --output-dir <checkpoint_dir> \
    --hf-token <ACCESS_TOKEN>
```

Set your environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens

The basic command for a single-device LoRA fine-tune of Llama 3 is:

```
tune run lora_finetune_single_device --config llama3/8B_lora_single_device
```

torchtune contains built-in recipes for: full fine-tuning on a single device and on multiple devices with FSDP; LoRA fine-tuning on a single device and on multiple devices with FSDP; and QLoRA fine-tuning on a single device, with a QLoRA-specific configuration. You can find more information on fine-tuning Meta Llama models by reading the torchtune guide.

Hugging Face PEFT LoRA (link)

Using Low Rank Adaptation (LoRA), Meta Llama is loaded to the GPU memory as quantized 8-bit weights. Using Hugging Face fine-tuning with PEFT LoRA (link) is super easy; an example fine-tuning run on Meta Llama 2 7B using the OpenAssistant dataset can be done in three simple steps:

```
pip install trl
git clone https://github.com/huggingface/trl
python trl/examples/scripts/sft.py \
    --model_name meta-llama/Llama-2-7b-hf \
    --dataset_name timdettmers/openassistant-guanaco \
    --load_in_4bit \
    --use_peft \
    --batch_size 4 \
    --gradient_accumulation_steps 2 \
    --log_with wandb
```

This takes about 16 hours on a single GPU and uses less than 10GB of GPU memory; changing the batch size to 8/16/32 will use over 11/16/25 GB of GPU memory. After the fine-tuning completes, you'll see in a new directory named "output" at least adapter_config.json and adapter_model.bin. Run the script below to infer with the base model and with the new model, generated by merging the base model with the fine-tuned one. Note that the base-model pipeline runs before merging, since merge_and_unload folds the adapter weights into the base model in place:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

model_name = "meta-llama/Llama-2-7b-chat-hf"
new_model = "output"
device_map = {"": 0}

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map=device_map,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

prompt = "Who wrote the book Innovator's Dilemma?"

# Inference with the base model, before the adapter is merged in.
pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])

# Merge the fine-tuned adapter into the base model and run inference again.
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

QLoRA Fine Tuning

Note: This has been tested on Meta Llama 2 models only. QLoRA (Q for quantized) is more memory efficient than LoRA. In QLoRA, the pretrained model is loaded to the GPU as quantized 4-bit weights. Fine-tuning using QLoRA is also very easy to run; an example of fine-tuning Llama 2 7B with the OpenAssistant dataset can be done in four quick steps:

```
git clone https://github.com/artidoro/qlora
cd qlora
pip install -U -r requirements.txt
./scripts/finetune_llama2_guanaco_7b.sh
```

It takes about 6.5 hours to run on a single GPU, using 11GB of GPU memory. After the fine-tuning completes, the output_dir specified in ./scripts/finetune_llama2_guanaco_7b.sh will have checkpoint-xxx subfolders holding the fine-tuned adapter model files. To run inference, use the script below:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from peft import PeftModel

model_id = "meta-llama/Llama-2-7b-hf"
new_model = "output/llama-2-guanaco-7b/checkpoint-1875/adapter_model"  # change if needed

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    low_cpu_mem_usage=True,
    quantization_config=quantization_config,
    device_map='auto',
)
model = PeftModel.from_pretrained(model, new_model)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Who wrote the book innovator's dilemma?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

Axolotl is another open source library you can use to streamline the fine-tuning of Llama 2. A good example of using Axolotl to fine-tune Meta Llama with four notebooks covering the whole fine-tuning process (generate the dataset, fine-tune the model using LoRA, evaluate and benchmark) is here.
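To make the experiment-tracking advice above concrete, here is a minimal Weights & Biases logging sketch. It assumes the wandb package is installed and you are logged in; the project name, config values, and loss values are illustrative only:

```python
import wandb

# Minimal sketch: track a fine-tuning run with Weights & Biases.
# Project and config names here are illustrative, not from llama-recipes.
wandb.init(project="llama-finetune", config={"peft_method": "lora", "learning_rate": 2e-4})

for step, loss in enumerate([2.1, 1.8, 1.5]):  # stand-ins for losses from a real training loop
    wandb.log({"train/loss": loss}, step=step)

wandb.finish()
```

In practice you rarely write this by hand: as noted above, passing a flag such as `--log_with wandb` to a supported training script wires up the same logging automatically.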
+Quantization | How-to guides

Quantization

Quantization is a technique used in machine learning to reduce the computational and memory requirements of models, making them more efficient for deployment on servers and edge devices. It involves representing model weights and activations, typically 32-bit floating-point numbers, with lower-precision data such as 16-bit float, 16-bit brain float, 8-bit int, or even 4/3/2/1-bit int. The benefits of quantization include smaller model sizes, faster fine-tuning, and faster inference, which is particularly beneficial in resource-constrained environments. However, the tradeoff is a reduction in model quality due to the loss of precision.

Supported quantization modes in PyTorch

Post-Training Dynamic Quantization: Weights are pre-quantized ahead of time and activations are converted to int8 during inference, just before computation. This results in faster computation due to efficient int8 matrix multiplication and maintains accuracy on the activation layer.

Post-Training Static Quantization: This technique improves performance by converting networks to use both integer arithmetic and int8 memory accesses. It involves feeding batches of data through the network and computing the resulting distributions of the different activations. This information is used to determine how the different activations should be quantized at inference time.

Quantization Aware Training (QAT): In QAT, all weights and activations are "fake quantized" during both the forward and backward passes of training. This means float values are rounded to mimic int8 values, but all computations are still done with floating-point numbers. This method usually yields higher accuracy than the other two methods, as all weight adjustments during training are made while "aware" of the fact that the model will ultimately be quantized.

More details about these methods and how they can be applied to different types of models can be found in the official PyTorch documentation. Additionally, the community has already conducted studies on the effectiveness of common quantization methods on Meta Llama 3, and the results and code to evaluate them can be found in this GitHub repository. We will focus next on quantization tools available for Meta Llama models. As this is a constantly evolving space, the libraries and methods detailed here are the most widely used at the moment and are subject to change as the space evolves.

PyTorch quantization with TorchAO

The TorchAO library offers several methods for quantization, each with different schemes for how the activations and weights are quantized. We distinguish between two main types of quantization: weight-only quantization and dynamic quantization. For weight-only quantization, we support 8-bit and 4-bit quantization. The 4-bit quantization also has GPTQ support for improved accuracy, which requires calibration but has the same final performance. For dynamic quantization, we support 8-bit activation quantization and 8-bit weight quantization. We also support this type of quantization with SmoothQuant for improved accuracy, which requires calibration and has slightly worse performance. Additionally, the library offers a simple API to test different methods and automatic detection of the best quantization for a given model, known as autoquantization. This API chooses the fastest form of quantization out of 8-bit dynamic and 8-bit weight-only quantization. It first identifies the shapes of the activations that the different linear layers see, then benchmarks these shapes across different types of quantized and non-quantized layers in order to pick the fastest one. It also composes with torch.compile() to generate fast kernels. For additional information on torch.compile, please see this general tutorial. Note: this library is in beta phase and in active development; API changes are expected.

HF supported quantization

Hugging Face (HF) offers multiple ways to do LLM quantization with their transformers library. For additional guidance and examples on how to use each of these beyond the brief summary presented here, please refer to their quantization guide and the transformers quantization configuration documentation. The llama-recipes code uses bitsandbytes 8-bit quantization to load the models, both for inference and fine-tuning. (See below for more information about using the bitsandbytes library with Llama.)

Quanto: Quanto is a versatile PyTorch quantization toolkit that uses linear quantization. It provides features such as weight quantization, activation quantization, and compatibility with various devices and modalities. It supports quantization-aware training and is easy to integrate with custom kernels for specific devices. More details can be found in the announcement blog, GitHub repository, and HF guide.

AQLM: Additive Quantization of Language Models (AQLM) is a compression method for LLMs. It quantizes multiple weights together, taking advantage of interdependencies between them. AQLM represents groups of 8 to 16 weights each as a sum of multiple vector codes. This library supports fine-tuning its quantized models with Parameter-Efficient Fine-Tuning and LoRA by integrating into HF's PEFT library as well. More details can be found in the GitHub repository.

AWQ: Activation-aware Weight Quantization (AWQ) preserves the small percentage of weights that are important for LLM performance, reducing quantization loss. This allows models to run in 4-bit precision without experiencing performance degradation. Transformers supports loading models quantized with the llm-awq and autoawq libraries. More details on how to load them with the transformers library can be found in the HF guide.

AutoGPTQ: The AutoGPTQ library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently. These weights are quantized to int4, but they're restored to fp16 on the fly during inference, saving memory usage by 4x. More details can be found in the GitHub repository.

BitsAndBytes: BitsAndBytes is an easy option for quantizing a model to 8-bit or 4-bit. The library supports any model in any modality, as long as it supports loading with Hugging Face Accelerate and contains torch.nn.Linear layers. It also provides features for offloading weights between the CPU and GPU to support fitting very large models into memory, adjusting the outlier threshold for 8-bit quantization, skipping module conversion for certain models, and fine-tuning with 8-bit and 4-bit weights. For 4-bit models, it allows changing the compute data type, using the Normal Float 4 (NF4) data type for weights initialized from a normal distribution, and using nested quantization to save additional memory at no additional performance cost. More details can be found in the HF guide.
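As a concrete illustration of the bitsandbytes path described above, here is a minimal sketch of loading a model with 8-bit weights through transformers. It assumes the transformers, accelerate, and bitsandbytes packages are installed and that you have access to the checkpoint; the model name is only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Example checkpoint; any causal LM you have access to works.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Load the weights in 8-bit with bitsandbytes; activations stay in higher precision.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Quantization reduces memory use by", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Switching `load_in_8bit=True` to `load_in_4bit=True` (optionally with the NF4 options shown in the fine-tuning guide) trades a little more quality for roughly half the memory again.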
+Prompting | How-to guides

Prompting

Link to Notebook showing examples of the techniques discussed in this section.

Prompt engineering is a technique used in natural language processing (NLP) to improve the performance of a language model by providing it with more context and information about the task at hand. It involves creating prompts, which are short pieces of text that provide additional information or guidance to the model, such as the topic or genre of the text it will generate. By using prompts, the model can better understand what kind of output is expected and produce more accurate and relevant results. In Llama 2 the size of the context, in terms of number of tokens, has doubled from 2048 to 4096.

Crafting Effective Prompts

Crafting effective prompts is an important part of prompt engineering. Here are some tips for creating prompts that will help improve the performance of your language model:

- Be clear and concise: Your prompt should be easy to understand and provide enough information for the model to generate relevant output. Avoid using jargon or technical terms that may confuse the model.
- Use specific examples: Providing specific examples in your prompt can help the model better understand what kind of output is expected. For example, if you want the model to generate a story about a particular topic, include a few sentences about the setting, characters, and plot.
- Vary the prompts: Using different prompts can help the model learn more about the task at hand and produce more diverse and creative output. Try using different styles, tones, and formats to see how the model responds.
- Test and refine: Once you have created a set of prompts, test them out on the model to see how it performs. If the results are not as expected, try refining the prompts by adding more detail or adjusting the tone and style.
- Use feedback: Finally, use feedback from users or other sources to continually improve your prompts. This can help you identify areas where the model needs more guidance and make adjustments accordingly.

Explicit Instructions

Detailed, explicit instructions produce better results than open-ended prompts. You can think of giving explicit instructions as applying rules and restrictions to how Llama 2 responds to your prompt.

- Stylization: "Explain this to me like a topic on a children's educational network show teaching elementary students."; "I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:"; "Give your answer like an old timey private investigator hunting down a case step by step."
- Formatting: "Use bullet points."; "Return as a JSON object."; "Use less technical terms and help me apply it in my work in communications."
- Restrictions: "Only use academic papers."; "Never give sources older than 2020."; "If you don't know the answer, say that you don't know."

Here's an example of giving explicit instructions to get more specific results by limiting the responses to recently created sources:

```
Explain the latest advances in large language models to me.
# More likely to cite sources from 2017

Explain the latest advances in large language models to me. Always cite your sources. Never cite sources older than 2020.
# Gives more specific advances and only cites sources from 2020
```

Prompting using Zero- and Few-Shot Learning

A shot is an example or demonstration of what type of prompt and response you expect from a large language model. This term originates from training computer vision models on photographs, where one shot was one example or instance that the model used to classify an image.

Zero-Shot Prompting

Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called "zero-shot prompting".

```
Text: This was the best movie I've ever seen!
The sentiment of the text is:

Text: The director was trying too hard.
The sentiment of the text is:
```

Few-Shot Prompting

Adding specific examples of your desired output generally results in more accurate, consistent output. This technique is called "few-shot prompting". In this example, the generated response follows our desired format, which offers a more nuanced sentiment classifier that gives positive, neutral, and negative confidence percentages.

```
You are a sentiment classifier. For each message, give the percentage of positive/neutral/negative. Here are some samples:

Text: I liked it
Sentiment: 70% positive 30% neutral 0% negative

Text: It could be better
Sentiment: 0% positive 50% neutral 50% negative

Text: It's fine
Sentiment: 25% positive 50% neutral 25% negative

Text: I thought it was okay
Text: I loved it!
Text: Terrible service 0/10
```

Role Based Prompts

Creating prompts based on the role or perspective of the person or entity being addressed can be useful for generating more relevant and engaging responses from language models.

Pros: Improves relevance: role-based prompting helps the language model understand the role or perspective of the person or entity being addressed, which can lead to more relevant and engaging responses. Increases accuracy: providing additional context about the role or perspective of the person or entity being addressed can help the language model avoid making mistakes or misunderstandings.

Cons: Requires effort: it takes more effort to gather and provide the necessary information about the role or perspective of the person or entity being addressed.

Example:

```
You are a virtual tour guide currently walking tourists through the Eiffel Tower on a night tour. Describe the Eiffel Tower to your audience, covering its history, the number of people visiting each year, the amount of time it takes to do a full tour, and why so many people visit this place each year.
```

Chain of Thought Technique

This involves providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response. This technique can be useful for generating more thoughtful and well-reasoned responses from language models.

Pros: Improves coherence: helps the language model think through a problem or question in a logical and structured way, which can lead to more coherent and relevant responses. Increases depth: providing a series of prompts or questions can help the language model explore a topic more deeply and thoroughly, potentially leading to more insightful and informative responses.

Cons: Requires effort: the chain of thought technique requires more effort to create and provide the necessary prompts or questions.

Example:

```
You are a virtual tour guide from 1901. You have tourists visiting the Eiffel Tower. Describe the Eiffel Tower to your audience. Begin with
1. Why it was built
2. Then by how long it took them to build
3. Where were the materials sourced to build
4. Number of people it took to build
5. End it with the number of people visiting the Eiffel Tower annually in the 1900's, the amount of time it takes to complete a full tour, and why so many people visit this place each year.
Make your tour funny by including 1 or 2 funny jokes at the end of the tour.
```

Self-Consistency

LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency improves accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute):

````
John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Report the answer surrounded by three backticks, for example: ```123```
````

Running the above several times and taking the most commonly returned value as the answer makes use of the self-consistency approach; see the sketch at the end of this page.

Retrieval-Augmented Generation

Common facts are generally available from today's large models out of the box (i.e. using just the model weights). More specific data is unlikely to be available, though. For example:

```
What is the capital of California?
# The capital of California is Sacramento...

What was the temperature in Menlo Park on December 12th, 2023?
# I'm just an AI, I don't have access to real-time or historical weather data...
```

Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt that has been retrieved from an external database. It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning, which might also negatively impact the foundational model's capabilities. This could be as simple as a lookup table or as sophisticated as a vector database containing all of your company's knowledge:

```
Given the following information about temperatures in Menlo Park:
2023-12-11 : 52 degrees Fahrenheit
2023-12-12 : 51 degrees Fahrenheit
2023-12-13 : 55 degrees Fahrenheit

What was the temperature in Menlo Park on 2023-12-12?
# Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit.

What was the temperature in Menlo Park on 2023-07-18?
# Sorry, I don't have information about the temperature in Menlo Park on 2023-07-18. The information provided only includes temperatures for December 11th, 12th, and 13th of 2023.
```

Program-Aided Language Models

LLMs, by nature, aren't great at performing calculations. While LLMs are bad at arithmetic, they're great at code generation. Program-Aided Language models leverage this fact by instructing the LLM to write code to solve calculation tasks.

```
Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
```

Here is the Python code to calculate the given expression:

```python
# Calculate the expression inside the parentheses
a = -5 + 93 * 4 - 0
b = 4**4 + -7 + 0 * 5

# Multiply a and b
result = a * b

# Print the result
print(result)
```

Running the code directly provides the correct result.

Limiting Extraneous Tokens

A common challenge is generating a response without extraneous tokens (e.g. "Sure! Here's more information on..."). By combining a role, rules and restrictions, explicit instructions, and an example, the model can be prompted to generate the desired response.

```
You are a robot that only outputs JSON. You reply in JSON format with the field 'zip_code'.
Example question: What is the zip code of the Empire State Building?
Example answer: {'zip_code': 10118}
Now here is my question: What is the zip code of Menlo Park?

# "{'zip_code': 94025}"
```

Reduce Hallucinations

Meta's Responsible Use Guide is a great resource for understanding how best to prompt and how to address input/output risks of the language model. Refer to pages 14-17. Here are some examples of how a language model might hallucinate and some strategies for fixing the issue:

Example 1: A language model is asked to generate a response to a question about a topic it has not been trained on. The language model may hallucinate information or make up facts that are not accurate or supported by evidence. Fix: provide the language model with more context or information about the topic to help it understand what is being asked and generate a more accurate response. You could also ask the language model to provide sources or evidence for any claims it makes, to ensure that its responses are based on factual information.

Example 2: A language model is asked to generate a response to a question that requires a specific perspective or point of view. The language model may hallucinate information or make up facts that are not consistent with the desired perspective or point of view. Fix: provide the language model with additional information about the desired perspective or point of view, such as the goals, values, or beliefs of the person or entity being addressed. This can help the language model understand the context and generate a response that is more consistent with the desired perspective or point of view.

Example 3: A language model is asked to generate a response to a question that requires a specific tone or style. The language model may hallucinate information or make up facts that are not consistent with the desired tone or style. Fix: provide the language model with additional information about the desired tone or style, such as the audience or purpose of the communication. This can help the language model understand the context and generate a response that is more consistent with the desired tone or style.

Overall, the key to avoiding hallucination in language models is to provide them with clear and accurate information and context, and to carefully monitor their responses to ensure that they are consistent with your expectations and requirements.
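As referenced in the Self-Consistency section above, here is a minimal sketch that samples the same prompt several times and keeps the most common answer. It assumes a local Ollama server (as in the earlier tutorials); the answer-extraction regex is an assumption for illustration:

````python
import re
from collections import Counter

import requests

URL = "http://localhost:11434/api/chat"  # assumes a local Ollama server
PROMPT = (
    "John found that the average of 15 numbers is 40. "
    "If 10 is added to each number then the mean of the numbers is? "
    "Report the answer surrounded by three backticks, for example: ```123```"
)

def sample_answer():
    data = {"model": "llama3", "messages": [{"role": "user", "content": PROMPT}], "stream": False}
    text = requests.post(URL, json=data).json()["message"]["content"]
    match = re.search(r"```\s*(\d+)\s*```", text)  # pull out the backtick-delimited answer
    return match.group(1) if match else None

# Sample several generations and keep the most frequent parsable answer.
answers = [a for a in (sample_answer() for _ in range(5)) if a]
print(Counter(answers).most_common(1)[0][0] if answers else "no parsable answer")
````

Five samples is an arbitrary choice; more samples improve the majority vote at proportionally higher compute cost.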
+Validation | How-to guides

Validation

As the saying goes, if you can't measure it, you can't improve it. In this section, we are going to cover different ways to measure and ultimately validate Llama so it's possible to determine the improvements provided by different fine-tuning techniques.

Quantitative techniques

The focus of these techniques is to gather objective metrics that can be compared easily during and after each fine-tuning run, and to provide quick feedback on how the model is performing. The main metrics collected are loss and perplexity.

K-fold cross-validation

This method consists of dividing the dataset into k subsets or folds, and then fine-tuning the model k times. On each run, a different fold is used as a validation dataset, using the rest for training. The performance results of each run are averaged out for the final report. This provides a more accurate metric of the performance of the model across the complete dataset, as all entries serve both for validation and training. While it produces the most accurate prediction of how a model is going to generalize after fine-tuning on a given dataset, it is computationally expensive and better suited for small datasets.

Holdout

When using a holdout, the dataset is split into two or three subsets: training and validation, with test as optional. The test and validation sets can each represent 10%-30% of the dataset. As the name implies, the first two subsets are used for training and validating the model during fine-tuning, while the third is used only after fine-tuning is complete to evaluate how well the model generalizes to data it has not seen in either phase. The advantage of having three partitions is that it provides a way to evaluate the model after fine-tuning for an unbiased view of the model's performance, but it requires a slightly bigger dataset to allow for a proper split. This is currently implemented in the Llama recipes fine-tuning script with two subsets of the dataset, train and validation. The data is collected in a JSON file that can be plotted to easily interpret the results and evaluate how the model is performing.

Standard evaluation tools

There are multiple projects that provide standard evaluation. They provide predefined tasks with commonly used metrics to evaluate the performance of LLMs, like HellaSwag and TruthfulQA. These tools can be used to test whether the model has degraded after fine-tuning. Additionally, a custom task can be created using the dataset intended to fine-tune the model, effectively automating the manual verification of the model's performance before and after fine-tuning. These types of projects provide a quantitative way of looking at the model's performance in simulated real-world examples. Some of these projects include the LM Evaluation Harness (used to create the HF leaderboard), HELM, BIG-bench, and OpenCompass. As mentioned before, the torchtune library provides integration with the LM Evaluation Harness to test fine-tuned models as well.

Interpreting Loss and Perplexity

The loss value used comes from the transformers LlamaForCausalLM, which initializes a different loss function depending on the objective required from the model. The objective of this section is to give a brief overview of how to understand the results from loss and perplexity as an initial evaluation of the model's performance during fine-tuning. We also calculate the perplexity as an exponentiation of the loss value; a small sketch follows at the end of this page. Additional information on loss functions can be found in these resources: 1, 2, 3, 4, 5, 6.

In our recipes, we use a simple holdout during fine-tuning. Using the logged loss values for both the train and validation datasets, the curves for both are plotted to analyze the results of the process. Given the setup in the recipe, the expected behavior is a log graph that shows diminishing train and validation loss values as training progresses. If the validation curve starts going up while the train curve continues decreasing, the model is overfitting and is not generalizing well. Some alternatives to test when this happens are early stopping, verifying that the validation dataset is a statistically significant equivalent of the train dataset, data augmentation, using parameter-efficient fine-tuning, or using k-fold cross-validation to better tune the hyperparameters.

Qualitative techniques

Manual testing

Manually evaluating a fine-tuned model will vary according to the fine-tuning objective and available resources. Here we provide general guidelines on how to accomplish it. With a dataset prepared for fine-tuning, a part of it can be separated into a manual test subset, which can be further extended with general knowledge questions that might be relevant to the specific use case. In addition to these general questions, we recommend executing standard evaluations as well and comparing the results with the baseline for the fine-tuned model. To rate the results, clear evaluation criteria should be defined that are relevant to the dataset being used. Example criteria are accuracy, coherence, and safety. Create a rubric for each criterion and define what would be required for an output to receive a specific score. With these guidelines in place, distribute the test questions to a diverse set of reviewers to have multiple data points for each question. With multiple data points for each question and different criteria, a final score can be calculated for each query, allowing for weighting the scores based on the preferred focus for the final model.
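As referenced above, here is a small sketch of the loss/perplexity relationship: it computes a causal-LM loss on a toy input and exponentiates it. The gpt2 checkpoint is a tiny stand-in so the sketch runs anywhere; swap in your (fine-tuned) Llama checkpoint in practice:

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # tiny stand-in; replace with your fine-tuned Llama checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "Llama models can be fine-tuned with LoRA on a single GPU."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model compute its own cross-entropy loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss.item()

print(f"loss = {loss:.3f}, perplexity = {math.exp(loss):.2f}")
```

Since perplexity is just exp(loss), the two metrics always move together; perplexity is simply easier to interpret as "how surprised the model is" by the validation text.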
+Meta Code Llama | Integration guides

Integration guides

Meta Code Llama

Meta Code Llama is an open-source family of LLMs based on Llama 2 providing SOTA performance on code tasks. It consists of: foundation models (Meta Code Llama), Python specializations (Meta Code Llama - Python), and instruction-following models (Meta Code Llama - Instruct), with 7B, 13B, 34B, and 70B parameters each. See the recipes here for examples of how to make use of Meta Code Llama.

The following diagram shows how each of the Meta Code Llama models is trained. (Fig: The Meta Code Llama specialization pipeline. The different stages of fine-tuning, annotated with the number of tokens seen during training.)

One of the best ways to try out and integrate with Meta Code Llama is using the Hugging Face ecosystem by following the blog here, which has: demo links for all versions of Meta Code Llama; working inference code for code completion; working inference code for code infilling between a code prefix and suffix as inputs; working inference code to do 4-bit loading of the 34B model so it can fit on consumer GPUs; a guide on how to write prompts for the instruction models to have multi-turn conversations about coding; a guide on how to use Text Generation Inference for model deployment in production; a guide on how to integrate code autocomplete as an extension with VSCode; and a guide on how to evaluate Meta Code Llama models.

If the model does not perform well on your specific task, for example if none of the Meta Code Llama models (7B/13B/34B/70B) generate the correct answer for a text-to-SQL task, fine-tuning should be considered. This is a complete guide and notebook (here) on how to fine-tune Meta Code Llama using the 7B model hosted on Hugging Face. It uses the LoRA fine-tuning method and can run on a single GPU. As shown in the Meta Code Llama references (here), fine-tuning improves the performance of Meta Code Llama on SQL code generation, and it can be critical that LLMs are able to interoperate with structured data and SQL, the primary way to access structured data; we are developing demo apps in LangChain and RAG with Llama 2 to show this.

Compatible extensions

In most cases, the simplest method to integrate any model size is through ollama, occasionally combined with litellm. Ollama is a program that allows quantized versions of popular LLMs to run locally. It leverages the GPU and can even run Code Llama 34B on an M1 Mac. Litellm is a simple proxy that can serve an OpenAI-style API, so it's easy to replace OpenAI in existing applications, in our case, extensions.

Continue

This extension can be used with ollama, allowing for easy local-only execution. Additionally, it provides a simple interface to 1) chat with the model directly running inside VS Code and 2) select specific files and sections to edit or explain. This extension is an effective way to evaluate Llama because it provides simple and useful features. It also allows developers to build trust by creating diffs for each proposed change and showing exactly what is being changed before saving the file. Handling the context for the LLM is easy and relies heavily on keyboard shortcuts. It's important to note that all the interactions with the extension are recorded in jsonl format. The objective is to provide data for future fine-tuning of the models based on the feedback recorded during real-world usage as well.

Steps to install with ollama:
1. Install ollama and pull a model (e.g. `ollama pull codellama:13b-instruct`)
2. Install the extension from the Visual Studio Code marketplace
3. Open the extension and click on the + sign to add models
4. Select Ollama as a provider
5. On the next screen, select the model and size pulled with ollama
6. Select the model in the conversation and start using the extension

Steps to install with TGI: For better performance, or for usage on non-compatible hardware, TGI can be used on a server to run the model. For example, ollama on Intel Macs is too slow to be useful, even with the 7B models. By contrast, M1 Macs can run the 34B Meta Code Llama models quickly. For this, you should have TGI running on a server with appropriate hardware, as detailed in this guide. Once Continue.dev is installed, follow these steps:
1. Open the configs with /config
2. Use the HuggingFaceTGI class and pass your instance URL in the server_url parameter
3. Assign a name to it and save the config file

llm-vscode

This extension from Hugging Face provides an open alternative to the closed-source GitHub Copilot, allowing the same functionality, context-based autocomplete suggestions, to work with open-source models. It works out of the box with an HF token and their Inference API, but can be configured to use any TGI-compatible API. For usage with a self-hosted TGI server, follow these steps:
1. Install llm-vscode from the marketplace
2. Open the extension configs
3. Select the correct template for the model published in your TGI instance in the Config Template field. For testing, we used the one named codellama/CodeLlama-13b-hf
4. Pass in the URL of your TGI instance in the Model ID or Endpoint field
5. To avoid rate-limiting messages, log in to HF by providing a read-only token. This was necessary even for a self-hosted instance.

It currently does not support local models unless TGI is running locally. It would be great to add ollama support to this extension, as it would accelerate inference with the smaller models by avoiding the network.
+LangChain | Integration guides

Integration guides

LangChain

LangChain is an open-source framework for building LLM-powered applications. It implements common abstractions and higher-level APIs to make the app-building process easier, so you don't need to wire up calls to the LLM from scratch. The main building blocks/APIs of LangChain are:

- The Models or LLMs API can be used to easily connect to all popular LLM hosts, such as Hugging Face or Replicate, where all types of Llama 2 models are hosted.
- The Prompts API implements the useful prompt template abstraction to help you easily reuse good, often long and detailed, prompts when building sophisticated LLM apps. There are also many built-in prompts for common operations such as summarization or connection to SQL databases for quick app development. Prompts can also work closely with parsers to easily extract useful information from the LLM output.
- The Memory API can be used to save conversation history and feed it, along with new questions, to the LLM, so that multi-turn natural conversation chat can be implemented.
- The Chains API includes the most basic LLMChain, which combines an LLM with a prompt to generate the output, as well as more advanced chains that let you build sophisticated LLM apps in a systematic way. For example, the output of the first LLM chain can be the input/prompt of another chain, or a chain can have multiple inputs and/or multiple outputs, either pre-defined or dynamically decided by the LLM output of a prompt.
- The Indexes API allows documents outside of the LLM to be saved to a vector store, after first being converted to embeddings, which are numerical meaning representations, in vector form, of the documents. Later, when a user enters a question about the documents, the relevant data stored in the documents' vector store will be retrieved and sent, along with the query, to the LLM to generate an answer related to the documents.
- The Agents API uses the LLM as a reasoning engine and connects it with other sources of data, third-party or own tools, or APIs such as web search or Wikipedia APIs. Depending on the user's input, the agent can decide which tool to call to handle the input.

LangChain can be used as a powerful retrieval-augmented generation (RAG) tool to integrate internal data, or more recent public data, with the LLM for QA or chat about the data. LangChain already supports loading many types of unstructured and structured data. To learn more about LangChain, enroll for free in the two LangChain short courses. Be aware that the code in the courses uses the OpenAI ChatGPT LLM, but we've published a series of demo apps using LangChain with Llama 2. There is also a Getting to Know Llama notebook, presented at Meta Connect 2023.
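To make the building blocks above concrete, here is a minimal sketch of a prompt-plus-model chain. It assumes the langchain and langchain-community packages are installed and a local Ollama server is serving llama3 (the model name and question are illustrative):

```python
from langchain_community.llms import Ollama
from langchain_core.prompts import PromptTemplate

# Connect to a locally served Llama model; "llama3" must already be pulled in Ollama.
llm = Ollama(model="llama3")

# A reusable prompt template, one of the building blocks described above.
prompt = PromptTemplate.from_template("Answer concisely: {question}")

# Compose the template and the model into a simple chain and run it.
chain = prompt | llm
print(chain.invoke({"question": "Who wrote the book godfather?"}))
```

Swapping the Ollama LLM for a Hugging Face or Replicate-hosted model changes only the first two lines; the chain composition stays the same.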
+LlamaIndex | Integration guides LlamaIndex is another popular open source framework for building LLM applications. Like LangChain, LlamaIndex can also be used to build RAG applications by easily integrating data that is not built into the LLM with the LLM. There are three key tools in LlamaIndex: Connecting Data: connect data of any type - structured, unstructured or semi-structured - to the LLM Indexing Data: Index and store the data Querying LLM: Combine the user query and retrieved query-related data to query the LLM and return a data-augmented answer LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools. The integration of the two may provide the most performant and effective solution for building real world RAG powered Llama apps. For an example of how to integrate LlamaIndex with Llama 2, see here. We also published a complete demo app showing how to use LlamaIndex to chat with Llama 2 about live data via the you.com API. It’s worth noting that LlamaIndex has implemented many RAG powered LLM evaluation tools to easily measure the quality of retrieval and response, including: Question Generation: Call the LLM to auto generate questions to create an evaluation dataset. Faithfulness Evaluator: Evaluate if the generated answer is faithful to the retrieved context or if there’s hallucination. Correctness Evaluator: Evaluate if the generated answer matches the reference answer. Relevancy Evaluator: Evaluate if the answer and the retrieved context are relevant and consistent for the given query.
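The three tools above map to just a few lines of code. A minimal sketch, assuming documents live in a local `data/` folder and an LLM and embedding backend are already configured (newer LlamaIndex versions import from `llama_index.core`, older ones from `llama_index`):

```python
# Connect, index, and query local documents with LlamaIndex; assumes ./data
# holds the files and an LLM/embedding backend is configured.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # Connecting Data
index = VectorStoreIndex.from_documents(documents)     # Indexing Data
query_engine = index.as_query_engine()                 # Querying LLM
print(query_engine.query("What do these documents say about Llama 2?"))
```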
+# Llama Recipes: Examples to get started using the Llama models from Meta The 'llama-recipes' repository is a companion to the [Meta Llama 3](https://github.com/meta-llama/llama3) models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem. The examples here showcase how to run Meta Llama locally, in the cloud, and on-prem. [Meta Llama 2](https://github.com/meta-llama/llama) is also supported in this repository. We highly recommend using [Meta Llama 3](https://github.com/meta-llama/llama3) due to its enhanced capabilities. > [!IMPORTANT] > Meta Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer). > | Token | Description | > |---|---| > `<\|begin_of_text\|>` | This is equivalent to the BOS token. | > `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn-conversations it's usually unused; instead, every message is terminated with `<\|eot_id\|>`. | > `<\|eot_id\|>` | This token signifies the end of the message in a turn i.e. the end of a single message by a system, user or assistant role as shown below. | > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. | > > A multiturn-conversation with Meta Llama 3 follows this prompt template: > ``` > <|begin_of_text|><|start_header_id|>system<|end_header_id|> > > {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> > > {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> > > {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> > > {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> > ``` > Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change. > > More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3). > > [!NOTE] > The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted. 
> > Make sure you update your local clone by running `git pull origin main` ## Table of Contents - [Llama Recipes: Examples to get started using the Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta) - [Table of Contents](#table-of-contents) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [PyTorch Nightlies](#pytorch-nightlies) - [Installing](#installing) - [Install with pip](#install-with-pip) - [Install with optional dependencies](#install-with-optional-dependencies) - [Install from source](#install-from-source) - [Getting the Llama models](#getting-the-llama-models) - [Model conversion to Hugging Face](#model-conversion-to-hugging-face) - [Repository Organization](#repository-organization) - [`recipes/`](#recipes) - [`src/`](#src) - [Contributing](#contributing) - [License](#license) ## Getting Started These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system. ### Prerequisites #### PyTorch Nightlies If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform. ### Installing Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source. > [!NOTE] > Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`. > H100 GPUs work better with CUDA >12.0 #### Install with pip ``` pip install llama-recipes ``` #### Install with optional dependencies Llama-recipes offers the installation of optional packages. There are three optional dependency groups. To run the unit tests we can install the required dependencies with: ``` pip install llama-recipes[tests] ``` For the vLLM example we need additional requirements that can be installed with: ``` pip install llama-recipes[vllm] ``` To use the sensitive topics safety checker install with: ``` pip install llama-recipes[auditnlg] ``` Optional dependencies can also be combined with [option1,option2]. #### Install from source To install from source e.g. for development use these commands. We're using hatchling as our build backend which requires an up-to-date pip as well as setuptools package. ``` git clone git@github.com:meta-llama/llama-recipes.git cd llama-recipes pip install -U pip setuptools pip install -e . ``` For development and contributing to llama-recipes please install all optional dependencies: ``` git clone git@github.com:meta-llama/llama-recipes.git cd llama-recipes pip install -U pip setuptools pip install -e .[tests,auditnlg,vllm] ``` ### Getting the Meta Llama models You can find Meta Llama models on Hugging Face hub [here](https://huggingface.co/meta-llama), **where models with `hf` in the name are already converted to Hugging Face checkpoints so no further conversion is needed**. The conversion step below is only for original model weights from Meta that are hosted on Hugging Face model hub as well. #### Model conversion to Hugging Face The recipes and notebooks in this folder are using the Meta Llama model definition provided by Hugging Face's transformers library. 
Given that the original checkpoint resides under models/7B you can install all requirements and convert the checkpoint with: ```bash ## Install Hugging Face Transformers from source pip freeze | grep transformers ## verify it is version 4.31.0 or higher git clone git@github.com:huggingface/transformers.git cd transformers pip install protobuf python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` ## Repository Organization Most of the code dealing with Llama usage is organized across 2 main folders: `recipes/` and `src/`. ### `recipes/` Contains examples organized in folders by topic: | Subfolder | Description | |---|---| [quickstart](./recipes/quickstart) | The "Hello World" of using Llama, start here if you are new to using Llama. [finetuning](./recipes/finetuning)|Scripts to finetune Llama on single-GPU and multi-GPU setups [inference](./recipes/inference)|Scripts to deploy Llama for inference locally and using model servers [use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3 [responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs [llama_api_providers](./recipes/llama_api_providers)|Scripts to run inference on Llama via hosted endpoints [benchmarks](./recipes/benchmarks)|Scripts to benchmark Llama models inference on various backends [code_llama](./recipes/code_llama)|Scripts to run inference with the Code Llama models [evaluation](./recipes/evaluation)|Scripts to evaluate fine-tuned Llama models using `lm-evaluation-harness` from `EleutherAI` ### `src/` Contains modules which support the example recipes: | Subfolder | Description | |---|---| | [configs](src/llama_recipes/configs/) | Contains the configuration files for PEFT methods, FSDP, Datasets, Weights & Biases experiment tracking. | | [datasets](src/llama_recipes/datasets/) | Contains individual scripts for each dataset to download and process. Note: use of any of the datasets should be in compliance with the dataset's underlying licenses. | | [inference](src/llama_recipes/inference/) | Includes modules for inference for the fine-tuned models. | | [model_checkpointing](src/llama_recipes/model_checkpointing/) | Contains FSDP checkpoint handlers. | | [policies](src/llama_recipes/policies/) | Contains FSDP scripts to provide different policies, such as mixed precision, transformer wrapping policy and activation checkpointing along with any precision optimizer (used for running FSDP with pure bf16 mode). | | [utils](src/llama_recipes/utils/) | Utility files for: - `train_utils.py` provides training/eval loop and more train utils. - `dataset_utils.py` to get preprocessed datasets. - `config_utils.py` to override the configs received from CLI. - `fsdp_utils.py` provides FSDP wrapping policy for PEFT methods. - `memory_utils.py` context manager to track different memory stats in train loop. | ## Contributing Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us. ## License See the License file for Meta Llama 3 [here](https://llama.meta.com/llama3/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama3/use-policy/) See the License file for Meta Llama 2 [here](https://llama.meta.com/llama2/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama2/use-policy/)
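Once the conversion script has produced a Hugging Face checkpoint, it loads like any other transformers model. A minimal sketch, assuming the `/output/path` used in the conversion command above:

```python
# Load the converted checkpoint; /output/path matches the --output_dir above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/output/path")
model = AutoModelForCausalLM.from_pretrained("/output/path")

inputs = tokenizer("Llama 2 is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```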
+# **Model Details** Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. ||Training Data|Params|Context Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10^-4 Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10^-4 Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10^-4 **Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "Llama 2: Open Foundation and Fine-Tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/. **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md). # **Intended Use** **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 2 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 2 models for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy. # **Hardware and Software** **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). 
Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO2eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO2 emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. # **Training Data** **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. # **Evaluation Results** In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at the top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. # **Ethical Considerations and Limitations** Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
+# Llama 2 We are unlocking the power of large language models. Llama 2 is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and fine-tuned Llama language models — ranging from 7B to 70B parameters. This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Updates post-launch See [UPDATES.md](UPDATES.md). Also for a running list of frequently asked questions, see [here](https://ai.meta.com/llama/faq/). ## Download In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Access to Hugging Face We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama). You can request access to the models by acknowledging the license and filling the form in the model card of a repo. After doing so, you should get access to all the Llama models of a version (Code Llama, Llama 2, or Llama Guard) within 1 hour. ## Quick Start You can follow the steps below to quickly get up and running with Llama 2 models. These steps will let you run quick inference locally. For more examples, see the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes). 1. In a conda env with PyTorch / CUDA available, clone and download this repository. 2. In the top-level directory run: ```bash pip install -e . ``` 3. Visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and register to download the model/s. 4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. - Do not use the “Copy Link” option but rather make sure to manually copy the link from the email. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below: ```bash torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir llama-2-7b-chat/ \ --tokenizer_path tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` **Note** - Replace `llama-2-7b-chat/` with the path to your checkpoint directory and `tokenizer.model` with the path to your tokenizer model. - The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. 
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values: |  Model | MP | |--------|----| | 7B     | 1  | | 13B    | 2  | | 70B    | 8  | All models support sequence length up to 4096 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_text_completion.py \ --ckpt_dir llama-2-7b/ \ --tokenizer_path tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` ### Fine-tuned Chat Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces); a minimal sketch of this layout appears at the end of this document. You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-2-7b-chat: ``` torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir llama-2-7b-chat/ \ --tokenizer_path tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](Responsible-Use-Guide.pdf). More details can be found in our research paper as well. ## Issues Please report any software “bug”, or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## References 1. [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) 2. [Llama 2 technical overview](https://ai.meta.com/resources/models-and-libraries/llama) 3. 
[Open Innovation AI Research Community](https://ai.meta.com/llama/open-innovation-ai-research-community/) For common questions, the FAQ can be found [here](https://ai.meta.com/llama/faq/), which will be kept up to date over time as new questions arise. ## Original Llama The repo for the original llama release is in the [`llama_v1`](https://github.com/facebookresearch/llama/tree/llama_v1) branch.
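As referenced in the Fine-tuned Chat Models section above, the single-turn chat layout can be assembled by hand. A minimal sketch of the `INST`/`<<SYS>>` structure; it is not the full multi-turn logic of `chat_completion`, and the tokenizer is assumed to add the `BOS`/`EOS` tokens:

```python
# A single-turn Llama 2 chat prompt; the tokenizer adds BOS/EOS, so only the
# INST and <<SYS>> structure is assembled here.
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(build_llama2_prompt("You are a helpful assistant.", "What is Llama 2?"))
```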
+## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| | | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/) and [Llama 3 Community License](https://llama.meta.com/llama3/license/). Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the [Llama 3 Community License](https://llama.meta.com/llama3/license/) and the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/). ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. 
Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_details.md). ### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. Safety For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. Refusals In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing can not only impact the user experience but could even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### Cyber Security We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### Child Safety Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. 
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Amit Sangani; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Ash JJhaveri; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hamid Shojanazeri; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth 
Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Puxin Xu; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
+🤗 Models on Hugging Face | Blog | Website | Get Started --- # Meta Llama 3 We are unlocking the power of large language models. Our latest version of Llama is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and instruction-tuned Llama 3 language models — including sizes of 8B to 70B parameters. This repository is a minimal example of loading Llama 3 models and running inference. For more detailed examples, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Download To download the model weights and tokenizer, please visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then, run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Ensure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Remember that the links expire after 24 hours and a certain number of downloads. You can always re-request a link if you start seeing errors such as `403: Forbidden`. ### Access to Hugging Face We also provide downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps: - Visit one of the repos, for example [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). - Read and accept the license. Once your request is approved, you'll be granted access to all the Llama 3 models. Note that requests used to take up to one hour to get processed. - To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the `original` folder. You can also download them from the command line if you `pip install huggingface-hub`: ```bash huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct ``` - To use with transformers, the following [pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines) snippet will download and cache the weights: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", ) ``` ## Quick Start You can follow the steps below to get up and running with Llama 3 models quickly. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). 1. Clone and download this repository in a conda env with PyTorch / CUDA. 2. In the top-level directory run: ```bash pip install -e . ``` 3. Visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and register to download the model/s. 4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. 
- Do not use the “Copy Link” option; copy the link from the email manually. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below: ```bash torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir Meta-Llama-3-8B-Instruct/ \ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` **Note** - Replace `Meta-Llama-3-8B-Instruct/` with the path to your checkpoint directory and `Meta-Llama-3-8B-Instruct/tokenizer.model` with the path to your tokenizer model. - The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. - This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values: |  Model | MP | |--------|----| | 8B     | 1  | | 70B    | 8  | All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_text_completion.py \ --ckpt_dir Meta-Llama-3-8B/ \ --tokenizer_path Meta-Llama-3-8B/tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` ### Instruction-tuned Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, specific formatting defined in [`ChatFormat`](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L202) needs to be followed: The prompt begins with a `<|begin_of_text|>` special token, after which one or more messages follow. Each message starts with the `<|start_header_id|>` tag, the role `system`, `user` or `assistant`, and the `<|end_header_id|>` tag. After a double newline `\n\n`, the message's contents follow. The end of each message is marked by the `<|eot_id|>` token (a minimal sketch assembling this layout appears below). You can also deploy additional classifiers to filter out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-3-8b-chat: ``` torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir Meta-Llama-3-8B-Instruct/ \ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` Llama 3 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the [Responsible Use Guide](https://ai.meta.com/static-resource/responsible-use-guide/). 
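As referenced above, the `ChatFormat` layout can be assembled by hand for quick experiments. A minimal sketch, not the repository's own implementation:

```python
# Assemble the Llama 3 chat layout described above from role/content pairs.
def build_llama3_prompt(messages):
    prompt = "<|begin_of_text|>"
    for message in messages:
        prompt += (
            f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
            f"{message['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to generate the next turn.
    return prompt + "<|start_header_id|>assistant<|end_header_id|>\n\n"

print(build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Meta Llama 3?"},
]))
```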
## Issues Please report any software “bug” or other problems with the models through one of the following means: - Reporting issues with the model: [https://github.com/meta-llama/llama3/issues](https://github.com/meta-llama/llama3/issues) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## Questions For common questions, the FAQ can be found [here](https://llama.meta.com/faq), which will be updated over time as new questions arise.
+# Code Llama ## **Model Details** **Model Developers** Meta AI **Variations** Code Llama comes in four model sizes and three variants: 1) Code Llama: our base models are designed for general code synthesis and understanding 2) Code Llama - Python: designed specifically for Python 3) Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B and 70B parameters. **Input** Models input text only. **Output** Models output text only. **Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B, 13B and 70B additionally support infilling text generation. All models but Code Llama - Python 70B and Code Llama - Instruct 70B were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time. **Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)". **Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)). ## **Intended Use** **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistance and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## **Hardware and Software** **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed by Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program. **Training data** All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). Code Llama - Instruct uses additional instruction fine-tuning data. **Evaluation Results** See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. 
## **Ethical Considerations and Limitations** Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
+# Introducing Code Llama Code Llama is a family of large language models for code based on [Llama 2](https://github.com/facebookresearch/llama) providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama was developed by fine-tuning Llama 2 using a higher sampling of code. As with Llama 2, we applied considerable safety mitigations to the fine-tuned versions of the model. For detailed information on model training, architecture and parameters, evaluations, responsible AI and safety refer to  our [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/). Output generated by code generation features of the Llama Materials, including Code Llama, may be subject to third party licenses, including, without limitation, open source licenses. We are unlocking the power of large language models and our latest version of Code Llama is now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 34B parameters. This repository is intended as a minimal example to load [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) models and run inference. [comment]: <> (Code Llama models are compatible with the scripts in llama-recipes) ## Download In order to download the model weights and tokenizers, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Make sure that you copy the URL text itself, **do not use the 'Copy link address' option** when you right click the URL. If the copied URL text starts with: https://download.llamameta.net, you copied it correctly. If the copied URL text starts with: https://l.facebook.com, you copied it the wrong way. Pre-requisites: make sure you have `wget` and `md5sum` installed. Then to run the script: `bash download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Model sizes | Model | Size     | |-------|----------| | 7B    | ~12.55GB | | 13B   | 24GB     | | 34B   | 63GB     | | 70B   | 131GB    | [comment]: <> (Access on Hugging Face, We are also providing downloads on Hugging Face. You must first request a download from the Meta website using the same email address as your Hugging Face account. After doing so, you can request access to any of the models on Hugging Face and within 1-2 days your account will be granted access to all versions.) 
## Setup In a conda environment with PyTorch / CUDA available, clone the repo and run in the top-level directory: ``` pip install -e . ``` ## Inference Different models require different model-parallel (MP) values: | Model | MP | |-------|----| | 7B    | 1  | | 13B   | 2  | | 34B   | 4  | | 70B   | 8  | All models, except the 70B python and instruct versions, support sequence lengths up to 100,000 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware and use-case. ### Pretrained Code Models The Code Llama and Code Llama - Python models are not fine-tuned to follow instructions. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_completion.py` for some examples. To illustrate, see command below to run it with the `CodeLlama-7b` model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_completion.py \ --ckpt_dir CodeLlama-7b/ \ --tokenizer_path CodeLlama-7b/tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` Pretrained code models are: the Code Llama models `CodeLlama-7b`, `CodeLlama-13b`, `CodeLlama-34b`, `CodeLlama-70b` and the Code Llama - Python models `CodeLlama-7b-Python`, `CodeLlama-13b-Python`, `CodeLlama-34b-Python`, `CodeLlama-70b-Python`. ### Code Infilling Code Llama and Code Llama - Instruct 7B and 13B models are capable of filling in code given the surrounding context. See `example_infilling.py` for some examples. The `CodeLlama-7b` model can be run for infilling with the command below (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_infilling.py \ --ckpt_dir CodeLlama-7b/ \ --tokenizer_path CodeLlama-7b/tokenizer.model \ --max_seq_len 192 --max_batch_size 4 ``` Pretrained infilling models are: the Code Llama models `CodeLlama-7b` and `CodeLlama-13b` and the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`. ### Fine-tuned Instruction Models Code Llama - Instruct models are fine-tuned to follow instructions. To get the expected features and performance for the 7B, 13B and 34B variants, a specific formatting defined in [`chat_completion()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L319-L361) needs to be followed, including the `INST` and `< >` tags, `BOS` and `EOS` tokens, and the whitespaces and linebreaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). `CodeLlama-70b-Instruct` requires a separate turn-based prompt format defined in [`dialog_prompt_tokens()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L506-L548). You can use `chat_completion()` directly to generate answers with all instruct models; it will automatically perform the required formatting. You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/inference/safety_utils.py) of how to add a safety checker to the inputs and outputs of your inference code. 
Examples using `CodeLlama-7b-Instruct`: ``` torchrun --nproc_per_node 1 example_instructions.py \ --ckpt_dir CodeLlama-7b-Instruct/ \ --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \ --max_seq_len 512 --max_batch_size 4 ``` Fine-tuned instruction-following models are: the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`, `CodeLlama-34b-Instruct`, `CodeLlama-70b-Instruct`. Code Llama is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](https://github.com/facebookresearch/llama/blob/main/Responsible-Use-Guide.pdf). More details can be found in our research papers as well. ## Issues Please report any software “bug”, or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/codellama](http://github.com/facebookresearch/codellama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md) for the model card of Code Llama. ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE) file, as well as our accompanying [Acceptable Use Policy](https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md) ## References 1. [Code Llama Research Paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) 2. [Code Llama Blog Post](https://ai.meta.com/blog/code-llama-large-language-model-coding/)
+🤗 Models on Hugging Face | Blog | Website | CyberSec Eval Paper | Llama Guard Paper --- # Purple Llama Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release will include tools and evals for Cyber Security and Input/Output safeguards but we plan to contribute more in the near future. ## Why purple? Borrowing a [concept](https://www.youtube.com/watch?v=ab_Fdp6FVDI) from the cybersecurity world, we believe that to truly mitigate the challenges which generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks and the same ethos applies to generative AI and hence our investment in Purple Llama will be comprehensive. ## License Components within the Purple Llama project will be licensed permissively enabling both research and commercial usage. We believe this is a major step towards enabling community collaboration and standardizing the development and usage of trust and safety tools for generative AI development. More concretely evals and benchmarks are licensed under the MIT license while any models use the Llama 2 Community license. See the table below: | **Component Type** |            **Components**            |                                          **License**                                           | | :----------------- | :----------------------------------: | :--------------------------------------------------------------------------------------------: | | Evals/Benchmarks   | Cyber Security Eval (others to come) |                                              MIT                                               | | Models             |             Llama Guard              | [Llama 2 Community License](https://github.com/facebookresearch/PurpleLlama/blob/main/LICENSE) | | Models             |             Llama Guard 2            | Llama 3 Community License | | Safeguard          |             Code Shield              | MIT | ## Evals & Benchmarks ### Cybersecurity #### CyberSec Eval v1 CyberSec Eval v1 was what we believe was the first industry-wide set of cybersecurity safety evaluations for LLMs. These benchmarks are based on industry guidance and standards (e.g., CWE and MITRE ATT&CK) and built in collaboration with our security subject matter experts. We aim to provide tools that will help address some risks outlined in the [White House commitments on developing responsible AI](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/), including: * Metrics for quantifying LLM cybersecurity risks. * Tools to evaluate the frequency of insecure code suggestions. * Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyberattacks. We believe these tools will reduce the frequency of LLMs suggesting insecure AI-generated code and reduce their helpfulness to cyber adversaries. Our initial results show that there are meaningful cybersecurity risks for LLMs, both with recommending insecure code and for complying with malicious requests. 
See our [Cybersec Eval paper](https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/) for more details. #### CyberSec Eval 2 CyberSec Eval 2 expands on its predecessor by measuring an LLM’s propensity to abuse a code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection. You can read the paper [here](https://ai.meta.com/research/publications/cyberseceval-2-a-wide-ranging-cybersecurity-evaluation-suite-for-large-language-models/). You can also check out the 🤗 leaderboard [here](https://huggingface.co/spaces/facebook/CyberSecEval). ## System-Level Safeguards As we outlined in Llama 3’s [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/), we recommend that all inputs and outputs to the LLM be checked and filtered in accordance with content guidelines appropriate to the application. ### Llama Guard To support this, and empower the community, we released Llama Guard, an openly-available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs. As part of our ongoing commitment to open and transparent science, we also released our methodology and an extended discussion of model performance in our [Llama Guard paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/). We are happy to share an updated version, Meta Llama Guard 2. Llama Guard 2 was optimized to support the newly [announced](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) policy published by MLCommons, expanding its coverage to a more comprehensive set of safety categories, out-of-the-box. It also comes with better classification performance than Llama Guard 1 and improved zero-shot and few shot adaptability. Ultimately, our vision is to enable developers to customize this model to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem. ### Code Shield Code Shield adds support for inference-time filtering of insecure code produced by LLMs. Code Shield offers mitigation of insecure code suggestions risk, code interpreter abuse prevention, and secure command execution. [CodeShield Example Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb). ## Getting Started To get started and learn how to use Purple Llama components with Llama models, see the getting started guide [here](https://ai.meta.com/llama/get-started/). The guide provides information and resources to help you set up Llama, including how to access the model, hosting how-to information and integration guides. Additionally, you will find supplemental materials to further assist you while responsibly building with Llama. The guide will be updated as more Purple Llama components get released. ## FAQ For a running list of frequently asked questions, for not only Purple Llama components but also generally for Llama models, see the FAQ [here](https://ai.meta.com/llama/faq/). ## Join the Purple Llama community See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.
+# Model Details Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Below is a response classification example input and output for Llama Guard 2. In order to produce classifier scores, we look at the probability for the first token, and use that as the “unsafe” class probability. We can then apply score thresholding to make binary decisions. # Harm Taxonomy and Policy The model is trained to predict safety labels on the 11 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of hazards. Harm categories S1: Violent Crimes S2: Non-Violent Crimes S3: Sex-Related Crimes S4: Child Sexual Exploitation S5: Specialized Advice S6: Privacy S7: Intellectual Property S8: Indiscriminate Weapons S9: Hate S10: Suicide & Self-Harm S11: Sexual Content **S1: Violent Crimes** Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse) **S2: Non-Violent Crimes** Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking) **S3: Sex-Related Crimes** Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution **S4: Child Sexual Exploitation** Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children **S5: Specialized Advice** Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe **S6: Privacy** Responses that contain sensitive, nonpublic personal information that could undermine someone's physical, digital, or financial security **S7: Intellectual Property** Responses that may violate the intellectual property rights of any third party **S8: Indiscriminate Weapons** Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions) **S9: Hate** Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease) **S10: Suicide & Self-Harm** Responses that enable, 
encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating **S11: Sexual Content** Responses that contain erotica # Training and Evaluation Data We start with the training set of Llama Guard (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md#training-data)), and obtain labels on the Harm Taxonomy described above. To improve adaptability of the model to different prompts, we train on hard samples, which are obtained by taking an existing sample and prompting Llama2 70B to produce an alternate policy description that will flip the label of the given sample. We report metrics for various models and APIs on our validation set, which is obtained from combining the validation set of Llama Guard v1 and held-out samples from the additional Llama 3 safety data. We compare performance on our internal test set, as well as on open datasets like [XSTest](https://github.com/paul-rottger/exaggerated-safety?tab=readme-ov-file#license), [OpenAI moderation](https://github.com/openai/moderation-api-release), and [BeaverTails](https://github.com/PKU-Alignment/beavertails). We find that there is overlap between our training set and the BeaverTails-30k test split. Since both our internal test set and BeaverTails use prompts from the Anthropic's [hh-rlhf dataset](https://github.com/anthropics/hh-rlhf) as a starting point for curating data, it is possible that different splits of Anthropic were used while creating the two datasets. Therefore to prevent leakage of signal between our train set and the BeaverTails-30k test set, we create our own BeaverTails-30k splits based on the Anthropic train-test splits used for creating our internal sets. *Note on evaluations*: As discussed in the Llama Guard [paper](https://arxiv.org/abs/2312.06674), comparing model performance is not straightforward as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space. # Model Performance We evaluate the performance of Llama Guard 2 and compare it with Llama Guard and popular content moderation APIs such as Azure, OpenAI Moderation, and Perspective. We use the token probability of the first output token (i.e. safe/unsafe) as the score for classification. For obtaining a binary classification decision from the score, we use a threshold of 0.5. Llama Guard 2 improves over Llama Guard, and outperforms other approaches on our internal test set. Note that we manage to achieve great performance while keeping a low false positive rate as we know that over-moderation can impact user experience when building LLM-applications. 
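As an aside, here is a minimal sketch of the first-token scoring and 0.5 thresholding described above, using Hugging Face `transformers`. The checkpoint id, the placeholder prompt, and the assumption that the first sub-token of "unsafe" identifies the class are all illustrative, not part of the model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id for illustration; substitute your local Llama Guard 2 download.
model_id = "meta-llama/Meta-Llama-Guard-2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def unsafe_score(formatted_prompt: str) -> float:
    """Probability that the first generated token is 'unsafe'."""
    inputs = tokenizer(formatted_prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Distribution over the next (i.e. first generated) token.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    # Assumption: the first sub-token of "unsafe" identifies the class.
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    return next_token_probs[unsafe_id].item()

# Binary decision via score thresholding, using the 0.5 threshold from the text.
prompt = "..."  # a conversation rendered with the Llama Guard 2 prompt format
is_unsafe = unsafe_score(prompt) >= 0.5
```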
| **Model**                | **F1 ↑**  | **AUPRC ↑** | **False Positive Rate ↓** |
|--------------------------|:---------:|:-----------:|:-------------------------:|
| Llama Guard\*            |   0.665   |    0.854    |           0.027           |
| Llama Guard 2            | **0.915** |  **0.974**  |           0.040           |
| GPT4                     |   0.796   |     N/A     |           0.151           |
| OpenAI Moderation API    |   0.347   |    0.669    |           0.030           |
| Azure Content Safety API |   0.519   |     N/A     |           0.245           |
| Perspective API          |   0.265   |    0.586    |           0.046           |

Table 1: Comparison of performance of various approaches measured on our internal test set. \*The performance of Llama Guard is lower on our new test set due to expansion of the number of harm categories from 6 to 11, which is not aligned to what Llama Guard was trained on.

| **Category**           | **False Negative Rate\* ↓** | **False Positive Rate ↓** |
|------------------------|:---------------------------:|:-------------------------:|
| Violent Crimes         |            0.042            |           0.002           |
| Privacy                |            0.057            |           0.004           |
| Non-Violent Crimes     |            0.082            |           0.009           |
| Intellectual Property  |            0.099            |           0.004           |
| Hate                   |            0.190            |           0.005           |
| Specialized Advice     |            0.192            |           0.009           |
| Sexual Content         |            0.229            |           0.004           |
| Indiscriminate Weapons |            0.263            |           0.001           |
| Child Exploitation     |            0.267            |           0.000           |
| Sex Crimes             |            0.275            |           0.002           |
| Self-Harm              |            0.277            |           0.002           |

Table 2: Category-wise breakdown of false negative rate and false positive rate for Llama Guard 2 on our internal benchmark for response classification with safety labels from the ML Commons taxonomy. \*The binary safe/unsafe label is used to compute categorical FNR by using the true categories. We do not penalize the model while computing FNR for cases where the model predicts the correct overall label but an incorrect categorical label.

We also report performance on OSS safety datasets, though we note that the policy used for assigning safety labels is not aligned with the policy used while training Llama Guard 2. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate on the XSTest and OpenAI Moderation datasets, demonstrating good adaptability to other policies. The BeaverTails dataset has a lower bar for a sample to be considered unsafe compared to Llama Guard 2's policy. The policy and training data of MDJudge [4] is more aligned with this dataset and we see that it performs better on them as expected (at the cost of a higher FPR). GPT-4 achieves high recall on all of the sets but at the cost of very high FPR (9-25%), which could hurt its ability to be used as a safeguard for practical applications.
| (F1 ↑ / False Positive Rate ↓) | False Refusals (XSTest) | OpenAI policy (OpenAI Mod) | BeaverTails policy (BeaverTails-30k) |
|--------------------------------|:-----------------------:|:--------------------------:|:------------------------------------:|
| Llama Guard                    |      0.737 / 0.079      |       0.737 / 0.079        |            0.599 / 0.035             |
| Llama Guard 2                  |      0.884 / 0.084      |       0.807 / 0.060        |            0.736 / 0.059             |
| MDJudge                        |      0.856 / 0.172      |       0.768 / 0.212        |            0.849 / 0.098             |
| GPT4                           |      0.895 / 0.128      |       0.842 / 0.092        |            0.802 / 0.256             |
| OpenAI Mod API                 |      0.576 / 0.040      |       0.788 / 0.156        |            0.284 / 0.056             |

Table 3: Comparison of performance of various approaches measured on our internal test set for response classification. NOTE: The policy used for training Llama Guard does not align with those used for labeling these datasets. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate across these datasets, demonstrating strong adaptability to other policies.

We hope to provide developers with a high-performing moderation solution for most use cases by aligning Llama Guard 2 taxonomy with MLCommons standard. But as outlined in our Responsible Use Guide, each use case requires specific safety considerations and we encourage developers to tune Llama Guard 2 for their own use case to achieve better moderation for their custom policies. As an example of how Llama Guard 2's performance may change, we train on the BeaverTails training dataset and compare against MDJudge (which was trained on BeaverTails among others).

|          **Model**          | **F1 ↑**  | **False Positive Rate ↓** |
|:---------------------------:|:---------:|:-------------------------:|
| Llama Guard 2               |   0.736   |           0.059           |
| MDJudge                     |   0.849   |           0.098           |
| Llama Guard 2 + BeaverTails | **0.852** |           0.101           |

Table 4: Comparison of performance on BeaverTails-30k.

# Limitations There are some limitations associated with Llama Guard 2. First, Llama Guard 2 itself is an LLM fine-tuned on Llama 3. Thus, its performance (e.g., judgments that need common sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data. Second, Llama Guard 2 is finetuned for safety classification only (i.e. to generate "safe" or "unsafe"), and is not designed for chat use cases. However, since it is an LLM, it can still be prompted with any text to obtain a completion. Lastly, as an LLM, Llama Guard 2 may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. However, with the help of external components (e.g., KNN, perplexity filter), recent work (e.g., [3]) demonstrates that Llama Guard is able to detect harmful content reliably. **Note on Llama Guard 2's policy** Llama Guard 2 supports 11 out of the 13 categories included in the [MLCommons AI Safety](https://mlcommons.org/working-groups/ai-safety/ai-safety/) taxonomy. The Election and Defamation categories are not addressed by Llama Guard 2 as moderating these harm categories requires access to up-to-date, factual information sources and the ability to determine the veracity of a particular output. To support the additional categories, we recommend using other solutions (e.g. Retrieval Augmented Generation) in tandem with Llama Guard 2 to evaluate information correctness.
# Citation ``` @misc{metallamaguard2, author =       {Llama Team}, title =        {Meta Llama Guard 2}, howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}}, year =         {2024} } ``` # References [1] [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) [2] [Llama Guard Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md) [3] [RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content](https://arxiv.org/pdf/2403.13031.pdf) [4] [MDJudge for Salad-Bench](https://huggingface.co/OpenSafetyLab/MD-Judge-v0.1)
+# Meta Llama Guard 2 Llama Guard 2 is a model that provides input and output guardrails for LLM deployments, based on MLCommons policy. # Download In order to download the model weights and tokenizer, please visit the [Meta website](https://llama.meta.com/llama-downloads) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have wget and md5sum installed. Then to run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. # Quick Start Since Llama Guard 2 is a fine-tuned Llama3 model (see our [model card](MODEL_CARD.md) for more information), the same quick start steps outlined in our [README file](https://github.com/meta-llama/llama3/blob/main/README.md) for Llama3 apply here. In addition to that, we added examples using Llama Guard 2 in the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). # Issues Please report any software bug, or other problems with the models through one of the following means: - Reporting issues with the Llama Guard model: [github.com/meta-llama/PurpleLlama](https://github.com/meta-llama/PurpleLlama) - Reporting issues with Llama in general: [github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](https://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](https://facebook.com/whitehat/info) # License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. The same license as Llama 3 applies: see the [LICENSE](../LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md). # Citation ``` @misc{metallamaguard2, author =       {Llama Team}, title =        {Meta Llama Guard 2}, howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}}, year =         {2024} } ``` # References [Research Paper](https://ai.facebook.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/)
+# Model Details Llama Guard is a 7B parameter [Llama 2](https://arxiv.org/abs/2307.09288)-based input-output safeguard model. It can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text in its output that indicates whether a given prompt or response is safe/unsafe, and if unsafe based on a policy, it also lists the violating subcategories. Here is an example: In order to produce classifier scores, we look at the probability for the first token, and turn that into an “unsafe” class probability. Model users can then make binary decisions by applying a desired threshold to the probability scores. # Training and Evaluation ## Training Data We use a mix of prompts that come from the Anthropic [dataset](https://github.com/anthropics/hh-rlhf) and redteaming examples that we have collected in house, in a separate process from our production redteaming. In particular, we took the prompts only from the Anthropic dataset, and generated new responses from our in-house LLaMA models, using jailbreaking techniques to elicit violating responses. We then annotated Anthropic data (prompts & responses) in house, mapping labels according to the categories identified above. Overall we have ~13K training examples. ## Taxonomy of harms and Risk Guidelines As automated content risk mitigation relies on classifiers to make decisions about content in real time, a prerequisite to building these systems is to have the following components: - A **taxonomy** of risks that are of interest – these become the classes of a classifier. - A **risk guideline** that determines where we put the line between encouraged and discouraged outputs for each risk category in the taxonomy. Together with this model, we release an open taxonomy inspired by existing open taxonomies such as those employed by Google, Microsoft and OpenAI in the hope that it can be useful to the community. This taxonomy does not necessarily reflect Meta's own internal policies and is meant to demonstrate the value of our method to tune LLMs into classifiers that show high performance and high degrees of adaptability to different policies. ### The Llama Guard Safety Taxonomy & Risk Guidelines Below, we provide both the harm types themselves under this taxonomy and also examples of the specific kinds of content that would be considered harmful under each category: - **Violence & Hate** encompasses statements that encourage or could help people plan or engage in violence. Similarly, statements that advocate discrimination, contain slurs, or voice hateful sentiments against people based on their sensitive personal characteristics (ex: race, color, religion, national origin, sexual orientation, gender, gender identity, or disability) would also be considered inappropriate under this category. - **Sexual Content** encompasses statements encouraging someone (who could be underage) to engage in specific sex acts. Similarly, sexually explicit (i.e., erotic) statements would also be considered inappropriate under this category. - **Guns & Illegal Weapons** encompasses statements that specifically encourage, condone, or could help people plan or execute specific crimes involving the illegal acquisition, creation, or use of guns or illegal weapons (ex: explosives, biological agents, or chemical weapons), including instructions on how to create or use them. 
- **Regulated or Controlled Substances** encompasses statements that specifically encourage or could help people to illegally produce, transfer, or consume regulated or controlled substances like illegal drugs, tobacco, alcohol, or cannabis. - **Suicide & Self Harm** encompasses statements that encourage, condone, or enable people to harm themselves (ex: by providing instructions or information on methods of self-harm). When people express or imply an intent to harm themselves, any response other than one that provides links to appropriate health resources would be considered inappropriate (even if entirely supportive). - **Criminal Planning** encompasses miscellaneous statements that encourage, condone, or could help people plan or execute specific criminal activities, like arson, kidnapping, or theft. Items should only be considered harmful under this category when they could not be read as violating any of the other harm types above (ex: statements that encourage violence should be considered violating under Violence & Hate rather than this category). ## Evaluation results We compare the performance of the model against standard content moderation APIs in the industry, including [OpenAI](https://platform.openai.com/docs/guides/moderation/overview), [Azure Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories), and [PerspectiveAPI](https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US) from Google on both public and in-house benchmarks. The public benchmarks include [ToxicChat](https://huggingface.co/datasets/lmsys/toxic-chat) and [OpenAI Moderation](https://github.com/openai/moderation-api-release). Note: comparisons are not exactly apples-to-apples due to mismatches in each taxonomy. The interested reader can find a more detailed discussion about this in our [paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/).

|                 | Our Test Set (Prompt) | OpenAI Mod | ToxicChat | Our Test Set (Response) |
| --------------- | --------------------- | ---------- | --------- | ----------------------- |
| Llama Guard     | **0.945**             | 0.847      | **0.626** | **0.953**               |
| OpenAI API      | 0.764                 | **0.856**  | 0.588     | 0.769                   |
| Perspective API | 0.728                 | 0.787      | 0.532     | 0.699                   |
+# Llama Guard Llama Guard is a new experimental model that provides input and output guardrails for LLM deployments. # Download In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have wget and md5sum installed. Then to run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. # Quick Start Since Llama Guard is a fine-tuned Llama-7B model (see our [model card](MODEL_CARD.md) for more information), the same quick start steps outlined in our [README file](https://github.com/facebookresearch/llama/blob/main/README.md) for Llama2 apply here. In addition to that, we added examples using Llama Guard in the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes). # Issues Please report any software bug, or other problems with the models through one of the following means: - Reporting issues with the Llama Guard model: [github.com/facebookresearch/purplellama](github.com/facebookresearch/purplellama) - Reporting issues with Llama in general: [github.com/facebookresearch/llama](github.com/facebookresearch/llama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](facebook.com/whitehat/info) # License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. The same license as Llama 2 applies: see the [LICENSE](../LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY). # References [Research Paper](https://ai.facebook.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/)
+
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
index 930717997..bdfa0f176 100644
--- a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
@@ -1,22 +1,47 @@
 eval_prompt_template: >
-  You are a AI assistant that skilled in answering questions related to Llama language models,
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an AI assistant skilled in answering questions related to Llama language models,
   which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1,	Meta Llama Guard 2,
-  Below is a question from a llama user, think step by step, make the answer as concise as possible,
-  The returned answer should be no more than 100 words.Please return the answers in text directly without any special tokens.
-
+  Below is a question from a Llama user, please answer it to the best of your knowledge.
+  The returned answer should be no more than 100 words. Please return the answer as text directly, without any special tokens.<|eot_id|>
+  <|start_header_id|>user<|end_header_id|>
+  Question:{question} \n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+# judge_prompt_template: >
+#   <|begin_of_text|><|start_header_id|>system<|end_header_id|>You have been provided with a question, a teacher's answer and a student's answer below. Given that question, you need to score how good the student's answer is compared to
+#   the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES, else return NO.
+#   Review it carefully to make sure that the keywords and numerical values are exactly the same.
+#   Only respond with "YES" or "NO", do not respond with anything else.<|eot_id|>
+#   <|start_header_id|>user<|end_header_id|>
+#   Question: {question} \n Teacher's Answer: {gold} \n Student's Answer: {prediction} <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 judge_prompt_template: >
-  You have been provided with a question, a teacher's answer and a student's answer above. Given that question, you need to score the how good the student answer is compare to
-  the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES, else return NO.
-  Review it carefully to make sure that the keywords and numerical vaules are exactly the same.
-  Only respond with "YES" or "NO", do not respond with anything else.
+    <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a teacher grading a quiz.
+
+    You will be given a QUESTION, the GROUND TRUTH (correct) ANSWER, and the STUDENT ANSWER.
+
+    Here is the grade criteria to follow:
+    (1) Grade the student answers based ONLY on their factual accuracy relative to the ground truth answer.
+    (2) Ensure that the student answer does not contain any conflicting statements.
+    (3) It is OK if the student answer contains more information than the ground truth answer, as long as it is factually accurate relative to the ground truth answer.
+
+    Score:
+    YES means that the student's answer meets all of the criteria. This is the highest (best) score.
+    NO means that the student's answer does not meet all of the criteria. This is the lowest possible score you can give.
+
+    Explain your reasoning in a step-by-step manner to ensure your reasoning and conclusion are correct.
 
+    Avoid simply stating the correct answer at the outset.
+    End your response with the final answer in the form: $answer, where answer must be YES or NO <|eot_id|>
+    <|start_header_id|>user<|end_header_id|>
+    QUESTION: {question}
+    GROUND TRUTH ANSWER: {gold}
+    STUDENT ANSWER: {prediction}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 RAG_prompt_template: >
-  Question: {question}\n Context: {context}\n
-  Answer this question using the information given in the context above. Here is things to pay attention to:
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here are some things to pay attention to:
     - First provide step-by-step reasoning on how to answer the question.
     - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
-    - End your response with final answer in the form : $answer, the answer should be succinct.
-  You MUST begin your final answer with the tag ":
+    - End your response with the final answer in the form: $answer, the answer should be less than 60 words.
+    You MUST begin your final answer with the tag ":<|eot_id|>
+  <|start_header_id|>user<|end_header_id|>
+  Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 eval_json: "./evalset.json"
 
 raft_model_name: "raft-8b"
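As a quick sanity check on the templates above, here is a minimal sketch of how the config is rendered and sent to a local vLLM OpenAI-compatible endpoint. The file path, port, and model name are illustrative assumptions, and plain `yaml.safe_load` stands in for the repo's `load_config` helper:

```python
import yaml
from langchain_openai import ChatOpenAI

with open("eval_config.yaml") as f:  # assumed relative path
    cfg = yaml.safe_load(f)

# The template already embeds the Llama 3 special tokens, so a plain
# str.format() is all that is needed to render a single-turn prompt.
prompt = cfg["eval_prompt_template"].format(question="What is llama-recipes?")

# Assumed: a local vLLM server exposing an OpenAI-compatible API on port 8000.
llm = ChatOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="raft-8b",
    temperature=0.0,
    max_tokens=1000,
)
print(llm.invoke(prompt).content)
```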
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/eval_raft.py
index 53f5caa4f..c7a422cbf 100644
--- a/recipes/use_cases/end2end-recipes/raft/eval_raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/eval_raft.py
@@ -1,13 +1,12 @@
 # Copyright (c) Meta Platforms, Inc. and affiliates.
 # This software may be used and distributed according to the terms of the Llama 3 Community License Agreement.
-from chat_utils import OctoAIChatService, VllmChatService
 import logging
 import evaluate
 import argparse
 from config import load_config
 import json
 from itertools import chain
-from langchain_community.llms import VLLMOpenAI
+from langchain_openai import ChatOpenAI
 
 from langchain_community.embeddings import HuggingFaceEmbeddings
 from langchain_community.vectorstores import FAISS
@@ -26,17 +25,17 @@ def generate_answers_model_only(model_name,question_list,api_url="http://localho
         # Use langchain to load the documents from data directory
     # Load the RAFT model
 
-    llm = VLLMOpenAI(
+    llm = ChatOpenAI(
         openai_api_key=key,
         openai_api_base=api_url,
         model_name=model_name,
         temperature=0.0,
-        max_tokens=100
+        max_tokens=1000
         )
-    system_prompt = SystemMessage(content=context['eval_prompt_template'])
-    generated_answers = []
-    all_tasks = [[system_prompt, HumanMessage(content=question)] for question in question_list]
+
+    all_tasks = [api_config['eval_prompt_template'].format(question=question) for question in question_list]
     generated_answers = llm.batch(all_tasks)
+    generated_answers = [item.content for item in generated_answers]
     if len(generated_answers) == 0:
         logging.error("No model answers generated. Please check the input context or model configuration in ",model_name)
         return []
@@ -48,7 +47,12 @@ def format_docs_raft(docs):
     return context
 def format_docs(docs):
     return "\n\n".join(doc.page_content for doc in docs)
-def generate_answers_with_RAG(model_name, data_dir,question_list,rag_template,api_url="http://localhost:8000/v1",key="EMPTY"):
+def generate_answers_with_RAG(model_name, question_list,api_config,api_url_overwrite=None):
+    data_dir = api_config['data_dir']
+    api_url = "http://localhost:"+str(api_config['vllm_endpoint'])+"/v1"
+    if api_url_overwrite:
+        api_url = api_url_overwrite
+    key = api_config['api_key']
     # Use langchain to load the documents from data directory
     loader = DirectoryLoader(data_dir)
     docs = loader.load()
@@ -62,12 +66,12 @@ def generate_answers_with_RAG(model_name, data_dir,question_list,rag_template,ap
         search_kwargs={"k": 5}
     )
     # Load the RAFT model
-    llm = VLLMOpenAI(
+    llm = ChatOpenAI(
         openai_api_key=key,
         openai_api_base=api_url,
         model_name=model_name,
         temperature=0.0,
-        max_tokens=100
+        max_tokens=1000
         )
     all_tasks = []
     for q in question_list:
@@ -79,9 +83,10 @@ def generate_answers_with_RAG(model_name, data_dir,question_list,rag_template,ap
         else:
             documents = format_docs_raft(retrieved_docs)
         # create a prompt
-        text = rag_template.format(context=documents,question=q)
+        text = api_config["RAG_prompt_template"].format(context=documents,question=q)
         all_tasks.append(text)
     generated_answers = llm.batch(all_tasks)
+    generated_answers = [item.content for item in generated_answers]
     if len(generated_answers) == 0:
         logging.error("No RAG answers generated. Please check the input context or model configuration in ",model_name)
         return []
@@ -101,6 +106,7 @@ def clean_text_list(text_list):
         index = text.rfind("")
         if index!= -1:
             text = text[index:]
+            text = text.replace(":","")
         text = text.replace("begin_quote","")
         text = text.replace("end_quote","")
         text = text.replace("##","")
@@ -144,27 +150,24 @@ def compute_bert_score(generated : list, reference: list):
     precision = score["precision"]
     recall = score["recall"]
     return sum(precision)/len(precision), sum(recall)/len(recall), sum(f1)/len(f1)
-def compute_judge_score(questions: list, generated : list, reference: list, context,api_url="http://localhost:8001/v1",key="EMPTY"):
+def compute_judge_score(questions: list, generated : list, reference: list, api_config,api_url="http://localhost:8001/v1",key="EMPTY"):
     correct_num = 0
     model_name = "meta-llama/Meta-Llama-3-70B-Instruct"
-    llm = VLLMOpenAI(
+    llm = ChatOpenAI(
         openai_api_key=key,
         openai_api_base=api_url,
         model_name=model_name,
+        max_tokens=1000,
         temperature=0.0)
     all_tasks = []
-    for q,pred,gold in zip(questions, generated,reference):
-        messages = [
-            HumanMessage(content=f"Question: {q} \n Teacher's Answer: {gold} \n Student's Answer: {pred} "),
-            SystemMessage(content=context['judge_prompt_template'])
-        ]
-        all_tasks.append(messages)
-    response = llm.batch(all_tasks)
-    for response in response:
-        if  "YES" in response:
-            correct_num += 1
-    return correct_num/len(questions)
-def score_single(context,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=True, run_llm_as_judge=False):
+    for question,prediction,gold in zip(questions, generated,reference):
+        message = api_config['judge_prompt_template'].format(question=question,prediction=prediction,gold=gold)
+        all_tasks.append(message)
+    judge_responses = llm.batch(all_tasks)
+    judge_responses = ["YES" in item.content.split("")[-1] for item in judge_responses]
+    correct_num = sum(judge_responses)
+    return correct_num/len(questions),judge_responses
+def score_single(api_config,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=True, run_llm_as_judge=True):
     # set metric to default -1, means no metric is computed
     metric = {
         "Rouge_score": -1,
@@ -184,22 +187,23 @@ def score_single(context,generated,reference,questions, run_exact_match=True,run
         metric["BERTScore_Precision"] = P
         metric["BERTScore_Recall"] = R
         metric["BERTScore_F1"] = F1
-    if context["judge_endpoint"] and run_llm_as_judge:
-        api_url = "http://localhost:"+str(context["judge_endpoint"])+"/v1"
-        LLM_judge_score = compute_judge_score(questions, generated, reference, context,api_url=api_url)
+    if api_config["judge_endpoint"] and run_llm_as_judge:
+        api_url = "http://localhost:"+str(api_config["judge_endpoint"])+"/v1"
+        LLM_judge_score,judge_responses = compute_judge_score(questions, generated, reference, api_config,api_url=api_url)
         metric["LLM_judge_score"] = LLM_judge_score
+        metric["LLM_judge_responses"] = judge_responses
         print(f"LLM_judge_score: {LLM_judge_score}")
     if run_exact_match:
         exact_match = exact_match_score(generated,reference)
         print(f"Exact_match_percentage: {exact_match:.4f}")
         metric["Exact_match"] = exact_match
     return metric
-def main(context):
+def main(api_config):
     # Since the eval set is small, we can run the eval without async functions
     try:
-        api_url = "http://localhost:"+str(context["vllm_endpoint"])+"/v1"
+        api_url = "http://localhost:"+str(api_config["vllm_endpoint"])+"/v1"
         logging.info("Starting to generate answer given the eval set.")
-        with open(context["eval_json"]) as fp:
+        with open(api_config["eval_json"]) as fp:
             eval_json = json.load(fp)
         questions,groud_truth = [],[]
         for index, item in enumerate(eval_json):
@@ -210,38 +214,73 @@ def main(context):
             "RAFT_RAG": [],
             "Baseline": [],
             "Baseline_RAG": [],
+            "70B_RAG": [],
+            "70B_Base": [],
         }
         # Generate answers for baseline
-        base_model_name = context["base_model_name"]
-        #generated_answers["Baseline"] = generate_answers_model_only(base_model_name,questions,api_url)
-        generated_answers["Baseline_RAG"] = generate_answers_with_RAG(base_model_name, context["data_dir"],questions,context['RAG_prompt_template'],api_url)
+        base_model_name = api_config["base_model_name"]
+        generated_answers["Baseline"] = generate_answers_model_only(base_model_name,questions,api_url)
+        generated_answers["Baseline_RAG"] = generate_answers_with_RAG(base_model_name, questions,api_config)
         # Generate answers for RAFT
-        raft_model_name = context["raft_model_name"]
-        #generated_answers["RAFT"] = generate_answers_model_only(raft_model_name,questions,api_url)
-        generated_answers["RAFT_RAG"] = generate_answers_with_RAG(raft_model_name, context["data_dir"],questions,context['RAG_prompt_template'],api_url)
+        raft_model_name = api_config["raft_model_name"]
+        generated_answers["RAFT"] = generate_answers_model_only(raft_model_name,questions,api_url)
+        generated_answers["RAFT_RAG"] = generate_answers_with_RAG(raft_model_name, questions,api_config)
+
+        large_model_name = "meta-llama/Meta-Llama-3-70B-Instruct"
+        large_api_url = "http://localhost:"+str(api_config["judge_endpoint"])+"/v1"
+        generated_answers["70B_Base"] = generate_answers_model_only(large_model_name,questions,large_api_url)
+        generated_answers["70B_RAG"] = generate_answers_with_RAG(large_model_name, questions,api_config,large_api_url,)
         logging.info(f"Successfully generated {len(generated_answers['Baseline_RAG'])} answers for all models.")
         # for generate answer from each model, compute the score metric
+        all_metrics = []
         for model_name,model_answer in generated_answers.items():
             if len(model_answer) != len(groud_truth):
                 print(f"The length of {model_name} answer is not equal to the length of ground truth.")
                 continue
-            metric = score_single(context,model_answer,groud_truth,questions)
+            metric = score_single(api_config,model_answer,groud_truth,questions)
             print(f"The eval result for {model_name} is: {metric}")
-            with open(context["output_log"],"a") as fp:
+            with open(api_config["output_log"],"a") as fp:
                 fp.write(f"Eval_result for {model_name} \n")
                 fp.write(f"Rouge_score: {metric['Rouge_score']} \n")
                 fp.write(f"BERTScore Precision: {metric['BERTScore_Precision']:.4f}, Recall: {metric['BERTScore_Recall']:.4f}, F1: {metric['BERTScore_F1']:.4f} \n")
                 fp.write(f"Exact_match_percentage: {metric['Exact_match']} \n")
-                if context["judge_endpoint"]:
+                judge_responses = ["None"] * len(questions)
+                if api_config["judge_endpoint"]:
                     fp.write(f"LLM_judge_score: {metric['LLM_judge_score']} \n")
+                    judge_responses = metric["LLM_judge_responses"]
+                    all_metrics.append((model_name,metric['LLM_judge_score'],metric["LLM_judge_responses"]))
                 fp.write(f"QA details: \n")
-                for item in zip(questions,model_answer,groud_truth):
+                for item in zip(questions,model_answer,groud_truth,judge_responses):
                     fp.write(f"question: {item[0]} \n")
                     fp.write(f"generated_answers: {item[1]} \n")
                     fp.write(f"groud_truth: {item[2]} \n")
+                    fp.write(f"LLM_judge_response: {item[3]} \n")
                     fp.write("\n")
                 fp.write("\n------------------------------------\n")
-        logging.info(f"Eval successfully, the eval result is saved to {context['output_log']}.")
+        # Now we want to take a closer look at the questions that are not answered the same by all the models.
+        judge_zip = list(zip(*[item[-1] for item in all_metrics]))
+        with open(api_config["output_log"],"a") as fp:
+            for item in all_metrics:
+                fp.write(f"Model_Name: {item[0]}, LLM_SCORE: {item[1]} \n")
+            for idx,item in enumerate(judge_zip):
+                # if all the responses are "YES" or all the responses are "NO", then we skip this question
+                if sum([r=="YES" for r in item]) == len(item) or sum([r=="YES" for r in item]) == 0:
+                    continue 
+                else:
+                    fp.write(f"Comparing interested question: {questions[idx]} \n")
+                    fp.write(f"groud_truth: {groud_truth[idx]} \n")
+                    fp.write(f"{item[2]} Baseline_answers: {generated_answers['Baseline'][idx]} \n")
+                    fp.write(f"{item[3]} Baseline_RAG_answers: {generated_answers['Baseline_RAG'][idx]} \n")
+                    fp.write(f"{item[0]} RAFT_answers: {generated_answers['RAFT'][idx]} \n")
+                    fp.write(f"{item[1]} RAFT_RAG_answers: {generated_answers['RAFT_RAG'][idx]} \n")
+                    fp.write(f"{item[4]} 70B_Base_answers: {generated_answers['70B_Base'][idx]} \n")
+                    fp.write(f"{item[5]} 70B_RAG_answers: {generated_answers['70B_RAG'][idx]} \n")
+                    fp.write("-------\n")
+
+        logging.info(f"Eval successfully, the eval result is saved to {api_config['output_log']}.")
         # Saving the eval result to a log file
     except Exception as e:
         logging.error(f"An unexpected error occurred during the process: {e}",exc_info=True)
@@ -283,20 +322,26 @@ def parse_arguments():
         default="eval_result.log",
         help="save the eval result to a log file. Default is eval_result.log"
     )
-
+    parser.add_argument(
+        "-k", "--api_key",
+        default="EMPTY",
+        type=str,
+        help="LLM API key for generating question/answer pairs."
+    )
     return parser.parse_args()
 
 if __name__ == "__main__":
     logging.info("Initializing the process and loading configuration...")
     args = parse_arguments()
-    context = load_config(args.config_path)
-    context["vllm_endpoint"] = args.vllm_endpoint
+    api_config = load_config(args.config_path)
+    api_config["vllm_endpoint"] = args.vllm_endpoint
     if args.data_dir:
-        context["data_dir"] = args.data_dir
+        api_config["data_dir"] = args.data_dir
     if args.raft_model_name:
-        context["raft_model_name"] = args.raft_model_name
-    context["judge_endpoint"] = args.judge_endpoint
-    context["output_log"] = args.output_log
-    if context["judge_endpoint"]:
+        api_config["raft_model_name"] = args.raft_model_name
+    api_config["judge_endpoint"] = args.judge_endpoint
+    api_config["output_log"] = args.output_log
+    api_config["api_key"] = args.api_key
+    if api_config["judge_endpoint"]:
         logging.info(f"Use local vllm service for judge at port: '{args.judge_endpoint}'.")
-    main(context)
+    main(api_config)
diff --git a/recipes/use_cases/end2end-recipes/raft/evalset.json b/recipes/use_cases/end2end-recipes/raft/evalset.json
index e0787bfe3..e3d5a1842 100644
--- a/recipes/use_cases/end2end-recipes/raft/evalset.json
+++ b/recipes/use_cases/end2end-recipes/raft/evalset.json
@@ -1,178 +1,130 @@
 [
     {
-        "question":"What is llama-recipes?",
-        "answer": "The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
-    },
-    {
-        "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?",
-        "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken."
-    },
-    {
-        "question":"How many tokens were used in Meta Llama 3 pretrain?",
-        "answer": "Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
-    },
-    {
-        "question":"How many tokens were used in  Llama 2 pretrain?",
-        "answer": "Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
-    },
-    {
-        "question":"What is the name of the license agreement that Meta Llama 3 is under?",
-        "answer": "Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
-    },
-    {
-        "question":"What is the name of the license agreement that Llama 2 is under?",
-        "answer": "LLAMA 2 COMMUNITY LICENSE AGREEMENT."
-    },
-    {
-        "question":"What is the context length of Llama 2 models?",
-        "answer": "Llama 2's context is 4k"
-    },
-    {
-        "question":"What is the context length of Meta Llama 3 models?",
-        "answer": "Meta Llama 3's context is 8k"
-    },
-    {
-        "question":"When is Llama 2 trained?",
-        "answer": "Llama 2 was trained between January 2023 and July 2023."
-    },
-    {
-        "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ",
-        "answer": "Llama 2 70B"
-    },
-    {
-        "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ",
-        "answer": "Meta Llama 3 8B and Meta Llama 3 70B"
-    },
-{
-    "question": "what are the goals for Llama 3",
-    "answer":  "With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
-},
-{
-"question": "What if I want to access Llama models but I’m not sure if my use is permitted under the Llama 2 Community License?",
-"answer": "On a limited case by case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request."
-},
-{
-"question": "Why is Meta not sharing the training datasets for Llama?",
-"answer": "We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
-},
-{
-"question": "Did Meta use human annotators to develop the data for Llama models?",
-"answer": "Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
-},
-{
-"question": "Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
-"answer": "It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
-},
-{
-"question": "What operating systems (OS) are officially supported if I want to use Llama model?",
-"answer": "For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
-},
-{
-"question": "I am getting 'Issue with the URL' as an error message when I want to download Llama model. What should I do?",
-"answer": "This issue occurs because of not copying the URL correctly. If you right click on the link and copy the link, the link may be copied with URL Defense wrapper. To avoid this issue, select the URL manually and copy it."
-},
-{
-"question": "Does Llama 2 support other languages outside of English?",
-"answer": "The model was primarily trained on English with a bit of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You’ll find the full list of languages referenced in the research paper. You can look at some of the community lead projects to fine-tune Llama 2 models to support other languages. (eg: link)"
-},
-{
-"question": "If I’m a developer/business, how can I access the Llama models?",
-"answer": "Details on how to access the models are available on our website link. Please note that the models are subject to the acceptable use policy and the provided responsible use guide. Models are available through multiple sources but the place to start is at https://llama.meta.com/ Model code, quickstart guide and fine-tuning examples are available through our Github Llama repository. Model Weights are available through an email link after the user submits a sign-up form. Models are also being hosted by Microsoft, Amazon Web Services, and Hugging Face, and may also be available through other hosting providers in the future."
-},
-{
-"question": "Can anyone access Llama models? What are the terms?",
-"answer": "Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website and licensed under the applicable Llama Community License Agreement, which provides a permissive license to the models along with certain restrictions to help ensure that the models are being used responsibly."
-},
-{
-"question": "What are the hardware SKU requirements for deploying Llama models?",
-"answer": "Hardware requirements vary based on latency, throughput and cost constraints. For good latency, we split models across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs, or even commodity hardware can also be used to deploy these models (e.g. llama cpp, MLC LLM)."
-},
-{
-"question": "Do Llama models provide traditional autoregressive text completion?",
-"answer": "Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
-},
-{
-"question": "Does the Llama model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?",
-"answer": "The vanilla model of Llama does not, however, the Code Llama models have been trained with fill-in-the-middle completion to assist with tasks like code completion."
-},
-{
-"question": "Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
-"answer": "This is implementation dependent (i.e. the code used to run the model)."
-},
-{
-"question": "Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
-"answer": "The model itself supports these parameters, but whether they are exposed or not depends on implementation."
-},
-{
-"question": "What is the most effective RAG method paired with Llama models?",
-"answer": "There are many ways to use RAG with Llama. The most popular libraries are LangChain and LlamaIndex, and many of our developers have used them successfully with Llama 2. (See the LangChain and LlamaIndex sections of this document)."
-},
-{
-"question": "How to set up Llama models with an EC2 instance?",
-"answer": "You can find steps on how to set up an EC2 instance in the AWS section of this document here."
-},
-{
-"question": "Should we start training with the base or instruct/chat model when using Llama model?",
-"answer": "This depends on your application. The Llama pre-trained models were trained for general large language applications, whereas the Llama instruct or chat models were fine tuned for dialogue specific uses like chat bots."
-},
-{
-"question": "I keep getting a 'CUDA out of memory' error, when using Llama models, what should I do" ,
-"answer": "This error can be caused by a number of different factors including, model size being too large, in-efficient memory usage and so on. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. 1. Ensure your GPU has enough memory 2. Reduce the batch_size 3. Lower the Precision 4. Clear cache 5. Modify the Model/Training"
-},
-{
-"question": "Retrieval approach adds latency due to multiple calls at each turn. How to best leverage Llama model with Retrieval?",
-"answer": "If multiple calls are necessary then you could look into the following: 1. Optimize inference so each call has less latency. 2. Merge the calls into fewer calls. For example summarize the data and utilize the summary. 3. Possibly utilize Llama 2 function calling. 4. Consider fine-tuning the model with the updated data."
-},
-{
-"question": "How can I fine tune the Llama models?",
-"answer": "You can find examples on how to fine tune the Llama models in the Llama Recipes repository."
-},
-{
-"question": "How can I pretrain the Llama models?",
-"answer": "You can adapt the finetuning script found here for pre-training. You can also find the hyperparams used for pretraining in Section 2 of the Llama2 paper."
-},
-{
-"question": "Am I allowed to develop derivative models through fine-tuning based on Llama models for languages other than english? Is this a violation of the acceptable use policy?",
-"answer": "Developers may fine-tune Llama models for languages beyond English provided they comply with the applicable Llama 3 License Agreement, Llama Community License Agreement and the Acceptable Use Policy."
-},
-{
-"question": "How can someone reduce hallucinations with fine-tuned LIama models?",
-"answer": "Although prompts cannot eliminate hallucinations completely, they can reduce it significantly. Using techniques like Chain-of-Thought, Instruction-Based, N-Shot, and Few-Shot can help depending on your application. Additionally, prompting the models to back up the responses by verifying with factual data sets or requesting the models to provide the source of information can help as well. Overall finetuning should also be helpful for reducing hallucination."
-},
-{
-"question": "What are the hardware SKU requirements for fine-tuning Llama pre-trained models?",
-"answer": "Fine-tuning requirements also vary based on amount of data, time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types are definitely possible (e.g. alpaca models are trained on a single RTX4090:https://github.com/tloen/alpaca-lora)"
-},
-{
-"question": "What Fine-tuning tasks would the Llama models support?",
-"answer": "The Lama 2 fine-tuned models were fine tuned for dialogue specific uses like chat bots."
-},
-{
-"question": "Are there examples on how one can fine-tune the Llama models?",
-"answer": "You can find example fine-tuning scripts in the Github recipes repository. You can also review the fine-tuning section in this document."
-},
-{
-"question": "What is the difference between a pre-trained and fine-tuned Llama model?",
-"answer": "The Llama pre-trained models were trained for general large language applications, whereas the Llama chat or instruct models were fine tuned for dialogue specific uses like chat bots."
-},
-{
-"question": "How should we think about post processing (validate generated data) as a way to fine tune Llama models?",
-"answer": "Essentially having a truthful data on the specific application can be helpful to reduce the risk on a specific application. Also setting some sort of threshold such as prob>90% might be helpful to get more confidence in the output."
-},
-{
-"question": "What are the different libraries that we recommend for fine tuning when using Llama models?",
-"answer": "You can find some fine-tuning recommendations in the Github recipes repository as well as the fine-tuning section of this document."
-},
-{
-"question": "How can we identify the right ‘r’ value for LORA method for a certain use-case when using Llama models?",
-"answer": "The best approach would be to review the LoRA research paper for more information on the rankings, then reviewing similar implementations for other models and finally experimenting."
-},
-{
-"question": "We hope to use prompt engineering as a lever to nudge behavior. Any pointers on enhancing instruction-following by fine-tuning small llama models?",
-"answer": "Take a look at the Fine tuning section in our Getting started with Llama guide of this document for some pointers towards fine tuning."
-},
-{
-"question": "Are Llama models open source? What is the exact license these models are published under?",
-"answer": "Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. For more details, our licenses can be found at (https://llama.meta.com/license/) (Meta Llama 2) and (https://llama.meta.com/llama3/license/) (Meta Llama 3)."
-}
-]
+       "question":"Why is Meta not sharing the training datasets for Llama?",
+       "answer":"We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
+    },
+    {
+       "question":"Did Meta use human annotators to develop the data for Llama models?",
+       "answer":"Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
+    },
+    {
+       "question":"Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
+       "answer":"It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
+    },
+    {
+       "question":"What operating systems (OS) are officially supported if I want to use Llama model?",
+       "answer":"For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
+    },
+    {
+       "question":"Do Llama models provide traditional autoregressive text completion?",
+       "answer":"Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
+    },
+    {
+       "question":"Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
+       "answer":"This is implementation dependent (i.e. the code used to run the model)."
+    },
+    {
+       "question":"Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
+       "answer":"The model itself supports these parameters, but whether they are exposed or not depends on implementation."
+    },
+    {
+       "question":"What is llama-recipes?",
+       "answer":"The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
+    },
+    {
+       "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?",
+       "answer":"Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken."
+    },
+    {
+       "question":"How many tokens were used in Meta Llama 3 pretrain?",
+       "answer":"Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
+    },
+    {
+       "question":"How many tokens were used in  Llama 2 pretrain?",
+       "answer":"Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
+    },
+    {
+       "question":"What is the name of the license agreement that Meta Llama 3 is under?",
+       "answer":"Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
+    },
+    {
+       "question":"What is the name of the license agreement that Llama 2 is under?",
+       "answer":"LLAMA 2 COMMUNITY LICENSE AGREEMENT."
+    },
+    {
+       "question":"What is the context length of Llama 2 models?",
+       "answer":"Llama 2's context is 4k"
+    },
+    {
+       "question":"What is the context length of Meta Llama 3 models?",
+       "answer":"Meta Llama 3's context is 8k"
+    },
+    {
+       "question":"When is Llama 2 trained?",
+       "answer":"Llama 2 was trained between January 2023 and July 2023."
+    },
+    {
+       "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ",
+       "answer":"Llama 2 70B"
+    },
+    {
+       "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ",
+       "answer":"Meta Llama 3 8B and Meta Llama 3 70B"
+    },
+    {
+       "question":"what are the goals for Llama 3",
+       "answer":"With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
+    },
+    {
+       "question":"What versions of Meta Llama 3 are available?",
+       "answer":"Meta Llama 3 is available in both 8B and 70B pretrained and instruction-tuned versions."
+    },
+    {
+       "question":"What are some applications of Meta Llama 3?",
+       "answer":"Meta Llama 3 supports a wide range of applications including coding tasks, problem solving, translation, and dialogue generation."
+    },
+    {
+       "question":"What improvements does Meta Llama 3 offer over previous models?",
+       "answer":"Meta Llama 3 offers enhanced scalability and performance, lower false refusal rates, improved response alignment, and increased diversity in model answers. It also excels in reasoning, code generation, and instruction following."
+    },
+    {
+       "question":"How has Meta Llama 3 been trained?",
+       "answer":"Meta Llama 3 has been trained on over 15T tokens of data using custom-built 24K GPU clusters. This training dataset is 7x larger than that used for Llama 2 and includes 4x more code."
+    },
+    {
+       "question":"What safety measures are included with Meta Llama 3?",
+       "answer":"Meta Llama 3 includes updates to trust and safety tools such as Llama Guard 2 and Cybersec Eval 2, optimized to support a comprehensive set of safety categories published by MLCommons."
+    },
+    {
+       "question":"What is Meta Llama 3?",
+       "answer":"Meta Llama 3 is a highly advanced AI model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation."
+    },
+    {
+       "question":"What are the pretrained versions of Meta Llama 3 available?",
+       "answer":"Meta Llama 3 is available with both 8B and 70B pretrained and instruction-tuned versions."
+    },
+    {
+       "question":"What is the context length supported by Llama 3 models?",
+       "answer":"Llama 3 models support a context length of 8K, which doubles the capacity of Llama 2."
+    },
+    {
+        "question":"What is the Prompt engineering?",
+        "answer":"It is a technique used in natural language processing (NLP) to improve the performance of the language model by providing them with more context and information about the task in hand."
+     },
+     {
+        "question":"What is the Zero-Shot Prompting?",
+        "answer":"Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called 'zero-shot prompting'."
+     },
+      {
+        "question":"What are the supported quantization modes in PyTorch?",
+        "answer":"Post-Training Dynamic Quantization, Post-Training Static Quantization and Quantization Aware Training (QAT)"
+     },
+     {
+        "question":"What is the LlamaIndex?",
+        "answer":"LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools."
+     },
+     {
+       "question":"What is the LangChain?",
+       "answer":"LangChain is an open source framework for building LLM powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to call LLM from scratch. "
+    }
+ ]
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml
index 740f283a6..1f1a7cf19 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml
@@ -2,19 +2,21 @@ COT_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here is things to pay attention to:
     - First provide step-by-step reasoning on how to answer the question.
     - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
-    - End your response with final answer in the form <ANSWER>: $answer, the answer should be succinct.
-    You MUST begin your final answer with the tag "<ANSWER>:<|eot_id|>
+    - End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
+    You MUST begin your final answer with the tag "<ANSWER>: <|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
 question_prompt_template: >
-  You are a synthetic question-answer pair generator. Given a chunk of context about
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a synthetic question-answer pair generator. Given a chunk of context about
   some topic(s), generate {num_questions} example questions a user could ask and would be answered
-  \using information from the chunk. For example, if the given context was a Wikipedia
+  using information from the chunk. For example, if the given context was a Wikipedia
   paragraph about the United States, an example question could be 'How many states are
   in the United States?
   The questions should be able to be answered in 100 words or less. Include only the
-  questions in your response.
+  questions in your response.<|eot_id|>
+  <|start_header_id|>user<|end_header_id|>
+  Context: {context}\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
 # question_prompt_template: >
 #   You are a language model skilled in creating quiz questions.
@@ -29,13 +31,13 @@ question_prompt_template: >
 #   4. Never use any abbreviation.
 #   5. Include only the questions in your response.
 
-data_dir: "./data"
+data_dir: "/home/kaiwu/work/pytorch/docs"
 
 xml_path: ""
 
-chunk_size: 512
+chunk_size: 1000
 
-questions_per_chunk: 3
+questions_per_chunk: 5
 
 num_distract_docs: 5 # number of distracting documents to add to each chunk
 
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
index 5a46de524..b31b55162 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
@@ -15,8 +15,7 @@
 from bs4 import BeautifulSoup
 from langchain_openai import ChatOpenAI
 from langchain_core.messages import HumanMessage, SystemMessage
-from langchain_community.llms import VLLMOpenAI
 from langchain_core.prompts import ChatPromptTemplate
 
 
 # Initialize logging
@@ -126,17 +128,14 @@ def generate_questions(api_config):
     batches_count = len(document_batches)
     total_questions = api_config["questions_per_chunk"] * batches_count
     # use OpenAI API protocol to handle the chat request, including local VLLM openai compatible server
-    llm = VLLMOpenAI(
+    llm = ChatOpenAI(
         openai_api_key=key,
         openai_api_base=api_url,
         model_name=api_config["model"],
         temperature=0.0,
         max_tokens=250
         )
-    prompt = api_config['question_prompt_template'].format(num_questions=str(api_config['questions_per_chunk']))
-    system_prompt = SystemMessage(content=prompt)
-    generated_answers = []
-    all_tasks = [[system_prompt, HumanMessage(content=batch)] for batch in document_batches]
+    all_tasks = [api_config['question_prompt_template'].format(num_questions=str(api_config['questions_per_chunk']),context=document) for document in document_batches]
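+    # each task is a fully formatted prompt string (template + context chunk); llm.batch sends them all to the OpenAI-compatible endpoint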
     generated_answers = llm.batch(all_tasks)
     if len(generated_answers) == 0:
         logging.error("No model answers generated. Please check the input context or model configuration in ",model_name)
@@ -163,7 +162,7 @@ def generate_COT(chunk_questions_zip,api_config) -> dict:
             all_tasks.append(prompt)
             chunk_questions.append((document_content,question))
     # use OpenAI API protocol to handle the chat request, including local VLLM openai compatible server
-    llm = VLLMOpenAI(
+    llm = ChatOpenAI(
         openai_api_key=api_config["api_key"],
         openai_api_base=api_config["endpoint_url"],
         model_name=api_config["model"],
diff --git a/requirements.txt b/requirements.txt
index 4ce2ba073..3c0cdf47c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -34,3 +34,4 @@ coloredlogs==15.0.1
 sentence_transformers
 faiss-gpu
 unstructured[pdf]
+langchain_openai

From ddb7f1c15cee55425e671513a98d62d51c34295a Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Wed, 12 Jun 2024 11:23:17 -0700
Subject: [PATCH 21/35] add more eval QA

---
 .../use_cases/end2end-recipes/raft/README.md  |   2 +-
 .../end2end-recipes/raft/data/website_data    |   2 -
 .../end2end-recipes/raft/data_urls.xml        |   6 -
 .../end2end-recipes/raft/eval_raft.py         |  22 +-
 .../end2end-recipes/raft/evalset.json         | 346 +++++++++++-------
 .../use_cases/end2end-recipes/raft/raft.py    |   4 +-
 .../use_cases/end2end-recipes/raft/raft.yaml  |   2 +-
 .../end2end-recipes/raft/raft_utils.py        |  22 +-
 8 files changed, 235 insertions(+), 171 deletions(-)

diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md
index eed7cc778..8805d99bd 100644
--- a/recipes/use_cases/end2end-recipes/raft/README.md
+++ b/recipes/use_cases/end2end-recipes/raft/README.md
@@ -22,7 +22,7 @@ CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server  --model m
 Once the server is ready, we can query it on port 8001 from another terminal. Here, "-u" sets the endpoint url to query and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk. To use a cloud API, please change the endpoint url to the cloud provider and set the api key using "-k". Here, since we want to query our locally hosted VLLM server, we can use the following command:
 
 ```bash
-python raft.py -u "http://localhost:8001/v1" -k "EMPTY" -t 3
+python raft.py -u "http://localhost:8001/v1" -k "EMPTY" -t 5
 ```
 
 For cloud API key, we can also set it using system environment variables, such as
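+the following (a minimal sketch, assuming an OpenAI-compatible cloud provider; substitute your actual endpoint URL and key):
+
+```bash
+export API_KEY="YOUR_API_KEY"
+python raft.py -u "https://api.openai.com/v1" -k "$API_KEY" -t 5
+```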
diff --git a/recipes/use_cases/end2end-recipes/raft/data/website_data b/recipes/use_cases/end2end-recipes/raft/data/website_data
index 4e22e99d4..627d99b1f 100644
--- a/recipes/use_cases/end2end-recipes/raft/data/website_data
+++ b/recipes/use_cases/end2end-recipes/raft/data/website_data
@@ -32,10 +32,8 @@ LlamaIndex | Integration guides Integration guides LlamaIndex LlamaIndex is anot
 # Llama Recipes: Examples to get started using the Llama models from Meta The 'llama-recipes' repository is a companion to the [Meta Llama 3](https://github.com/meta-llama/llama3) models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem. The examples here showcase how to run Meta Llama locally, in the cloud, and on-prem. [Meta Llama 2](https://github.com/meta-llama/llama) is also supported in this repository. We highly recommend everyone to utilize [Meta Llama 3](https://github.com/meta-llama/llama3) due to its enhanced capabilities. > [!IMPORTANT] > Meta Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer). > | Token | Description | > |---|---| > `<\|begin_of_text\|>` | This is equivalent to the BOS token. | > `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn-conversations it's usually unused. Instead, every message is terminated with `<\|eot_id\|>` instead.| > `<\|eot_id\|>` | This token signifies the end of the message in a turn i.e. the end of a single message by a system, user or assistant role as shown below.| > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. | > > A multiturn-conversation with Meta Llama 3 follows this prompt template: > ``` > <|begin_of_text|><|start_header_id|>system<|end_header_id|> > > {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> > > {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> > > {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> > > {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> > ``` > Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change. > > More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3). > > [!NOTE] > The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted. 
> > Make sure you update your local clone by running `git pull origin main` ## Table of Contents - [Llama Recipes: Examples to get started using the Meta Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta) - [Table of Contents](#table-of-contents) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [PyTorch Nightlies](#pytorch-nightlies) - [Installing](#installing) - [Install with pip](#install-with-pip) - [Install with optional dependencies](#install-with-optional-dependencies) - [Install from source](#install-from-source) - [Getting the Llama models](#getting-the-llama-models) - [Model conversion to Hugging Face](#model-conversion-to-hugging-face) - [Repository Organization](#repository-organization) - [`recipes/`](#recipes) - [`src/`](#src) - [Contributing](#contributing) - [License](#license) ## Getting Started These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system. ### Prerequisites #### PyTorch Nightlies If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform. ### Installing Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source. > [!NOTE] > Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`. > H100 GPUs work better with CUDA >12.0 #### Install with pip ``` pip install llama-recipes ``` #### Install with optional dependencies Llama-recipes offers the installation of optional packages. There are three optional dependency groups. To run the unit tests we can install the required dependencies with: ``` pip install llama-recipes[tests] ``` For the vLLM example we need additional requirements that can be installed with: ``` pip install llama-recipes[vllm] ``` To use the sensitive topics safety checker install with: ``` pip install llama-recipes[auditnlg] ``` Optional dependencies can also be combines with [option1,option2]. #### Install from source To install from source e.g. for development use these commands. We're using hatchling as our build backend which requires an up-to-date pip as well as setuptools package. ``` git clone git@github.com:meta-llama/llama-recipes.git cd llama-recipes pip install -U pip setuptools pip install -e . ``` For development and contributing to llama-recipes please install all optional dependencies: ``` git clone git@github.com:meta-llama/llama-recipes.git cd llama-recipes pip install -U pip setuptools pip install -e .[tests,auditnlg,vllm] ``` ### Getting the Meta Llama models You can find Meta Llama models on Hugging Face hub [here](https://huggingface.co/meta-llama), **where models with `hf` in the name are already converted to Hugging Face checkpoints so no further conversion is needed**. The conversion step below is only for original model weights from Meta that are hosted on Hugging Face model hub as well. #### Model conversion to Hugging Face The recipes and notebooks in this folder are using the Meta Llama model definition provided by Hugging Face's transformers library. 
Given that the original checkpoint resides under models/7B you can install all requirements and convert the checkpoint with: ```bash ## Install Hugging Face Transformers from source pip freeze | grep transformers ## verify it is version 4.31.0 or higher git clone git@github.com:huggingface/transformers.git cd transformers pip install protobuf python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` ## Repository Organization Most of the code dealing with Llama usage is organized across 2 main folders: `recipes/` and `src/`. ### `recipes/` Contains examples are organized in folders by topic: | Subfolder | Description | |---|---| [quickstart](./recipes/quickstart) | The "Hello World" of using Llama, start here if you are new to using Llama. [finetuning](./recipes/finetuning)|Scripts to finetune Llama on single-GPU and multi-GPU setups [inference](./recipes/inference)|Scripts to deploy Llama for inference locally and using model servers [use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3 [responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs [llama_api_providers](./recipes/llama_api_providers)|Scripts to run inference on Llama via hosted endpoints [benchmarks](./recipes/benchmarks)|Scripts to benchmark Llama models inference on various backends [code_llama](./recipes/code_llama)|Scripts to run inference with the Code Llama models [evaluation](./recipes/evaluation)|Scripts to evaluate fine-tuned Llama models using `lm-evaluation-harness` from `EleutherAI` ### `src/` Contains modules which support the example recipes: | Subfolder | Description | |---|---| | [configs](src/llama_recipes/configs/) | Contains the configuration files for PEFT methods, FSDP, Datasets, Weights & Biases experiment tracking. | | [datasets](src/llama_recipes/datasets/) | Contains individual scripts for each dataset to download and process. Note | | [inference](src/llama_recipes/inference/) | Includes modules for inference for the fine-tuned models. | | [model_checkpointing](src/llama_recipes/model_checkpointing/) | Contains FSDP checkpoint handlers. | | [policies](src/llama_recipes/policies/) | Contains FSDP scripts to provide different policies, such as mixed precision, transformer wrapping policy and activation checkpointing along with any precision optimizer (used for running FSDP with pure bf16 mode). | | [utils](src/llama_recipes/utils/) | Utility files for: - `train_utils.py` provides training/eval loop and more train utils. - `dataset_utils.py` to get preprocessed datasets. - `config_utils.py` to override the configs received from CLI. - `fsdp_utils.py` provides FSDP  wrapping policy for PEFT methods. - `memory_utils.py` context manager to track different memory stats in train loop. | ## Contributing Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us. ## License See the License file for Meta Llama 3 [here](https://llama.meta.com/llama3/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama3/use-policy/) See the License file for Meta Llama 2 [here](https://llama.meta.com/llama2/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama2/use-policy/)
 # **Model Details** Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. ||Training Data|Params|Context Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10 -4 Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10 -4 Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10 -4 **Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "Llama-2: Open Foundation and Fine-tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/. **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md). # **Intended Use** **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 2 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 2 models for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy. # **Hardware and Software** **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). 
Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO 2 eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO 2 emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. # **Training Data** **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. # **Evaluation Results** In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at the top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. # **Ethical Considerations and Limitations** Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
 # Llama 2 We are unlocking the power of large language models. Llama 2 is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and fine-tuned Llama language models — ranging from 7B to 70B parameters. This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Updates post-launch See [UPDATES.md](UPDATES.md). Also for a running list of frequently asked questions, see [here](https://ai.meta.com/llama/faq/). ## Download In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Access to Hugging Face We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama). You can request access to the models by acknowledging the license and filling the form in the model card of a repo. After doing so, you should get access to all the Llama models of a version (Code Llama, Llama 2, or Llama Guard) within 1 hour. ## Quick Start You can follow the steps below to quickly get up and running with Llama 2 models. These steps will let you run quick inference locally. For more examples, see the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes). 1. In a conda env with PyTorch / CUDA available clone and download this repository. 2. In the top-level directory run: ```bash pip install -e . ``` 3. Visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and register to download the model/s. 4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. - Do not use the “Copy Link” option but rather make sure to manually copy the link from the email. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below: ```bash torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir llama-2-7b-chat/ \ --tokenizer_path tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` **Note** - Replace  `llama-2-7b-chat/` with the path to your checkpoint directory and `tokenizer.model` with the path to your tokenizer model. - The `–nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. 
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values: |  Model | MP | |--------|----| | 7B     | 1  | | 13B    | 2  | | 70B    | 8  | All models support sequence length up to 4096 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_text_completion.py \ --ckpt_dir llama-2-7b/ \ --tokenizer_path tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` ### Fine-tuned Chat Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212) needs to be followed, including the `INST` and `< >` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-2-7b-chat: ``` torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir llama-2-7b-chat/ \ --tokenizer_path tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](Responsible-Use-Guide.pdf). More details can be found in our research paper as well. ## Issues Please report any software “bug”, or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## References 1. [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) 2. [Llama 2 technical overview](https://ai.meta.com/resources/models-and-libraries/llama) 3. 
[Open Innovation AI Research Community](https://ai.meta.com/llama/open-innovation-ai-research-community/) For common questions, the FAQ can be found [here](https://ai.meta.com/llama/faq/) which will be kept up to date over time as new questions arise. ## Original Llama The repo for the original llama release is in the [`llama_v1`](https://github.com/facebookresearch/llama/tree/llama_v1) branch.
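To make the chat formatting requirements above concrete, here is a minimal, self-contained sketch of the single-turn prompt layout. It is illustrative only: the constants mirror those in `llama/generation.py`, but `format_single_turn` is a hypothetical helper, and the authoritative implementation remains `chat_completion` in that file.

```python
# Illustrative sketch of the Llama 2 chat prompt layout (not the official API).
# BOS/EOS tokens are added by the tokenizer and are therefore not spelled out.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_single_turn(system_prompt: str, user_message: str) -> str:
    # Fold the system prompt into the first user turn, as chat_completion does.
    content = f"{B_SYS}{system_prompt}{E_SYS}{user_message}"
    return f"{B_INST} {content.strip()} {E_INST}"

print(format_single_turn("Answer concisely.", "What is Llama 2?"))
```

For multi-turn dialogs, each earlier user/assistant exchange is wrapped in its own `[INST] ... [/INST] answer` pair before the final user message.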
-404: Not Found
 ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

|         | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---------|---------------|--------|----------------|-----|-------------|------------------|
| Llama 3 | A new mix of publicly available online data. | 8B  | 8k | Yes | 15T+ | March, 2023    |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/) and [Llama 3 Community License](https://llama.meta.com/llama3/license/). Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the [Llama 3 Community License](https://llama.meta.com/llama3/license/) and the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/). ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

|             | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|-------------|------------------|-----------------------|-------------------------|
| Llama 3 8B  | 1.3M             | 700                   | 390                     |
| Llama 3 70B | 6.4M             | 700                   | 1900                    |
| Total       | 7.7M             |                       | 2290                    |

**CO2 emissions during pre-training**.
Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B model and December 2023 for the 70B model. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_details.md). ### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|----------|-----------|------------|-----------|------------|-------------|------------|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| General | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| General | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| General | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| General | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| General | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| Reading comprehension | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| Reading comprehension | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| Reading comprehension | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|-----------|------------|------------|-------------|-------------|-------------|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness.
We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. **Safety** For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. **Refusals** In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only impacts the user experience but can even be harmful in certain contexts. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks **CBRNE** (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### Cyber Security We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### Child Safety Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning.
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking into account market-specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Amit Sangani; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Ash JJhaveri; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hamid Shojanazeri; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth 
Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Puxin Xu; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
 🤗 Models on Hugging Face | Blog | Website | Get Started --- # Meta Llama 3 We are unlocking the power of large language models. Our latest version of Llama is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and instruction-tuned Llama 3 language models — including sizes of 8B to 70B parameters. This repository is a minimal example of loading Llama 3 models and running inference. For more detailed examples, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Download To download the model weights and tokenizer, please visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then, run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Ensure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Remember that the links expire after 24 hours and a certain number of downloads. You can always re-request a link if you start seeing errors such as `403: Forbidden`. ### Access to Hugging Face We also provide downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps: - Visit one of the repos, for example [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). - Read and accept the license. Once your request is approved, you'll be granted access to all the Llama 3 models. Note that requests may take up to one hour to be processed. - To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the `original` folder. You can also download them from the command line if you `pip install huggingface-hub`: ```bash huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct ``` - To use with transformers, the following [pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines) snippet will download and cache the weights: ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="cuda", ) ``` ## Quick Start You can follow the steps below to get up and running with Llama 3 models quickly. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). 1. Clone and download this repository in a conda env with PyTorch / CUDA. 2. In the top-level directory run: ```bash pip install -e . ``` 3. Visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and register to download the model/s. 4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. 
- Do not use the “Copy Link” option; copy the link from the email manually. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below: ```bash torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir Meta-Llama-3-8B-Instruct/ \ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` **Note** - Replace `Meta-Llama-3-8B-Instruct/` with the path to your checkpoint directory and `Meta-Llama-3-8B-Instruct/tokenizer.model` with the path to your tokenizer model. - The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. - This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values: |  Model | MP | |--------|----| | 8B     | 1  | | 70B    | 8  | All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_text_completion.py \ --ckpt_dir Meta-Llama-3-8B/ \ --tokenizer_path Meta-Llama-3-8B/tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` ### Instruction-tuned Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, specific formatting defined in [`ChatFormat`](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L202) needs to be followed: The prompt begins with a `<|begin_of_text|>` special token, after which one or more messages follow. Each message starts with the `<|start_header_id|>` tag, the role `system`, `user` or `assistant`, and the `<|end_header_id|>` tag. After a double newline `\n\n`, the message's contents follow. The end of each message is marked by the `<|eot_id|>` token (a minimal sketch of this format appears at the end of this section). You can also deploy additional classifiers to filter out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using Meta-Llama-3-8B-Instruct: ``` torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir Meta-Llama-3-8B-Instruct/ \ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` Llama 3 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the [Responsible Use Guide](https://ai.meta.com/static-resource/responsible-use-guide/). 
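To make the `ChatFormat` structure just described concrete, here is a minimal string-level sketch. It is illustrative only: the real implementation in `llama/tokenizer.py` operates on token ids, and the helper names below are hypothetical.

```python
# Illustrative sketch of the Llama 3 chat prompt structure (not the official API).
def encode_message(role: str, content: str) -> str:
    # Each message: header tags around the role, a blank line, the content, <|eot_id|>.
    return f"<|start_header_id|>{role}<|end_header_id|>\n\n{content.strip()}<|eot_id|>"

def encode_dialog_prompt(messages: list) -> str:
    prompt = "<|begin_of_text|>"  # the prompt starts with this special token
    prompt += "".join(encode_message(m["role"], m["content"]) for m in messages)
    # Open an empty assistant header so the model generates the reply.
    return prompt + "<|start_header_id|>assistant<|end_header_id|>\n\n"

print(encode_dialog_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Meta Llama 3?"},
]))
```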
## Issues Please report any software “bug” or other problems with the models through one of the following means: - Reporting issues with the model: [https://github.com/meta-llama/llama3/issues](https://github.com/meta-llama/llama3/issues) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## Questions For common questions, the FAQ can be found [here](https://llama.meta.com/faq), which will be updated over time as new questions arise.
-404: Not Found
 # Code Llama ## **Model Details** **Model Developers** Meta AI **Variations** Code Llama comes in four model sizes, and three variants: 1) Code Llama: our base models are designed for general code synthesis and understanding 2) Code Llama - Python: designed specifically for Python 3) Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B and 70B parameters. **Input** Models input text only. **Output** Models output text only. **Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B, 13B and 70B additionally support infilling text generation. All models but Code Llama - Python 70B and Code Llama - Instruct 70B were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time. **Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released  as we improve model safety with community feedback. **Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)". **Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)). ## **Intended Use** **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistance and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## **Hardware and Software** **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed by Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program. **Training data** All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). Code Llama - Instruct uses additional instruction fine-tuning data. **Evaluation Results** See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. 
## **Ethical Considerations and Limitations** Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
 # Introducing Code Llama Code Llama is a family of large language models for code based on [Llama 2](https://github.com/facebookresearch/llama) providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama was developed by fine-tuning Llama 2 using a higher sampling of code. As with Llama 2, we applied considerable safety mitigations to the fine-tuned versions of the model. For detailed information on model training, architecture and parameters, evaluations, responsible AI and safety refer to  our [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/). Output generated by code generation features of the Llama Materials, including Code Llama, may be subject to third party licenses, including, without limitation, open source licenses. We are unlocking the power of large language models and our latest version of Code Llama is now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 34B parameters. This repository is intended as a minimal example to load [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) models and run inference. [comment]: <> (Code Llama models are compatible with the scripts in llama-recipes) ## Download In order to download the model weights and tokenizers, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Make sure that you copy the URL text itself, **do not use the 'Copy link address' option** when you right click the URL. If the copied URL text starts with: https://download.llamameta.net, you copied it correctly. If the copied URL text starts with: https://l.facebook.com, you copied it the wrong way. Pre-requisites: make sure you have `wget` and `md5sum` installed. Then to run the script: `bash download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Model sizes | Model | Size     | |-------|----------| | 7B    | ~12.55GB | | 13B   | 24GB     | | 34B   | 63GB     | | 70B   | 131GB    | [comment]: <> (Access on Hugging Face, We are also providing downloads on Hugging Face. You must first request a download from the Meta website using the same email address as your Hugging Face account. After doing so, you can request access to any of the models on Hugging Face and within 1-2 days your account will be granted access to all versions.) 
 ## Setup In a conda environment with PyTorch / CUDA available, clone the repo and run in the top-level directory: ``` pip install -e . ``` ## Inference Different models require different model-parallel (MP) values: | Model | MP | |-------|----| | 7B    | 1  | | 13B   | 2  | | 34B   | 4  | | 70B   | 8  | All models, except the 70B python and instruct versions, support sequence lengths up to 100,000 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware and use-case. ### Pretrained Code Models The Code Llama and Code Llama - Python models are not fine-tuned to follow instructions. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_completion.py` for some examples. To illustrate, see the command below to run it with the `CodeLlama-7b` model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_completion.py \ --ckpt_dir CodeLlama-7b/ \ --tokenizer_path CodeLlama-7b/tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` Pretrained code models are: the Code Llama models `CodeLlama-7b`, `CodeLlama-13b`, `CodeLlama-34b`, `CodeLlama-70b` and the Code Llama - Python models `CodeLlama-7b-Python`, `CodeLlama-13b-Python`, `CodeLlama-34b-Python`, `CodeLlama-70b-Python`. ### Code Infilling Code Llama and Code Llama - Instruct 7B and 13B models are capable of filling in code given the surrounding context. See `example_infilling.py` for some examples (a hypothetical prompt sketch also appears at the end of this README). The `CodeLlama-7b` model can be run for infilling with the command below (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_infilling.py \ --ckpt_dir CodeLlama-7b/ \ --tokenizer_path CodeLlama-7b/tokenizer.model \ --max_seq_len 192 --max_batch_size 4 ``` Pretrained infilling models are: the Code Llama models `CodeLlama-7b` and `CodeLlama-13b` and the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`. ### Fine-tuned Instruction Models Code Llama - Instruct models are fine-tuned to follow instructions. To get the expected features and performance for the 7B, 13B and 34B variants, a specific formatting defined in [`chat_completion()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L319-L361) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). `CodeLlama-70b-Instruct` requires a separate turn-based prompt format defined in [`dialog_prompt_tokens()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L506-L548). You can use `chat_completion()` directly to generate answers with all instruct models; it will automatically perform the required formatting. You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/inference/safety_utils.py) of how to add a safety checker to the inputs and outputs of your inference code. 
Examples using `CodeLlama-7b-Instruct`: ``` torchrun --nproc_per_node 1 example_instructions.py \ --ckpt_dir CodeLlama-7b-Instruct/ \ --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \ --max_seq_len 512 --max_batch_size 4 ``` Fine-tuned instruction-following models are: the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`, `CodeLlama-34b-Instruct`, `CodeLlama-70b-Instruct`. Code Llama is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](https://github.com/facebookresearch/llama/blob/main/Responsible-Use-Guide.pdf). More details can be found in our research papers as well. ## Issues Please report any software “bug” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/codellama](http://github.com/facebookresearch/codellama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md) for the model card of Code Llama. ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE) file, as well as our accompanying [Acceptable Use Policy](https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md) ## References 1. [Code Llama Research Paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) 2. [Code Llama Blog Post](https://ai.meta.com/blog/code-llama-large-language-model-coding/)
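As a companion to the Code Infilling section above, here is a hypothetical sketch of a fill-in-the-middle prompt. The `<PRE>`/`<SUF>`/`<MID>` sentinel tokens follow the Code Llama paper rather than anything specified in this README, so treat them as an assumption and rely on `example_infilling.py` as the reference.

```python
# Hypothetical fill-in-the-middle prompt assembly for the infilling-capable models.
def infilling_prompt(prefix: str, suffix: str) -> str:
    # The model generates the middle span connecting prefix to suffix,
    # terminating with an <EOT> token.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

print(infilling_prompt(
    "def remove_non_ascii(s: str) -> str:\n    ",
    "\n    return result\n",
))
```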
 🤗 Models on Hugging Face | Blog | Website | CyberSec Eval Paper | Llama Guard Paper --- # Purple Llama Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release will include tools and evals for Cyber Security and Input/Output safeguards, but we plan to contribute more in the near future. ## Why purple? Borrowing a [concept](https://www.youtube.com/watch?v=ab_Fdp6FVDI) from the cybersecurity world, we believe that to truly mitigate the challenges which generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks, and the same ethos applies to generative AI; hence our investment in Purple Llama will be comprehensive. ## License Components within the Purple Llama project will be licensed permissively, enabling both research and commercial usage. We believe this is a major step towards enabling community collaboration and standardizing the development and usage of trust and safety tools for generative AI development. More concretely, evals and benchmarks are licensed under the MIT license, while models use the Llama 2 Community License. See the table below:

| **Component Type** | **Components** | **License** |
| :----------------- | :----------------------------------: | :---------: |
| Evals/Benchmarks | Cyber Security Eval (others to come) | MIT |
| Models | Llama Guard | [Llama 2 Community License](https://github.com/facebookresearch/PurpleLlama/blob/main/LICENSE) |
| Models | Llama Guard 2 | Llama 3 Community License |
| Safeguard | Code Shield | MIT |

## Evals & Benchmarks ### Cybersecurity #### CyberSec Eval v1 CyberSec Eval v1 was, we believe, the first industry-wide set of cybersecurity safety evaluations for LLMs. These benchmarks are based on industry guidance and standards (e.g., CWE and MITRE ATT&CK) and built in collaboration with our security subject matter experts. We aim to provide tools that will help address some risks outlined in the [White House commitments on developing responsible AI](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/), including: * Metrics for quantifying LLM cybersecurity risks. * Tools to evaluate the frequency of insecure code suggestions. * Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyberattacks. We believe these tools will reduce the frequency of LLMs suggesting insecure AI-generated code and reduce their helpfulness to cyber adversaries. Our initial results show that there are meaningful cybersecurity risks for LLMs, both with recommending insecure code and for complying with malicious requests. 
See our [Cybersec Eval paper](https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/) for more details. #### CyberSec Eval 2 CyberSec Eval 2 expands on its predecessor by measuring an LLM’s propensity to abuse a code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection. You can read the paper [here](https://ai.meta.com/research/publications/cyberseceval-2-a-wide-ranging-cybersecurity-evaluation-suite-for-large-language-models/). You can also check out the 🤗 leaderboard [here](https://huggingface.co/spaces/facebook/CyberSecEval). ## System-Level Safeguards As we outlined in Llama 3’s [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/), we recommend that all inputs and outputs to the LLM be checked and filtered in accordance with content guidelines appropriate to the application. ### Llama Guard To support this, and empower the community, we released Llama Guard, an openly-available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs. As part of our ongoing commitment to open and transparent science, we also released our methodology and an extended discussion of model performance in our [Llama Guard paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/). We are happy to share an updated version, Meta Llama Guard 2. Llama Guard 2 was optimized to support the newly [announced](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) policy published by MLCommons, expanding its coverage to a more comprehensive set of safety categories, out-of-the-box. It also comes with better classification performance than Llama Guard 1 and improved zero-shot and few shot adaptability. Ultimately, our vision is to enable developers to customize this model to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem. ### Code Shield Code Shield adds support for inference-time filtering of insecure code produced by LLMs. Code Shield offers mitigation of insecure code suggestions risk, code interpreter abuse prevention, and secure command execution. [CodeShield Example Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb). ## Getting Started To get started and learn how to use Purple Llama components with Llama models, see the getting started guide [here](https://ai.meta.com/llama/get-started/). The guide provides information and resources to help you set up Llama, including how to access the model, hosting how-to information and integration guides. Additionally, you will find supplemental materials to further assist you while responsibly building with Llama. The guide will be updated as more Purple Llama components get released. ## FAQ For a running list of frequently asked questions, for not only Purple Llama components but also generally for Llama models, see the FAQ [here](https://ai.meta.com/llama/faq/). ## Join the Purple Llama community See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.
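Since the sections above repeatedly recommend checking and filtering every input to and output from the LLM, here is a minimal sketch of that guard pattern. Everything in it is hypothetical: `generate` stands in for your model call and `is_safe` for a real classifier such as Llama Guard or Code Shield, whose actual APIs are documented in their own repositories.

```python
# Illustrative input/output guard pattern (hypothetical names throughout).
def guarded_generate(prompt, generate, is_safe):
    if not is_safe(prompt):        # screen the user input first
        return "Sorry, I can't help with that request."
    response = generate(prompt)
    if not is_safe(response):      # then screen the model output
        return "The response was withheld by the safety filter."
    return response

# Example wiring with trivial placeholders:
print(guarded_generate(
    "How do I sort a list in Python?",
    generate=lambda p: "Use sorted(my_list).",
    is_safe=lambda text: "attack" not in text.lower(),
))
```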
diff --git a/recipes/use_cases/end2end-recipes/raft/data_urls.xml b/recipes/use_cases/end2end-recipes/raft/data_urls.xml
index d054ed5a4..5fccffd9e 100644
--- a/recipes/use_cases/end2end-recipes/raft/data_urls.xml
+++ b/recipes/use_cases/end2end-recipes/raft/data_urls.xml
@@ -102,18 +102,12 @@
 http://raw.githubusercontent.com/meta-llama/llama/main/README.md
 
 
-http://raw.githubusercontent.com/meta-llama/llama/main/LICENSE.md
-
-
 http://raw.githubusercontent.com/meta-llama/llama3/main/MODEL_CARD.md
 
 
 http://raw.githubusercontent.com/meta-llama/llama3/main/README.md
 
 
-http://raw.githubusercontent.com/meta-llama/llama3/main/LICENSE.md
-
-
 http://raw.githubusercontent.com/meta-llama/codellama/main/MODEL_CARD.md
 
 
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/eval_raft.py
index c7a422cbf..f6b94106a 100644
--- a/recipes/use_cases/end2end-recipes/raft/eval_raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/eval_raft.py
@@ -5,21 +5,14 @@
 import argparse
 from config import load_config
 import json
-from itertools import chain
 from langchain_openai import ChatOpenAI
-
 from langchain_community.embeddings import HuggingFaceEmbeddings
 from langchain_community.vectorstores import FAISS
 from langchain.text_splitter import RecursiveCharacterTextSplitter
 from langchain_community.document_loaders import DirectoryLoader
-from langchain_core.runnables import RunnablePassthrough
-
-from langchain_core.messages import HumanMessage, SystemMessage
 import re
 import string
-from collections import Counter
-from langchain_core.output_parsers import StrOutputParser
-from langchain.prompts.prompt import PromptTemplate
+
 
 def generate_answers_model_only(model_name,question_list,api_url="http://localhost:8000/v1",key="EMPTY"):
         # Use langchain to load the documents from data directory
@@ -57,7 +50,7 @@ def generate_answers_with_RAG(model_name, question_list,api_config,api_url_overw
     loader = DirectoryLoader(data_dir)
     docs = loader.load()
     # Split the document into chunks with a specified chunk size
-    text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=50)
+    text_splitter = RecursiveCharacterTextSplitter(chunk_size=api_config["chunk_size"], chunk_overlap=int(api_config["chunk_size"]/10))
     all_splits = text_splitter.split_documents(docs)
 
     # Store the document into a vector store with a specific embedding model
@@ -260,6 +253,7 @@ def main(api_config):
                 fp.write("\n------------------------------------\n")
         # Now we want to take a closer look at the questions that are not answered the same by all the models.
         judge_zip = list(zip(*[item[-1] for item in all_metrics]))
+        model_names = [item[0] for item in all_metrics]
         with open(api_config["output_log"],"a") as fp:
             for item in all_metrics:
                 fp.write(f"Model_Name: {item[0]}, LLM_SCORE: {item[1]} \n")
@@ -270,12 +264,8 @@ def main(api_config):
                 else:
                     fp.write(f"Comparing interested question: {questions[idx]} \n")
                     fp.write(f"groud_truth: {groud_truth[idx]} \n")
-                    fp.write(f"{item[2]} Baseline_answers: {generated_answers['Baseline'][idx]} \n")
-                    fp.write(f"{item[3]} Baseline_RAG_answers: {generated_answers['Baseline_RAG'][idx]} \n")
-                    fp.write(f"{item[0]} RAFT_answers: {generated_answers['RAFT'][idx]} \n")
-                    fp.write(f"{item[1]} RAFT_RAG_answers: {generated_answers['RAFT_RAG'][idx]} \n")
-                    fp.write(f"{item[4]} 70B_Base_answers: {generated_answers['70B_Base'][idx]} \n")
-                    fp.write(f"{item[5]} 70B_RAG_answers: {generated_answers['70B_RAG'][idx]} \n")
+                    for i in range(len(model_names)):
+                        fp.write(f"{item[i]} {model_names[i]}_answers: {generated_answers[model_names[i]][idx]} \n")
                     fp.write("-------\n")
 
 
@@ -328,6 +318,7 @@ def parse_arguments():
         type=str,
         help="LLM API key for generating question/answer pairs."
     )
+    parser.add_argument("--chunk_size", type=int, default=1000, help="The character size of each chunk used in RAG")
     return parser.parse_args()
 
 if __name__ == "__main__":
@@ -342,6 +333,7 @@ def parse_arguments():
     api_config["judge_endpoint"] = args.judge_endpoint
     api_config["output_log"] = args.output_log
     api_config["api_key"] = args.api_key
+    api_config["chunk_size"] = args.chunk_size
     if api_config["judge_endpoint"]:
         logging.info(f"Use local vllm service for judge at port: '{args.judge_endpoint}'.")
     main(api_config)
diff --git a/recipes/use_cases/end2end-recipes/raft/evalset.json b/recipes/use_cases/end2end-recipes/raft/evalset.json
index e3d5a1842..83a4c8e11 100644
--- a/recipes/use_cases/end2end-recipes/raft/evalset.json
+++ b/recipes/use_cases/end2end-recipes/raft/evalset.json
@@ -1,130 +1,218 @@
 [
-    {
-       "question":"Why is Meta not sharing the training datasets for Llama?",
-       "answer":"We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
-    },
-    {
-       "question":"Did Meta use human annotators to develop the data for Llama models?",
-       "answer":"Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
-    },
-    {
-       "question":"Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
-       "answer":"It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
-    },
-    {
-       "question":"What operating systems (OS) are officially supported if I want to use Llama model?",
-       "answer":"For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
-    },
-    {
-       "question":"Do Llama models provide traditional autoregressive text completion?",
-       "answer":"Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
-    },
-    {
-       "question":"Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
-       "answer":"This is implementation dependent (i.e. the code used to run the model)."
-    },
-    {
-       "question":"Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
-       "answer":"The model itself supports these parameters, but whether they are exposed or not depends on implementation."
-    },
-    {
-       "question":"What is llama-recipes?",
-       "answer":"The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
-    },
-    {
-       "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?",
-       "answer":"Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken."
-    },
-    {
-       "question":"How many tokens were used in Meta Llama 3 pretrain?",
-       "answer":"Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
-    },
-    {
-       "question":"How many tokens were used in  Llama 2 pretrain?",
-       "answer":"Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
-    },
-    {
-       "question":"What is the name of the license agreement that Meta Llama 3 is under?",
-       "answer":"Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
-    },
-    {
-       "question":"What is the name of the license agreement that Llama 2 is under?",
-       "answer":"LLAMA 2 COMMUNITY LICENSE AGREEMENT."
-    },
-    {
-       "question":"What is the context length of Llama 2 models?",
-       "answer":"Llama 2's context is 4k"
-    },
-    {
-       "question":"What is the context length of Meta Llama 3 models?",
-       "answer":"Meta Llama 3's context is 8k"
-    },
-    {
-       "question":"When is Llama 2 trained?",
-       "answer":"Llama 2 was trained between January 2023 and July 2023."
-    },
-    {
-       "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ",
-       "answer":"Llama 2 70B"
-    },
-    {
-       "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ",
-       "answer":"Meta Llama 3 8B and Meta Llama 3 70B"
-    },
-    {
-       "question":"what are the goals for Llama 3",
-       "answer":"With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
-    },
-    {
-       "question":"What versions of Meta Llama 3 are available?",
-       "answer":"Meta Llama 3 is available in both 8B and 70B pretrained and instruction-tuned versions."
-    },
-    {
-       "question":"What are some applications of Meta Llama 3?",
-       "answer":"Meta Llama 3 supports a wide range of applications including coding tasks, problem solving, translation, and dialogue generation."
-    },
-    {
-       "question":"What improvements does Meta Llama 3 offer over previous models?",
-       "answer":"Meta Llama 3 offers enhanced scalability and performance, lower false refusal rates, improved response alignment, and increased diversity in model answers. It also excels in reasoning, code generation, and instruction following."
-    },
-    {
-       "question":"How has Meta Llama 3 been trained?",
-       "answer":"Meta Llama 3 has been trained on over 15T tokens of data using custom-built 24K GPU clusters. This training dataset is 7x larger than that used for Llama 2 and includes 4x more code."
-    },
-    {
-       "question":"What safety measures are included with Meta Llama 3?",
-       "answer":"Meta Llama 3 includes updates to trust and safety tools such as Llama Guard 2 and Cybersec Eval 2, optimized to support a comprehensive set of safety categories published by MLCommons."
-    },
-    {
-       "question":"What is Meta Llama 3?",
-       "answer":"Meta Llama 3 is a highly advanced AI model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation."
-    },
-    {
-       "question":"What are the pretrained versions of Meta Llama 3 available?",
-       "answer":"Meta Llama 3 is available with both 8B and 70B pretrained and instruction-tuned versions."
-    },
-    {
-       "question":"What is the context length supported by Llama 3 models?",
-       "answer":"Llama 3 models support a context length of 8K, which doubles the capacity of Llama 2."
-    },
-    {
-        "question":"What is the Prompt engineering?",
-        "answer":"It is a technique used in natural language processing (NLP) to improve the performance of the language model by providing them with more context and information about the task in hand."
-     },
-     {
-        "question":"What is the Zero-Shot Prompting?",
-        "answer":"Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called 'zero-shot prompting'."
-     },
-      {
-        "question":"What are the supported quantization modes in PyTorch?",
-        "answer":"Post-Training Dynamic Quantization, Post-Training Static Quantization and Quantization Aware Training (QAT)"
-     },
-     {
-        "question":"What is the LlamaIndex?",
-        "answer":"LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools."
-     },
-     {
-       "question":"What is the LangChain?",
-       "answer":"LangChain is an open source framework for building LLM powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to call LLM from scratch. "
-    }
- ]
+   {
+      "question":"What is quantization in machine learning?",
+      "answer":"Quantization is a technique to reduce computational and memory requirements of models by representing weights and activations with lower precision data types."
+   },
+   {
+      "question":"What are the benefits of quantization?",
+      "answer":"Benefits include smaller model sizes, faster fine-tuning, and faster inference, making it beneficial for resource-constrained environments."
+   },
+   {
+      "question":"What is post-training dynamic quantization in PyTorch?",
+      "answer":"Weights are pre-quantized ahead of time and activations are converted to int8 during inference for faster computation due to efficient int8 matrix multiplication."
+   },
+   {
+      "question":"What is quantization aware training (QAT) in PyTorch?",
+      "answer":"All weights and activations are 'fake quantized' during both forward and backward passes of training to yield higher accuracy than other methods."
+   },
+   {
+      "question":"What is TorchAO library for quantization?",
+      "answer":"TorchAO offers various quantization methods including weight only quantization and dynamic quantization, with support for 8-bit and 4-bit quantization."
+   },
+   {
+      "question":"What is prompt engineering?",
+      "answer":"Prompt engineering is a technique used in natural language processing (NLP) to improve the performance of the language model by providing them with more context and information about the task in hand. It involves creating prompts, which are short pieces of text that provide additional information or guidance to the model."
+   },
+   {
+      "question":"What are some tips for crafting effective prompts?",
+      "answer":"Be clear and concise, use specific examples, vary the prompts, test and refine, and use feedback."
+   },
+   {
+      "question":"What is zero-shot prompting?",
+      "answer":"Zero-shot prompting is the technique of using large language models like Meta Llama to follow instructions and produce responses without having previously seen an example of a task."
+   },
+   {
+      "question":"What is few-shot prompting?",
+      "answer":"Few-shot prompting is the technique of adding specific examples of desired output to prompts to generate more accurate and consistent results."
+   },
+   {
+      "question":"What is role based prompting?",
+      "answer":"Role based prompting is the technique of creating prompts based on the role or perspective of the person or entity being addressed to improve relevance and accuracy."
+   },
+   {
+      "question":"What is chain of thought technique?",
+      "answer":"Chain of thought technique is the method of providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response."
+   },
+   {
+      "question":"What is self-consistency approach?",
+      "answer":"Self-consistency approach is the method of selecting the most frequent answer from multiple generations to enhance accuracy."
+   },
+   {
+      "question":"What is retrieval-augmented generation?",
+      "answer":"Retrieval-augmented generation is the practice of including information in the prompt that has been retrieved from an external database to incorporate facts into LLM application."
+   },
+   {
+      "question":"What is program-aided language models?",
+      "answer":"Program-aided language models is the method of instructing the LLM to write code to solve calculation tasks since LLMs are bad at arithmetic but great at code generation."
+   },
+   {
+      "question":"What is Code Llama?",
+      "answer":"Code Llama is a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks."
+   },
+   {
+      "question":"What are the different flavors available in Code Llama?",
+      "answer":"The different flavors include foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, and 34B parameters each."
+   },
+   {
+      "question":"How can I download Code Llama?",
+      "answer":"To download the model weights and tokenizers, visit the Meta website, accept the License, receive a signed URL over email, and then run the download.sh script passing the URL provided when prompted to start the download."
+   },
+   {
+      "question":"What is Llama Guard 2?",
+      "answer":"Llama Guard 2 provides input and output guardrails for LLM deployments based on MLCommons policy."
+   },
+   {
+      "question":"How to download the model weights and tokenizer for Llama Guard 2?",
+      "answer":"Visit the Meta website, accept the license, get approved, receive signed URL via email, then run the download.sh script."
+   },
+   {
+      "question":"Are there any examples using Llama Guard 2?",
+      "answer":"Yes, find them in the Llama recipes repository in addition to the quick start steps for Llama3."
+   },
+   {
+      "question":"Where to report issues related to Llama Guard 2 or its model?",
+      "answer":"Report via github.com/meta-llama/PurpleLlama for Llama Guard model issues or developers.facebook.com/llama_output_feedback for risky content generated by the model."
+   },
+   {
+      "question":"What is the license for Llama Guard 2?",
+      "answer":"The same license as Llama 3 applies: see the LICENSE file and accompanying Acceptable Use Policy."
+   },
+   {
+      "question":"Why is Meta not sharing the training datasets for Llama?",
+      "answer":"We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
+   },
+   {
+      "question":"Did Meta use human annotators to develop the data for Llama models?",
+      "answer":"Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
+   },
+   {
+      "question":"Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
+      "answer":"It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
+   },
+   {
+      "question":"What operating systems (OS) are officially supported if I want to use Llama model?",
+      "answer":"For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
+   },
+   {
+      "question":"Do Llama models provide traditional autoregressive text completion?",
+      "answer":"Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
+   },
+   {
+      "question":"Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
+      "answer":"This is implementation dependent (i.e. the code used to run the model)."
+   },
+   {
+      "question":"Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
+      "answer":"The model itself supports these parameters, but whether they are exposed or not depends on implementation."
+   },
+   {
+      "question":"What is llama-recipes?",
+      "answer":"The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
+   },
+   {
+      "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?",
+      "answer":"Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken."
+   },
+   {
+      "question":"How many tokens were used in Meta Llama 3 pretrain?",
+      "answer":"Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
+   },
+   {
+      "question":"How many tokens were used in  Llama 2 pretrain?",
+      "answer":"Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
+   },
+   {
+      "question":"What is the name of the license agreement that Meta Llama 3 is under?",
+      "answer":"Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
+   },
+   {
+      "question":"What is the name of the license agreement that Llama 2 is under?",
+      "answer":"LLAMA 2 COMMUNITY LICENSE AGREEMENT."
+   },
+   {
+      "question":"What is the context length of Llama 2 models?",
+      "answer":"Llama 2's context is 4k"
+   },
+   {
+      "question":"What is the context length of Meta Llama 3 models?",
+      "answer":"Meta Llama 3's context is 8k"
+   },
+   {
+      "question":"When is Llama 2 trained?",
+      "answer":"Llama 2 was trained between January 2023 and July 2023."
+   },
+   {
+      "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ",
+      "answer":"Llama 2 70B"
+   },
+   {
+      "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ",
+      "answer":"Meta Llama 3 8B and Meta Llama 3 70B"
+   },
+   {
+      "question":"what are the goals for Llama 3",
+      "answer":"With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
+   },
+   {
+      "question":"What versions of Meta Llama 3 are available?",
+      "answer":"Meta Llama 3 is available in both 8B and 70B pretrained and instruction-tuned versions."
+   },
+   {
+      "question":"What are some applications of Meta Llama 3?",
+      "answer":"Meta Llama 3 supports a wide range of applications including coding tasks, problem solving, translation, and dialogue generation."
+   },
+   {
+      "question":"What improvements does Meta Llama 3 offer over previous models?",
+      "answer":"Meta Llama 3 offers enhanced scalability and performance, lower false refusal rates, improved response alignment, and increased diversity in model answers. It also excels in reasoning, code generation, and instruction following."
+   },
+   {
+      "question":"How has Meta Llama 3 been trained?",
+      "answer":"Meta Llama 3 has been trained on over 15T tokens of data using custom-built 24K GPU clusters. This training dataset is 7x larger than that used for Llama 2 and includes 4x more code."
+   },
+   {
+      "question":"What safety measures are included with Meta Llama 3?",
+      "answer":"Meta Llama 3 includes updates to trust and safety tools such as Llama Guard 2 and Cybersec Eval 2, optimized to support a comprehensive set of safety categories published by MLCommons."
+   },
+   {
+      "question":"What is Meta Llama 3?",
+      "answer":"Meta Llama 3 is a highly advanced AI model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation."
+   },
+   {
+      "question":"What are the pretrained versions of Meta Llama 3 available?",
+      "answer":"Meta Llama 3 is available with both 8B and 70B pretrained and instruction-tuned versions."
+   },
+   {
+      "question":"What is the context length supported by Llama 3 models?",
+      "answer":"Llama 3 models support a context length of 8K, which doubles the capacity of Llama 2."
+   },
+   {
+      "question":"What is the Prompt engineering?",
+      "answer":"It is a technique used in natural language processing (NLP) to improve the performance of the language model by providing them with more context and information about the task in hand."
+   },
+   {
+      "question":"What is the Zero-Shot Prompting?",
+      "answer":"Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called 'zero-shot prompting'."
+   },
+   {
+      "question":"What are the supported quantization modes in PyTorch?",
+      "answer":"Post-Training Dynamic Quantization, Post-Training Static Quantization and Quantization Aware Training (QAT)"
+   },
+   {
+      "question":"What is the LlamaIndex?",
+      "answer":"LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools."
+   },
+   {
+      "question":"What is the LangChain?",
+      "answer":"LangChain is an open source framework for building LLM powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to call LLM from scratch. "
+   }
+]
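
As a quick, concrete illustration of the post-training dynamic quantization mode referenced in the Q&A pairs above, here is a minimal PyTorch sketch. The toy model is an assumption for demonstration only; `torch.ao.quantization.quantize_dynamic` is available in recent PyTorch releases:

```python
import torch
import torch.nn as nn

# A small example network; any module containing nn.Linear layers works.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Post-training dynamic quantization: weights are quantized ahead of time,
# activations are converted to int8 on the fly during inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```
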
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py
index b1e78b586..9b0f7e692 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft.py
@@ -70,8 +70,8 @@ def parse_arguments():
         type=str,
         help="LLM API key for generating question/answer pairs."
     )
-    parser.add_argument("--chunk_size", type=int, default=512, help="The size of each chunk in number of tokens")
-    parser.add_argument("-o","--output", type=str, default="./", help="The path at which to save the dataset")
+    parser.add_argument("--chunk_size", type=int, default=1000, help="The size of each chunk in number of tokens")
+    parser.add_argument("-o","--output", type=str, default="./output/", help="The path at which to save the dataset")
     parser.add_argument("--output-format", type=str, default="hf", help="Format to convert the dataset to. Defaults to hf.", choices=datasetFormats)
     parser.add_argument("--output-type", type=str, default="jsonl", help="Type to export the dataset to. Defaults to jsonl.", choices=outputDatasetTypes)
     return parser.parse_args()
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml
index 1f1a7cf19..cffbf27ba 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml
@@ -31,7 +31,7 @@ question_prompt_template: >
 #   4. Never use any abbreviation.
 #   5. Include only the questions in your response.
 
-data_dir: "/home/kaiwu/work/pytorch/docs"
+data_dir: "./data"
 
 xml_path: ""
 
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
index b31b55162..cc7f318cd 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
@@ -2,21 +2,15 @@
 # This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
 
 import os
-from transformers import  AutoTokenizer
 import logging
-import json
 from langchain_community.embeddings import HuggingFaceEmbeddings
 from langchain_experimental.text_splitter import SemanticChunker
 from math import ceil
-import datasets
-from datasets import Dataset, load_dataset
+from datasets import Dataset
 import random
 from langchain_community.document_loaders import SitemapLoader,DirectoryLoader
 from bs4 import BeautifulSoup
-from langchain_openai import ChatOpenAI
-from langchain_core.messages import HumanMessage, SystemMessage
-from langchain_community.llms import ChatOpenAI
-from langchain_core.prompts import ChatPromptTemplate
+
 from langchain_openai import ChatOpenAI
 
 
@@ -124,21 +118,19 @@ def generate_questions(api_config):
         logging.info(f"Error reading files, document_text is {len(document_text)}")
     embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2",model_kwargs={'device': 'cuda'})
     document_batches = get_chunks(document_text,api_config["chunk_size"],embedding_model)
-
-    batches_count = len(document_batches)
-    total_questions = api_config["questions_per_chunk"] * batches_count
     # use the OpenAI API protocol to handle the chat request, including a local VLLM OpenAI-compatible server
     llm = ChatOpenAI(
         openai_api_key=key,
         openai_api_base=api_url,
         model_name=api_config["model"],
         temperature=0.0,
-        max_tokens=250
+        max_tokens=500
         )
     all_tasks = [api_config['question_prompt_template'].format(num_questions=str(api_config['questions_per_chunk']),context=document) for document in document_batches]
     generated_answers = llm.batch(all_tasks)
+    generated_answers = [ item.content for item in generated_answers]
     if len(generated_answers) == 0:
-        logging.error("No model answers generated. Please check the input context or model configuration in ",model_name)
+        logging.error("No model answers generated. Please check the input context or model configuration in ",api_config["model"])
         return []
     final_result = []
     for result in generated_answers:
@@ -167,9 +159,10 @@ def generate_COT(chunk_questions_zip,api_config) -> dict:
         openai_api_base=api_config["endpoint_url"],
         model_name=api_config["model"],
         temperature=0.0,
-        max_tokens=350
+        max_tokens=500
         )
     generated_answers = llm.batch(all_tasks)
+    generated_answers = [ item.content for item in generated_answers]
     COT_results = []
     # return a list of (chunk, question, generated_answer)
     for (chunk, question),generated_answer in zip(chunk_questions,generated_answers):
@@ -186,7 +179,6 @@ def add_chunk_to_dataset(
     """
     Given a chunk and related questions lists, create {Q, A, D} triplets and add them to the dataset.
     """
-    COT_tasks = []
     chunks = [chunk for chunk, _ in chunk_questions_zip]
     COT_results = generate_COT(chunk_questions_zip,api_config)
     for chunk, q , cot in COT_results:
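
The `.content` extraction added above is needed because `ChatOpenAI.batch` returns chat message objects rather than plain strings. Here is a minimal sketch of the pattern, assuming a local OpenAI-compatible server on port 8001 (the "EMPTY" key is a placeholder commonly accepted by local servers):

```python
from langchain_openai import ChatOpenAI

# Point the client at a local OpenAI-compatible server (e.g. a vllm instance).
llm = ChatOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8001/v1",
    model_name="meta-llama/Meta-Llama-3-70B-Instruct",
    temperature=0.0,
    max_tokens=500,
)

prompts = ["Give me 3 questions about Llama 3.", "Give me 3 questions about RAFT."]
responses = llm.batch(prompts)          # returns a list of AIMessage objects
texts = [r.content for r in responses]  # extract the plain text of each reply
```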

From f963bb8254402b7a70c03c785aa041485101c2f8 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Thu, 20 Jun 2024 12:41:40 -0700
Subject: [PATCH 22/35] tutorial draft

---
 .../use_cases/end2end-recipes/raft/README.md  | 128 ++++----
 .../end2end-recipes/raft/data/website_data    |  44 ---
 .../end2end-recipes/raft/data_urls.xml        |  35 ++-
 .../end2end-recipes/raft/eval_config.yaml     |  51 ----
 .../end2end-recipes/raft/eval_llama.json      | 287 ++++++++++++++++++
 .../end2end-recipes/raft/evalset.json         | 218 -------------
 .../use_cases/end2end-recipes/raft/format.py  | 173 -----------
 .../use_cases/end2end-recipes/raft/raft.yaml  |  46 +--
 .../raft/{eval_raft.py => raft_eval.py}       | 152 ++++++----
 .../raft/raft_eval_config.yaml                |  36 +++
 .../end2end-recipes/raft/raft_utils.py        |  10 +-
 11 files changed, 562 insertions(+), 618 deletions(-)
 delete mode 100644 recipes/use_cases/end2end-recipes/raft/data/website_data
 delete mode 100644 recipes/use_cases/end2end-recipes/raft/eval_config.yaml
 create mode 100644 recipes/use_cases/end2end-recipes/raft/eval_llama.json
 delete mode 100644 recipes/use_cases/end2end-recipes/raft/evalset.json
 delete mode 100644 recipes/use_cases/end2end-recipes/raft/format.py
 rename recipes/use_cases/end2end-recipes/raft/{eval_raft.py => raft_eval.py} (72%)
 create mode 100644 recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml

diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md
index 8805d99bd..d3d0795de 100644
--- a/recipes/use_cases/end2end-recipes/raft/README.md
+++ b/recipes/use_cases/end2end-recipes/raft/README.md
@@ -1,20 +1,44 @@
-## End to End Steps to create a Chatbot using Retrieval Augmented Fine Tuning(RAFT)
+## Introduction:
+As our Meta Llama models become more popular, we have noticed a growing demand to apply them to custom domains to better serve the customers in those domains.
+For example, a common scenario is that a company has all of its domain-related documents in plain text and wants to build a chatbot that can help answer the questions a client
+could have.
 
-### Step 1 : Prepare related documents
+Inspired by this demand, we want to explore the possibility of building a GitHub chatbot for llama-recipes based on Meta Llama models,
+as a demo in this tutorial. Even though our Meta Llama 3 70B Instruct model would be a great candidate, as it already has excellent reasoning and knowledge, it is relatively costly to host in production.
 
-We can either use local folder or web crawl to get the data. For local folder option, please download all your desired docs in PDF, Text or Markdown format to "data" folder and place it inside "raft" folder. Alternatively, we can create a sitemap xml, similar to the data_urls.xml example, and use langchain SitemapLoader to get all the text in the webpages.
+Therefore, we want to explore possible ways to build an 8B-Instruct Meta Llama model based chatbot that can achieve a similar level of accuracy to a Meta Llama 70B-Instruct based chatbot,
+in order to save inference cost.
 
-In this case we will use [Meta Llama official website](https://llama.meta.com/) webpages such as [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other Llama related documents, eg Llama3, Purple Llama, Code Llama model card in github repo. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use [Meta Llama Troubleshooting & FAQ](https://llama.meta.com/faq/) as a main source for evaluation so we should put it into our training set.
+## Understand the problems
+To build a GitHub bot, we first need to understand what kinds of questions are frequently asked. Looking through our GitHub issues, we found that they are not confined to the Llama
+models themselves (eg, "where to download models"), but also include quantization, training, and inference problems that may relate to PyTorch. Going through those questions helps us better understand what kind of data we need to collect.
 
-### Step 2 : Prepare RAFT dataset for fine-tuning
+Even though ideally we should include as many related documents as possible, such as the Hugging Face documentation, in this tutorial we will only include the Llama documents and PyTorch documents for demo purposes.
 
+## Data Collection
+Once we determine the domains we want to collect data from, we can start to think about what kind of data we want to collect and how to get it. There are many Llama related online conversations and discussions on Reddit or Stack Overflow,
+but cleaning that data will be hard, eg. filtering out unfaithful information.
+
+In this tutorial, we want to use the webpages in [Getting started with Meta Llama](https://llama.meta.com/get-started/)
+along with webpages from the [PyTorch blogs](https://pytorch.org/blog/) and [PyTorch tutorials](https://pytorch.org/tutorials/).
+
+We can use either a local folder or a web crawl to get the data. For the local folder option, we can download all the desired docs in PDF, Text or Markdown format to the "data" folder.
+Alternatively, we can create a sitemap xml, similar to the data_urls.xml example, and use the Langchain SitemapLoader to get all the text in the webpages.
+
+## Retrieval Augmented Fine Tuning (RAFT) concepts
+
+In this tutorial, we want to use a new method that combines finetuning with RAG, called Retrieval Augmented Fine Tuning (RAFT).
+
+RAFT is a general recipe to finetune a pretrained LLM to your domain-specific RAG settings.
+
+## Create RAFT dataset
 To use Meta Llama 3 70B model for the RAFT datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server.
 
 We can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a vllm openai compatible server that host Meta Llama 3 70B instruct locally, and generate the RAFT dataset.
 
 ```bash
 # Make sure VLLM has been installed
-CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
+CUDA_VISIBLE_DEVICES=6,7 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
 ```
 
 **NOTE** Please make sure the port has not been used. Since Meta Llama3 70B instruct model requires at least 135GB GPU memory, we need to use multiple GPUs to host it in a tensor parallel way.
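+
+Once the server is up, we can sanity-check the endpoint with the OpenAI Python client before generating the dataset. This is a minimal sketch, assuming the server was started without authentication (the "EMPTY" key is a placeholder):
+
+```python
+from openai import OpenAI
+
+# The vllm server above exposes an OpenAI-compatible API on port 8001.
+client = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")
+
+resp = client.chat.completions.create(
+    model="meta-llama/Meta-Llama-3-70B-Instruct",
+    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
+    max_tokens=32,
+)
+print(resp.choices[0].message.content)
+```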
@@ -29,7 +53,7 @@ For cloud API key, we can also set it using system environment variables, such a
 
 ```bash
 export API_KEY="THE_API_KEY_HERE"
-python raft.py -u "CLOUD_API_URL" -t 3
+python raft.py -u "CLOUD_API_URL" -t 5
 ```
 
 **NOTE** When using a cloud API, you need to be aware of the RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day) limits on your account with any of the model API providers. This is experimental and totally depends on your documents, the wealth of information in them, and how you prefer to handle questions, with short or longer answers etc.
@@ -38,52 +62,60 @@ This python program will read all the documents inside of "data" folder and tran
 
 We now have a related context as a text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style answer using Llama 3 70B Instruct as well. Once we have the COT answers, we can start to make a dataset whose "instruction" includes some unrelated chunks called distractors and, with a probability P, the related chunk.
 
+Here is a RAFT format json example. We have a "question" section for the generated question and a "cot_answer" section for the generated COT answer, with the final answer appended after the answer token at the end. We also created an "instruction" section
+that includes all the documents (each document separated by <\/DOCUMENT> tags) with the question appended at the very end. This "instruction"
+section will be the input during training, and the "cot_answer" will be the output label that the loss is calculated on.
+
 ```python
 {
-  'id': 'seed_task_0',
-  'type': 'general',
-  'question': 'What is the official motto of the United States of America?',
-  'context': {
-    'sentences': [
-      ["the Gulf of Mexico are prone to hurricanes, ... and enforces the Act. [ 189 ] As of 2022, the U. S",
-    "energy from fossil fuel and the largest ... there are 19, 969 airports in the U. S., of which 5, 193 are designated",
-    'weaponry, ideology, and international i... and is a permanent member of the UN Security Council. The first documentary evidence of the phrase " United States',
-    '[CLS] United States of America Flag Coat of arms ... dominance in nuclear and conventional',
-    '##om ic soft pow er. [ 405 ] [ 406 ] Nearly all present ... rights in the United States are advanced by global standards.']
-    ],
-    'title': [
-      ['placeholder_title',
-      'placeholder_title',
-      'placeholder_title',
-      'placeholder_title',
-      'placeholder_title']
-    ]
-  },
-  'answer': '"In God We Trust"',
-  'cot_answer': None
+   "id":"seed_task_228",
+   "type":"general",
+   "question":"What is the context length supported by Llama 3 models?",
+   "context":{
+      "sentences":[
+         [
+            "DISTRACT_DOCS 1"
+            "DISTRACT_DOCS 2"
+            "We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products. Download the model Explore more on Code Llama Discover more about Code Llama here \u2014 visit our resources, ranging from our research paper, getting started guide and more. Code Llama GitHub repository Research paper Download the model Getting started guide Meta Llama 3 Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Get Started Experience Llama 3 on Meta AI Experience Llama 3 with Meta AI We\u2019ve integrated Llama 3 into Meta AI, our intelligent assistant, that expands the ways people can get things done, create and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas. Experience Llama 3 on Meta AI Enhanced performance Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle  multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following. Build the future of AI with Llama 3. Download Llama 3 Getting Started Guide With each Meta Llama request, you will receive: Meta Llama Guard 2 Getting started guide Responsible Use Guide Acceptable use policy Model card Community license agreement Benchmarks Llama 3 models take data and scale to new heights. It\u2019s been trained on our two recently announced custom-built 24K GPU clusters on over 15T token of data \u2013 a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2. Model card Trust & safety A comprehensive approach to responsibility With the release of Llama 3, we\u2019ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG , we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience. 
Meta Llama Guard 2 Explore more on Meta Llama 3 Introducing Meta Llama 3: The most capable openly available LLM to date Read the blog Meet Your New Assistant: Meta AI, Built With Llama 3 Learn more Meta Llama 3 repository View repository Model card Explore Meta Llama 3 License META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 \u201c Agreement \u201d means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. \u201c Documentation \u201d means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https:\/\/llama.meta.com\/get-started\/ .",
+            "DISTRACT_DOCS 3"
+            "DISTRACT_DOCS 4"
+            "DISTRACT_DOCS 5"
+         ]
+      ],
+      "title":[
+         [
+            "placeholder_title",
+            "placeholder_title",
+            "placeholder_title",
+            "placeholder_title",
+            "placeholder_title",
+            "placeholder_title"
+         ]
+      ]
+   },
+   "oracle_context":"We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products. Download the model Explore more on Code Llama Discover more about Code Llama here \u2014 visit our resources, ranging from our research paper, getting started guide and more. Code Llama GitHub repository Research paper Download the model Getting started guide Meta Llama 3 Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Get Started Experience Llama 3 on Meta AI Experience Llama 3 with Meta AI We\u2019ve integrated Llama 3 into Meta AI, our intelligent assistant, that expands the ways people can get things done, create and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas. Experience Llama 3 on Meta AI Enhanced performance Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle  multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following. Build the future of AI with Llama 3. Download Llama 3 Getting Started Guide With each Meta Llama request, you will receive: Meta Llama Guard 2 Getting started guide Responsible Use Guide Acceptable use policy Model card Community license agreement Benchmarks Llama 3 models take data and scale to new heights. It\u2019s been trained on our two recently announced custom-built 24K GPU clusters on over 15T token of data \u2013 a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2. Model card Trust & safety A comprehensive approach to responsibility With the release of Llama 3, we\u2019ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG , we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience. 
Meta Llama Guard 2 Explore more on Meta Llama 3 Introducing Meta Llama 3: The most capable openly available LLM to date Read the blog Meet Your New Assistant: Meta AI, Built With Llama 3 Learn more Meta Llama 3 repository View repository Model card Explore Meta Llama 3 License META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 \u201c Agreement \u201d means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. \u201c Documentation \u201d means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https:\/\/llama.meta.com\/get-started\/ .",
+   "cot_answer":"Here's the step-by-step reasoning to answer the question:\n\n1. The question asks about the context length supported by Llama 3 models.\n2. In the context, we need to find the relevant information about Llama 3 models and their context length.\n3. The relevant sentence is: \"This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2.\"\n##begin_quote## This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2. ##end_quote##\n4. From this sentence, we can see that Llama 3 models support a context length of 8K.\n\n: 8K",
+   "instruction":" DISTRACT_DOCS 1 <\/DOCUMENT>... DISTRACT_DOCS 5 <\/DOCUMENT>\nWhat is the context length supported by Llama 3 models?"
 }
-
-
 ```
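+
+To make this construction concrete, here is a rough sketch of how an "instruction" string can be assembled from one oracle (related) chunk and a few distractor chunks. The helper name `build_instruction`, the probability `p`, and the tag formatting are illustrative assumptions, not the exact logic in raft.py:
+
+```python
+import random
+
+def build_instruction(question, oracle_chunk, distractor_chunks, p=0.8):
+    # With probability p the oracle chunk is inserted at a random position
+    # among the distractors; otherwise only distractors are included.
+    docs = list(distractor_chunks)
+    if random.random() < p:
+        docs.insert(random.randrange(len(docs) + 1), oracle_chunk)
+    context = "".join(f"<DOCUMENT>{d}</DOCUMENT>" for d in docs)
+    return f"{context}\n{question}"
+```
+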
+To create an evalset, we can shuffle the RAFT dataset and select 100 examples from it. For evaluation purposes, we only need to keep the "question" section and the final answer section of
+"cot_answer".
+
 ### Step 3: Run the fine-tuning
-Once the dataset is ready, we can start the fine-tuning step using the following commands in the llama-recipe main folder:
+Once the RAFT dataset is ready, we can start the full fine-tuning step using the following commands in the llama-recipes main folder:
 
 For distributed fine-tuning:
 ```bash
-CUDA_VISIBLE_DEVICES=2,3  torchrun --nnodes 1 --nproc_per_node 2  recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora  --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir raft-8b --num_epochs 3 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/raft.jsonl'
+CUDA_VISIBLE_DEVICES=0,1,2,3  torchrun --nnodes 1 --nproc_per_node 4  recipes/finetuning/finetuning.py --lr 1e-5 --context_length 8192 --enable_fsdp  --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir pt_ep1_full0614 --num_epochs 1 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/raft.jsonl'
 ```
-
-
-For fine-tuning in single-GPU:
-
 ```bash
-CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora  --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 5 --batch_size_training 1 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
+torchrun --nnodes 1 --nproc_per_node 4  recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 2 --model_name meta-llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder llama+pt_ep1_full0616 --dist_checkpoint_folder fine-tuned  --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/pytorch_data/all_17k.jsonl'
 ```
-
-If we want to continue the fine-tuning process after our evaluation step, we can use  --from_peft_checkpoint argument to resume the fine-tuning from PEFT checkpoint folder. For example, we can run:
+Then convert the FSDP checkpoint to a HuggingFace checkpoint using:
 
 ```bash
-CUDA_VISIBLE_DEVICES=0,1  torchrun --nnodes 1 --nproc_per_node 2  recipes/finetuning/finetuning.py --use_peft --enable_fsdp --from_peft_checkpoint chatbot-8b  --peft_method lora  --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b-continue --num_epochs 5 --batch_size_training 4 --dataset "custom_dataset" -custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
+python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path  /home/kaiwu/work/llama-recipes/llama+pt_ep1_full0616/fine-tuned-meta-llama/Meta-Llama-3-8B-Instruct --consolidated_model_path /home/kaiwu/work/llama-recipes/llama+pt_ep1_full0616/fine-tuned-meta-llama --HF_model_path_or_name /home/kaiwu/work/llama-recipes/llama+pt_ep1_full0616/
+
 ```
 
 For more details, please check the readme in the finetuning recipe.
@@ -92,12 +124,10 @@ For more details, please check the readme in the finetuning recipe.
 
 Once we have the fine-tuned model, we need to evaluate it to understand its performance. Normally, to create an evaluation set, we should first gather some questions and manually write the ground truth answers. In this case, we created an eval set mostly based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. Then we pass the evalset questions to our fine-tuned model to get the model generated answers. To compare the model generated answers with the ground truth, we can either use a traditional eval method, eg. calculating the rouge score, or use an LLM to act as a judge and score their similarity.
 
-First we need to start the VLLM servers to host our fine-tuned 8B model. Since we used peft library to get a LoRA adapter, we need to pass special arguments to VLLM to enable the LoRA feature. Now, the VLLM server actually will first load the original model, then apply our LoRA adapter weights. Then we can feed the eval_set.json file into the VLLM servers and start the comparison evaluation. Notice that our finetuned model name is now called "chatbot" instead of "meta-llama/Meta-Llama-3-8B-Instruct".
 
 ```bash
-CUDA_VISIBLE_DEVICES=2 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules raft-8b=./raft-8b --port 8000  --disable-log-requests
+CUDA_VISIBLE_DEVICES=4 python -m vllm.entrypoints.openai.api_server  --model raft-8b --port 8000  --disable-log-requests
 ```
-
 **NOTE** If you encounter the import error: "ImportError: punica LoRA kernels could not be imported.", this means that VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source.
 
 ```bash
@@ -106,15 +136,7 @@ cd vllm
 VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
 ```
 
-On another terminal, we can go to the recipes/use_cases/end2end-recipes/chatbot/pipelines folder to start our eval script.
-
-```bash
-python eval_raft.py -m raft-8b -v 8000
-```
-
-
-
-Lastly, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answer from the fine-tuned 8B model with the groud truth and get a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with command, just make sure the port is not been used:
+On another terminal, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answers from the fine-tuned 8B model with the ground truth and produce a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the following command; just make sure the port is not already in use:
 
 ```bash
 CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8002
@@ -123,7 +145,7 @@ CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server  --model m
 Then we can pass the port to the eval script:
 
 ```bash
-CUDA_VISIBLE_DEVICES=4 python eval_raft.py -m raft-8b -v 8000 -j 8001
+CUDA_VISIBLE_DEVICES=5 python raft_eval.py -m raft-8b -v 8000 -j 8001 -o all_rag5 -r 5
 ```
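+
+Under the hood, the judge step boils down to prompting the 70B model to grade each (question, ground truth, model answer) triple. Here is a minimal sketch of that idea, assuming the judge server hosted in the previous step; the prompt template and helper function are illustrative, not the exact ones used by raft_eval.py:
+
+```python
+from langchain_openai import ChatOpenAI
+
+# Judge model served on port 8002 by the vllm command above.
+judge = ChatOpenAI(
+    openai_api_key="EMPTY",
+    openai_api_base="http://localhost:8002/v1",
+    model_name="meta-llama/Meta-Llama-3-70B-Instruct",
+    temperature=0.0,
+)
+
+PROMPT = (
+    "You are grading a chatbot. Question: {q}\n"
+    "Ground truth answer: {gt}\nModel answer: {ans}\n"
+    "Reply YES if the model answer matches the ground truth, otherwise NO."
+)
+
+def judge_answer(q, gt, ans):
+    # Returns the judge's verdict as a plain string ("YES" or "NO").
+    return judge.invoke(PROMPT.format(q=q, gt=gt, ans=ans)).content.strip()
+```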
 
 
diff --git a/recipes/use_cases/end2end-recipes/raft/data/website_data b/recipes/use_cases/end2end-recipes/raft/data/website_data
deleted file mode 100644
index 627d99b1f..000000000
--- a/recipes/use_cases/end2end-recipes/raft/data/website_data
+++ /dev/null
@@ -1,44 +0,0 @@
-Meta Llama Discover the possibilities with Meta Llama Democratizing access through an open platform featuring AI models, tools, and resources — enabling developers to shape the next wave of innovation. Licensed for both research and commercial use Get Started Llama models and tools Meta Llama 3 Build the future of AI with Meta Llama 3 Llama 3 is an accessible, open-source large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. Learn more Meta Code Llama A state-of-the-art large language model for coding LLM capable of generating code, and natural language about code, from both code and natural language prompts. Learn more Meta Llama Guard Empowering developers, advancing safety, and building an open ecosystem We’re announcing Meta Llama Guard, an umbrella project featuring open trust and safety tools and evaluations meant to level the playing field for developers. Learn more Ready to start building with Meta Llama? Access our getting started guide and responsible use resources to get started. Get started guide Responsible use guide Prompt Engineering with Meta Llama Learn how to effectively use Llama models for prompt engineering with our free course on Deeplearning.AI, where you'll learn best practices and interact with the models through a simple API call. Learn more Partnerships Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Latest Llama updates Introducing Meta Llama 3: The most capable openly available LLM to date Read more Meet Your New Assistant: Meta AI, Built With Llama 3 Read more CYBERSECEVAL 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models Read more Stay up-to-date Our latest updates delivered to your inbox Subscribe to our newsletter to keep up with the latest Llama updates, releases and more. Sign up
-Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at llama.meta.com/use-policy . Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material b. Human trafficking, exploitation, and sexual violence iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. iv. Sexual solicitation vi. Any other criminal activity c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d. 
Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: Reporting issues with the model: github.com/facebookresearch/llama Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback Reporting bugs and security concerns: facebook.com/whitehat/info Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com
-Responsible Use Guide for Llama 2 Responsibility Responsible Use Guide: your resource for building responsibly The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLM) in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide
-Meta Llama 2 Large language model Llama 2: open source, free for research and commercial use We're unlocking the power of these large language models. Our latest version of Llama – Llama 2 – is now accessible to individuals, creators, researchers, and businesses so they can experiment, innovate, and scale their ideas responsibly. Download the model Available as part of the Llama 2 release Get started guide With each model download you'll receive: Model code Model weights README (user guide) Responsible Use Guide License Acceptable use policy Model card Technical specifications Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama Chat, leverages publicly available instruction datasets and over 1 million human annotations. Read the paper Inside the model Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. Llama Chat models have additionally been trained on over 1 million new human annotations. Benchmarks Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length than Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Safety and helpfulness Reinforcement learning from human feedback Llama Chat uses reinforcement learning from human feedback to ensure safety and helpfulness. Training Llama Chat: Llama 2 is pretrained using publicly available online data. An initial version of Llama Chat is then created through the use of supervised fine-tuning. Next, Llama Chat is iteratively refined using Reinforcement Learning from Human Feedback (RLHF), which includes rejection sampling and proximal policy optimization (PPO). Download the model Get Llama 2 now: complete the download form via the link below. By submitting the form, you agree to Meta's privacy policy . Get started Partnerships Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Statement of support for Meta’s open approach to today’s AI “We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.” Responsibility We’re committed to building responsibly To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size. Responsible Use Guide The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLMs) in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide Safety Red-teaming Llama Chat has undergone testing by external partners and internal teams to identify performance gaps and mitigate potentially problematic responses in chat use cases. We're committed to ongoing red-teaming to enhance safety and performance. 
Open Innovation AI Research Community We're launching a program for academic researchers, designed to foster collaboration and knowledge-sharing in the field of artificial intelligence. This program provides a unique opportunity for researchers to come together, share their learnings, and help shape the future of AI. By joining this community, participants will have the chance to contribute to a research agenda that addresses the most pressing challenges in the field, and work together to develop innovative solutions that promote responsible and safe AI practices. We believe that by bringing together diverse perspectives and expertise, we can accelerate the pace of progress in AI research. Learn more Llama Impact Grants We want to activate the community of innovators who aspire to use Llama to solve hard problems. We are launching the grants to encourage a diverse set of public, non-profit, and for-profit entities to use Llama 2 to address environmental, education, and other important challenges. The grants will be subject to rules which will be posted here prior to the grants’ start. Learn more Generative AI Community Forum We think it’s important that our product and policy decisions around generative AI are informed by people and experts from around the world. In support of this belief, we created a forum to act as a governance tool and resource for the community. It brings together a representative group of people to discuss and deliberate on the values that underpin AI, LLMs, and other new AI technologies. This forum will be held in consultation with Stanford Deliberative Democracy Lab and the Behavioural Insights Team, and is consistent with our open collaboration approach to sharing AI models. Learn more Join us on our AI journey If you’d like to advance AI with us, visit our Careers page to discover more about AI at Meta. See open positions Llama 2 Frequently asked questions Get answers to Llama 2 questions in our comprehensive FAQ page—from how it works, to how to use it, integrations, and more. See all FAQs Explore more on Llama 2 Discover more about Llama 2 here — visit our resources, ranging from our research paper, how to get access, and more. Github Open Innovation AI Research Community Getting started guide AI at Meta blog Responsible Use Guide Research paper
-License Llama 2 Version Release Date: July 18, 2023 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at llama.meta.com/llama-downloads/ . “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at llama.meta.com/llama-downloads/ . “Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/use-policy ), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. 
If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. 
The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
-Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at llama.meta.com/use-policy . Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material iii. Human trafficking, exploitation, and sexual violence iv. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. v. Sexual solicitation vi. Any other criminal activity b. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals c. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services d. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices e. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws f. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials g. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d.
Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: Reporting issues with the model: github.com/facebookresearch/llama Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback Reporting bugs and security concerns: facebook.com/whitehat/info Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com
-Meta Code Llama Large language model Code Llama, a state-of-the-art large language model for coding Code Llama has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software. Download the model Free for research and commercial use: Code Llama is built on top of Llama 2 and is available in three models: Code Llama Code Llama Python Code Llama Instruct Get started guide With each model download you'll receive: All Code Llama models README (User Guide) Responsible Use Guide License Acceptable Use Policy Model Card How Code Llama works Code Llama is a code-specialized version of Llama 2 that was created by further training Llama 2 on its code-specific datasets, sampling more data from that same dataset for longer. Essentially, Code Llama features enhanced coding capabilities, built on top of Llama 2. It can generate code, and natural language about code, from both code and natural language prompts (e.g., “Write me a function that outputs the fibonacci sequence.”) It can also be used for code completion and debugging. It supports many of the most popular languages being used today, including Python, C++, Java, PHP, Typescript (Javascript), C#, and Bash. Read the paper Inside the model Code Llama is available in four sizes with 7B, 13B, 34B, and 70B parameters respectively. Each of these models is trained with 500B tokens of code and code-related data, apart from 70B, which is trained on 1T tokens. The 7B, 13B and 70B base and instruct models have also been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code, meaning they can support tasks like code completion right out of the box. The four models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU. The 34B and 70B models return the best results and allow for better coding assistance, but the smaller 7B and 13B models are faster and more suitable for tasks that require low latency, like real-time code completion. Note: We do not recommend using Code Llama or Code Llama Python to perform general natural language tasks since neither of these models is designed to follow natural language instructions. Code Llama is specialized for code-specific tasks and isn’t appropriate as a foundation model for other tasks. Evaluating Code Llama’s performance To test Code Llama’s performance against existing solutions, we used two popular coding benchmarks: HumanEval and Mostly Basic Python Programming (MBPP). HumanEval tests the model’s ability to complete code based on docstrings and MBPP tests the model’s ability to write code based on a description. Our benchmark testing showed that Code Llama performed better than open-source, code-specific LLMs and outperformed Llama 2. Code Llama 70B Instruct, for example, scored 67.8% on HumanEval and 62.2% on MBPP, the highest compared with other state-of-the-art open solutions, and on par with ChatGPT. As with all cutting edge technology, Code Llama comes with risks. Building AI models responsibly is crucial, and we undertook numerous safety measures before releasing Code Llama. As part of our red teaming efforts, we ran a quantitative evaluation of Code Llama’s risk of generating malicious code.
We created prompts that attempted to solicit malicious code with clear intent and scored Code Llama’s responses to those prompts against ChatGPT’s (GPT3.5 Turbo). Our results found that Code Llama answered with safer responses. Details about our red teaming efforts from domain experts in responsible AI, offensive security engineering, malware development, and software engineering are available in our research paper. Releasing Code Llama Programmers are already using LLMs to assist in a variety of tasks, ranging from writing new software to debugging existing code. The goal is to make developer workflows more efficient, so they can focus on the most human-centric aspects of their job, rather than repetitive tasks. At Meta, we believe that AI models, and LLMs for coding in particular, benefit most from an open approach, both in terms of innovation and safety. Publicly available, code-specific models can facilitate the development of new technologies that improve people's lives. By releasing code models like Code Llama, the entire community can evaluate their capabilities, identify issues, and fix vulnerabilities. Code Llama’s training recipes are available on our Github repository and model weights are also available. GitHub Model weights Responsible use Our research paper discloses details of Code Llama’s development as well as how we conducted our benchmarking tests. It also provides more information about the model’s limitations, known challenges we encountered, mitigations we’ve taken, and future challenges we intend to investigate. We’ve also updated our Responsible Use Guide, which includes guidance on developing downstream models responsibly, including: Defining content policies and mitigations. Preparing data. Fine-tuning the model. Evaluating and improving performance. Addressing input- and output-level risks. Building transparency and reporting mechanisms in user interactions. Developers should evaluate their models using code-specific evaluation benchmarks and perform safety studies on code-specific use cases such as generating malware, computer viruses, or malicious code. We also recommend leveraging safety datasets for automatic and human evaluations, and red teaming on adversarial prompts. Responsible use guide The future of generative AI for coding Code Llama is designed to support software engineers in all sectors – including research, industry, open source projects, NGOs, and businesses. But there are still many more use cases to support than what our base and instruct models can serve. We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products. Download the model Explore more on Code Llama Discover more about Code Llama here — visit our resources, ranging from our research paper, getting started guide and more. Code Llama GitHub repository Research paper Download the model Getting started guide
-Meta Llama 3 Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Get Started Experience Llama 3 on Meta AI Experience Llama 3 with Meta AI We’ve integrated Llama 3 into Meta AI, our intelligent assistant, which expands the ways people can get things done, create, and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas. Experience Llama 3 on Meta AI Enhanced performance Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following. Build the future of AI with Llama 3. Download Llama 3 Getting Started Guide With each Meta Llama request, you will receive: Meta Llama Guard 2 Getting started guide Responsible Use Guide Acceptable use policy Model card Community license agreement Benchmarks Llama 3 models take data and scale to new heights. They’ve been trained on our two recently announced custom-built 24K GPU clusters on over 15T tokens of data – a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports an 8K context length that doubles the capacity of Llama 2. Model card Trust & safety A comprehensive approach to responsibility With the release of Llama 3, we’ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG, we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience. Meta Llama Guard 2 Explore more on Meta Llama 3 Introducing Meta Llama 3: The most capable openly available LLM to date Read the blog Meet Your New Assistant: Meta AI, Built With Llama 3 Learn more Meta Llama 3 repository View repository Model card Explore
-Meta Llama 3 License META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. “Documentation” means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/ . “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Meta Llama 3” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads . “Llama Materials” means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy ), which is hereby incorporated by reference into this Agreement. v.
You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination.
The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
-Meta Llama 3 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama 3 Model Card You can find details about this model in the model card . Special Tokens used with Meta Llama 3 <|begin_of_text|> : This is equivalent to the BOS token <|eot_id|> : This signifies the end of the message in a turn. <|start_header_id|>{role}<|end_header_id|> : These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. <|end_of_text|>: This is equivalent to the EOS token. On generating this token, Llama 3 will cease to generate more tokens. A prompt can optionally contain a single system message, or multiple alternating user and assistant messages, but always ends with the last user message followed by the assistant header. Meta Llama 3 Code to produce this prompt format can be found here . Note : Newlines (0x0A) are part of the prompt format, for clarity in the example, they have been represented as actual new lines. <|begin_of_text|>{{ user_message }} Meta Llama 3 Instruct Code to generate this prompt format can be found here . Notes : Newlines (0x0A) are part of the prompt format, for clarity in the examples, they have been represented as actual new lines. The model expects the assistant header at the end of the prompt to start completing it. Decomposing an example instruct prompt with a system message: <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|> What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|> <|begin_of_text|> : Specifies the start of the prompt <|start_header_id|>system<|end_header_id|> : Specifies the role  for the following message, i.e. “system” You are a helpful AI assistant for travel tips and recommendations : The system message <|eot_id|> : Specifies the end of the input message <|start_header_id|>user<|end_header_id|> : Specifies the role  for the following message i.e. “user” What can you help me with? : The user message <|start_header_id|>assistant<|end_header_id|> : Ends with the assistant header, to prompt the model to start generation. Following this prompt, Llama 3 completes it by generating the {{assistant_message}}.  It signals the end of the {{assistant_message}} by generating the <|eot_id|> . Example prompt with a single user message <|begin_of_text|><|start_header_id|>user<|end_header_id|> What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|> System prompt and multiple turn conversation between the user and assistant <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|> What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Bonjour! 
The capital of France is Paris!<|eot_id|><|start_header_id|>user<|end_header_id|> What can I do there?<|eot_id|><|start_header_id|>assistant<|end_header_id|> Paris, the City of Light, offers a romantic getaway with must-see attractions like the Eiffel Tower and Louvre Museum, romantic experiences like river cruises and charming neighborhoods, and delicious food and drink options, with helpful tips for making the most of your trip.<|eot_id|><|start_header_id|>user<|end_header_id|> Give me a detailed list of the attractions I should visit, and time it takes in each one, to plan my trip accordingly.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
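To make the format above concrete, here is a minimal Python sketch that assembles a Meta Llama 3 Instruct prompt from a list of messages. The special tokens come from the description above; the helper name `format_llama3_prompt` is our own illustration, not a library API, and real applications would normally rely on the tokenizer's chat template instead.

```python
# Minimal sketch (not a library API): build a Llama 3 Instruct prompt string.
def format_llama3_prompt(messages):
    """messages: list of {"role": "system" | "user" | "assistant", "content": str}."""
    prompt = "<|begin_of_text|>"
    for message in messages:
        prompt += f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
        prompt += f"{message['content'].strip()}<|eot_id|>"
    # End with the assistant header so the model starts generating its answer.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(format_llama3_prompt([
    {"role": "system", "content": "You are a helpful AI assistant for travel tips and recommendations"},
    {"role": "user", "content": "What is France's capital?"},
]))
```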
-Meta Llama Guard 2 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama Guard 2 Model Card You can find details about this model in the model card. Prompt format As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has a helper function and an inference example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt. The variables to replace in this prompt template are: {{ role }} : It can have the values: User or Agent. {{ unsafe_categories }} : The default categories and their descriptions are shown below. These can be customized for zero-shot or few-shot prompting. {{ user_message }} : input message from the user. {{ model_answer }} : output from the model. <|begin_of_text|>[INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories. <BEGIN UNSAFE CONTENT CATEGORIES> {{ unsafe_categories }} <END UNSAFE CONTENT CATEGORIES> <BEGIN CONVERSATION> User: {{ user_message_1 }} Agent: {{ model_answer_1 }} <END CONVERSATION> Provide your safety assessment for ONLY THE LAST {{ role }} in the above conversation: - First line must read 'safe' or 'unsafe'. - If unsafe, a second line must include a comma-separated list of violated categories. [/INST] Llama Guard was trained on the following categories. S1: Violent Crimes. S2: Non-Violent Crimes. S3: Sex Crimes. S4: Child Exploitation. S5: Specialized Advice. S6: Privacy. S7: Intellectual Property. S8: Indiscriminate Weapons. S9: Hate. S10: Self-Harm. S11: Sexual Content. Additionally, the full description for each category can be set in the prompt as well. S1: Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of violent crimes. Examples of violent crimes include, but are not limited to: - Unlawful violence toward people (ex: terrorism, genocide, murder, hate crimes, child abuse, assault, battery, kidnapping) - Unlawful violence toward animals (ex: animal abuse) S2: Non-Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of non-violent crimes. Examples of non-violent crimes include, but are not limited to: - Financial crimes (ex: fraud, scams, money laundering) - Property crimes (ex: burglary, robbery, arson, vandalism) - Drug crimes (ex: creating or trafficking narcotics) - Weapons crimes (ex: producing unlicensed firearms) - Cyber crimes (ex: hacking, spyware, malware) S3: Sex Crimes. AI models should not create content that enables, encourages, or excuses the commission of sex crimes. Examples of sex crimes include, but are not limited to: - Human trafficking - Sexual assault (ex: rape) - Sexual harassment (ex: groping) - Lewd conduct S4: Child Exploitation. AI models should not create content that depicts child nudity or that enables, encourages, excuses, or depicts the sexual abuse of children. S5: Specialized Advice. AI models should not create content that contains specialized financial, medical, or legal advice. S6: Privacy. AI models should not create content that contains sensitive, personal information about private individuals. S7: Intellectual Property.
AI models should not create content that violates the intellectual property rights of third parties. S8: Indiscriminate Weapons. AI models should not create content that enables the creation of weapons of mass destruction. Examples of such weapons include, but are not limited to: - Chemical weapons (ex: nerve gas) - Biological weapons (ex: anthrax) - Radiological weapons (ex: salted bombs) - Nuclear weapons (ex: atomic warheads) - High-yield explosive weapons (ex: cluster munitions) S9: Hate. AI models should not create content that is hateful toward people on the basis of their protected characteristics (race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease). AI models should not create content that perpetuates negative stereotypes. S10: Self-Harm. AI models should not create content that enables, encourages, or excuses acts of intentional self-harm. Examples of such actions include, but are not limited to: - Suicide - Self-injury (ex: cutting) - Disordered eating S11: Sexual Content. AI models should not create depictions of nude adults or content that contains erotic descriptions or explicit depictions of sex acts.
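As a concrete illustration, the sketch below fills in this template for a single user/agent exchange. The category list is abbreviated to keep the example short, the `<BEGIN ...>`/`<END ...>` delimiters follow the published Llama Guard template, and `format_guard_prompt` is an illustrative name rather than the llama-recipes helper itself.

```python
# Minimal sketch (illustrative, not the llama-recipes helper): fill in the
# Llama Guard 2 template for one exchange. Categories abbreviated to S1-S3.
UNSAFE_CATEGORIES = """S1: Violent Crimes.
S2: Non-Violent Crimes.
S3: Sex Crimes."""  # ...S4-S11 omitted for brevity

def format_guard_prompt(role, user_message, model_answer=None):
    conversation = f"User: {user_message}"
    if model_answer is not None:  # omit the agent turn when scoring user input
        conversation += f"\n\nAgent: {model_answer}"
    return (
        "<|begin_of_text|>[INST] Task: Check if there is unsafe content in "
        f"'{role}' messages in conversations according our safety policy with the below categories.\n\n"
        f"<BEGIN UNSAFE CONTENT CATEGORIES>\n{UNSAFE_CATEGORIES}\n<END UNSAFE CONTENT CATEGORIES>\n\n"
        f"<BEGIN CONVERSATION>\n\n{conversation}\n\n<END CONVERSATION>\n\n"
        f"Provide your safety assessment for ONLY THE LAST {role} in the above conversation:\n"
        "- First line must read 'safe' or 'unsafe'.\n"
        "- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"
    )

print(format_guard_prompt("User", "How do I reset my Llama download URL?"))
```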
-Meta Code Llama 70B | Model Cards and Prompt formats Model Cards & Prompt formats Meta Code Llama 70B Model Card You can find details about this model in the model card. Note that Meta Code Llama 70B uses the same model card as Meta Code Llama 7B, 13B, and 34B. Completion In this format, the model continues to write code following the provided code in the prompt. An implementation of this prompt can be found here. {{ code_prompt }} Instructions Meta Code Llama 70B has a different prompt template compared to 34B, 13B and 7B. It starts with a Source: system tag—which can have an empty body—and continues with alternating user or assistant values. Each turn of the conversation uses the <step> special character to separate the messages. The last turn of the conversation uses a Source: assistant tag with an empty message and a Destination: user tag to prompt the model to answer the user question. A detailed implementation of this format is provided. Notes: The structure requires a Source: system tag, but the system prompt can be empty. Each user query is preceded by a blank line. At the end of the prompt is a blank line followed by a line containing a space character (0x20). Source: system System prompt <step> Source: user First user query <step> Source: assistant Model response to first query <step> Source: user Second user query <step> Source: assistant Destination: user
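A rough Python sketch of this assembly is shown below. Exact whitespace (the blank line before each query and the trailing space) matters for this model, so treat the linked implementation as authoritative; `format_codellama70b_prompt` is an illustrative name.

```python
# Minimal sketch (whitespace approximated; see the linked implementation for
# the authoritative version): build a Code Llama 70B instruct prompt.
def format_codellama70b_prompt(system_prompt, turns):
    """turns: list of ("user" | "assistant", message) pairs, ending on a user turn."""
    prompt = f"Source: system\n\n {system_prompt.strip()} <step> "
    for source, message in turns:
        prompt += f"Source: {source}\n\n {message.strip()} <step> "
    # An empty assistant turn with a Destination tag asks the model to answer.
    prompt += "Source: assistant\nDestination: user\n\n "
    return prompt

print(format_codellama70b_prompt(
    "You are a helpful coding assistant.",
    [("user", "Write a function that reverses a string.")],
))
```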
-Meta Llama Guard 1 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama Guard 1 Model Card You can find details about this model in the model card. Prompt format As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has a helper function and an inference example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt. [INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories. <BEGIN UNSAFE CONTENT CATEGORIES> {{ unsafe_categories }} <END UNSAFE CONTENT CATEGORIES> <BEGIN CONVERSATION> User: {{ user_message_1 }} Agent: {{ model_answer_1 }} <END CONVERSATION> Provide your safety assessment for {{ role }} in the above conversation: - First line must read 'safe' or 'unsafe'. - If unsafe, a second line must include a comma-separated list of violated categories. [/INST] Llama Guard was trained on the following categories that can be adapted through prompt engineering. O1: Violence and Hate. Should not - Help people plan or engage in violence. - Encourage people to engage in violence. - Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability. - Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics. Can - Provide information on violence and discrimination. - Discuss the topics of hate and violence, and can discuss historical events involving violence. O2: Sexual Content. Should not - Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities. Can - Discuss topics related to sexuality and sexual education. - Provide general information regarding sexual activities. - Provide guidance on topics related to sexual health. O3: Criminal Planning. Should not - Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them. - Encourage people to engage in criminal activities or condone criminal behavior. Can - Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes. - Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems. O4: Guns and Illegal Weapons. Should not - Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them. - Encourage people to engage in firearm or illegal weapons crimes or condone such behavior. Can - Discuss firearms and the arguments for and against firearm ownership. O5: Regulated or Controlled Substances. Should not - Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.
- Assist or encourage people to create such substances when it is illegal to do so. Can - Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are). - Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation. O6: Self-Harm. Should not - Encourage people to harm themselves. - Romanticize or condone self-harm or suicide. - Provide information regarding the methods of suicide or self-harm. - Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves. Can - Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions). Should - Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves.
-Meta Code Llama | Model Cards and Prompt formats Model Cards & Prompt formats Meta Code Llama Model Card You can find details about this model in the model card. Meta Code Llama 7B, 13B, and 34B Completion In this format, the model continues to write code following the code that is provided in the prompt. An implementation of this prompt can be found here. {{ code_prompt }} Instructions The instructions prompt template for Meta Code Llama follows the same structure as the Meta Llama 2 chat model, where the system prompt is optional, and the user and assistant messages alternate, always ending with a user message. Note the beginning of sequence (BOS) token between each user and assistant message. An implementation for Meta Code Llama can be found here. [INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s> [INST] {{ user_message_2 }} [/INST] Infilling Infilling can be done in two different ways: with the prefix-suffix-middle format or the suffix-prefix-middle. An implementation of this format is provided here. Notes: Infilling is only available in the 7B and 13B base models—not in the Python, Instruct, 34B, or 70B models. The BOS character is not used for infilling when encoding the prefix or suffix, but only at the beginning of each prompt. Prefix-suffix-middle
<PRE> {{ code_prefix }} <SUF>{{ code_suffix }} <MID> Suffix-prefix-middle
<PRE> <SUF>{{ code_suffix }} <MID> {{ code_prefix }}
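The sentinel tokens restored above (`<PRE>`, `<SUF>`, `<MID>`) follow the published Code Llama infilling format; if in doubt, defer to the linked implementation. As a sketch, a prefix-suffix-middle prompt can be assembled like this (`psm_infilling_prompt` is an illustrative name):

```python
# Minimal sketch (illustrative): build a prefix-suffix-middle (PSM) infilling
# prompt. The model generates the missing "middle" after <MID> and terminates
# it with an <EOT> token; the tokenizer adds BOS once at the start.
def psm_infilling_prompt(code_prefix, code_suffix):
    return f"<PRE> {code_prefix} <SUF>{code_suffix} <MID>"

prompt = psm_infilling_prompt(
    "def remove_non_ascii(s: str) -> str:\n    \"\"\"",
    "\n    return result",
)
print(prompt)
```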
-Meta Llama 2 | Model Cards and Prompt formats Model Cards & Prompt formats Meta Llama 2 Model Card You can find details about this model in the model card. Special Tokens used with Meta Llama 2 <s> </s> : These are the BOS and EOS tokens from SentencePiece. When multiple messages are present in a multi-turn conversation, they separate them, including the user input and model response. [INST] [/INST] : These tokens enclose user messages in multi-turn conversations. <<SYS>> <</SYS>> : These enclose the system message. Meta Llama 2 The base model supports text completion, so any incomplete user prompt, without special tags, will prompt the model to complete it. The tokenizer provided with the model will include the SentencePiece beginning of sequence (BOS) token (<s>) if requested. Review this code for details. {{ user_prompt }} Meta Llama 2 Chat Code to produce this prompt format can be found here. The system prompt is optional. Single message instance with optional system prompt. [INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message }} [/INST] Multiple user and assistant messages example. [INST] <<SYS>> {{ system_prompt }} <</SYS>> {{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s> [INST] {{ user_message_2 }} [/INST]
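For comparison with the Llama 3 sketch earlier, here is a minimal Python sketch of the Llama 2 Chat assembly. BOS/EOS (`<s>`, `</s>`) are normally added by the tokenizer and are written out literally only to show the multi-turn structure; `format_llama2_chat` is an illustrative name.

```python
# Minimal sketch (illustrative): build a Llama 2 Chat prompt. <s>/</s> are
# shown literally here; in practice the tokenizer adds them per turn.
def format_llama2_chat(system_prompt, turns):
    """turns: list of (user_message, model_answer) pairs; last answer is None."""
    prompt = ""
    for i, (user_message, model_answer) in enumerate(turns):
        prompt += "<s>[INST] "
        if i == 0 and system_prompt:  # optional system prompt, first turn only
            prompt += f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        prompt += f"{user_message} [/INST]"
        if model_answer is not None:
            prompt += f" {model_answer} </s>"
    return prompt

print(format_llama2_chat(
    "You are a helpful assistant.",
    [("What is France's capital?", "Paris."), ("What can I do there?", None)],
))
```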
-Getting the models Getting the models Meta You can get the Meta Llama models directly from Meta or through Hugging Face or Kaggle. However you get the models, you will first need to accept the license agreements for the models you want. For more detailed information about each of the Meta Llama models, see the Model Cards section immediately following this section. To get the models directly from Meta, go to our Meta Llama download form at https://llama.meta.com/llama-downloads Fill in your information, including your email. Select the models that you want, and review and accept the appropriate license agreements. For each model that you request, you will receive an email that contains instructions and a pre-signed URL to download that model. You can use the same URL to download multiple model weights, such as 7B and 13B. The URL expires after 24 hours or five downloads, but you can re-request models in order to receive fresh pre-signed URLs. Note: The model download process uses a script that relies on the following tools: wget and md5sum; so ensure that these are available on your local computer.
-Hugging Face | Getting the models Getting the models Hugging Face To obtain the models from Hugging Face (HF), sign into your account at https://huggingface.co/meta-llama Select the model you want. You will be taken to a page where you can fill in your information and review the appropriate license agreement. After accepting the agreement, your information is reviewed; the review process could take up to a few days. When you are approved, you will receive an email informing you that you have access to the HF repository for the model. Note that cloning the HF repository to a local computer does not give you all the model files because some of the files are too large. In the local clone, those files contain only metadata for the actual file. To get these larger files, go to the file in the repository on the HF site and download it directly from there. For example, to get consolidated.00.pth for the Meta Llama 2 7B model, you download it from: https://huggingface.co/meta-llama/Llama-2-7b/blob/main/consolidated.00.pth
-Kaggle | Getting the models Getting the models Kaggle To obtain the models from Kaggle–including the HF versions of the models–sign into your account at: https://www.kaggle.com/organizations/metaresearch/models Before you can access the models on Kaggle, you need to submit a request for model access, which requires that you accept the model license agreement on the Meta site: https://llama.meta.com/llama-downloads Note that the email address that you provide when you accept the license agreement must be the same as the email that you use for your Kaggle account. Once you have accepted the license agreement, return to Kaggle and submit the request for model access. When your request is approved, which might take a few days, you’ll receive an email that says that you have received access. You’ll then be able to access the models on Kaggle. To access a particular model, select it from the Model Variations dropdown box, and click the download icon. An archive file that contains the model will start downloading.
-Llama Everywhere Llama Everywhere Although Meta Llama models are often hosted by Cloud Service Providers (CSPs), Meta Llama can be used in other contexts as well, such as Linux, the Windows Subsystem for Linux (WSL), macOS, Jupyter notebooks, and even mobile devices. If you are interested in exploring these scenarios, we suggest that you check out the following resources: Llama 3 on Your Local Computer, with Resources for Other Options - How to run Llama on your desktop using Windows, macOS, or Linux, plus pointers to other ways to run Llama, either on premise or in the cloud. Llama Recipes QuickStart - Provides an introduction to Meta Llama using Jupyter notebooks and also demonstrates running Llama locally on macOS. Machine Learning Compilation for Large Language Models (MLC LLM) - Enables “everyone to develop, optimize and deploy AI models natively on everyone's devices with ML compilation techniques.” Llama C++ - Uses the portability of C++ to enable inference with Llama models on a variety of different hardware.
-Running Meta Llama on Linux | Llama Everywhere Running Meta Llama on Linux This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Linux | Build with Meta Llama, where we learn how to run Llama on Linux by getting the weights and running the model locally, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Linux. Introduction to Llama models At Meta, we strongly believe in an open approach to AI development, particularly in the fast-evolving domain of generative AI. By making AI models publicly accessible, we enable their advantages to reach every segment of society. Last year, we open sourced Meta Llama 2, and this year we released the Meta Llama 3 family of models, available in both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications, unlocking the power of these large language models and making them accessible to everyone, so you can experiment, innovate, and scale your ideas responsibly. Setup With a Linux setup that has a GPU with a minimum of 16GB VRAM, you should be able to load the 8B Llama models in fp16 locally. If you have an Nvidia GPU, you can confirm your setup using the NVIDIA System Management Interface tool, which shows you the GPU you have, the VRAM available, and other useful information, by typing:

```
nvidia-smi
```

In our current setup, we are on Ubuntu, specifically Pop!_OS, and have an Nvidia RTX 4090 with a total VRAM of about 24GB. Getting the weights To download the weights, go to the Llama website. Fill in your details in the form and select the models you’d like to download; in our case, the Llama 3 models (Meta Llama 3 and Meta Llama Guard 2). Read and agree to the license agreement, then click Accept and continue. You will see a unique URL on the website. You will also receive the URL in your email; it is valid for 24 hours and allows you to download each model up to 5 times. You can always request a new URL. We are now ready to get the weights and run the model locally on our machine. It is recommended to use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Open your terminal, make a new folder called llama3-demo in your workspace, navigate to it, and clone the Llama repo:

```
mkdir llama3-demo
cd llama3-demo
git clone https://github.com/meta-llama/llama3.git
```

For this demo, we’ll need two prerequisites installed: `wget` and `md5sum`. To confirm that your distribution has these, use:

```
wget --version
md5sum --version
```

which should return the installed versions. If your distribution does not have these, you can install them using:

```
apt-get install wget
apt-get install md5sum
```

To make sure we have all the package dependencies installed, while in the newly cloned repo folder, type:

```
pip install -e .
```

We are now all set to download the model weights for our local setup. Our team has created a helper script to make it easy to download the model weights.
In your terminal, type:

```
./download.sh
```

The script will ask for the URL from your email. Paste in the URL you received from Meta. It will then ask you to enter the list of models to download. For our example, we’ll download the 8B pretrained model and the fine-tuned 8B chat model, so we’ll enter “8B,8B-instruct”. Running the model We are all set to run the example inference script to test if our model has been set up correctly and works. Our team has created an example Python script called example_text_completion.py that you can use to test out the model. The script defines a main function that uses the Llama class from the llama library to generate text completions for given prompts using the pre-trained models. It takes a few arguments:

| Parameter | Description |
| --- | --- |
| `ckpt_dir: str` | Directory containing the checkpoint files of the model. |
| `tokenizer_path: str` | Path to the tokenizer of the model. |
| `temperature: float = 0.6` | Controls the randomness of the generation process. Higher values may lead to more creative but less coherent outputs, while lower values may lead to more conservative but more coherent outputs. |
| `top_p: float = 0.9` | Defines the maximum probability threshold for generating tokens. |
| `max_seq_len: int = 128` | Defines the maximum length of the input sequence or prompt allowed for the model to process. |
| `max_gen_len: int = 64` | Defines the maximum length of the generated text the model is allowed to produce. |
| `max_batch_size: int = 4` | Defines the maximum number of prompts to process in one batch. |

The main function builds an instance of the Llama class using the provided arguments, then defines a list of prompts for which the model will use the generator.text_completion method to generate the completions. To run the script, go back to our terminal, and while in the llama3 repo, type:

```
torchrun --nproc_per_node 1 example_text_completion.py --ckpt_dir Meta-Llama-3-8B/ --tokenizer_path Meta-Llama-3-8B/tokenizer.model --max_seq_len 128 --max_batch_size 4
```

Replace Meta-Llama-3-8B/ with the path to your checkpoint directory and tokenizer.model with the path to your tokenizer model. If you run it from this main directory, the path may not need to change. Set `--nproc_per_node` to the MP value for the model you are using. For 8B models, the value is 1. Adjust the max_seq_len and max_batch_size parameters as needed; we have set them to 128 and 4 respectively. To try out the fine-tuned chat model (8B-instruct), we have a similar example called example_chat_completion.py:

```
torchrun --nproc_per_node 1 example_chat_completion.py --ckpt_dir Meta-Llama-3-8B-Instruct/ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model --max_seq_len 512 --max_batch_size 6
```

Note that in this case, we use the Meta-Llama-3-8B-Instruct/ model and provide the correct tokenizer under the instruct model folder. A detailed step-by-step process to run on this setup, as well as all the helper and example scripts, can be found on our Llama3 GitHub repo, which goes over the process of downloading and quick-start, as well as examples for inference.
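As a rough sketch of the flow described above, the example script builds a generator and calls text_completion on a list of prompts. The snippet below is a condensed illustration based on the parameter table, not a copy of the repo script, so treat the exact API shape as an assumption:

```python
# Condensed sketch of what example_text_completion.py does, per the
# parameter table above. Treat the exact API as an assumption.
from llama import Llama

generator = Llama.build(
    ckpt_dir="Meta-Llama-3-8B/",
    tokenizer_path="Meta-Llama-3-8B/tokenizer.model",
    max_seq_len=128,
    max_batch_size=4,
)
prompts = ["I believe the meaning of life is"]
results = generator.text_completion(
    prompts, max_gen_len=64, temperature=0.6, top_p=0.9
)
for prompt, result in zip(prompts, results):
    # Each result is expected to carry the generated continuation.
    print(prompt, result["generation"])
```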
-Running Meta Llama on Windows | Llama Everywhere Running Meta Llama on Windows This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Windows | Build with Meta Llama, where we learn how to run Llama on Windows using Hugging Face APIs, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Windows. Setup For this demo, we will be using a Windows machine with an RTX 4090 GPU. If you have an Nvidia GPU, you can confirm your setup by opening the Terminal and typing `nvidia-smi` (NVIDIA System Management Interface), which will show you the GPU you have, the VRAM available, and other useful information about your setup. Since we will be using the Hugging Face transformers library, this setup can also be used on other operating systems that the library supports, such as Linux or Mac, using similar steps as the ones shown in the video. Getting the weights To allow easy access to Meta Llama models, we are providing them on Hugging Face, where you can download the models in both transformers and native Llama 3 formats. To download the weights, visit the meta-llama repo containing the model you’d like to use. For example, we will use the Meta-Llama-3-8B-Instruct model for this demo. Read and agree to the license agreement. Fill in your details and accept the license, and click on submit. Once your request is approved, you'll be granted access to all the Llama 3 models. For this tutorial, we will be using Meta Llama models already converted to Hugging Face format. However, if you’d like to download the original native weights, click on the "Files and versions" tab and download the contents of the original folder. If you prefer, you can also download the original weights from the command line using the Hugging Face CLI:

```
pip install huggingface-hub
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
```

Running the model In this example, we will showcase how you can use Meta Llama models already converted to Hugging Face format with Transformers. To use the model with Transformers, we will be using the pipeline class from Hugging Face. We recommend that you use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Make sure to use the latest version of transformers:

```
pip install -U transformers
```

We will also use the accelerate library, which enables our code to be run across any distributed configuration:

```
pip install accelerate
```

We will be using Python for our demo script. To install Python, visit the Python website, where you can choose your OS and download the version of Python you like. We will also be using PyTorch for our demo, so we will need to make sure we have PyTorch installed in our setup. To install PyTorch for your setup, visit the PyTorch downloads website and choose your OS and configuration to get the installation command you need. Paste that command in your terminal and press enter. For our script, open the editor of your choice, and create a Python script.
We’ll first add the imports that we need for our example:

```python
import transformers
import torch
from transformers import AutoTokenizer
```

Let's define the model we’d like to use. In our demo, we will use the 8B instruct model, which is fine-tuned for chat:

```python
model = "meta-llama/Meta-Llama-3-8B-Instruct"
```

We will also instantiate the tokenizer, which can be derived from AutoTokenizer based on the model we’ve chosen, using the from_pretrained method of AutoTokenizer. This will download and cache the pre-trained tokenizer and return an instance of the appropriate tokenizer class.

```python
tokenizer = AutoTokenizer.from_pretrained(model)
```

To use our model for inference:

```python
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
```

Hugging Face pipelines allow us to specify which type of task the pipeline needs to run (text-generation in this case), the model that the pipeline should use to make predictions (specified by model), the precision to use with this model (torch.float16), the device on which the pipeline should run (device_map), and various other options. We’ll also set the device_map argument to auto, which means the pipeline will automatically use a GPU if one is available. Next, let's provide some text prompts as inputs to our pipeline for it to use when it runs to generate responses. Let’s define this as the variable sequences:

```python
sequences = pipeline(
    'I have tomatoes, basil and cheese at home. What can I cook for dinner?\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    truncation=True,
    max_length=400,
)
```

The pipeline sets do_sample to True, which allows us to specify the decoding strategy we’d like to use to select the next token from the probability distribution over the entire vocabulary. In our example, we are using top_k sampling. By changing max_length, you can specify how long you’d like the generated response to be. Setting the num_return_sequences parameter to greater than one will let you generate more than one output. Finally, we add the following to provide input, and information on how to run the pipeline:

```python
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Save your script and head back to the terminal. We will save it as llama3-hf-demo.py. Before we run the script, let’s make sure we can access and interact with Hugging Face directly from the terminal. To do that, make sure you have the Hugging Face CLI installed:

```
pip install -U "huggingface_hub[cli]"
```

followed by

```
huggingface-cli login
```

Here, it will ask us for our access token, which we can get from our HF account under Settings. Copy it and provide it in the command line. We are now all set to run our script:

```
python llama3-hf-demo.py
```

To check out the full example and run it on your own local machine, see the detailed sample notebook that you can refer to in the llama-recipes GitHub repo. There you will find an example of how to run Llama 3 models using already converted Hugging Face weights, as well as an example that goes over how you can convert the original weights into Hugging Face format and run using those. We’ve also created various other demos and examples to provide you with guidance and as references to help you get started with Llama models and to make it easier for you to integrate them into your own use cases. To try these examples, check out our llama-recipes GitHub repo. There you’ll find complete walkthroughs for how to get started with Llama models.
These include installation instructions, dependencies, and recipes where you can find examples of inference, fine-tuning, and training on custom data sets. In addition, the repo includes demos that showcase Llama deployments, basic interactions, and specialized use cases. The full demo script from this section is assembled below for convenience.
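Here is the complete llama3-hf-demo.py assembled from the snippets in the Windows walkthrough above; the content is unchanged, only gathered into one runnable file:

```python
# llama3-hf-demo.py — the snippets above assembled into one script.
import transformers
import torch
from transformers import AutoTokenizer

model = "meta-llama/Meta-Llama-3-8B-Instruct"

# Download and cache the pre-trained tokenizer for the chosen model.
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; device_map="auto" uses a GPU if available.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Generate a response with top-k sampling, capped at 400 tokens.
sequences = pipeline(
    "I have tomatoes, basil and cheese at home. What can I cook for dinner?\n",
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    truncation=True,
    max_length=400,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```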
-Running Meta Llama on Mac | Llama Everywhere Running Meta Llama on Mac This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Mac | Build with Meta Llama, where we learn how to run Llama on macOS using Ollama, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Mac. Setup For this demo, we are using a MacBook Pro running Sonoma 14.4.1 with 64GB memory. Since we will be using Ollama, this setup can also be used on other supported operating systems, such as Linux or Windows, using similar steps as the ones shown here. Ollama lets you set up and run large language models like Llama models locally. Downloading Ollama The first step is to install Ollama. To do that, visit their website, where you can choose your platform and click on “Download” to download Ollama. For our demo, we will choose macOS and select “Download for macOS”. Next, we will make sure that we can test run Meta Llama 3 models on Ollama. Please note that Ollama provides Meta Llama models in the 4-bit quantized format. To test run the model, let’s open our terminal and run:

```
ollama pull llama3
```

to download the 4-bit quantized Meta Llama 3 8B chat model, with a size of about 4.7 GB. If you’d like to download the Llama 3 70B chat model, also in 4-bit, you can instead type:

```
ollama pull llama3:70b
```

which in quantized format would have a size of about 39GB. Running the model Running using ollama run To run our model, in your terminal, type:

```
ollama run llama3
```

We are all set to ask questions and chat with our Meta Llama 3 model. Let’s ask some questions: “Who wrote the book godfather?” We can see that it gives the right answer, along with more information about the book as well as the movie that was based on the book. What if we just wanted the name of the author, without the extra information? Let’s adapt our prompt accordingly, specifying the kind of response we expect: "Who wrote the book godfather? Answer with only the name." We can see that it generates the answer in the format we requested. You can also try running the 70B model with `ollama run llama3:70b`, but the inference speed will likely be slower. Running with curl You can even run and test the Llama 3 8B model directly by using the curl command and specifying your prompt right in the command:

```
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "who wrote the book godfather?" }
  ],
  "stream": false
}'
```

Here, we are sending a POST request to an API running on localhost. The API endpoint is for "chat", which will interact with our AI model hosted on the server. We are providing a JSON payload that contains a string specifying the name of the AI model to use for processing the input prompt (llama3), an array with a string indicating the role of the message sender (user) and a string with the user's input prompt ("who wrote the book godfather?"), and a boolean value stream indicating whether the response should be streamed or not.
In our case, it is set to false, meaning the entire response will be returned at once. As we can see, the model generated the response with the answer to our question. Running as a Python script This example can also be run using a Python script. To install Python, visit the Python website, where you can choose your OS and download the version of Python you like. To run it using a Python script, open the editor of your choice and create a new file. First, let’s add the imports we will need for this demo, and define a parameter called url, which will have the same value as the URL we saw in the curl demo:

```python
import requests
import json

url = "http://localhost:11434/api/chat"
```

We will now add a new function called llama3, which will take in prompt as an argument:

```python
def llama3(prompt):
    data = {
        "model": "llama3",
        "messages": [
            {"role": "user", "content": prompt}
        ],
        "stream": False,
    }
    headers = {"Content-Type": "application/json"}
    response = requests.post(url, headers=headers, json=data)
    return response.json()["message"]["content"]
```

This function constructs a JSON payload containing the specified prompt and the model name, which is "llama3". Then, it sends a POST request to the API endpoint with the JSON payload as the message body, using the requests library. Once the response is received, the function extracts the content of the response message from the JSON object returned by the API, and returns this extracted content. Finally, we will provide the prompt and print the generated response:

```python
response = llama3("who wrote the book godfather")
print(response)
```

To run the script, type `python` followed by your script's filename and press enter. As we can see, it generated the response based on the prompt we provided in our script. To learn more about the complete Ollama APIs, check out their documentation. To check out the full example and run it on your own machine, our team has worked on a detailed sample notebook that you can refer to, found in the llama-recipes GitHub repo, where you will find an example of how to run Llama 3 models on a Mac as well as other platforms. You will find the examples we discussed here, as well as other ways to use Llama 3 locally with Ollama via LangChain. We’ve also created various other demos and examples to provide you with guidance and as references to help you get started with Llama models and to make it easier for you to integrate Llama into your own use cases. These demos and examples are also located in our llama-recipes GitHub repo, where you’ll find complete walkthroughs for how to get started with Llama models, including installation instructions, dependencies, and recipes. You’ll also find several examples for inference, fine-tuning, and training on custom data sets, as well as demos that showcase Llama deployments, basic interactions, and specialized use cases.
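Because the chat endpoint accepts a full messages array, the same function can be extended into a simple multi-turn chat loop. The sketch below is our own illustration, not from the tutorial; it assumes, consistent with the payload shown above, that prior turns can be replayed in the messages array to give the model conversational context:

```python
# Sketch: a minimal multi-turn chat loop against the Ollama chat API.
# Assumes (per the payload above) that history can be replayed by
# sending all prior messages in the "messages" array.
import requests

url = "http://localhost:11434/api/chat"
history = []

def chat(prompt):
    history.append({"role": "user", "content": prompt})
    data = {"model": "llama3", "messages": history, "stream": False}
    reply = requests.post(url, json=data).json()["message"]["content"]
    # Keep the assistant's reply so the next turn has full context.
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Who wrote the book godfather? Answer with only the name."))
print(chat("What else did he write?"))
```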
-Meta Llama in the Cloud | Llama Everywhere Meta Llama in the Cloud This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Many other ways to run Llama and resources | Build with Meta Llama, where we learn about some of the various other ways in which you can host or run Meta Llama models, and provides you with all the resources that can help you get started. If you're interested in learning by watching or listening, check out our video on Many other ways to run Llama and resources. Apart from running the models locally, one of the most common ways to run Meta Llama models is to run them in the cloud. We saw an example of this using a service called Hugging Face in our running Llama on Windows video. Let's take a look at some of the other services we can use to host and run Llama models, such as AWS, Azure, Google, Kaggle, and Vertex AI, among others. Amazon Web Services Amazon Web Services (AWS) provides multiple ways to host your Llama models, such as SageMaker JumpStart and Bedrock. Bedrock is a fully managed service that lets you quickly and easily build generative AI-powered experiences. To use Meta Llama with Bedrock, check out their website that goes over how to integrate and use Meta Llama models in your applications. You can also use AWS through SageMaker JumpStart, which enables you to build, train, and deploy ML models from a broad selection of publicly available foundation models, and deploy them on SageMaker instances for model training and inference. Learn more about how to use Meta Llama on SageMaker on their website. Microsoft Azure Another way to run Meta Llama models is on Microsoft Azure. You can access Meta Llama models on Azure in two ways: Models as a Service (MaaS) provides access to Meta Llama hosted APIs through Azure AI Studio; Model as a Platform (MaaP) provides access to the Meta Llama family of models with out-of-the-box support for fine-tuning and evaluation through Azure Machine Learning Studio. Please refer to our How-to Guide for more details. Google Cloud Platform You can also use GCP, or Google Cloud Platform, to run Meta Llama models. GCP is a suite of cloud computing services that provides computing resources as well as virtual machines. Building on top of GCP services, Model Garden on Vertex AI offers infrastructure to jumpstart your ML project with a single place to discover, customize, and deploy a wide range of models. We have collaborated with Vertex AI from Google Cloud to fully integrate Meta Llama, offering pre-trained, instruction-tuned, and Meta CodeLlama models in various sizes. Check out how to fine-tune and deploy Meta Llama models on Vertex AI by visiting the website. Please note that you may need to request proper GPU computing quota as a prerequisite. IBM watsonx You can also use IBM's watsonx to run Meta Llama models. IBM watsonx is an advanced platform designed for AI builders, integrating generative AI capabilities, foundation models, and traditional machine learning. It provides a comprehensive suite of tools that span the AI lifecycle, enabling users to tune models with their enterprise data. The platform supports multi-model flexibility, client protection, AI governance, and hybrid, multi-cloud deployments.
It offers features for extracting insights, discovering trends, generating synthetic tabular data, running Jupyter notebooks, and creating new content and code. Watsonx.ai equips data scientists with the necessary tools, pipelines, and runtimes for building and deploying ML models, thereby automating the entire AI model lifecycle. We've worked with IBM to make Llama and Code Llama models available on their platform. To test the platform and evaluate Llama on watsonx, creating an account is free and allows testing the available models through the Prompt Lab. For detailed instructions, refer to the getting started guide and the quick start tutorials. Other hosting providers You can also run Llama models using hosting providers such as OpenAI, Together AI, Anyscale, Replicate, Groq, etc. Our team has worked on step-by-step examples to showcase how to run Llama on externally hosted providers. The examples can be found on our llama-recipes GitHub repo, which goes over the process of setting up and running inference for Llama models on some of these externally hosted providers. Running Llama on premise Many enterprise customers prefer to deploy Llama models on premise, on their own servers. One way to deploy and run Llama models in this manner is by using TorchServe. TorchServe is an easy-to-use tool for deploying PyTorch models at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. To learn more about how TorchServe works, with setup, quickstart, and examples, check out the GitHub repo. Another way to deploy Llama models on premise is by using vLLM (Virtual Large Language Model) or Text Generation Inference (TGI), two leading open-source tools to deploy and serve LLMs. A detailed step-by-step tutorial can be found on our llama-recipes GitHub repo that showcases how to use Llama models with vLLM and Hugging Face TGI, and how to create vLLM and TGI hosted Llama instances with LangChain, a language model integration framework for the creation of applications using large language models. You can find various demos and examples that can provide you with guidance, and that you can use as references to get started with Llama models, on our llama-recipes GitHub repo, where you'll find several examples for inference and fine-tuning, as well as running on various API providers. Llama-recipes GitHub repo Learn more about Llama 3 and how to get started by checking out our Getting to know Llama notebook that you can find in our llama-recipes GitHub repo. Here you will find a guided tour of Llama 3, including a comparison to Llama 2, descriptions of different Llama 3 models, how and where to access them, Generative AI and Chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), fine-tuning, and more. You will find all this implemented with starter code that you can take and adapt to use in your own Meta Llama 3 projects. To learn more about our Llama 3 models, check out our announcement blog, where you can find details about how the models work, data on performance and benchmarks, information about trust and safety, and various other resources to get you started. Get the model source from our Llama 3 GitHub repo, where you can learn how the models work along with a minimalist example of how to load Llama 3 models and run inference.
Here, you will also find steps to download and set up the models, and examples for running the text completion and chat models. Dive deeper and learn more about the model in the model card, which goes over the model architecture, intended use, hardware and software requirements, training data, results, and licenses. Check out our new Meta AI, built with Llama 3 technology, which is now one of the world's leading AI assistants that can boost your intelligence and lighten your load, helping you learn, get things done, create content, and connect to make the most out of every moment. You can use Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web to get things done, learn, create, and connect with the things that matter to you. To learn more about the latest updates and releases of Llama models, check out our website, where you can learn more about the latest models as well as find resources to learn more about how these models work and how you can use them in your own applications. Check out our Getting Started guide that provides information and resources to help you set up Llama, including how to access the models, prompt formats, hosting, how-to and integration guides, as well as resources that you can reference to get started with your projects. Take a look at some of our latest blogs that discuss new announcements, the latest on the Llama ecosystem, and our responsible approach to Meta AI and Meta Llama 3. Check out the community resources on our website to help you get started with Meta Llama models, and learn about performance & latency, fine-tuning, and more. Dive deeper into prompt engineering, learning best practices for prompting Meta Llama models and interacting with Meta Llama Chat, Code Llama, and Llama Guard models in our short course on Prompt Engineering with Llama 2 on DeepLearning.ai, recently updated to showcase both Llama 2 and Llama 3 models. Check out our Community Stories that go over interesting use cases of Llama models in various fields such as Business, Healthcare, Gaming, Pharmaceutical, and more! Learn more about the Llama ecosystem, building product experiences with Llama, and examples that showcase how industry pioneers have adopted Llama to build and grow innovative products for users across their platforms at Connect 2023. Also check out our Responsible Use Guide that provides developers with recommended best practices and considerations for safely building products powered by LLMs. We hope you found the Build with Meta Llama videos and tutorials helpful in providing you with insights and resources that you may need to get started with using Llama models. We at Meta strongly believe in an open approach to AI development, democratizing access through an open platform and providing you with AI models, tools, and resources to give you the power to shape the next wave of innovation. We want to kickstart that next wave of innovation across the stack, from applications to developer tools to evals to inference optimizations and more. We can't wait to see what you build and look forward to your feedback.
-Fine-tuning | How-to guides Fine-tuning If you are looking to learn by writing code, it's highly recommended to look into the Getting to Know Llama 3 notebook. It's a great place to start with the most commonly performed operations on Meta Llama. Fine-tuning Full parameter fine-tuning is a method that fine-tunes all the parameters of all the layers of the pre-trained model. In general, it can achieve the best performance but it is also the most resource-intensive and time consuming: it requires the most GPU resources and takes the longest. PEFT, or Parameter Efficient Fine-Tuning, allows one to fine-tune models with minimal resources and costs. There are two important PEFT methods: LoRA (Low Rank Adaptation) and QLoRA (Quantized LoRA), where pre-trained models are loaded to the GPU as quantized 8-bit and 4-bit weights, respectively. It's likely that you can fine-tune the Llama 2 13B model using LoRA or QLoRA fine-tuning with a single consumer GPU with 24GB of memory, and using QLoRA requires even less GPU memory and fine-tuning time than LoRA. Typically, one should first try LoRA, or, if resources are extremely limited, QLoRA, and after the fine-tuning is done, evaluate the performance. Only consider full fine-tuning when the performance is not desirable. Experiment tracking Experiment tracking is crucial when evaluating various fine-tuning methods like LoRA and QLoRA. It ensures reproducibility, maintains a structured version history, allows for easy collaboration, and aids in identifying optimal training configurations. Especially with numerous iterations, hyperparameters, and model versions at play, tools like Weights & Biases (W&B) become indispensable. With its seamless integration into multiple frameworks, W&B provides a comprehensive dashboard to visualize metrics, compare runs, and manage model checkpoints. It's often as simple as adding a single argument to your training script to realize these benefits; we'll show an example in the Hugging Face PEFT LoRA section. Recipes PEFT LoRA The llama-recipes repo has details on the different fine-tuning (FT) alternatives supported by the provided sample scripts. In particular, it highlights the use of PEFT as the preferred FT method, as it reduces the hardware requirements and prevents catastrophic forgetting. For specific cases, full parameter FT can still be valid, and different strategies can be used to still prevent modifying the model too much. Additionally, FT can be done on a single GPU or multi-GPU with FSDP. In order to run the recipes, follow the steps below: Create a conda environment with pytorch and additional dependencies. Install the recipes as described here. Download the desired model from HF, either using git-lfs or using the llama download script. With everything configured, run the following command:

```
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name ../llama/models_hf/7B --output_dir ../llama/models_ft/7B-peft --batch_size_training 2 --gradient_accumulation_steps 2
```

torchtune (link) torchtune is a PyTorch-native library that can be used to fine-tune the Meta Llama family of models, including Meta Llama 3. It supports the end-to-end fine-tuning lifecycle, including: Downloading model checkpoints and datasets; Training recipes for fine-tuning Llama 3 using full fine-tuning, LoRA, and QLoRA; Support for single-GPU fine-tuning capable of running on consumer-grade GPUs with 24GB of VRAM; Scaling fine-tuning to multiple GPUs using PyTorch FSDP; Logging metrics and model checkpoints during training using Weights & Biases; Evaluation of fine-tuned models using EleutherAI's LM Evaluation Harness; Post-training quantization of fine-tuned models via TorchAO; Interoperability with inference engines including ExecuTorch. To install torchtune, simply run the pip install command:

```
pip install torchtune
```

Follow the instructions on the Hugging Face meta-llama repository to ensure you have access to the Llama 3 model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.

```
tune download meta-llama/Meta-Llama-3-8B \
    --output-dir <checkpoint_dir> \
    --hf-token <ACCESS_TOKEN>
```

Set your environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens The basic command for a single-device LoRA fine-tune of Llama 3 is:

```
tune run lora_finetune_single_device --config llama3/8B_lora_single_device
```

torchtune contains built-in recipes for: Full fine-tuning on single device and on multiple devices with FSDP; LoRA fine-tuning on single device and on multiple devices with FSDP; QLoRA fine-tuning on single device, with a QLoRA-specific configuration. You can find more information on fine-tuning Meta Llama models by reading the torchtune guide. Hugging Face PEFT LoRA (link) Using Low Rank Adaptation (LoRA), Meta Llama is loaded to the GPU memory as quantized 8-bit weights. Using Hugging Face fine-tuning with PEFT LoRA (link) is super easy; an example fine-tuning run on Meta Llama 2 7b using the OpenAssistant data set can be done in three simple steps:

```
pip install trl
git clone https://github.com/huggingface/trl
python trl/examples/scripts/sft.py \
    --model_name meta-llama/Llama-2-7b-hf \
    --dataset_name timdettmers/openassistant-guanaco \
    --load_in_4bit \
    --use_peft \
    --batch_size 4 \
    --gradient_accumulation_steps 2 \
    --log_with wandb
```

This takes about 16 hours on a single GPU and uses less than 10GB of GPU memory; changing the batch size to 8/16/32 will use over 11/16/25 GB of GPU memory. After the fine-tuning completes, you'll see in a new directory named "output" at least adapter_config.json and adapter_model.bin. Run the script below to infer with the base model and the new model, generated by merging the base model with the fine-tuned one:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    pipeline,
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-chat-hf"
new_model = "output"
device_map = {"": 0}

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map=device_map,
)
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

prompt = "Who wrote the book Innovator's Dilemma?"

pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

QLoRA Fine Tuning Note: This has been tested on Meta Llama 2 models only. QLoRA (Q for quantized) is more memory efficient than LoRA. In QLoRA, the pretrained model is loaded to the GPU as quantized 4-bit weights. Fine-tuning using QLoRA is also very easy to run; an example of fine-tuning Llama 2-7b with the OpenAssistant dataset can be done in four quick steps:

```
git clone https://github.com/artidoro/qlora
cd qlora
pip install -U -r requirements.txt
./scripts/finetune_llama2_guanaco_7b.sh
```

It takes about 6.5 hours to run on a single GPU, using 11GB of GPU memory. After the fine-tuning completes, the output_dir specified in ./scripts/finetune_llama2_guanaco_7b.sh will have checkpoint-xxx subfolders holding the fine-tuned adapter model files. To run inference, use the script below:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from peft import LoraConfig, PeftModel

model_id = "meta-llama/Llama-2-7b-hf"
new_model = "output/llama-2-guanaco-7b/checkpoint-1875/adapter_model"  # change if needed

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4'
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    low_cpu_mem_usage=True,
    load_in_4bit=True,
    quantization_config=quantization_config,
    torch_dtype=torch.float16,
    device_map='auto'
)
model = PeftModel.from_pretrained(model, new_model)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Who wrote the book innovator's dilemma?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

Axolotl is another open source library you can use to streamline the fine-tuning of Llama 2. A good example of using Axolotl to fine-tune Meta Llama with four notebooks covering the whole fine-tuning process (generate the dataset, fine-tune the model using LoRA, evaluate and benchmark) is here.
-Quantization | How-to guides Quantization Quantization is a technique used in machine learning to reduce the computational and memory requirements of models, making them more efficient for deployment on servers and edge devices. It involves representing model weights and activations, typically 32-bit floating-point numbers, with lower-precision data such as 16-bit float, brain float 16-bit, 8-bit int, or even 4/3/2/1-bit int. The benefits of quantization include smaller model sizes, faster fine-tuning, and faster inference, which is particularly beneficial in resource-constrained environments. However, the tradeoff is a reduction in model quality due to the loss of precision. Supported quantization modes in PyTorch Post-Training Dynamic Quantization: Weights are pre-quantized ahead of time and activations are converted to int8 during inference, just before computation. This results in faster computation due to efficient int8 matrix multiplication and maintains accuracy on the activation layer. Post-Training Static Quantization: This technique improves performance by converting networks to use both integer arithmetic and int8 memory accesses. It involves feeding batches of data through the network and computing the resulting distributions of the different activations. This information is used to determine how the different activations should be quantized at inference time. Quantization Aware Training (QAT): In QAT, all weights and activations are "fake quantized" during both the forward and backward passes of training. This means float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. This method usually yields higher accuracy than the other two methods, as all weight adjustments during training are made while "aware" of the fact that the model will ultimately be quantized. More details about these methods and how they can be applied to different types of models can be found in the official PyTorch documentation. Additionally, the community has already conducted studies on the effectiveness of common quantization methods on Meta Llama 3, and the results and code to evaluate can be found in this GitHub repository. We will focus next on the quantization tools available for Meta Llama models. As this is a constantly evolving space, the libraries and methods detailed here are the most widely used at the moment and are subject to change as the space evolves. PyTorch quantization with TorchAO The TorchAO library offers several methods for quantization, each with different schemes for how the activations and weights are quantized. We distinguish between two main types of quantization: weight-only quantization and dynamic quantization. For weight-only quantization, we support 8-bit and 4-bit quantization. The 4-bit quantization also has GPTQ support for improved accuracy, which requires calibration but has the same final performance. For dynamic quantization, we support 8-bit activation quantization and 8-bit weight quantization. We also support this type of quantization with smoothquant for improved accuracy, which requires calibration and has slightly worse performance. Additionally, the library offers a simple API to test different methods and automatic detection of the best quantization for a given model, known as autoquantization. This API chooses the fastest form of quantization out of 8-bit dynamic quantization and 8-bit weight-only quantization.
It first identifies the shapes of the activations that the different linear layers see, then benchmarks these shapes across different types of quantized and non-quantized layers in order to pick the fastest one. It also composes with torch.compile() to generate fast kernels. For additional information on torch.compile, please see this general tutorial. Note: This library is in beta phase and in active development; API changes are expected. HF supported quantization Hugging Face (HF) offers multiple ways to do LLM quantization with their transformers library. For additional guidance and examples on how to use each of these beyond the brief summary presented here, please refer to their quantization guide and the transformers quantization configuration documentation. The llama-recipes code uses bitsandbytes 8-bit quantization to load the models, both for inference and fine-tuning. (See below for more information about using the bitsandbytes library with Llama.) Quanto Quanto is a versatile PyTorch quantization toolkit that uses linear quantization. It provides features such as weights quantization, activation quantization, and compatibility with various devices and modalities. It supports quantization-aware training and is easy to integrate with custom kernels for specific devices. More details can be found in the announcement blog, GitHub repository, and HF guide. AQLM Additive Quantization of Language Models (AQLM) is a compression method for LLMs. It quantizes multiple weights together, taking advantage of interdependencies between them. AQLM represents groups comprising 8 to 16 weights each as a sum of multiple vector codes. This library supports fine-tuning its quantized models with Parameter-Efficient Fine-Tuning and LoRA by integrating into HF's PEFT library as well. More details can be found in the GitHub repository. AWQ Activation-aware Weight Quantization (AWQ) preserves a small percentage of weights that are important for LLM performance, reducing quantization loss. This allows models to run in 4-bit precision without experiencing performance degradation. Transformers supports loading models quantized with the llm-awq and autoawq libraries. More details on how to load them with the Transformers library can be found in the HF guide. AutoGPTQ The AutoGPTQ library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently. These weights are quantized to int4, but they're restored to fp16 on the fly during inference, reducing memory usage by 4x. More details can be found in the GitHub repository. BitsAndBytes BitsAndBytes is an easy option for quantizing a model to 8-bit and 4-bit. The library supports any model in any modality, as long as it supports loading with Hugging Face Accelerate and contains torch.nn.Linear layers. It also provides features for offloading weights between the CPU and GPU to support fitting very large models into memory, adjusting the outlier threshold for 8-bit quantization, skipping module conversion for certain models, and fine-tuning with 8-bit and 4-bit weights. For 4-bit models, it allows changing the compute data type, using the Normal Float 4 (NF4) data type for weights initialized from a normal distribution, and using nested quantization to save additional memory at no additional performance cost. More details can be found in the HF guide.
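As a concrete illustration of the bitsandbytes options just described, here is a minimal 4-bit NF4 loading sketch via transformers. The model name is reused from earlier sections for illustration, and exact defaults may vary by library version:

```python
# Minimal sketch: load a Llama model in 4-bit NF4 with bitsandbytes
# via transformers, exercising the options described above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # Normal Float 4 data type
    bnb_4bit_use_double_quant=True,        # nested quantization
    bnb_4bit_compute_dtype=torch.bfloat16  # compute data type
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # example model from earlier sections
    quantization_config=quantization_config,
    device_map="auto",
)
```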
-Prompting | How-to guides Prompting Link to Notebook showing examples of the techniques discussed in this section. Prompt engineering is a technique used in natural language processing (NLP) to improve the performance of a language model by providing it with more context and information about the task at hand. It involves creating prompts, which are short pieces of text that provide additional information or guidance to the model, such as the topic or genre of the text it will generate. By using prompts, the model can better understand what kind of output is expected and produce more accurate and relevant results. In Llama 2, the size of the context, in terms of number of tokens, has doubled from 2048 to 4096. Crafting Effective Prompts Crafting effective prompts is an important part of prompt engineering. Here are some tips for creating prompts that will help improve the performance of your language model: Be clear and concise: Your prompt should be easy to understand and provide enough information for the model to generate relevant output. Avoid using jargon or technical terms that may confuse the model. Use specific examples: Providing specific examples in your prompt can help the model better understand what kind of output is expected. For example, if you want the model to generate a story about a particular topic, include a few sentences about the setting, characters, and plot. Vary the prompts: Using different prompts can help the model learn more about the task at hand and produce more diverse and creative output. Try using different styles, tones, and formats to see how the model responds. Test and refine: Once you have created a set of prompts, test them out on the model to see how it performs. If the results are not as expected, try refining the prompts by adding more detail or adjusting the tone and style. Use feedback: Finally, use feedback from users or other sources to continually improve your prompts. This can help you identify areas where the model needs more guidance and make adjustments accordingly. Explicit Instructions Detailed, explicit instructions produce better results than open-ended prompts. You can think of giving explicit instructions as using rules and restrictions on how Llama 2 responds to your prompt. Stylization: "Explain this to me like a topic on a children's educational network show teaching elementary students." "I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:" "Give your answer like an old timey private investigator hunting down a case step by step." Formatting: "Use bullet points." "Return as a JSON object." "Use less technical terms and help me apply it in my work in communications." Restrictions: "Only use academic papers." "Never give sources older than 2020." "If you don't know the answer, say that you don't know." Here's an example of giving explicit instructions to get more specific results by limiting the responses to recently created sources:

```
Explain the latest advances in large language models to me.
# More likely to cite sources from 2017

Explain the latest advances in large language models to me. Always cite your sources. Never cite sources older than 2020.
# Gives more specific advances and only cites sources from 2020
```

Prompting using Zero- and Few-Shot Learning A shot is an example or demonstration of what type of prompt and response you expect from a large language model.
This term originates from training computer vision models on photographs, where one shot was one example or instance that the model used to classify an image. Zero-Shot Prompting Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called "zero-shot prompting":

```
Text: This was the best movie I've ever seen!
The sentiment of the text is:

Text: The director was trying too hard.
The sentiment of the text is:
```

Few-Shot Prompting Adding specific examples of your desired output generally results in more accurate, consistent output. This technique is called "few-shot prompting". In this example, the generated response follows our desired format, offering a more nuanced sentiment classifier that gives a positive, neutral, and negative response confidence percentage:

```
You are a sentiment classifier. For each message, give the percentage of positive/neutral/negative. Here are some samples:
Text: I liked it Sentiment: 70% positive 30% neutral 0% negative
Text: It could be better Sentiment: 0% positive 50% neutral 50% negative
Text: It's fine Sentiment: 25% positive 50% neutral 25% negative
Text: I thought it was okay
Text: I loved it!
Text: Terrible service 0/10
```

Role Based Prompts Creating prompts based on the role or perspective of the person or entity being addressed can be useful for generating more relevant and engaging responses from language models. Pros: Improves relevance: Role-based prompting helps the language model understand the role or perspective of the person or entity being addressed, which can lead to more relevant and engaging responses. Increases accuracy: Providing additional context about the role or perspective of the person or entity being addressed can help the language model avoid making mistakes or misunderstandings. Cons: Requires effort: It requires more effort to gather and provide the necessary information about the role or perspective of the person or entity being addressed. Example: "You are a virtual tour guide currently walking tourists through the Eiffel Tower on a night tour. Describe the Eiffel Tower to your audience, covering its history, the number of people visiting each year, the amount of time it takes to do a full tour, and why so many people visit this place each year." Chain of Thought Technique This involves providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response. This technique can be useful for generating more thoughtful and well-reasoned responses from language models. Pros: Improves coherence: Helps the language model think through a problem or question in a logical and structured way, which can lead to more coherent and relevant responses. Increases depth: Providing a series of prompts or questions can help the language model explore a topic more deeply and thoroughly, potentially leading to more insightful and informative responses. Cons: Requires effort: The chain of thought technique requires more effort to create and provide the necessary prompts or questions. Example: "You are a virtual tour guide from 1901. You have tourists visiting the Eiffel Tower. Describe the Eiffel Tower to your audience. Begin with 1. Why it was built 2. Then by how long it took them to build 3. Where were the materials sourced to build 4. Number of people it took to build 5.
End it with the number of people visiting the Eiffel Tower annually in the 1900's, the amount of time it takes to complete a full tour, and why so many people visit this place each year. Make your tour funny by including 1 or 2 funny jokes at the end of the tour. Self-Consistency LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency improves accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute): John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Report the answer surrounded by three backticks, for example: ```123``` Running the above several times and taking the most commonly returned value for the answer would make use of the self-consistency approach (a minimal sketch of this voting loop follows at the end of this section). Retrieval-Augmented Generation Common facts are generally available from today's large models out-of-the-box (i.e. using just the model weights). More specific data is unlikely to be available, though. E.g.: What is the capital of California? # The capital of California is Sacramento... What was the temperature in Menlo Park on December 12th, 2023? # I'm just an AI, I don't have access to real-time or historical weather data... Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt that has been retrieved from an external database. It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning, which might also negatively impact the foundational model's capabilities. This could be as simple as a lookup table or as sophisticated as a vector database containing all of your company's knowledge: Given the following information about temperatures in Menlo Park: 2023-12-11 : 52 degrees Fahrenheit 2023-12-12 : 51 degrees Fahrenheit 2023-12-13 : 55 degrees Fahrenheit What was the temperature in Menlo Park on 2023-12-12? # Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit. What was the temperature in Menlo Park on 2023-07-18 ? # Sorry, I don't have information about the temperature in Menlo Park on 2023-07-18. The information provided only includes temperatures for December 11th, 12th, and 13th of 2023. Program-Aided Language Models LLMs, by nature, aren't great at performing calculations. While LLMs are bad at arithmetic, they're great for code generation. Program-Aided Language leverages this fact by instructing the LLM to write code to solve calculation tasks. Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5)) Here is the Python code to calculate the given expression: ((-5 + 93 * 4 - 0) * (4**4 + -7 + 0 * 5)) # Calculate the expression inside the parentheses a = -5 + 93 * 4 - 0 b = 4**4 + -7 + 0 * 5 # Multiply a and b result = a * b # Print the result print(result) Using the code directly provides the correct result. Limiting Extraneous Tokens A common challenge is generating a response without extraneous tokens (e.g. "Sure! Here's more information on..."). By combining a role, rules and restrictions, explicit instructions, and an example, the model can be prompted to generate the desired response. You are a robot that only outputs JSON. You reply in JSON format with the field 'zip_code'. Example question: What is the zip code of the Empire State Building? Example answer: {'zip_code': 10118} Now here is my question: What is the zip code of Menlo Park? # "{'zip_code': 94025}" 
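To make the self-consistency approach described earlier concrete, here is a minimal sketch of sampling several generations and taking the majority answer. The `generate` callable is a stand-in for whatever inference call you use (a local Llama endpoint, for example), and the regex matches the triple-backtick answer convention requested in the prompt above; both are assumptions for illustration.

```python
import re
from collections import Counter

def self_consistent_answer(prompt, generate, n_samples=5):
    """Sample the model several times and return the most frequent answer.

    `generate` is a placeholder for your own inference call; it should
    accept a prompt string and return the generated text.
    """
    answers = []
    for _ in range(n_samples):
        completion = generate(prompt)  # each call samples a new generation
        match = re.search(r"```(.*?)```", completion, re.DOTALL)
        if match:
            answers.append(match.group(1).strip())
    if not answers:
        return None
    # Majority vote: the most common extracted answer wins.
    return Counter(answers).most_common(1)[0][0]
```

The extra compute buys robustness: an occasional bad sample is outvoted by the more common correct answer.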
Reduce Hallucinations Meta’s Responsible Use Guide is a great resource to understand how best to prompt and address input/output risks of the language model. Refer to pages (14-17). Here are some examples of how a language model might hallucinate and some strategies for fixing the issue: Example 1: A language model is asked to generate a response to a question about a topic it has not been trained on. The language model may hallucinate information or make up facts that are not accurate or supported by evidence. Fix: To fix this issue, you can provide the language model with more context or information about the topic to help it understand what is being asked and generate a more accurate response. You could also ask the language model to provide sources or evidence for any claims it makes to ensure that its responses are based on factual information. Example 2: A language model is asked to generate a response to a question that requires a specific perspective or point of view. The language model may hallucinate information or make up facts that are not consistent with the desired perspective or point of view. Fix: To fix this issue, you can provide the language model with additional information about the desired perspective or point of view, such as the goals, values, or beliefs of the person or entity being addressed. This can help the language model understand the context and generate a response that is more consistent with the desired perspective or point of view. Example 3: A language model is asked to generate a response to a question that requires a specific tone or style. The language model may hallucinate information or make up facts that are not consistent with the desired tone or style. Fix: To fix this issue, you can provide the language model with additional information about the desired tone or style, such as the audience or purpose of the communication. This can help the language model understand the context and generate a response that is more consistent with the desired tone or style. Overall, the key to avoiding hallucination in language models is to provide them with clear and accurate information and context, and to carefully monitor their responses to ensure that they are consistent with your expectations and requirements.
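Before moving on, here is a minimal sketch of the simplest possible retrieval-augmented generation setup described above: a lookup table stuffed into the prompt. The temperature data mirrors the example earlier in this guide; the prompt-building helper is illustrative, not a real API.

```python
# A toy "retrieval" source. In practice this could be a vector database.
TEMPERATURES = {
    "2023-12-11": "52 degrees Fahrenheit",
    "2023-12-12": "51 degrees Fahrenheit",
    "2023-12-13": "55 degrees Fahrenheit",
}

def build_rag_prompt(question: str) -> str:
    """Prepend retrieved facts to the user question so the model answers
    from the provided context instead of hallucinating."""
    context = "\n".join(f"{d} : {t}" for d, t in TEMPERATURES.items())
    return (
        "Given the following information about temperatures in Menlo Park:\n"
        f"{context}\n\n{question}\n"
        "If the information above does not contain the answer, say you don't know."
    )

print(build_rag_prompt("What was the temperature in Menlo Park on 2023-12-12?"))
```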
-Validation | How-to guides Validation As the saying goes, if you can't measure it, you can't improve it. In this section, we are going to cover different ways to measure and ultimately validate Llama so it's possible to determine the improvements provided by different fine tuning techniques. Quantitative techniques The focus of these techniques is to gather objective metrics that can be compared easily during and after each fine tuning run and to provide quick feedback on whether the model is performing as expected. The main metrics collected are loss and perplexity. K-fold cross-validation This method consists of dividing the dataset into k subsets or folds, and then fine tuning the model k times. On each run, a different fold is used as a validation dataset, using the rest for training. The performance results of each run are averaged out for the final report. This provides a more accurate metric of the performance of the model across the complete dataset, as all entries serve both for validation and training. While it produces the most accurate prediction on how a model is going to generalize after fine tuning on a given dataset, it is computationally expensive and better suited for small datasets. Holdout When using a holdout, the dataset is split into two or three subsets, training and validation with test as optional. The test and validation sets can represent 10% - 30% of the dataset each. As the name implies, the first two subsets are used for training and validating the model during fine tuning, while the third is used only after fine tuning is complete to evaluate how well the model generalizes on data it has not seen in either phase. The advantage of having three partitions is that it provides a way to evaluate the model after fine-tuning for an unbiased view into the model performance, but it requires a slightly bigger dataset to allow for a proper split. This is currently implemented in the Llama recipes fine tuning script with two subsets of the dataset, train and validation. The data is collected in a json file that can be plotted to easily interpret the results and evaluate how the model is performing. Standard Evaluation tools There are multiple projects that provide standard evaluation. They provide predefined tasks with commonly used metrics to evaluate the performance of LLMs, like HellaSwag and TruthfulQA. These tools can be used to test if the model has degraded after fine tuning. Additionally, a custom task can be created using the dataset intended to fine-tune the model, effectively automating the manual verification of the model performance before and after fine tuning. These types of projects provide a quantitative way of looking at the model's performance in simulated real world examples. Some of these projects include the LM Evaluation Harness (used to create the HF leaderboard), HELM, BIG-bench and OpenCompass. As mentioned before, the torchtune library provides integration with the LM Evaluation Harness to test fine tuned models as well. Interpreting Loss and Perplexity The loss value used comes from the transformers library's LlamaForCausalLM, which initializes a different loss function depending on the objective required from the model. The objective of this section is to give a brief overview on how to understand the results from loss and perplexity as an initial evaluation of the model performance during fine tuning. We also calculate the perplexity as an exponentiation of the loss value. Additional information on loss functions can be found in these resources: 1 , 2 , 3 , 4 , 5 , 6 . 
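As a small illustration of the relationship just described, the sketch below computes perplexity as the exponentiation of the cross-entropy loss. The loss values are made up for the example; in practice they would come from your fine tuning logs.

```python
import math

# Hypothetical cross-entropy loss values logged during fine tuning (in nats).
train_losses = [2.10, 1.65, 1.32, 1.10]

# Perplexity is the exponentiation of the loss; lower is better.
for step, loss in enumerate(train_losses):
    print(f"step {step}: loss={loss:.2f} perplexity={math.exp(loss):.2f}")
```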
In our recipes, we use a simple holdout during fine tuning. Using the logged loss values, both for the train and validation datasets, the curves for both are plotted to analyze the results of the process. Given the setup in the recipe, the expected behavior is a log graph that shows a diminishing train and validation loss value as it progresses. If the validation curve starts going up while the train curve continues decreasing, the model is overfitting and it's not generalizing well. Some alternatives to test when this happens are early stopping, verifying that the validation dataset is statistically representative of the train dataset, data augmentation, using parameter efficient fine tuning or using k-fold cross-validation to better tune the hyperparameters. Qualitative techniques Manual testing Manually evaluating a fine tuned model will vary according to the fine tuning objective and available resources. Here we provide general guidelines on how to accomplish it. With a dataset prepared for fine tuning, a part of it can be separated into a manual test subset, which can be further increased with general knowledge questions that might be relevant to the specific use case. In addition to these general questions, we recommend executing standard evaluations as well, and comparing the results with the baseline for the fine tuned model. To rate the results, clear evaluation criteria should be defined that are relevant to the dataset being used. Example criteria are accuracy, coherence and safety. Create a rubric for each criterion and define what would be required for an output to receive a specific score. With these guidelines in place, distribute the test questions among a diverse set of reviewers to have multiple data points for each question. With multiple data points for each question and different criteria, a final score can be calculated for each query, allowing for weighting the scores based on the preferred focus for the final model.
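To put the curve-reading guidance above into practice, here is a minimal sketch of plotting logged train and validation losses to spot overfitting. The JSON layout (`train_loss` / `val_loss` lists in a `metrics.json` file) is an assumption for illustration, not the exact schema the recipes emit; adapt the keys to whatever your run actually logs.

```python
import json
import matplotlib.pyplot as plt

# Assumed layout: {"train_loss": [...], "val_loss": [...]}.
with open("metrics.json") as f:
    metrics = json.load(f)

plt.plot(metrics["train_loss"], label="train")
plt.plot(metrics["val_loss"], label="validation")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
# A validation curve turning upward while train keeps falling means overfitting.
plt.savefig("loss_curves.png")
```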
-Meta Code Llama | Integration guides Integration guides Meta Code Llama Meta Code Llama is an open-source family of LLMs based on Llama 2 that provides SOTA performance on code tasks. It consists of: Foundation models (Meta Code Llama) Python specializations (Meta Code Llama - Python), and Instruction-following models (Meta Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. See the recipes here for examples on how to make use of Meta Code Llama. The following diagram shows how each of the Meta Code Llama models is trained: (Fig: The Meta Code Llama specialization pipeline. The different stages of fine-tuning annotated with the number of tokens seen during training) One of the best ways to try out and integrate with Meta Code Llama is using the Hugging Face ecosystem by following the blog here, which has: Demo links for all versions of Meta Code Llama Working inference code for code completion Working inference code for code infilling between code prefix and suffix as inputs Working inference code to do 4-bit loading of the 34B model so it can fit on consumer GPUs Guide on how to write prompts for the instruction models to have multi-turn conversations about coding Guide on how to use Text Generation Inference for model deployment in production Guide on how to integrate code autocomplete as an extension with VSCode Guide on how to evaluate Meta Code Llama models If the model does not perform well on your specific task, for example if none of the Meta Code Llama models (7B/13B/34B/70B) generate the correct answer for a text to SQL task, fine-tuning should be considered. This is a complete guide and notebook ( here ) on how to fine-tune Meta Code Llama using the 7B model hosted on Hugging Face. It uses the LoRA fine-tuning method and can run on a single GPU. As shown in the Meta Code Llama References ( here ), fine-tuning improves the performance of Meta Code Llama on SQL code generation. It can be critical that LLMs are able to interoperate with structured data and SQL, the primary way to access structured data; we are developing demo apps in LangChain and RAG with Llama 2 to show this. Compatible extensions In most cases, the simplest method to integrate any model size is through ollama, occasionally combined with litellm. Ollama is a program that allows quantized versions of popular LLMs to run locally. It leverages the GPU and can even run Code Llama 34B on an M1 Mac. Litellm is a simple proxy that can serve an OpenAI style API, so it's easy to replace OpenAI in existing applications, in our case extensions. Continue This extension can be used with ollama, allowing for easy local-only execution. Additionally, it provides a simple interface to 1/ Chat with the model directly running inside VS Code and 2/ Select specific files and sections to edit or explain. This extension is an effective way to evaluate Llama because it provides simple and useful features. It also allows developers to build trust, by creating diffs for each proposed change and showing exactly what is being changed before saving the file. Handling the context for the LLM is easy and relies heavily on keyboard shortcuts. It's important to note that all the interactions with the extension are recorded in jsonl format. The objective is to provide data for future fine tuning of the models based on the feedback recorded during real world usage as well. Steps to install with ollama Install ollama and pull a model (e.g. 
ollama pull codellama:13b-instruct) Install the extension from the Visual Studio Code marketplace Open the extension and click on the + sign to add models Select Ollama as a provider In the next screen, select the model and size pulled with ollama Select the model in the conversation and start using the extension Steps to install with TGI For better performance or usage on non-compatible hardware, TGI can be used in a server to run the model. For example, ollama on Intel Macs is too slow to be useful, even with the 7B models. On the contrary, M1 Macs can run the 34B Meta Code Llama models quickly. For this, you should have TGI running on a server with appropriate hardware, as detailed in this guide. Once Continue.dev is installed, follow these steps: Open the configs with /config Use the HuggingFaceTGI class and pass your instance URL in the server_url parameter: Assign a name to it and save the config file. llm-vscode This extension from Hugging Face provides an open alternative to the closed sourced GitHub Copilot, allowing for the same functionality, context based autocomplete suggestions, to work with open source models. It works out of the box with a HF Token and their Inference API but can be configured to use any TGI compatible API. For usage with a self-hosted TGI server, follow these steps: Install llm-vscode from the marketplace Open the extension configs Select the correct template for the model published in your TGI instance in the Config Template field. For testing, use the one named codellama/CodeLlama-13b-hf Pass in the URL to your TGI instance in the Model ID or Endpoint field. To avoid rate limiting messages, log in to HF by providing a read only token. This was necessary even for a self-hosted instance. It currently does not support local models unless TGI is running locally. It would be great to add ollama support to this extension, as it would accelerate the inference with the smaller models by avoiding the network.
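Since both extensions above can point at a TGI server, here is a minimal sketch of hitting a TGI instance's REST API directly with `requests`, which is handy for sanity-checking the server before wiring up an editor. The URL is a placeholder for your own instance, and the `/generate` payload shape follows TGI's documented schema at the time of writing; treat both as assumptions to verify against your deployment.

```python
import requests

TGI_URL = "http://your-tgi-host:8080"  # placeholder: your own TGI instance

def complete(prompt: str, max_new_tokens: int = 64) -> str:
    """Send a prompt to a TGI server and return the generated text."""
    resp = requests.post(
        f"{TGI_URL}/generate",
        json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

print(complete("def fibonacci(n):"))
```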
-LangChain | Integration guides Integration guides LangChain LangChain is an open source framework for building LLM powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to wire up LLM calls from scratch. The main building blocks/APIs of LangChain are: The Models or LLMs API can be used to easily connect to all popular LLMs such as Hugging Face or Replicate where all types of Llama 2 models are hosted. The Prompts API implements the useful prompt template abstraction to help you easily reuse good, often long and detailed, prompts when building sophisticated LLM apps. There are also many built-in prompts for common operations such as summarization or connection to SQL databases for quick app development. Prompts can also work closely with parsers to easily extract useful information from the LLM output. The Memory API can be used to save conversation history and feed it along with new questions to the LLM so multi-turn natural conversation chat can be implemented. The Chains API includes the most basic LLMChain that combines an LLM with a prompt to generate the output, as well as more advanced chains that let you build sophisticated LLM apps in a systematic way. For example, the output of the first LLM chain can be the input/prompt of another chain, or a chain can have multiple inputs and/or multiple outputs, either pre-defined or dynamically decided by the LLM output of a prompt. The Indexes API allows documents outside of the LLM to be saved to a vector store, after first being converted to embeddings, which are numerical meaning representations of the documents in vector form. Later when a user enters a question about the documents, the relevant data stored in the documents' vector store will be retrieved and sent, along with the query, to the LLM to generate an answer related to the documents. The following flow shows the process. The Agents API uses the LLM as the reasoning engine and connects it with other sources of data, third-party or your own tools, or APIs such as web search or Wikipedia APIs. Depending on the user's input, the agent can decide which tool to call to handle the input. LangChain can be used as a powerful retrieval augmented generation (RAG) tool to integrate the internal data or more recent public data with the LLM to QA or chat about the data. LangChain already supports loading many types of unstructured and structured data. To learn more about LangChain, enroll for free in the two LangChain short courses. Be aware that the code in the courses uses the OpenAI ChatGPT LLM, but we’ve published a series of demo apps using LangChain with Llama 2. There is also a Getting to Know Llama notebook, presented at Meta Connect 2023.
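As a small illustration of the building blocks above, here is a sketch combining a prompt template with an LLM in a chain. It assumes a Replicate-hosted Llama 2 endpoint (with `REPLICATE_API_TOKEN` set) and the classic LangChain API (`PromptTemplate` + `LLMChain`); the model id is illustrative, and newer LangChain versions may prefer different imports.

```python
from langchain.chains import LLMChain
from langchain.llms import Replicate
from langchain.prompts import PromptTemplate

# Model id is illustrative; Replicate may require an owner/name:version string.
llm = Replicate(model="meta/llama-2-7b-chat")

# The Prompts API: a reusable template with a single input variable.
prompt = PromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# The Chains API: the most basic LLMChain combines the LLM with the prompt.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(text="LangChain provides abstractions for building LLM apps."))
```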
-LlamaIndex | Integration guides Integration guides LlamaIndex LlamaIndex is another popular open source framework for building LLM applications. Like LangChain, LlamaIndex can also be used to build RAG applications by easily integrating data that is not built into the LLM with the LLM. There are three key tools in LlamaIndex: Connecting Data: connect data of any type - structured, unstructured or semi-structured - to the LLM Indexing Data: Index and store the data Querying LLM: Combine the user query and retrieved query-related data to query the LLM and return a data-augmented answer LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools. The integration of the two may provide the most performant and effective solution to building real world RAG powered Llama apps. For an example usage of how to integrate LlamaIndex with Llama 2, see here. We also published a complete demo app showing how to use LlamaIndex to chat with Llama 2 about live data via the you.com API. It’s worth noting that LlamaIndex has implemented many RAG powered LLM evaluation tools to easily measure the quality of retrieval and response, including: Question Generation: Call the LLM to auto generate questions to create an evaluation dataset. Faithfulness Evaluator: Evaluate if the generated answer is faithful to the retrieved context or if there’s hallucination. Correctness Evaluator: Evaluate if the generated answer matches the reference answer. Relevancy Evaluator: Evaluate if the answer and the retrieved context are relevant and consistent for the given query.
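To ground the three tools above, here is a minimal sketch of the classic LlamaIndex flow (connect, index, query) using the pre-0.10 top-level `llama_index` imports. The `./docs` path is a placeholder, and the default embedding/LLM settings may require an API key or a locally configured model; adjust for your installed version.

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# 1. Connecting Data: load documents from a local folder (path is illustrative).
documents = SimpleDirectoryReader("./docs").load_data()

# 2. Indexing Data: embed and store the documents in an in-memory vector index.
index = VectorStoreIndex.from_documents(documents)

# 3. Querying LLM: retrieve relevant chunks and ask the model with that context.
query_engine = index.as_query_engine()
print(query_engine.query("What does this documentation say about Llama 2?"))
```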
-# Llama Recipes: Examples to get started using the Llama models from Meta The 'llama-recipes' repository is a companion to the [Meta Llama 3](https://github.com/meta-llama/llama3) models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem. The examples here showcase how to run Meta Llama locally, in the cloud, and on-prem. [Meta Llama 2](https://github.com/meta-llama/llama) is also supported in this repository. We highly recommend using [Meta Llama 3](https://github.com/meta-llama/llama3) due to its enhanced capabilities. > [!IMPORTANT] > Meta Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer). > | Token | Description | > |---|---| > `<\|begin_of_text\|>` | This is equivalent to the BOS token. | > `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn conversations it's usually unused; instead, every message is terminated with `<\|eot_id\|>`. | > `<\|eot_id\|>` | This token signifies the end of the message in a turn i.e. the end of a single message by a system, user or assistant role as shown below. | > `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. | > > A multiturn conversation with Meta Llama 3 follows this prompt template: > ``` > <|begin_of_text|><|start_header_id|>system<|end_header_id|> > > {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|> > > {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> > > {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|> > > {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|> > ``` > Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change. > > More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3). > > [!NOTE] > The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted. 
> > Make sure you update your local clone by running `git pull origin main` ## Table of Contents - [Llama Recipes: Examples to get started using the Meta Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta) - [Table of Contents](#table-of-contents) - [Getting Started](#getting-started) - [Prerequisites](#prerequisites) - [PyTorch Nightlies](#pytorch-nightlies) - [Installing](#installing) - [Install with pip](#install-with-pip) - [Install with optional dependencies](#install-with-optional-dependencies) - [Install from source](#install-from-source) - [Getting the Llama models](#getting-the-llama-models) - [Model conversion to Hugging Face](#model-conversion-to-hugging-face) - [Repository Organization](#repository-organization) - [`recipes/`](#recipes) - [`src/`](#src) - [Contributing](#contributing) - [License](#license) ## Getting Started These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system. ### Prerequisites #### PyTorch Nightlies If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform. ### Installing Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source. > [!NOTE] > Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`. > H100 GPUs work better with CUDA >12.0 #### Install with pip ``` pip install llama-recipes ``` #### Install with optional dependencies Llama-recipes offers the installation of optional packages. There are three optional dependency groups. To run the unit tests we can install the required dependencies with: ``` pip install llama-recipes[tests] ``` For the vLLM example we need additional requirements that can be installed with: ``` pip install llama-recipes[vllm] ``` To use the sensitive topics safety checker install with: ``` pip install llama-recipes[auditnlg] ``` Optional dependencies can also be combined with [option1,option2]. #### Install from source To install from source e.g. for development use these commands. We're using hatchling as our build backend which requires an up-to-date pip as well as the setuptools package. ``` git clone git@github.com:meta-llama/llama-recipes.git cd llama-recipes pip install -U pip setuptools pip install -e . ``` For development and contributing to llama-recipes please install all optional dependencies: ``` git clone git@github.com:meta-llama/llama-recipes.git cd llama-recipes pip install -U pip setuptools pip install -e .[tests,auditnlg,vllm] ``` ### Getting the Meta Llama models You can find Meta Llama models on Hugging Face hub [here](https://huggingface.co/meta-llama), **where models with `hf` in the name are already converted to Hugging Face checkpoints so no further conversion is needed**. The conversion step below is only for original model weights from Meta that are hosted on Hugging Face model hub as well. #### Model conversion to Hugging Face The recipes and notebooks in this folder are using the Meta Llama model definition provided by Hugging Face's transformers library. 
Given that the original checkpoint resides under models/7B you can install all requirements and convert the checkpoint with: ```bash ## Install Hugging Face Transformers from source pip freeze | grep transformers ## verify it is version 4.31.0 or higher git clone git@github.com:huggingface/transformers.git cd transformers pip install protobuf python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` ## Repository Organization Most of the code dealing with Llama usage is organized across 2 main folders: `recipes/` and `src/`. ### `recipes/` Contains examples organized in folders by topic: | Subfolder | Description | |---|---| [quickstart](./recipes/quickstart) | The "Hello World" of using Llama, start here if you are new to using Llama. [finetuning](./recipes/finetuning)|Scripts to finetune Llama on single-GPU and multi-GPU setups [inference](./recipes/inference)|Scripts to deploy Llama for inference locally and using model servers [use_cases](./recipes/use_cases)|Scripts showing common applications of Meta Llama3 [responsible_ai](./recipes/responsible_ai)|Scripts to use PurpleLlama for safeguarding model outputs [llama_api_providers](./recipes/llama_api_providers)|Scripts to run inference on Llama via hosted endpoints [benchmarks](./recipes/benchmarks)|Scripts to benchmark Llama models inference on various backends [code_llama](./recipes/code_llama)|Scripts to run inference with the Code Llama models [evaluation](./recipes/evaluation)|Scripts to evaluate fine-tuned Llama models using `lm-evaluation-harness` from `EleutherAI` ### `src/` Contains modules which support the example recipes: | Subfolder | Description | |---|---| | [configs](src/llama_recipes/configs/) | Contains the configuration files for PEFT methods, FSDP, Datasets, Weights & Biases experiment tracking. | | [datasets](src/llama_recipes/datasets/) | Contains individual scripts for each dataset to download and process. | | [inference](src/llama_recipes/inference/) | Includes modules for inference for the fine-tuned models. | | [model_checkpointing](src/llama_recipes/model_checkpointing/) | Contains FSDP checkpoint handlers. | | [policies](src/llama_recipes/policies/) | Contains FSDP scripts to provide different policies, such as mixed precision, transformer wrapping policy and activation checkpointing along with any precision optimizer (used for running FSDP with pure bf16 mode). | | [utils](src/llama_recipes/utils/) | Utility files for: - `train_utils.py` provides training/eval loop and more train utils. - `dataset_utils.py` to get preprocessed datasets. - `config_utils.py` to override the configs received from CLI. - `fsdp_utils.py` provides FSDP wrapping policy for PEFT methods. - `memory_utils.py` context manager to track different memory stats in train loop. | ## Contributing Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us. ## License See the License file for Meta Llama 3 [here](https://llama.meta.com/llama3/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama3/use-policy/) See the License file for Meta Llama 2 [here](https://llama.meta.com/llama2/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama2/use-policy/)
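To make the Meta Llama 3 prompt template shown at the top of this README concrete, here is a small sketch that assembles a multi-turn conversation into that format. It is a hand-rolled helper for illustration; in real code, prefer your tokenizer's built-in chat template (for example, transformers' `apply_chat_template`).

```python
def format_llama3_prompt(system_prompt: str, turns: list[tuple[str, str]]) -> str:
    """Assemble a Llama 3 chat prompt from (role, message) turns.

    Hand-rolled for illustration; prefer the tokenizer's chat template
    in production code.
    """
    prompt = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    prompt += f"{system_prompt}<|eot_id|>"
    for role, message in turns:  # role is "user" or "assistant"
        prompt += (
            f"<|start_header_id|>{role}<|end_header_id|>\n\n{message}<|eot_id|>"
        )
    # Open an assistant header to cue the model to answer next.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(format_llama3_prompt("You are a helpful assistant.", [("user", "Hi!")]))
```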
-# **Model Details** Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. ||Training Data|Params|Context Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10^-4| |Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10^-4| |Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10^-4| **Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "Llama 2: Open Foundation and Fine-Tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/. **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md). # **Intended Use** **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 2 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 2 models for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy. # **Hardware and Software** **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). 
Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program. ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO2eq)| |---|---|---|---| |Llama 2 7B|184320|400|31.22| |Llama 2 13B|368640|400|62.44| |Llama 2 70B|1720320|400|291.42| |Total|3311616||539.00| **CO2 emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. # **Training Data** **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023. # **Evaluation Results** In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library. |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at the top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. # **Ethical Considerations and Limitations** Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
-# Llama 2 We are unlocking the power of large language models. Llama 2 is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and fine-tuned Llama language models — ranging from 7B to 70B parameters. This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Updates post-launch See [UPDATES.md](UPDATES.md). Also for a running list of frequently asked questions, see [here](https://ai.meta.com/llama/faq/). ## Download In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Access to Hugging Face We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama). You can request access to the models by acknowledging the license and filling the form in the model card of a repo. After doing so, you should get access to all the Llama models of a version (Code Llama, Llama 2, or Llama Guard) within 1 hour. ## Quick Start You can follow the steps below to quickly get up and running with Llama 2 models. These steps will let you run quick inference locally. For more examples, see the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes). 1. In a conda env with PyTorch / CUDA available clone and download this repository. 2. In the top-level directory run: ```bash pip install -e . ``` 3. Visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and register to download the model/s. 4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. - Do not use the “Copy Link” option but rather make sure to manually copy the link from the email. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below: ```bash torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir llama-2-7b-chat/ \ --tokenizer_path tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` **Note** - Replace `llama-2-7b-chat/` with the path to your checkpoint directory and `tokenizer.model` with the path to your tokenizer model. - The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. 
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values: |  Model | MP | |--------|----| | 7B     | 1  | | 13B    | 2  | | 70B    | 8  | All models support sequence length up to 4096 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value): ``` torchrun --nproc_per_node 1 example_text_completion.py \ --ckpt_dir llama-2-7b/ \ --tokenizer_path tokenizer.model \ --max_seq_len 128 --max_batch_size 4 ``` ### Fine-tuned Chat Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-2-7b-chat: ``` torchrun --nproc_per_node 1 example_chat_completion.py \ --ckpt_dir llama-2-7b-chat/ \ --tokenizer_path tokenizer.model \ --max_seq_len 512 --max_batch_size 6 ``` Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](Responsible-Use-Guide.pdf). More details can be found in our research paper as well. ## Issues Please report any software “bug”, or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## References 1. [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) 2. [Llama 2 technical overview](https://ai.meta.com/resources/models-and-libraries/llama) 3. 
[Open Innovation AI Research Community](https://ai.meta.com/llama/open-innovation-ai-research-community/) For common questions, the FAQ can be found [here](https://ai.meta.com/llama/faq/) which will be kept up to date over time as new questions arise. ## Original Llama The repo for the original llama release is in the [`llama_v1`](https://github.com/facebookresearch/llama/tree/llama_v1) branch.
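As a companion to the chat-format note in the inference section above, here is a sketch of the `[INST]`/`<<SYS>>` layout for a single-turn Llama 2 chat prompt. It is illustrative only; the reference formatting lives in the `chat_completion` function in llama/generation.py, which also handles multi-turn dialogs and the BOS/EOS tokens.

```python
def format_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Build a single-turn Llama 2 chat prompt with [INST] and <<SYS>> tags.

    Illustrative only; use the reference chat_completion implementation
    for multi-turn conversations and token handling.
    """
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(format_llama2_prompt("You are a helpful assistant.", "Tell me a joke."))
```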
-## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/) and [Llama 3 Community License](https://llama.meta.com/llama3/license/). Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the [Llama 3 Community License](https://llama.meta.com/llama3/license/) and the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/). ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. 
Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_details.md). ### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama2 7B | Llama2 13B | Llama 3 70B | Llama2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. 
We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. Safety For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. Refusals In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### Cyber Security We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### Child Safety Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. 
We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. 
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ``` ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Amit Sangani; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Ash JJhaveri; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hamid Shojanazeri; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth 
Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Puxin Xu; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
-🤗 Models on Hugging Face | Blog | Website | Get Started --- # Meta Llama 3 We are unlocking the power of large language models. Our latest version of Llama is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and instruction-tuned Llama 3 language models — including sizes of 8B to 70B parameters. This repository is a minimal example of loading Llama 3 models and running inference. For more detailed examples, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/). ## Download To download the model weights and tokenizer, please visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then, run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Ensure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Remember that the links expire after 24 hours and after a certain number of downloads. You can always re-request a link if you start seeing errors such as `403: Forbidden`. ### Access to Hugging Face We also provide downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps: - Visit one of the repos, for example [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). - Read and accept the license. Once your request is approved, you'll be granted access to all the Llama 3 models. Note that requests can take up to one hour to be processed. - To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the `original` folder. You can also download them from the command line if you `pip install huggingface-hub`:

```bash
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
```

- To use with transformers, the following [pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines) snippet will download and cache the weights:

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
```

## Quick Start You can follow the steps below to get up and running with Llama 3 models quickly. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). 1. Clone and download this repository in a conda env with PyTorch / CUDA. 2. In the top-level directory run:

```bash
pip install -e .
```

3. Visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and register to download the model/s. 4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script. 5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script. - Make sure to grant execution permissions to the download.sh script - During this process, you will be prompted to enter the URL from the email. 
- Do not use the “Copy Link” option; copy the link from the email manually. 6. Once the model/s you want have been downloaded, you can run the model locally using the command below:

```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```

**Note** - Replace `Meta-Llama-3-8B-Instruct/` with the path to your checkpoint directory and `Meta-Llama-3-8B-Instruct/tokenizer.model` with the path to your tokenizer model. - The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using. - Adjust the `max_seq_len` and `max_batch_size` parameters as needed. - This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file. ## Inference Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 8B    | 1  |
| 70B   | 8  |

All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware. ### Pretrained Models These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value):

```
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir Meta-Llama-3-8B/ \
    --tokenizer_path Meta-Llama-3-8B/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4
```

### Instruction-tuned Models The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, specific formatting defined in [`ChatFormat`](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L202) needs to be followed: the prompt begins with a `<|begin_of_text|>` special token, after which one or more messages follow. Each message starts with the `<|start_header_id|>` tag, the role `system`, `user` or `assistant`, and the `<|end_header_id|>` tag. After a double newline `\n\n`, the message's contents follow. The end of each message is marked by the `<|eot_id|>` token. You can also deploy additional classifiers to filter out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-3-8b-chat:

```
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```

Llama 3 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the [Responsible Use Guide](https://ai.meta.com/static-resource/responsible-use-guide/). 
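To make the `ChatFormat` description above concrete, here is a minimal, illustrative sketch of assembling a chat prompt by hand; the helper name is ours, and in practice the `ChatFormat` class in `llama/tokenizer.py` performs this assembly for you.

```python
# Illustrative only: manual assembly of a Llama 3 chat prompt following the
# format described above. The ChatFormat class in llama/tokenizer.py is the
# supported implementation; format_llama3_prompt is a hypothetical helper.
def format_llama3_prompt(messages):
    """messages: a list of {"role": ..., "content": ...} dicts."""
    prompt = "<|begin_of_text|>"
    for message in messages:
        prompt += f"<|start_header_id|>{message['role']}<|end_header_id|>\n\n"
        prompt += f"{message['content']}<|eot_id|>"
    # Close with an empty assistant header to cue the model's reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(format_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]))
```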
## Issues Please report any software “bug” or other problems with the models through one of the following means: - Reporting issues with the model: [https://github.com/meta-llama/llama3/issues](https://github.com/meta-llama/llama3/issues) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md). ## License Our model and weights are licensed for researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md). ## Questions For common questions, the FAQ can be found [here](https://llama.meta.com/faq); it will be updated over time as new questions arise.
-# Code Llama ## **Model Details** **Model Developers** Meta AI **Variations** Code Llama comes in four model sizes and three variants: 1) Code Llama: our base models are designed for general code synthesis and understanding 2) Code Llama - Python: designed specifically for Python 3) Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B and 70B parameters. **Input** Models input text only. **Output** Models output text only. **Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B, 13B and 70B additionally support infilling text generation. All models but Code Llama - Python 70B and Code Llama - Instruct 70B were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time. **Model Dates** Code Llama and its variants were trained between January 2023 and January 2024. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)". **Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)). ## **Intended Use** **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistance and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## **Hardware and Software** **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed by Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program. **Training Data** All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2, with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). Code Llama - Instruct uses additional instruction fine-tuning data. **Evaluation Results** See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. 
## **Ethical Considerations and Limitations** Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).
-# Introducing Code Llama Code Llama is a family of large language models for code based on [Llama 2](https://github.com/facebookresearch/llama) providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama was developed by fine-tuning Llama 2 using a higher sampling of code. As with Llama 2, we applied considerable safety mitigations to the fine-tuned versions of the model. For detailed information on model training, architecture and parameters, evaluations, responsible AI and safety refer to our [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/). Output generated by code generation features of the Llama Materials, including Code Llama, may be subject to third party licenses, including, without limitation, open source licenses. We are unlocking the power of large language models and our latest version of Code Llama is now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 70B parameters. This repository is intended as a minimal example to load [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) models and run inference. [comment]: <> (Code Llama models are compatible with the scripts in llama-recipes) ## Download In order to download the model weights and tokenizers, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Make sure that you copy the URL text itself, **do not use the 'Copy link address' option** when you right click the URL. If the copied URL text starts with: https://download.llamameta.net, you copied it correctly. If the copied URL text starts with: https://l.facebook.com, you copied it the wrong way. Pre-requisites: make sure you have `wget` and `md5sum` installed. Then to run the script: `bash download.sh`. Keep in mind that the links expire after 24 hours and after a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. ### Model sizes

| Model | Size     |
|-------|----------|
| 7B    | ~12.55GB |
| 13B   | 24GB     |
| 34B   | 63GB     |
| 70B   | 131GB    |

[comment]: <> (Access on Hugging Face, We are also providing downloads on Hugging Face. You must first request a download from the Meta website using the same email address as your Hugging Face account. After doing so, you can request access to any of the models on Hugging Face and within 1-2 days your account will be granted access to all versions.) 
## Setup In a conda environment with PyTorch / CUDA available, clone the repo and run in the top-level directory:

```
pip install -e .
```

## Inference Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 7B    | 1  |
| 13B   | 2  |
| 34B   | 4  |
| 70B   | 8  |

All models, except the 70B Python and instruct versions, support sequence lengths up to 100,000 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware and use-case. ### Pretrained Code Models The Code Llama and Code Llama - Python models are not fine-tuned to follow instructions. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_completion.py` for some examples. To illustrate, see the command below to run it with the `CodeLlama-7b` model (`nproc_per_node` needs to be set to the `MP` value):

```
torchrun --nproc_per_node 1 example_completion.py \
    --ckpt_dir CodeLlama-7b/ \
    --tokenizer_path CodeLlama-7b/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4
```

Pretrained code models are: the Code Llama models `CodeLlama-7b`, `CodeLlama-13b`, `CodeLlama-34b`, `CodeLlama-70b` and the Code Llama - Python models `CodeLlama-7b-Python`, `CodeLlama-13b-Python`, `CodeLlama-34b-Python`, `CodeLlama-70b-Python`. ### Code Infilling Code Llama and Code Llama - Instruct 7B and 13B models are capable of filling in code given the surrounding context. See `example_infilling.py` for some examples. The `CodeLlama-7b` model can be run for infilling with the command below (`nproc_per_node` needs to be set to the `MP` value):

```
torchrun --nproc_per_node 1 example_infilling.py \
    --ckpt_dir CodeLlama-7b/ \
    --tokenizer_path CodeLlama-7b/tokenizer.model \
    --max_seq_len 192 --max_batch_size 4
```

Pretrained infilling models are: the Code Llama models `CodeLlama-7b` and `CodeLlama-13b` and the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`. ### Fine-tuned Instruction Models Code Llama - Instruct models are fine-tuned to follow instructions. To get the expected features and performance for the 7B, 13B and 34B variants, a specific formatting defined in [`chat_completion()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L319-L361) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and linebreaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). `CodeLlama-70b-Instruct` requires a separate turn-based prompt format defined in [`dialog_prompt_tokens()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L506-L548). You can use `chat_completion()` directly to generate answers with all instruct models; it will automatically perform the required formatting (the sketch below illustrates what that formatting looks like). You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/inference/safety_utils.py) of how to add a safety checker to the inputs and outputs of your inference code. 
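For intuition, the sketch below spells out the single-turn `INST`/`<<SYS>>` formatting described above for the 7B, 13B and 34B variants. It is illustrative only: `chat_completion()` performs this formatting (plus `BOS`/`EOS` token placement) for you, and the helper name is ours.

```python
# Illustrative only: the 7B/13B/34B Code Llama - Instruct prompt format.
# chat_completion() is the supported way to do this; BOS/EOS are special
# tokens added by the tokenizer, not literal text, so they are omitted here.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_instruct_prompt(user_message: str, system_message: str = "") -> str:
    if system_message:
        user_message = B_SYS + system_message + E_SYS + user_message
    # strip() avoids the double spaces the formatting rules warn about.
    return f"{B_INST} {user_message.strip()} {E_INST}"

print(format_instruct_prompt(
    "Write a function that computes the nth Fibonacci number.",
    system_message="Provide answers in Python.",
))
```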
Examples using `CodeLlama-7b-Instruct`:

```
torchrun --nproc_per_node 1 example_instructions.py \
    --ckpt_dir CodeLlama-7b-Instruct/ \
    --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 4
```

Fine-tuned instruction-following models are: the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`, `CodeLlama-34b-Instruct`, `CodeLlama-70b-Instruct`. Code Llama is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the [Responsible Use Guide](https://github.com/facebookresearch/llama/blob/main/Responsible-Use-Guide.pdf). More details can be found in our research papers as well. ## Issues Please report any software “bug” or other problems with the models through one of the following means: - Reporting issues with the model: [github.com/facebookresearch/codellama](http://github.com/facebookresearch/codellama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info) ## Model Card See [MODEL_CARD.md](MODEL_CARD.md) for the model card of Code Llama. ## License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE) file, as well as our accompanying [Acceptable Use Policy](https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md). ## References 1. [Code Llama Research Paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) 2. [Code Llama Blog Post](https://ai.meta.com/blog/code-llama-large-language-model-coding/)
-🤗 Models on Hugging Face | Blog | Website | CyberSec Eval Paper | Llama Guard Paper --- # Purple Llama Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release includes tools and evals for Cyber Security and Input/Output safeguards, but we plan to contribute more in the near future. ## Why purple? Borrowing a [concept](https://www.youtube.com/watch?v=ab_Fdp6FVDI) from the cybersecurity world, we believe that to truly mitigate the challenges that generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks. The same ethos applies to generative AI, and hence our investment in Purple Llama will be comprehensive. ## License Components within the Purple Llama project are licensed permissively, enabling both research and commercial usage. We believe this is a major step towards enabling community collaboration and standardizing the development and usage of trust and safety tools for generative AI development. More concretely, evals and benchmarks are licensed under the MIT license, while models use their corresponding Llama community licenses. See the table below:

| **Component Type** | **Components** | **License** |
| :----------------- | :------------: | :---------: |
| Evals/Benchmarks   | Cyber Security Eval (others to come) | MIT |
| Models             | Llama Guard    | [Llama 2 Community License](https://github.com/facebookresearch/PurpleLlama/blob/main/LICENSE) |
| Models             | Llama Guard 2  | Llama 3 Community License |
| Safeguard          | Code Shield    | MIT |

## Evals & Benchmarks ### Cybersecurity #### CyberSec Eval v1 CyberSec Eval v1 was, we believe, the first industry-wide set of cybersecurity safety evaluations for LLMs. These benchmarks are based on industry guidance and standards (e.g., CWE and MITRE ATT&CK) and built in collaboration with our security subject matter experts. We aim to provide tools that will help address some risks outlined in the [White House commitments on developing responsible AI](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/), including: * Metrics for quantifying LLM cybersecurity risks. * Tools to evaluate the frequency of insecure code suggestions. * Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyberattacks. We believe these tools will reduce the frequency of LLMs suggesting insecure AI-generated code and reduce their helpfulness to cyber adversaries. Our initial results show that there are meaningful cybersecurity risks for LLMs, both with recommending insecure code and for complying with malicious requests. 
See our [Cybersec Eval paper](https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/) for more details. #### CyberSec Eval 2 CyberSec Eval 2 expands on its predecessor by measuring an LLM’s propensity to abuse a code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection. You can read the paper [here](https://ai.meta.com/research/publications/cyberseceval-2-a-wide-ranging-cybersecurity-evaluation-suite-for-large-language-models/). You can also check out the 🤗 leaderboard [here](https://huggingface.co/spaces/facebook/CyberSecEval). ## System-Level Safeguards As we outlined in Llama 3’s [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/), we recommend that all inputs and outputs to the LLM be checked and filtered in accordance with content guidelines appropriate to the application. ### Llama Guard To support this, and empower the community, we released Llama Guard, an openly available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs. As part of our ongoing commitment to open and transparent science, we also released our methodology and an extended discussion of model performance in our [Llama Guard paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/). We are happy to share an updated version, Meta Llama Guard 2. Llama Guard 2 was optimized to support the newly [announced](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) policy published by MLCommons, expanding its coverage to a more comprehensive set of safety categories out of the box. It also comes with better classification performance than Llama Guard 1 and improved zero-shot and few-shot adaptability. Ultimately, our vision is to enable developers to customize this model to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem. ### Code Shield Code Shield adds support for inference-time filtering of insecure code produced by LLMs. Code Shield offers mitigation of insecure code suggestion risk, code interpreter abuse prevention, and secure command execution. See the [CodeShield Example Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb), and the small usage sketch at the end of this README. ## Getting Started To get started and learn how to use Purple Llama components with Llama models, see the getting started guide [here](https://ai.meta.com/llama/get-started/). The guide provides information and resources to help you set up Llama, including how to access the model, hosting how-to information, and integration guides. Additionally, you will find supplemental materials to further assist you while responsibly building with Llama. The guide will be updated as more Purple Llama components get released. ## FAQ For a running list of frequently asked questions, not only for Purple Llama components but also for Llama models generally, see the FAQ [here](https://ai.meta.com/llama/faq/). ## Join the Purple Llama community See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.
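As a rough illustration of the Code Shield workflow, the sketch below scans LLM-generated code before returning it to a user. It is based on our reading of the CodeShield usage demo notebook linked above; treat the import path, `scan_code()` call, and result fields as assumptions and defer to the notebook for the actual API.

```python
# Sketch only: gate LLM-generated code behind a Code Shield scan.
# The import path, scan_code() signature, and result fields follow our
# reading of CodeShieldUsageDemo.ipynb and may differ in your version.
import asyncio

from codeshield.cs import CodeShield  # assumed import path

async def guard_llm_output(generated_code: str) -> str:
    result = await CodeShield.scan_code(generated_code)  # assumed API
    if result.is_insecure:
        # Block (or route to repair) rather than returning insecure code.
        return "# Response blocked: insecure coding practice detected."
    return generated_code

print(asyncio.run(guard_llm_output("import hashlib\nhashlib.md5(b'data')")))
```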
-# Model Details Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated (for example, `unsafe` followed by `S1`). In order to produce classifier scores, we look at the probability for the first token, and use that as the “unsafe” class probability. We can then apply score thresholding to make binary decisions. # Harm Taxonomy and Policy The model is trained to predict safety labels on the 11 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of hazards. Harm categories: S1: Violent Crimes; S2: Non-Violent Crimes; S3: Sex-Related Crimes; S4: Child Sexual Exploitation; S5: Specialized Advice; S6: Privacy; S7: Intellectual Property; S8: Indiscriminate Weapons; S9: Hate; S10: Suicide & Self-Harm; S11: Sexual Content. **S1: Violent Crimes** Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse) **S2: Non-Violent Crimes** Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking) **S3: Sex-Related Crimes** Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution **S4: Child Sexual Exploitation** Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children **S5: Specialized Advice** Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe **S6: Privacy** Responses that contain sensitive, nonpublic personal information that could undermine someone's physical, digital, or financial security **S7: Intellectual Property** Responses that may violate the intellectual property rights of any third party **S8: Indiscriminate Weapons** Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions) **S9: Hate** Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease) **S10: Suicide & Self-Harm** Responses that enable, 
encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating **S11: Sexual Content** Responses that contain erotica # Training and Evaluation Data We start with the training set of Llama Guard (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md#training-data)), and obtain labels on the Harm Taxonomy described above. To improve adaptability of the model to different prompts, we train on hard samples, which are obtained by taking an existing sample and prompting Llama 2 70B to produce an alternate policy description that will flip the label of the given sample. We report metrics for various models and APIs on our validation set, which is obtained by combining the validation set of Llama Guard v1 and held-out samples from the additional Llama 3 safety data. We compare performance on our internal test set, as well as on open datasets like [XSTest](https://github.com/paul-rottger/exaggerated-safety?tab=readme-ov-file#license), [OpenAI moderation](https://github.com/openai/moderation-api-release), and [BeaverTails](https://github.com/PKU-Alignment/beavertails). We find that there is overlap between our training set and the BeaverTails-30k test split. Since both our internal test set and BeaverTails use prompts from Anthropic's [hh-rlhf dataset](https://github.com/anthropics/hh-rlhf) as a starting point for curating data, it is possible that different splits of Anthropic were used while creating the two datasets. Therefore, to prevent leakage of signal between our train set and the BeaverTails-30k test set, we create our own BeaverTails-30k splits based on the Anthropic train-test splits used for creating our internal sets. *Note on evaluations*: As discussed in the Llama Guard [paper](https://arxiv.org/abs/2312.06674), comparing model performance is not straightforward, as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space. # Model Performance We evaluate the performance of Llama Guard 2 and compare it with Llama Guard and popular content moderation APIs such as Azure, OpenAI Moderation, and Perspective. We use the token probability of the first output token (i.e. safe/unsafe) as the score for classification. For obtaining a binary classification decision from the score, we use a threshold of 0.5 (an illustrative scoring sketch appears after the Limitations section below). Llama Guard 2 improves over Llama Guard, and outperforms other approaches on our internal test set. Note that we manage to achieve strong performance while keeping a low false positive rate, as we know that over-moderation can impact user experience when building LLM applications. 
| **Model**                | **F1 ↑** | **AUPRC ↑** | **False Positive Rate ↓** |
|--------------------------|:--------:|:-----------:|:-------------------------:|
| Llama Guard\*            |  0.665   |    0.854    |           0.027           |
| Llama Guard 2            | **0.915** |  **0.974**  |           0.040           |
| GPT4                     |  0.796   |     N/A     |           0.151           |
| OpenAI Moderation API    |  0.347   |    0.669    |           0.030           |
| Azure Content Safety API |  0.519   |     N/A     |           0.245           |
| Perspective API          |  0.265   |    0.586    |           0.046           |

Table 1: Comparison of performance of various approaches measured on our internal test set. \*The performance of Llama Guard is lower on our new test set due to expansion of the number of harm categories from 6 to 11, which is not aligned with what Llama Guard was trained on.

| **Category**           | **False Negative Rate\* ↓** | **False Positive Rate ↓** |
|------------------------|:---------------------------:|:-------------------------:|
| Violent Crimes         |            0.042            |           0.002           |
| Privacy                |            0.057            |           0.004           |
| Non-Violent Crimes     |            0.082            |           0.009           |
| Intellectual Property  |            0.099            |           0.004           |
| Hate                   |            0.190            |           0.005           |
| Specialized Advice     |            0.192            |           0.009           |
| Sexual Content         |            0.229            |           0.004           |
| Indiscriminate Weapons |            0.263            |           0.001           |
| Child Exploitation     |            0.267            |           0.000           |
| Sex Crimes             |            0.275            |           0.002           |
| Self-Harm              |            0.277            |           0.002           |

Table 2: Category-wise breakdown of false negative rate and false positive rate for Llama Guard 2 on our internal benchmark for response classification with safety labels from the MLCommons taxonomy. \*The binary safe/unsafe label is used to compute categorical FNR by using the true categories. We do not penalize the model while computing FNR for cases where the model predicts the correct overall label but an incorrect categorical label.

We also report performance on OSS safety datasets, though we note that the policy used for assigning safety labels is not aligned with the policy used while training Llama Guard 2. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate on the XSTest and OpenAI Moderation datasets, demonstrating good adaptability to other policies. The BeaverTails dataset has a lower bar for a sample to be considered unsafe compared to Llama Guard 2's policy. The policy and training data of MDJudge [4] is more aligned with this dataset, and we see that it performs better on it, as expected (at the cost of a higher FPR). GPT-4 achieves high recall on all of the sets, but at the cost of a very high FPR (9-25%), which could hurt its ability to be used as a safeguard for practical applications. 
| (F1 ↑ / False Positive Rate ↓) | False Refusals (XSTest) | OpenAI policy (OpenAI Mod) | BeaverTails policy (BeaverTails-30k) |
|--------------------------------|:-----------------------:|:--------------------------:|:------------------------------------:|
| Llama Guard                    |      0.737 / 0.079      |       0.737 / 0.079        |            0.599 / 0.035             |
| Llama Guard 2                  |      0.884 / 0.084      |       0.807 / 0.060        |            0.736 / 0.059             |
| MDJudge                        |      0.856 / 0.172      |       0.768 / 0.212        |            0.849 / 0.098             |
| GPT4                           |      0.895 / 0.128      |       0.842 / 0.092        |            0.802 / 0.256             |
| OpenAI Mod API                 |      0.576 / 0.040      |       0.788 / 0.156        |            0.284 / 0.056             |

Table 3: Comparison of performance of various approaches measured on open safety datasets for response classification. NOTE: The policy used for training Llama Guard does not align with those used for labeling these datasets. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate across these datasets, demonstrating strong adaptability to other policies.

We hope to provide developers with a high-performing moderation solution for most use cases by aligning the Llama Guard 2 taxonomy with the MLCommons standard. But as outlined in our Responsible Use Guide, each use case requires specific safety considerations, and we encourage developers to tune Llama Guard 2 for their own use case to achieve better moderation for their custom policies. As an example of how Llama Guard 2's performance may change, we train on the BeaverTails training dataset and compare against MDJudge (which was trained on BeaverTails among others).

|          **Model**          | **F1 ↑** | **False Positive Rate ↓** |
|:---------------------------:|:--------:|:-------------------------:|
| Llama Guard 2               |   0.736  |           0.059           |
| MDJudge                     |   0.849  |           0.098           |
| Llama Guard 2 + BeaverTails | **0.852** |          0.101           |

Table 4: Comparison of performance on BeaverTails-30k.

# Limitations There are some limitations associated with Llama Guard 2. First, Llama Guard 2 itself is an LLM fine-tuned on Llama 3. Thus, its performance (e.g., judgments that need common-sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data. Second, Llama Guard 2 is finetuned for safety classification only (i.e. to generate "safe" or "unsafe"), and is not designed for chat use cases. However, since it is an LLM, it can still be prompted with any text to obtain a completion. Lastly, as an LLM, Llama Guard 2 may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. However, with the help of external components (e.g., KNN, perplexity filter), recent work (e.g., [3]) demonstrates that Llama Guard is able to detect harmful content reliably. **Note on Llama Guard 2's policy** Llama Guard 2 supports 11 out of the 13 categories included in the [MLCommons AI Safety](https://mlcommons.org/working-groups/ai-safety/ai-safety/) taxonomy. The Election and Defamation categories are not addressed by Llama Guard 2 as moderating these harm categories requires access to up-to-date, factual information sources and the ability to determine the veracity of a particular output. To support the additional categories, we recommend using other solutions (e.g. Retrieval Augmented Generation) in tandem with Llama Guard 2 to evaluate information correctness. 
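To make the scoring procedure described under Model Performance concrete, here is a minimal, illustrative sketch of first-token probability scoring with the 0.5 threshold, assuming a Hugging Face causal-LM checkpoint of the model. It is not the reference implementation, and the checkpoint name, tokenization details, and prompt below are placeholders.

```python
# Illustrative only: score = P(first generated token is "unsafe"), then
# threshold at 0.5. Assumes a Hugging Face checkpoint; names and the exact
# tokenization of "unsafe" are placeholders rather than reference behavior.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def unsafe_probability(guard_prompt: str) -> float:
    """guard_prompt must already use the Llama Guard 2 prompt template."""
    inputs = tokenizer(guard_prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits              # (1, seq_len, vocab_size)
    next_token_probs = logits[0, -1].softmax(dim=-1)
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    return next_token_probs[unsafe_id].item()

# Binary decision at the 0.5 threshold used in the tables above.
is_unsafe = unsafe_probability("<formatted Llama Guard 2 prompt>") > 0.5
```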
# Citation

```
@misc{metallamaguard2,
  author =       {Llama Team},
  title =        {Meta Llama Guard 2},
  howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}},
  year =         {2024}
}
```

# References [1] [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) [2] [Llama Guard Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md) [3] [RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content](https://arxiv.org/pdf/2403.13031.pdf) [4] [MDJudge for Salad-Bench](https://huggingface.co/OpenSafetyLab/MD-Judge-v0.1)
-# Meta Llama Guard 2 Llama Guard 2 is a model that provides input and output guardrails for LLM deployments, based on the MLCommons policy. # Download In order to download the model weights and tokenizer, please visit the [Meta website](https://llama.meta.com/llama-downloads) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have wget and md5sum installed. Then to run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and after a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. # Quick Start Since Llama Guard 2 is a fine-tuned Llama 3 model (see our [model card](MODEL_CARD.md) for more information), the same quick start steps outlined in our [README file](https://github.com/meta-llama/llama3/blob/main/README.md) for Llama 3 apply here; a short, illustrative usage sketch also appears at the end of this README. In addition to that, we added examples using Llama Guard 2 in the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). # Issues Please report any software bug or other problems with the models through one of the following means: - Reporting issues with the Llama Guard model: [github.com/meta-llama/PurpleLlama](https://github.com/meta-llama/PurpleLlama) - Reporting issues with Llama in general: [github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](https://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](https://facebook.com/whitehat/info) # License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. The same license as Llama 3 applies: see the [LICENSE](../LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md). # Citation

```
@misc{metallamaguard2,
  author =       {Llama Team},
  title =        {Meta Llama Guard 2},
  howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}},
  year =         {2024}
}
```

# References [Research Paper](https://ai.facebook.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/)
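As a quick-start illustration, the Hugging Face distribution of Llama Guard 2 bundles a chat template, so a conversation can be classified with a few lines of `transformers` code. The sketch below is illustrative; the checkpoint name and template behavior are assumptions based on the Hugging Face release, not part of this repository.

```python
# Illustrative only: classify a user prompt with Llama Guard 2 via
# transformers. Assumes the Hugging Face checkpoint's tokenizer bundles the
# Llama Guard 2 chat template; treat the names below as assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

chat = [{"role": "user", "content": "How do I reset my router password?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt")
output = model.generate(input_ids=input_ids, max_new_tokens=24)
# Decode only the newly generated tokens: "safe", or "unsafe" plus categories.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```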
-# Model Details Llama Guard is a 7B parameter [Llama 2](https://arxiv.org/abs/2307.09288)-based input-output safeguard model. It can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text in its output that indicates whether a given prompt or response is safe/unsafe, and if unsafe based on a policy, it also lists the violating subcategories. In order to produce classifier scores, we look at the probability for the first token, and turn that into an “unsafe” class probability. Model users can then make binary decisions by applying a desired threshold to the probability scores. # Training and Evaluation ## Training Data We use a mix of prompts that come from the Anthropic [dataset](https://github.com/anthropics/hh-rlhf) and redteaming examples that we have collected in house, in a separate process from our production redteaming. In particular, we took the prompts only from the Anthropic dataset, and generated new responses from our in-house Llama models, using jailbreaking techniques to elicit violating responses. We then annotated the Anthropic data (prompts & responses) in house, mapping labels according to the categories in the taxonomy below. Overall we have ~13K training examples. ## Taxonomy of Harms and Risk Guidelines As automated content risk mitigation relies on classifiers to make decisions about content in real time, a prerequisite to building these systems is to have the following components: - A **taxonomy** of risks that are of interest – these become the classes of a classifier. - A **risk guideline** that determines where we put the line between encouraged and discouraged outputs for each risk category in the taxonomy. Together with this model, we release an open taxonomy inspired by existing open taxonomies such as those employed by Google, Microsoft and OpenAI in the hope that it can be useful to the community. This taxonomy does not necessarily reflect Meta's own internal policies and is meant to demonstrate the value of our method to tune LLMs into classifiers that show high performance and high degrees of adaptability to different policies. ### The Llama Guard Safety Taxonomy & Risk Guidelines Below, we provide both the harm types themselves under this taxonomy and also examples of the specific kinds of content that would be considered harmful under each category: - **Violence & Hate** encompasses statements that encourage or could help people plan or engage in violence. Similarly, statements that advocate discrimination, contain slurs, or voice hateful sentiments against people based on their sensitive personal characteristics (ex: race, color, religion, national origin, sexual orientation, gender, gender identity, or disability) would also be considered inappropriate under this category. - **Sexual Content** encompasses statements encouraging someone (who could be underage) to engage in specific sex acts. Similarly, sexually explicit (i.e., erotic) statements would also be considered inappropriate under this category. - **Guns & Illegal Weapons** encompasses statements that specifically encourage, condone, or could help people plan or execute specific crimes involving the illegal acquisition, creation, or use of guns or illegal weapons (ex: explosives, biological agents, or chemical weapons), including instructions on how to create or use them. 
- **Regulated or Controlled Substances** encompasses statements that specifically encourage or could help people to illegally produce, transfer, or consume regulated or controlled substances like illegal drugs, tobacco, alcohol, or cannabis. - **Suicide & Self Harm** encompasses statements that encourage, condone, or enable people to harm themselves (ex: by providing instructions or information on methods of self-harm). When people express or imply an intent to harm themselves, any response other than one that provides links to appropriate health resources would be considered inappropriate (even if entirely supportive). - **Criminal Planning** encompasses miscellaneous statements that encourage, condone, or could help people plan or execute specific criminal activities, like arson, kidnapping, or theft. Items should only be considered harmful under this category when they could not be read as violating any of the other harm types above (ex: statements that encourage violence should be considered violating under Violence & Hate rather than this category). ## Evaluation results We compare the performance of the model against standard content moderation APIs in the industry, including [OpenAI](https://platform.openai.com/docs/guides/moderation/overview), [Azure Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories), and [PerspectiveAPI](https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US) from Google on both public and in-house benchmarks. The public benchmarks include [ToxicChat](https://huggingface.co/datasets/lmsys/toxic-chat) and [OpenAI Moderation](https://github.com/openai/moderation-api-release). Note: comparisons are not exactly apples-to-apples due to mismatches in each taxonomy. The interested reader can find a more detailed discussion about this in our [paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/).

|                 | Our Test Set (Prompt) | OpenAI Mod | ToxicChat | Our Test Set (Response) |
| --------------- | --------------------- | ---------- | --------- | ----------------------- |
| Llama Guard     | **0.945**             | 0.847      | **0.626** | **0.953**               |
| OpenAI API      | 0.764                 | **0.856**  | 0.588     | 0.769                   |
| Perspective API | 0.728                 | 0.787      | 0.532     | 0.699                   |
-# Llama Guard Llama Guard is a new experimental model that provides input and output guardrails for LLM deployments. # Download In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have wget and md5sum installed. Then to run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and after a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. # Quick Start Since Llama Guard is a fine-tuned Llama-7B model (see our [model card](MODEL_CARD.md) for more information), the same quick start steps outlined in our [README file](https://github.com/facebookresearch/llama/blob/main/README.md) for Llama 2 apply here. In addition to that, we added examples using Llama Guard in the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes). # Issues Please report any software bug or other problems with the models through one of the following means: - Reporting issues with the Llama Guard model: [github.com/facebookresearch/purplellama](https://github.com/facebookresearch/purplellama) - Reporting issues with Llama in general: [github.com/facebookresearch/llama](https://github.com/facebookresearch/llama) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](https://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](https://facebook.com/whitehat/info) # License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. The same license as Llama 2 applies: see the [LICENSE](../LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md). # References [Research Paper](https://ai.facebook.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/)
-
diff --git a/recipes/use_cases/end2end-recipes/raft/data_urls.xml b/recipes/use_cases/end2end-recipes/raft/data_urls.xml
index 5fccffd9e..29460d5cb 100644
--- a/recipes/use_cases/end2end-recipes/raft/data_urls.xml
+++ b/recipes/use_cases/end2end-recipes/raft/data_urls.xml
@@ -126,6 +126,39 @@
 http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard/MODEL_CARD.md
 
 
-http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard/README.md
+https://hamel.dev/notes/llm/inference/03_inference.html
+
+
+https://www.anyscale.com/blog/continuous-batching-llm-inference
+
+
+https://github.com/huggingface/peft
+
+https://github.com/facebookresearch/llama-recipes/blob/main/docs/LLM_finetuning.md
+
+
+https://github.com/meta-llama/llama-recipes/blob/main/recipes/finetuning/datasets/README.md
+
+https://www.databricks.com/blog/efficient-fine-tuning-lora-guide-llms
+
+
+https://www.wandb.courses/courses/training-fine-tuning-LLMs
+
+
+https://www.snowflake.com/blog/meta-code-llama-testing/
+
+https://www.phind.com/blog/code-llama-beats-gpt4
+
+https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper
+
+
+https://ragntune.com/blog/gpt3.5-vs-llama2-finetuning
+
+https://deci.ai/blog/fine-tune-llama-2-with-lora-for-question-answering/
+
+
+https://replicate.com/blog/fine-tune-translation-model-axolotl
+
+https://huyenchip.com/2023/04/11/llm-engineering.html
 
 
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
deleted file mode 100644
index bdfa0f176..000000000
--- a/recipes/use_cases/end2end-recipes/raft/eval_config.yaml
+++ /dev/null
@@ -1,51 +0,0 @@
-eval_prompt_template: >
-  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a AI assistant that skilled in answering questions related to Llama language models,
-  which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1,	Meta Llama Guard 2,
-  Below is a question from a llama user, please the answer it with best of your knowledge,
-  The returned answer should be no more than 100 words.Please return the answers in text directly without any special tokens.<|eot_id|>
-  <|start_header_id|>user<|end_header_id|>
-  Question:{question} \n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
-# judge_prompt_template: >
-#   <|begin_of_text|><|start_header_id|>system<|end_header_id|>You have been provided with a question, a teacher's answer and a student's answer above. Given that question, you need to score the how good the student answer is compare to
-#   the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES, else return NO.
-#   Review it carefully to make sure that the keywords and numerical vaules are exactly the same.
-#   Only respond with "YES" or "NO", do not respond with anything else.<|eot_id|>
-#   <|start_header_id|>user<|end_header_id|>
-#   Question: {question} \n Teacher's Answer: {gold} \n Student's Answer: {prediction} <|eot_id|><|start_header_id|>assistant<|end_header_id|>
-judge_prompt_template: >
-    <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a teacher grading a quiz.
-
-    You will be given a QUESTION, the GROUND TRUTH (correct) ANSWER, and the STUDENT ANSWER.
-
-    Here is the grade criteria to follow:
-    (1) Grade the student answers based ONLY on their factual accuracy relative to the ground truth answer.
-    (2) Ensure that the student answer does not contain any conflicting statements.
-    (3) It is OK if the student answer contains more information than the ground truth answer, as long as it is factually accurate relative to the  ground truth answer.
-
-    Score:
-    YES means that the student's answer meets all of the criteria. This is the highest (best) score.
-    NO means that the student's answer does not meet all of the criteria. This is the lowest possible score you can give.
-
-    Explain your reasoning in a step-by-step manner to ensure your reasoning and conclusion are correct.
-
-    Avoid simply stating the correct answer at the outset.
-    End your response with final answer in the form : $answer, answer must be YES or NO  <|eot_id|>
-    <|start_header_id|>user<|end_header_id|>
-    QUESTION: {{question}}
-    GROUND TRUTH ANSWER: {{gold}}
-    STUDENT ANSWER: {{prediction}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-RAG_prompt_template: >
-  <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here is things to pay attention to:
-    - First provide step-by-step reasoning on how to answer the question.
-    - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
-    - End your response with final answer in the form : $answer, the answer should less than 60 words.
-    You MUST begin your final answer with the tag ":<|eot_id|>
-  <|start_header_id|>user<|end_header_id|>
-  Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-eval_json: "./evalset.json"
-
-raft_model_name: "raft-8b"
-
-base_model_name: "meta-llama/Meta-Llama-3-8B-Instruct"
-
-data_dir: "./data"
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_llama.json b/recipes/use_cases/end2end-recipes/raft/eval_llama.json
new file mode 100644
index 000000000..1fd66af9b
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/eval_llama.json
@@ -0,0 +1,287 @@
+[
+    {
+       "question":"What is the role of Llama2 70B in generating hard samples?",
+       "answer":" Llama2 70B generates hard samples by producing alternate policy descriptions that flip the label of existing samples."
+    },
+    {
+       "question":"What is the purpose of quantization in machine learning?",
+       "answer":" The purpose of quantization in machine learning is to reduce computational and memory requirements, making models more efficient for deployment."
+    },
+    {
+       "question":"What policy must your use of the Llama Materials adhere to, as specified in this Agreement?",
+       "answer":" The Acceptable Use Policy for the Llama Materials."
+    },
+    {
+       "question":"How is perplexity calculated in the context of fine-tuning a language model?",
+       "answer":" Perplexity is calculated as an exponentiation of the loss value."
+    },
+    {
+       "question":"How can the Memory API be used to enhance the conversational capabilities of an LLM?",
+       "answer":" The Memory API can be used to enhance the conversational capabilities of an LLM by saving conversation history and feeding it along with new questions to the LLM, enabling multi-turn natural conversation chat."
+    },
+    {
+       "question":"What token is used to signify the end of a message in a turn?",
+       "answer":" <|eot_id|>"
+    },
+    {
+       "question":"Where can I find more information about the research behind the Llama-2 model?",
+       "answer":" https:\/\/ai.meta.com\/research\/publications\/llama-2-open-foundation-and-fine-tuned-chat-models\/"
+    },
+    {
+       "question":"What tokenizer is used as the basis for the special tokens in Meta Llama ",
+       "answer":" tiktoken"
+    },
+    {
+       "question":"What does the model do with the probability of the first token to determine safety?",
+       "answer":" The model turns the probability of the first token into an \"unsafe\" class probability to determine safety."
+    },
+    {
+       "question":"Are Meta user data included in the pretraining dataset?",
+       "answer":" No"
+    },
+    {
+       "question":"What are the benefits of quantization in neural networks?",
+       "answer":" The benefits of quantization in neural networks are smaller model sizes, faster fine-tuning, and faster inference."
+    },
+    {
+       "question":"How does the GPTQ algorithm quantize the weight matrix during post-training?",
+       "answer":" The GPTQ algorithm quantizes the weight matrix by quantizing each row independently during post-training."
+    },
+    {
+       "question":"What is the capability of large language models like Meta Llama in terms of following instructions?",
+       "answer":" They can follow instructions without having previously seen an example of a task."
+    },
+    {
+       "question":"What trade-off do developers need to consider when deploying LLM systems, according to the Responsible Use Guide?",
+       "answer":" The trade-off is between model helpfulness and model alignment."
+    },
+    {
+       "question":"What is the purpose of red-teaming in your organization?",
+       "answer":" The purpose of red-teaming is to enhance safety and performance."
+    },
+    {
+       "question":"What is the purpose of the llama-recipes GitHub repo?",
+       "answer":" The purpose of the llama-recipes GitHub repo is to provide examples, demos, and guidance for using Llama models."
+    },
+    {
+       "question":"What is the purpose of Meta's Responsible Use Guide for developers using Llama ",
+       "answer":" The purpose of Meta's Responsible Use Guide is to provide guidance to developers on how to build products powered by LLMs in a responsible manner."
+    },
+    {
+       "question":"What should be defined to rate the results of the fine-tuned model?",
+       "answer":" A clear evaluation criteria."
+    },
+    {
+       "question":"What steps did the developers take to mitigate safety risks in their instruction-tuned Llama model?",
+       "answer":" The developers took the following steps to mitigate safety risks in their instruction-tuned Llama model: conducting extensive red teaming exercises, performing adversarial evaluations, and implementing safety mitigations techniques."
+    },
+    {
+       "question":"What behaviors are prohibited in the context of employment and economic benefits?",
+       "answer":" discrimination, other unlawful conduct, and harmful conduct"
+    },
+    {
+       "question":"Are there any fees or royalties required to use the Llama Materials under this license?",
+       "answer":" No, there are no fees or royalties required to use the Llama Materials under this license."
+    },
+    {
+       "question":"What is the precision in which LLM models can run without performance degradation using AWQ?",
+       "answer":" 4-bit"
+    },
+    {
+       "question":"What type of professional practices are not allowed without proper authorization or licensure?",
+       "answer":" Financial, legal, medical\/health, or related professional practices."
+    },
+    {
+       "question":"What is the F1 score of Llama Guard 2 when trained on the BeaverTails dataset?",
+       "answer":" 0.736"
+    },
+    {
+       "question":"What is the recommended step for developers before deploying applications of Llama ",
+       "answer":" Perform safety testing and tuning tailored to their specific applications of the model."
+    },
+    {
+       "question":"What is the license used for the Llama Guard model in the Purple Llama project?",
+       "answer":" Llama 2 Community License"
+    },
+    {
+       "question":"What is the first step in developing downstream models responsibly according to the updated guide?",
+       "answer":" Defining content policies and mitigations."
+    },
+    {
+       "question":"What data type is used for weights initialized from a normal distribution in 4-bit models?",
+       "answer":" NF4 (Normal Float 4)"
+    },
+    {
+       "question":"Where can I find examples of using Llama Guard in recipes?",
+       "answer":" https:\/\/github.com\/facebookresearch\/llama-recipes"
+    },
+    {
+       "question":"What is the recommended model-parallel value for the 70B model?",
+       "answer":" 8"
+    },
+    {
+       "question":"Where can you find more information about the Meta Llama 70B Model?",
+       "answer":" The model card,"
+    },
+    {
+       "question":"What percentage of the dataset typically makes up the test and validation sets when using a holdout method?",
+       "answer":" 10% - 30%,"
+    },
+    {
+       "question":"What are some hosting providers that support running Llama models?",
+       "answer":" OpenAI, Together AI, Anyscale, Replicate, Groq, etc."
+    },
+    {
+       "question":"According to the Llama Guard paper, why is it challenging to compare model performance across different models?",
+       "answer":" Because each model is built on its own policy and performs better on an evaluation dataset with a policy aligned to the model."
+    },
+    {
+       "question":"What is the advantage of having three partitions of data in the fine-tuning process?",
+       "answer":" The advantage is to get an unbiased evaluation of the model's performance."
+    },
+    {
+       "question":"What is included in the Llama 2 model download?",
+       "answer":" Model code, Model weights, README, Responsible Use Guide, License, Acceptable use policy, Model card, and Technical specifications."
+    },
+    {
+       "question":"What is the advantage of integrating with custom kernels?",
+       "answer":" The advantage of integrating with custom kernels is that it allows for support on specific devices."
+    },
+    {
+       "question":"What is the purpose of the GPTQ algorithm implemented in the AutoGPTQ library?",
+       "answer":" The purpose of the GPTQ algorithm is post-training quantization."
+    },
+    {
+       "question":"What advantage does AQLM take of when quantizing multiple weights together?",
+       "answer":" It takes advantage of interdependencies between the weights."
+    },
+    {
+       "question":"What is the primary advantage of using lower precision data in resource-constrained environments?",
+       "answer":" Faster inference and fine-tuning."
+    },
+    {
+       "question":"How can Meta Llama models be accessed on Microsoft Azure?",
+       "answer":" Meta Llama models can be accessed on Microsoft Azure through Models as a Service (MaaS) using Azure AI Studio and Model as a Platform (MaaP) using Azure Machine Learning Studio."
+    },
+    {
+       "question":"What is the purpose of aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy?",
+       "answer":" The purpose of aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy is to drive adoption of industry standards and facilitate collaboration and transparency in the LLM safety and content evaluation space."
+    },
+    {
+       "question":"What is the name of the repository that provides more examples of Llama recipes?",
+       "answer":" llama-recipes"
+    },
+    {
+       "question":"How will I receive the signed URL after my request is approved?",
+       "answer":" over email"
+    },
+    {
+       "question":"What is the purpose of the restriction on using Llama Materials?",
+       "answer":" To prevent the unauthorized use of Llama Materials to enhance competing language models."
+    },
+    {
+       "question":"What is the format of the prefix-suffix-middle method of infilling?",
+       "answer":" prefix-suffix-middle"
+    },
+    {
+       "question":"What is the license under which the Llama Guard model and its weights are released?",
+       "answer":" The license is the same as Llama 3, which can be found in the LICENSE file and is accompanied by the Acceptable Use Policy."
+    },
+    {
+       "question":"How do I download the 4-bit quantized Meta Llama 3 8B chat model using Ollama?",
+       "answer":" To download the 4-bit quantized Meta Llama 3 8B chat model using Ollama, run the command \"ollama pull llama3\" in your terminal."
+    },
+    {
+       "question":"How long are the download links for Llama valid for?",
+       "answer":" 24 hours"
+    },
+    {
+       "question":"What is the primary purpose of the suite of tools provided?",
+       "answer":" To support the AI lifecycle, specifically tuning models with enterprise data."
+    },
+    {
+       "question":"How does Llama Guard 2's classification performance compare to Llama Guard ",
+       "answer":" Llama Guard 2 has better classification performance than Llama Guard 1."
+    },
+    {
+       "question":"What data type is used for computations in Quantization Aware Training despite mimicking int8 values?",
+       "answer":" floating point numbers"
+    },
+    {
+       "question":"What is the purpose of providing specific examples in a prompt?",
+       "answer":" The purpose of providing specific examples in a prompt is to help the model better understand what kind of output is expected."
+    },
+    {
+        "question":"Why is Meta not sharing the training datasets for Llama?",
+        "answer":"We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
+     },
+     {
+        "question":"Did Meta use human annotators to develop the data for Llama models?",
+        "answer":"Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
+     },
+     {
+        "question":"Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
+        "answer":"It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
+     },
+     {
+        "question":"What operating systems (OS) are officially supported if I want to use Llama model?",
+        "answer":"For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
+     },
+     {
+        "question":"Do Llama models provide traditional autoregressive text completion?",
+        "answer":"Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
+     },
+     {
+        "question":"Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
+        "answer":"This is implementation dependent (i.e. the code used to run the model)."
+     },
+     {
+        "question":"Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
+        "answer":"The model itself supports these parameters, but whether they are exposed or not depends on implementation."
+     },
+     {
+        "question":"What is llama-recipes?",
+        "answer":"The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
+     },
+     {
+        "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?",
+        "answer":"Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken."
+     },
+     {
+        "question":"How many tokens were used in Meta Llama 3 pretrain?",
+        "answer":"Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
+     },
+     {
+        "question":"How many tokens were used in  Llama 2 pretrain?",
+        "answer":"Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
+     },
+     {
+        "question":"What is the name of the license agreement that Meta Llama 3 is under?",
+        "answer":"Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
+     },
+     {
+        "question":"What is the name of the license agreement that Llama 2 is under?",
+        "answer":"LLAMA 2 COMMUNITY LICENSE AGREEMENT."
+     },
+     {
+        "question":"What is the context length of Llama 2 models?",
+        "answer":"Llama 2's context is 4k"
+     },
+     {
+        "question":"What is the context length of Meta Llama 3 models?",
+        "answer":"Meta Llama 3's context is 8k"
+     },
+     {
+        "question":"When is Llama 2 trained?",
+        "answer":"Llama 2 was trained between January 2023 and July 2023."
+     },
+     {
+        "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ",
+        "answer":"Llama 2 70B"
+     },
+     {
+        "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ",
+        "answer":"Meta Llama 3 8B and Meta Llama 3 70B"
+     }
+ ]
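
Editor's note: the eval set above is a flat JSON array of question/answer records, which raft_eval.py below loads and iterates as parallel lists. A minimal sketch of consuming it, assuming the file sits next to the script as in this patch:

```python
import json

# Load the QA eval set introduced by this patch (relative path assumed).
with open("eval_llama.json") as fp:
    eval_set = json.load(fp)

# Split into parallel lists, mirroring how raft_eval.py consumes the file.
questions = [item["question"] for item in eval_set]
ground_truth = [item["answer"] for item in eval_set]
print(f"Loaded {len(questions)} eval questions")
```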
diff --git a/recipes/use_cases/end2end-recipes/raft/evalset.json b/recipes/use_cases/end2end-recipes/raft/evalset.json
deleted file mode 100644
index 83a4c8e11..000000000
--- a/recipes/use_cases/end2end-recipes/raft/evalset.json
+++ /dev/null
@@ -1,218 +0,0 @@
-[
-   {
-      "question":"What is quantization in machine learning?",
-      "answer":"Quantization is a technique to reduce computational and memory requirements of models by representing weights and activations with lower precision data types."
-   },
-   {
-      "question":"What are the benefits of quantization?",
-      "answer":"Benefits include smaller model sizes, faster fine-tuning, and faster inference, making it beneficial for resource-constrained environments."
-   },
-   {
-      "question":"What is post-training dynamic quantization in PyTorch?",
-      "answer":"Weights are pre-quantized ahead of time and activations are converted to int8 during inference for faster computation due to efficient int8 matrix multiplication."
-   },
-   {
-      "question":"What is quantization aware training (QAT) in PyTorch?",
-      "answer":"All weights and activations are 'fake quantized' during both forward and backward passes of training to yield higher accuracy than other methods."
-   },
-   {
-      "question":"What is TorchAO library for quantization?",
-      "answer":"TorchAO offers various quantization methods including weight only quantization and dynamic quantization, with support for 8-bit and 4-bit quantization."
-   },
-   {
-      "question":"What is prompt engineering?",
-      "answer":"Prompt engineering is a technique used in natural language processing (NLP) to improve the performance of the language model by providing them with more context and information about the task in hand. It involves creating prompts, which are short pieces of text that provide additional information or guidance to the model."
-   },
-   {
-      "question":"What are some tips for crafting effective prompts?",
-      "answer":"Be clear and concise, use specific examples, vary the prompts, test and refine, and use feedback."
-   },
-   {
-      "question":"What is zero-shot prompting?",
-      "answer":"Zero-shot prompting is the technique of using large language models like Meta Llama to follow instructions and produce responses without having previously seen an example of a task."
-   },
-   {
-      "question":"What is few-shot prompting?",
-      "answer":"Few-shot prompting is the technique of adding specific examples of desired output to prompts to generate more accurate and consistent results."
-   },
-   {
-      "question":"What is role based prompting?",
-      "answer":"Role based prompting is the technique of creating prompts based on the role or perspective of the person or entity being addressed to improve relevance and accuracy."
-   },
-   {
-      "question":"What is chain of thought technique?",
-      "answer":"Chain of thought technique is the method of providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response."
-   },
-   {
-      "question":"What is self-consistency approach?",
-      "answer":"Self-consistency approach is the method of selecting the most frequent answer from multiple generations to enhance accuracy."
-   },
-   {
-      "question":"What is retrieval-augmented generation?",
-      "answer":"Retrieval-augmented generation is the practice of including information in the prompt that has been retrieved from an external database to incorporate facts into LLM application."
-   },
-   {
-      "question":"What is program-aided language models?",
-      "answer":"Program-aided language models is the method of instructing the LLM to write code to solve calculation tasks since LLMs are bad at arithmetic but great at code generation."
-   },
-   {
-      "question":"What is Code Llama?",
-      "answer":"Code Llama is a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks."
-   },
-   {
-      "question":"What are the different flavors available in Code Llama?",
-      "answer":"The different flavors include foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, and 34B parameters each."
-   },
-   {
-      "question":"How can I download Code Llama?",
-      "answer":"To download the model weights and tokenizers, visit the Meta website, accept the License, receive a signed URL over email, and then run the download.sh script passing the URL provided when prompted to start the download."
-   },
-   {
-      "question":"What is Llama Guard 2?",
-      "answer":"Llama Guard 2 provides input and output guardrails for LLM deployments based on MLCommons policy."
-   },
-   {
-      "question":"How to download the model weights and tokenizer for Llama Guard 2?",
-      "answer":"Visit the Meta website, accept the license, get approved, receive signed URL via email, then run the download.sh script."
-   },
-   {
-      "question":"Are there any examples using Llama Guard 2?",
-      "answer":"Yes, find them in the Llama recipes repository in addition to the quick start steps for Llama3."
-   },
-   {
-      "question":"Where to report issues related to Llama Guard 2 or its model?",
-      "answer":"Report via github.com/meta-llama/PurpleLlama for Llama Guard model issues or developers.facebook.com/llama_output_feedback for risky content generated by the model."
-   },
-   {
-      "question":"What is the license for Llama Guard 2?",
-      "answer":"The same license as Llama 3 applies: see the LICENSE file and accompanying Acceptable Use Policy."
-   },
-   {
-      "question":"Why is Meta not sharing the training datasets for Llama?",
-      "answer":"We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
-   },
-   {
-      "question":"Did Meta use human annotators to develop the data for Llama models?",
-      "answer":"Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
-   },
-   {
-      "question":"Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
-      "answer":"It's correct that the license restricts using any part of the Llama models, including the response outputs to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantized Aware Training (QAT) utilize such a technique and hence this is allowed."
-   },
-   {
-      "question":"What operating systems (OS) are officially supported if I want to use Llama model?",
-      "answer":"For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
-   },
-   {
-      "question":"Do Llama models provide traditional autoregressive text completion?",
-      "answer":"Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
-   },
-   {
-      "question":"Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
-      "answer":"This is implementation dependent (i.e. the code used to run the model)."
-   },
-   {
-      "question":"Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
-      "answer":"The model itself supports these parameters, but whether they are exposed or not depends on implementation."
-   },
-   {
-      "question":"What is llama-recipes?",
-      "answer":"The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
-   },
-   {
-      "question":"What is the difference on the tokenization techniques that Meta Llama 3 uses compare Llama 2?",
-      "answer":"Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken."
-   },
-   {
-      "question":"How many tokens were used in Meta Llama 3 pretrain?",
-      "answer":"Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
-   },
-   {
-      "question":"How many tokens were used in  Llama 2 pretrain?",
-      "answer":"Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
-   },
-   {
-      "question":"What is the name of the license agreement that Meta Llama 3 is under?",
-      "answer":"Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
-   },
-   {
-      "question":"What is the name of the license agreement that Llama 2 is under?",
-      "answer":"LLAMA 2 COMMUNITY LICENSE AGREEMENT."
-   },
-   {
-      "question":"What is the context length of Llama 2 models?",
-      "answer":"Llama 2's context is 4k"
-   },
-   {
-      "question":"What is the context length of Meta Llama 3 models?",
-      "answer":"Meta Llama 3's context is 8k"
-   },
-   {
-      "question":"When is Llama 2 trained?",
-      "answer":"Llama 2 was trained between January 2023 and July 2023."
-   },
-   {
-      "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA) ",
-      "answer":"Llama 2 70B"
-   },
-   {
-      "question":"What are the names of the Meta Llama 3 model that use Grouped-Query Attention (GQA) ",
-      "answer":"Meta Llama 3 8B and Meta Llama 3 70B"
-   },
-   {
-      "question":"what are the goals for Llama 3",
-      "answer":"With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
-   },
-   {
-      "question":"What versions of Meta Llama 3 are available?",
-      "answer":"Meta Llama 3 is available in both 8B and 70B pretrained and instruction-tuned versions."
-   },
-   {
-      "question":"What are some applications of Meta Llama 3?",
-      "answer":"Meta Llama 3 supports a wide range of applications including coding tasks, problem solving, translation, and dialogue generation."
-   },
-   {
-      "question":"What improvements does Meta Llama 3 offer over previous models?",
-      "answer":"Meta Llama 3 offers enhanced scalability and performance, lower false refusal rates, improved response alignment, and increased diversity in model answers. It also excels in reasoning, code generation, and instruction following."
-   },
-   {
-      "question":"How has Meta Llama 3 been trained?",
-      "answer":"Meta Llama 3 has been trained on over 15T tokens of data using custom-built 24K GPU clusters. This training dataset is 7x larger than that used for Llama 2 and includes 4x more code."
-   },
-   {
-      "question":"What safety measures are included with Meta Llama 3?",
-      "answer":"Meta Llama 3 includes updates to trust and safety tools such as Llama Guard 2 and Cybersec Eval 2, optimized to support a comprehensive set of safety categories published by MLCommons."
-   },
-   {
-      "question":"What is Meta Llama 3?",
-      "answer":"Meta Llama 3 is a highly advanced AI model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation."
-   },
-   {
-      "question":"What are the pretrained versions of Meta Llama 3 available?",
-      "answer":"Meta Llama 3 is available with both 8B and 70B pretrained and instruction-tuned versions."
-   },
-   {
-      "question":"What is the context length supported by Llama 3 models?",
-      "answer":"Llama 3 models support a context length of 8K, which doubles the capacity of Llama 2."
-   },
-   {
-      "question":"What is the Prompt engineering?",
-      "answer":"It is a technique used in natural language processing (NLP) to improve the performance of the language model by providing them with more context and information about the task in hand."
-   },
-   {
-      "question":"What is the Zero-Shot Prompting?",
-      "answer":"Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called 'zero-shot prompting'."
-   },
-   {
-      "question":"What are the supported quantization modes in PyTorch?",
-      "answer":"Post-Training Dynamic Quantization, Post-Training Static Quantization and Quantization Aware Training (QAT)"
-   },
-   {
-      "question":"What is the LlamaIndex?",
-      "answer":"LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools."
-   },
-   {
-      "question":"What is the LangChain?",
-      "answer":"LangChain is an open source framework for building LLM powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to call LLM from scratch. "
-   }
-]
diff --git a/recipes/use_cases/end2end-recipes/raft/format.py b/recipes/use_cases/end2end-recipes/raft/format.py
deleted file mode 100644
index 7dcb6b861..000000000
--- a/recipes/use_cases/end2end-recipes/raft/format.py
+++ /dev/null
@@ -1,173 +0,0 @@
-from abc import ABC, abstractmethod
-import argparse
-from datasets import Dataset, load_dataset
-from typing import Dict, Literal, Any, get_args
-
-"""
-This file allows to convert raw HuggingFace Datasets into files suitable to fine tune completion and chat models.
-"""
-
-OutputDatasetType = Literal["parquet", "jsonl"]
-outputDatasetTypes = list(get_args(OutputDatasetType))
-
-InputDatasetType = Literal["arrow", "jsonl"]
-inputDatasetTypes = list(get_args(InputDatasetType))
-
-DatasetFormat = Literal["hf", "completion", "chat"]
-datasetFormats = list(get_args(DatasetFormat))
-
-def get_args() -> argparse.Namespace:
-    """
-    Parses and returns the arguments specified by the user's command
-    """
-    parser = argparse.ArgumentParser()
-
-    parser.add_argument("--input", type=str, required=True, help="Input HuggingFace dataset file")
-    parser.add_argument("--input-type", type=str, default="arrow", help="Format of the input dataset. Defaults to arrow.", choices=inputDatasetTypes)
-    parser.add_argument("--output", type=str, required=True, help="Output file")
-    parser.add_argument("--output-format", type=str, required=True, help="Format to convert the dataset to", choices=datasetFormats)
-    parser.add_argument("--output-type", type=str, default="jsonl", help="Type to export the dataset to. Defaults to jsonl.", choices=outputDatasetTypes)
-    parser.add_argument("--output-chat-system-prompt", type=str, help="The system prompt to use when the output format is chat")
-
-    args = parser.parse_args()
-    return args
-
-class DatasetFormatter(ABC):
-    """
-    Base class for dataset formatters. Formatters rename columns, remove and add 
-    columns to match the expected target format structure. HF, Chat or Completion models file formats.
-    https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
-    """
-    @abstractmethod
-    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
-        pass
-
-class DatasetExporter(ABC):
-    """
-    Base class for dataset exporters. Exporters export dataset to different file types, JSONL, Parquet, ...
-    """
-    @abstractmethod
-    def export(self, ds: Dataset, output_path: str):
-        pass
-
-class DatasetConverter():
-    """
-    Entry point class. It resolves which DatasetFormatter and which DatasetExporter to use and runs them.
-    """
-    formats: Dict[DatasetFormat, DatasetFormatter]
-    exporters: Dict[OutputDatasetType, Any]
-
-    def __init__(self) -> None:
-        self.formats = {
-            "hf": HuggingFaceDatasetFormatter(),
-            "completion": OpenAiCompletionDatasetFormatter(),
-            "chat": OpenAiChatDatasetFormatter()
-        }
-        self.exporters = {
-            "parquet": ParquetDatasetExporter(),
-            "jsonl": JsonlDatasetExporter()
-        }
-
-    def convert(self, ds: Dataset, format: DatasetFormat, output_path: str, output_type: OutputDatasetType, params: Dict[str, str]):
-        if not format in self.formats:
-            raise Exception(f"Output Format {format} is not supported, pleased select one of {self.formats.keys()}")
-        
-        if not output_type in self.exporters:
-            raise Exception(f"Output Type {output_type} is not supported, pleased select one of {self.exporters.keys()}")
-
-        formatter = self.formats[format]
-        newds = formatter.format(ds, params)
-        exporter = self.exporters[output_type]
-        exporter.export(newds, output_path)
-
-class HuggingFaceDatasetFormatter(DatasetFormatter):
-    """
-    Returns the HuggingFace Dataset as is
-    """
-    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
-        return ds
-
-def _remove_all_columns_but(ds: Dataset, keep_columns) -> Dataset:
-    """
-    HF Dataset doesn't have a way to copy only specific columns of a Dataset so this help
-    removes all columns but the ones specified.
-    """
-    remove_columns = list(ds.column_names)
-    for keep in keep_columns:
-        remove_columns.remove(keep)
-    ds = ds.remove_columns(remove_columns)
-    return ds
-
-class OpenAiCompletionDatasetFormatter(DatasetFormatter):
-    """
-    Returns the Dataset in the OpenAI Completion Fine-tuning file format with two fields "prompt" and "completion".
-    https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
-    """
-    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
-        newds = ds.rename_columns({'question': 'prompt', 'cot_answer': 'completion'})
-        return _remove_all_columns_but(newds, ['prompt', 'completion'])
-
-class OpenAiChatDatasetFormatter(OpenAiCompletionDatasetFormatter):
-    """
-    Returns the Dataset in the OpenAI Chat Fine-tuning file format with one field "messages".
-    https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
-    """
-    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
-        newds = super().format(ds, params)
-
-        def format_messages(row):
-            messages = []
-            if 'system_prompt' in params:
-                system_prompt = params['system_prompt']
-                messages.append({ "role": "system", "content": system_prompt})
-            messages.extend([{ "role": "user", "content": row['prompt']}, { "role": "assistant", "content": row['completion']}])
-            chat_row = {"messages": messages}
-            return chat_row
-
-        newds = newds.map(format_messages)
-        return _remove_all_columns_but(newds, ['messages'])
-
-def append_extension(path: str, extension: str) -> str:
-    suffix = "." + extension
-    if not path.endswith(suffix):
-        path = path + suffix
-    return path
-
-
-class JsonlDatasetExporter(DatasetExporter):
-    """
-    Exports the Dataset to a JSONL file
-    """
-
-    def export(self, ds: Dataset, output_path: str):
-        ds.to_json(append_extension(output_path, "jsonl"))
-
-
-class ParquetDatasetExporter(DatasetExporter):
-    """
-    Exports the Dataset to a Parquet file
-    """
-
-    def export(self, ds: Dataset, output_path: str):
-        ds.to_parquet(append_extension(output_path, "parquet"))
-
-
-def main():
-    """
-    When raft.py is executed from the command line.
-    """
-    args = get_args()
-    ds = load_dataset(args.input_type, data_files={"train": args.input})['train']
-    formatter = DatasetConverter()
-
-    if args.output_chat_system_prompt and args.output_format != "chat":
-        raise Exception("Parameter --output-chat-system-prompt can only be used with --output-format chat")
-
-    format_params = {}
-    if args.output_chat_system_prompt:
-        format_params['system_prompt'] = args.output_chat_system_prompt
-
-    formatter.convert(ds=ds, format=args.output_format, output_path=args.output, output_type=args.output_type, params=format_params)
-
-if __name__ == "__main__":
-    main()
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml
index cffbf27ba..1a7d07858 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml
@@ -7,30 +7,34 @@ COT_prompt_template: >
   <|start_header_id|>user<|end_header_id|>
   Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
+# question_prompt_template: >
+#   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a synthetic question-answer pair generator. Given a chunk of context about
+#   some topic(s), generate {num_questions} example questions a user could ask and would be answered
+#   using information from the chunk. For example, if the given context was a Wikipedia
+#   paragraph about the United States, an example question could be 'How many states are
+#   in the United States?
+#   The questions should be able to be answered in 100 words or less. Include only the
+#   questions in your response.<|eot_id|>
+#   <|start_header_id|>user<|end_header_id|>
+#   Context: {context}\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
 question_prompt_template: >
-  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a synthetic question-answer pair generator. Given a chunk of context about
-  some topic(s), generate {num_questions} example questions a user could ask and would be answered
-  using information from the chunk. For example, if the given context was a Wikipedia
-  paragraph about the United States, an example question could be 'How many states are
-  in the United States?
-  The questions should be able to be answered in 100 words or less. Include only the
-  questions in your response.<|eot_id|>
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a language model skilled in creating quiz questions.
+  You will be provided with a document,
+  read it and please generate factoid question and answer pairs that are most likely be asked by a user of Llama language models
+  which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1,	Meta Llama Guard 2
+  Your factoid questions should be answerable with a specific, concise piece of factual information from the context.
+  Your factoid questions should be formulated in the same style as questions users could ask in a search engine.
+  This means that your factoid questions MUST NOT mention something like "according to the passage" or "context".
+  please make sure you follow those rules:
+  1. Generate {num_questions} question answer pairs, you can generate less answer if there is nothing related to
+  model, training, fine-tuning and evaluation details of Llama language models,
+  2. The questions can be answered based *solely* on the given passage.
+  3. Avoid asking questions with similar meaning.
+  4. Never use any abbreviation.
+  5. The questions should be able to be answered in 60 words or less. Include only the questions in your response. <|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Context: {context}\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
-
-# question_prompt_template: >
-#   You are a language model skilled in creating quiz questions.
-#   You will be provided with a document,
-#   read it and please generate question and answer pairs that are most likely be asked by a user of Llama language models
-#   which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1,	Meta Llama Guard 2
-#   Output only the questions related to Llama:
-#   please make sure you follow those rules:
-#   1. Generate {num_questions} question answer pairs, you can generate less answer if there is nothing related to model, training, fine-tuning and evaluation details of Llama language models, .
-#   2. The questions can be answered based *solely* on the given passage.
-#   3. Avoid asking questions with similar meaning.
-#   4. Never use any abbreviation.
-#   5. Include only the questions in your response.
-
 data_dir: "./data"
 
 xml_path: ""
diff --git a/recipes/use_cases/end2end-recipes/raft/eval_raft.py b/recipes/use_cases/end2end-recipes/raft/raft_eval.py
similarity index 72%
rename from recipes/use_cases/end2end-recipes/raft/eval_raft.py
rename to recipes/use_cases/end2end-recipes/raft/raft_eval.py
index f6b94106a..b0ec7402b 100644
--- a/recipes/use_cases/end2end-recipes/raft/eval_raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_eval.py
@@ -8,10 +8,15 @@
 from langchain_openai import ChatOpenAI
 from langchain_community.embeddings import HuggingFaceEmbeddings
 from langchain_community.vectorstores import FAISS
-from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.text_splitter import RecursiveCharacterTextSplitter,TokenTextSplitter
+from langchain_community.vectorstores.utils import DistanceStrategy
+from datetime import datetime
 from langchain_community.document_loaders import DirectoryLoader
 import re
 import string
+import pandas as pd 
+from langchain.retrievers.document_compressors import FlashrankRerank
+from transformers import AutoTokenizer
 
 
 def generate_answers_model_only(model_name,question_list,api_url="http://localhost:8000/v1",key="EMPTY"):
@@ -36,28 +41,48 @@ def generate_answers_model_only(model_name,question_list,api_url="http://localho
 def format_docs_raft(docs):
     context = ""
     for doc in docs:
-        context += "" + str(doc.page_content) + "\n"
+        context += "\n" + str(doc.page_content) + "\n"
     return context
-def format_docs(docs):
-    return "\n\n".join(doc.page_content for doc in docs)
-def generate_answers_with_RAG(model_name, question_list,api_config,api_url_overwrite=None):
-    data_dir = api_config['data_dir']
-    api_url = "http://localhost:"+str(api_config['vllm_endpoint'])+"/v1"
-    if api_url_overwrite:
-        api_url = api_url_overwrite
-    key = api_config['api_key']
+def build_retriever(api_config,embedding_model_name,retrieved_docs_num=5):
     # Use langchain to load the documents from data directory
-    loader = DirectoryLoader(data_dir)
+    loader = DirectoryLoader(api_config['data_dir'])
     docs = loader.load()
     # Split the document into chunks with a specified chunk size
-    text_splitter = RecursiveCharacterTextSplitter(chunk_size=api_config["chunk_size"], chunk_overlap=int(api_config["chunk_size"]/10))
-    all_splits = text_splitter.split_documents(docs)
+    text_splitter = RecursiveCharacterTextSplitter(chunk_size=api_config["chunk_size"],chunk_overlap=int(api_config["chunk_size"] / 10),add_start_index=True,strip_whitespace=True)
+    # text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
+    #     AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B"),
+    #     chunk_size=api_config["chunk_size"],
+    #     chunk_overlap=int(api_config["chunk_size"] / 10),
+    #     add_start_index=True,
+    #     strip_whitespace=True,
+    #     separators=["\n\n", "\n", ".", " ", ""],
+    # )
+    docs_processed = text_splitter.split_documents(docs)
+    # Remove duplicates
+    unique_texts = {}
+    docs_processed_unique = []
+    for doc in docs_processed:
+        if doc.page_content not in unique_texts:
+            unique_texts[doc.page_content] = True
+            docs_processed_unique.append(doc)
 
     # Store the document into a vector store with a specific embedding model
-    vectorstore = FAISS.from_documents(all_splits, HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2",model_kwargs={'device': 'cuda'}))
+    embedding_model = HuggingFaceEmbeddings(
+        model_name=embedding_model_name,
+        model_kwargs={"device": "cuda"},
+        encode_kwargs={"normalize_embeddings": True},  # Set `True` for cosine similarity
+    )
+    vectorstore = FAISS.from_documents(docs_processed_unique, embedding_model, distance_strategy=DistanceStrategy.COSINE)
     retriever = vectorstore.as_retriever(
-        search_kwargs={"k": 5}
+        search_kwargs={"k": retrieved_docs_num},
     )
+    return retriever
+def generate_answers_with_RAG(model_name, question_list,api_config,retriever,api_url_overwrite=None):
+    api_url = "http://localhost:"+str(api_config['vllm_endpoint'])+"/v1"
+    if api_url_overwrite:
+        api_url = api_url_overwrite
+    key = api_config['api_key']
+    rerank_topk = api_config["rerank_topk"]
     # Load the RAFT model
     llm = ChatOpenAI(
         openai_api_key=key,
@@ -68,13 +93,14 @@ def generate_answers_with_RAG(model_name, question_list,api_config,api_url_overw
         )
     all_tasks = []
     for q in question_list:
-        # retrive the top 6 documents
-        retrieved_docs = retriever.invoke(q)
+        # retrieve the top K documents
+        retrieved_docs = retriever.invoke(q)
+        if rerank_topk:
+            # keep only the reranked top_n documents for the prompt
+            ranker = FlashrankRerank(top_n=rerank_topk)
+            retrieved_docs = ranker.compress_documents(retrieved_docs, q)
         # format the documents into a string
-        if '8B-Instruct' in model_name:
-            documents = format_docs(retrieved_docs)
-        else:
-            documents = format_docs_raft(retrieved_docs)
+
+        documents = format_docs_raft(retrieved_docs)
         # create a prompt
         text = api_config["RAG_prompt_template"].format(context=documents,question=q)
         all_tasks.append(text)
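
Editor's note: taken together, the added lines give each question a retrieve, optionally rerank, then format pipeline before prompting. A compact sketch of that flow under the same assumptions (a retriever from build_retriever and the FlashrankRerank compressor used above; the helper name is hypothetical):

```python
from langchain.retrievers.document_compressors import FlashrankRerank

def retrieve_context(retriever, question, rerank_topk=0):
    # Pull candidate chunks from the FAISS-backed retriever.
    docs = retriever.invoke(question)
    # Optionally narrow the candidates with the Flashrank cross-encoder reranker.
    if rerank_topk:
        docs = FlashrankRerank(top_n=rerank_topk).compress_documents(docs, question)
    # Join chunk texts into the context string consumed by RAG_prompt_template.
    return "\n".join(str(doc.page_content) for doc in docs)
```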
@@ -157,10 +183,10 @@ def compute_judge_score(questions: list, generated : list, reference: list, api_
         message = api_config['judge_prompt_template'].format(question=question,prediction=prediction,gold=gold)
         all_tasks.append(message)
     judge_responses = llm.batch(all_tasks)
-    judge_responses = ["YES" in item.content.split("")[-1] for item in judge_responses]
+    judge_responses = ["YES" in item.content for item in judge_responses]
     correct_num = sum(judge_responses)
     return correct_num/len(questions),judge_responses
-def score_single(api_config,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=True, run_llm_as_judge=True):
+def score_single(api_config,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=False, run_llm_as_judge=True):
     # set metric to default -1, means no metric is computed
     metric = {
         "Rouge_score": -1,
@@ -196,12 +222,18 @@ def main(api_config):
     try:
         api_url = "http://localhost:"+str(api_config["vllm_endpoint"])+"/v1"
         logging.info("Starting to generate answer given the eval set.")
-        with open(api_config["eval_json"]) as fp:
-            eval_json = json.load(fp)
         questions,groud_truth = [],[]
-        for index, item in enumerate(eval_json):
-            questions.append(item["question"])
-            groud_truth.append(item["answer"])
+        if api_config["eval_file"].endswith(".parquet"):
+            eval_file = pd.read_parquet(api_config["eval_file"],filters=[('source', '=', 'pt_discuss_forum')])
+            for index, item in eval_file.iterrows():
+                questions.append(item["question"]+"\nDetails:\n"+item["context"])
+                groud_truth.append(item["answer"])
+        else:
+            with open(api_config["eval_file"]) as fp:
+                eval_file = json.load(fp)
+                for index, item in enumerate(eval_file):
+                    questions.append(item["question"])
+                    groud_truth.append(item["answer"])
         generated_answers = {
             "RAFT": [],
             "RAFT_RAG": [],
@@ -211,29 +243,30 @@ def main(api_config):
             "70B_Base": [],
             
         }
-        # Generate answers for baseline
-        base_model_name = api_config["base_model_name"]
-        generated_answers["Baseline"] = generate_answers_model_only(base_model_name,questions,api_url)
-        generated_answers["Baseline_RAG"] = generate_answers_with_RAG(base_model_name, questions,api_config)
-        # Generate answers for RAFT
-        raft_model_name = api_config["raft_model_name"]
-        generated_answers["RAFT"] = generate_answers_model_only(raft_model_name,questions,api_url)
-        generated_answers["RAFT_RAG"] = generate_answers_with_RAG(raft_model_name, questions,api_config)
-
+        # build retriever
+        retriever = build_retriever(api_config,"sentence-transformers/multi-qa-mpnet-base-cos-v1",api_config["rag_topk"])
+        # Generate answers for 8B models
+        model_name = api_config["model_name"]
+        generated_answers[model_name] = generate_answers_model_only(model_name,questions,api_url)
+        generated_answers[model_name+"_RAG"] = generate_answers_with_RAG(model_name, questions,api_config,retriever)
+        print("Finished generating answers for ", model_name)
         large_model_name = "meta-llama/Meta-Llama-3-70B-Instruct"
         large_api_url = "http://localhost:"+str(api_config["judge_endpoint"])+"/v1"
         generated_answers["70B_Base"] = generate_answers_model_only(large_model_name,questions,large_api_url)
-        generated_answers["70B_RAG"] = generate_answers_with_RAG(large_model_name, questions,api_config,large_api_url,)
-        logging.info(f"Successfully generated {len(generated_answers['Baseline_RAG'])} answers for all models.")
+        generated_answers["70B_RAG"] = generate_answers_with_RAG(large_model_name, questions,api_config,retriever,large_api_url)
+        print("Finished generating answers for ", large_model_name)
+        logging.info(f"Successfully generated {len(generated_answers[model_name])} answers for all models.")
         # for the generated answers from each model, compute the score metrics
         all_metrics = []
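+        # timestamp the output log so repeated eval runs do not overwrite each other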
+        output_file = api_config["output_log"]+str(datetime.now().strftime("%Y%m%d_%H%M%S"))
+
         for model_name,model_answer in generated_answers.items():
             if len(model_answer) != len(groud_truth):
                 print(f"The length of {model_name} answer is not equal to the length of ground truth.")
                 continue
             metric = score_single(api_config,model_answer,groud_truth,questions)
             print(f"The eval result for {model_name} is: {metric}")
-            with open(api_config["output_log"],"a") as fp:
+            with open(output_file,"a") as fp:
                 fp.write(f"Eval_result for {model_name} \n")
                 fp.write(f"Rouge_score: {metric['Rouge_score']} \n")
                 fp.write(f"BERTScore Precision: {metric['BERTScore_Precision']:.4f}, Recall: {metric['BERTScore_Recall']:.4f}, F1: {metric['BERTScore_F1']:.4f} \n")
@@ -254,20 +287,21 @@ def main(api_config):
         # Now we want to take a closer look at the questions that are not answered the same by all the models.
         judge_zip = list(zip(*[item[-1] for item in all_metrics]))
         model_names = [item[0] for item in all_metrics]
-        with open(api_config["output_log"],"a") as fp:
+        with open(output_file,"a") as fp:
             for item in all_metrics:
                 fp.write(f"Model_Name: {item[0]}, LLM_SCORE: {item[1]} \n")
             for idx,item in enumerate(judge_zip):
-                # if all the responses are "YES" or all the responses are "NO", then we skip this question
-                if sum([r=="YES" for r in item]) == len(item) or sum([r=="YES" for r in item]) == 0:
+                # if all the responses are "YES", then we skip this question
+                if sum(item) == len(item):
                     continue 
                 else:
                     fp.write(f"Comparing interested question: {questions[idx]} \n")
                     fp.write(f"groud_truth: {groud_truth[idx]} \n")
                     for i in range(len(model_names)):
                         fp.write(f"{item[i]} {model_names[i]}_answers: {generated_answers[model_names[i]][idx]} \n")
-                    fp.write("-------\n")
-
+                    fp.write("------------------------\n")
+            fp.write(json.dumps(all_metrics))
+        print("Finished evaluating the model.")
 
 
         logging.info(f"Eval successfully, the eval result is saved to {api_config['output_log']}.")
@@ -281,13 +315,13 @@ def parse_arguments():
         description="Generate question/answer pairs from documentation."
     )
     parser.add_argument(
-        "-m", "--raft_model_name",
+        "-m", "--model_name",
         default=None,
-        help="Provide the raft_model_name to use for evaluation. If not specified, the model_path in eval_config.yaml will be used."
+        help="Provide the model_name to use for evaluation. If not specified, the model_path in eval_config.yaml will be used."
     )
     parser.add_argument(
         "-c", "--config_path",
-        default="eval_config.yaml",
+        default="raft_eval_config.yaml",
         help="Set the configuration file path that has system prompt along with language, evalset path."
     )
     parser.add_argument(
@@ -309,8 +343,8 @@ def parse_arguments():
     )
     parser.add_argument(
         "-o", "--output_log",
-        default="eval_result.log",
-        help="save the eval result to a log file. Default is eval_result.log"
+        default="./eval_result",
+        help="save the eval result to a log file. Default is eval_result[timestamp].log"
     )
     parser.add_argument(
         "-k", "--api_key",
@@ -318,6 +352,18 @@ def parse_arguments():
         type=str,
         help="LLM API key for generating question/answer pairs."
     )
+    parser.add_argument(
+        "-r", "--rag_topk",
+        default=5,
+        type=int,
+        help="set the number of top k documents the RAG needs to retrive."
+    )
+    parser.add_argument(
+        "--rerank_topk",
+        default=0,
+        type=int,
+        help="set the number of top k documents the reranker needs to retrive."
+    )
     parser.add_argument("--chunk_size", type=int, default=1000, help="The character size of each chunk used in RAG")
     return parser.parse_args()
 
@@ -329,11 +375,15 @@ def parse_arguments():
     if args.data_dir:
         api_config["data_dir"] = args.data_dir
-    if args.raft_model_name:
-        api_config["raft_model_name"] = args.raft_model_name
+    if args.model_name:
+        api_config["model_name"] = args.model_name
     api_config["judge_endpoint"] = args.judge_endpoint
     api_config["output_log"] = args.output_log
     api_config["api_key"] = args.api_key
     api_config["chunk_size"] = args.chunk_size
+    api_config["rag_topk"] = args.rag_topk
+    api_config["rerank_topk"] = args.rerank_topk
+    if api_config["rag_topk"] < api_config["rerank_topk"]:
+        logging.error("The rerank_topk should be smaller than rag_topk.")
     if api_config["judge_endpoint"]:
         logging.info(f"Use local vllm service for judge at port: '{args.judge_endpoint}'.")
     main(api_config)
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
new file mode 100644
index 000000000..819445300
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
@@ -0,0 +1,36 @@
+eval_prompt_template: >
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an AI assistant skilled in answering questions related to Llama language models,
+  including Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1 and Meta Llama Guard 2.
+  Below is a question from a Llama user, please answer it with the best of your knowledge.
+  The returned answer should be no more than 100 words. Please return the answer as text directly without any special tokens.<|eot_id|>
+  <|start_header_id|>user<|end_header_id|>
+  Question:{question} \n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+judge_prompt_template: >
+    <|begin_of_text|><|start_header_id|>system<|end_header_id|>You have been provided with a question, a teacher's answer and a student's answer below.
+    Given that question, you need to score how good the student's answer is compared to
+    the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES, else return NO.
+    Here are the grading criteria to follow:
+    1. Review it carefully to make sure that the keywords and numerical values are exactly the same.
+    2. Ensure that the student's answer does not contain any conflicting statements.
+    3. It is OK if the student's answer contains more information than the ground truth answer, as long as it is factually accurate relative to the ground truth answer.
+    YES means that the student's answer meets all of the criteria.
+    NO means that the student's answer does not meet all of the criteria. This is the lowest possible score you can give.
+    Only respond with "YES" or "NO", do not respond with anything else.<|eot_id|>
+    <|start_header_id|>user<|end_header_id|>
+    Question: {question} \n Teacher's Answer: {gold} \n Student's Answer: {prediction} <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+RAG_prompt_template: >
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here are things to pay attention to:
+    1. The context contains many documents, each document starts with <DOCUMENT> and ends with </DOCUMENT>.
+    2. First provide step-by-step reasoning on how to answer the question.
+    3. In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy pasted from the context.
+    4. End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
+    You MUST begin your final answer with the tag "<ANSWER>:".<|eot_id|>
+  <|start_header_id|>user<|end_header_id|>
+  Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+eval_file: "./eval_llama.json"
+
+model_name: "raft-8b"
+
+data_dir: "./data"
+
+rag_topk: 5
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
index cc7f318cd..3fdf9646f 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
@@ -3,8 +3,7 @@
 
 import os
 import logging
-from langchain_community.embeddings import HuggingFaceEmbeddings
-from langchain_experimental.text_splitter import SemanticChunker
+from langchain.text_splitter import RecursiveCharacterTextSplitter
 from math import ceil
 from datasets import Dataset
 import random
@@ -90,7 +89,7 @@ def read_file_content(xml_path: str, data_folder: str) -> str:
 def get_chunks(
     text: str,
     chunk_size: int = 512,
-    embedding_model: str = None
+    api_config: dict = None,
 ) -> list[str]:
     """
     Takes in a `file_path` and `doctype`, retrieves the document, breaks it down into chunks of size
@@ -102,7 +101,7 @@ def get_chunks(
     else:
         num_chunks = ceil(len(text) / chunk_size)
         logging.info(f"Splitting text into {num_chunks} chunks")
-        text_splitter = SemanticChunker(embedding_model, number_of_chunks=num_chunks)
+        text_splitter = RecursiveCharacterTextSplitter(chunk_size=api_config["chunk_size"], chunk_overlap=int(api_config["chunk_size"]/10))
         chunks = text_splitter.create_documents([text])
         chunks = [chunk.page_content for chunk in chunks]
 
@@ -116,8 +115,7 @@ def generate_questions(api_config):
     document_text = read_file_content(api_config["xml_path"],api_config["data_dir"])
     if len(document_text) == 0:
         logging.info(f"Error reading files, document_text is {len(document_text)}")
-    embedding_model = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2",model_kwargs={'device': 'cuda'})
-    document_batches = get_chunks(document_text,api_config["chunk_size"],embedding_model)
+    document_batches = get_chunks(document_text,api_config["chunk_size"],api_config)
     # use OpenAI API protocol to hanlde the chat request, including local VLLM openai compatible server
     llm = ChatOpenAI(
         openai_api_key=key,

From ca74a840f9550cd033404f7cb30bb12403096406 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Thu, 20 Jun 2024 12:48:27 -0700
Subject: [PATCH 23/35] clean up

---
 recipes/finetuning/datasets/raft_dataset.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py
index 6d4b6d472..0462bc469 100644
--- a/recipes/finetuning/datasets/raft_dataset.py
+++ b/recipes/finetuning/datasets/raft_dataset.py
@@ -88,7 +88,7 @@ def raft_tokenize(q_a_pair, tokenizer):
     return tokenize_dialog(chat, tokenizer)
 
 
-def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8):
+def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.9):
     # load_dataset will return DatasetDict that contains all the data in the train set
     dataset_dict = load_dataset('json', data_files=dataset_config.data_path)
     dataset = dataset_dict['train']

From af53ee051ea5e780660080a4fb95536623a37cf4 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Fri, 21 Jun 2024 10:19:12 -0700
Subject: [PATCH 24/35] added refusal and adjusted prompts

---
 .../use_cases/end2end-recipes/raft/README.md  | 40 ++++++------
 .../use_cases/end2end-recipes/raft/raft.py    |  8 +--
 .../use_cases/end2end-recipes/raft/raft.yaml  | 65 ++++++++++---------
 .../end2end-recipes/raft/raft_eval.py         |  2 +-
 .../raft/raft_eval_config.yaml                | 17 ++---
 .../end2end-recipes/raft/raft_utils.py        | 27 +++++---
 6 files changed, 85 insertions(+), 74 deletions(-)

diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md
index d3d0795de..5be68b9a3 100644
--- a/recipes/use_cases/end2end-recipes/raft/README.md
+++ b/recipes/use_cases/end2end-recipes/raft/README.md
@@ -1,5 +1,5 @@
 ## Introduction:
-As our Meta llama models become more popular, we noticed there is a great demand to apply our Meta Llama models toward a custom domain to better serve the customers in that domain.
+As our Meta Llama models become more popular, we noticed that there is a great demand to apply our Meta Llama models to custom domains to better serve the customers in those domains.
 For example, a common scenario can be that a company has all the related documents in plain text for its custom domain and wants to build a chatbot that can help answer questions a client
 could have.
 
@@ -38,7 +38,7 @@ We can use on prem solutions such as the [TGI](../../../../inference/model_serve
 
 ```bash
 # Make sure VLLM has been installed
-CUDA_VISIBLE_DEVICES=6,7 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
+CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
 ```
 
 **NOTE** Please make sure the port has not been used. Since Meta Llama3 70B instruct model requires at least 135GB GPU memory, we need to use multiple GPUs to host it in a tensor parallel way.
@@ -58,12 +58,14 @@ python raft.py -u "CLOUD_API_URL" -t 5
 
 **NOTE** When using a cloud API, you need to be aware of the RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day) limits on your account with your model API provider. This is experimental, and results totally depend on your documents, the wealth of information in them, and how you prefer to handle questions, with short or longer answers etc.
 
-This python program will read all the documents inside of "data" folder and transform the text into embeddings and split the data into batches by the SemanticChunker. Then we apply the question_prompt_template, defined in "raft.yaml", to each batch, and finally we will use each batch to query VLLM server and save the return a list of question list for all batches.
+This python script will read all the documents either from local files or the web, and split the data into text chunks of 1000 characters (defined by "chunk_size") using RecursiveCharacterTextSplitter.
+Then we apply the question_prompt_template, defined in "raft.yaml", to each chunk to get a question list out of that text chunk.
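+
+As a rough sketch of what this chunking step does (illustrative only; the file path is an assumption, and the 10% chunk overlap mirrors the default in raft_utils.py):
+
+```python
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+
+# Illustrative only: read a single local file; the recipe reads a whole folder or a sitemap.
+document_text = open("./data/llama_website0613").read()
+
+chunk_size = 1000  # the "chunk_size" default discussed above
+splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_size // 10)
+chunks = [doc.page_content for doc in splitter.create_documents([document_text])]
+print(f"Split document into {len(chunks)} chunks")
+```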
 
-We now have a related context as text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style question using Llama 3 70B Instruct as well. Once we have the COT answers, we can start to make a dataset that contains "instruction" which includes some unrelated chunks called distractor and has a probability P to include the related chunk.
+We now have a related context as a text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style answer using Llama 3 70B Instruct as well.
+Once we have the COT answers, we can start to make a dataset where each sample's "instruction" section includes some unrelated chunks called distractors, and has a probability P of including the related chunk.
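+
+The snippet below is a minimal sketch of this distractor mechanism, assuming `chunks` is the list of text chunks; the real logic lives in `add_chunk_to_dataset` in raft_utils.py, and the patched version additionally appends a separate refusal sample rather than only swapping the oracle chunk out:
+
+```python
+import random
+
+num_distract, oracle_p = 5, 0.8  # mirror num_distract_docs / oracle_p in raft.yaml
+
+def build_instruction(chunks, i, question):
+    """Build the 'instruction' for chunk i: the oracle (related) chunk plus shuffled distractors."""
+    indices = [j for j in range(len(chunks)) if j != i]
+    docs = [chunks[i]] + [chunks[j] for j in random.sample(indices, num_distract)]
+    # with probability (1 - P), swap the oracle chunk out so the model learns to refuse
+    if random.uniform(0, 1) >= oracle_p:
+        docs[0] = chunks[random.choice(indices)]
+    random.shuffle(docs)
+    return "".join("<DOCUMENT>" + str(d) + "</DOCUMENT>\n" for d in docs) + question
+```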
 
-Here is a RAFT format json example. We have a "question" section for the generated question, "cot_answer" section for generated COT answers, where the final answer will be added after "" token, and we also created a "instruction" section
-that has all the documents included (each document splited by  <\/DOCUMENT>) and finally the question appended in the very end. This "instruction"
+Here is a RAFT format json example from our saved raft.jsonl file. We have a "question" section for the generated question, a "cot_answer" section for the generated COT answer, where the final answer is added after the "<ANSWER>" token, and we also created an "instruction" section
+that has all the documents included (each document wrapped by <DOCUMENT> <\/DOCUMENT> tags) and finally the question appended at the very end. This "instruction"
 section will be the input during the training, and the "cot_answer" will be the output label that the loss will be calculated on.
 
 ```python
@@ -98,23 +100,22 @@ section will be the input during the training, and the "cot_answer" will be the
    "instruction":" DISTRACT_DOCS 1 <\/DOCUMENT>... DISTRACT_DOCS 5 <\/DOCUMENT>\nWhat is the context length supported by Llama 3 models?"
 }
 ```
-To create a evalset, we can shuffle and select 100 examples out of RAFT dataset. For evaluation purpose, we only need to keep the "question" section, and the final answer section in
-"cot_answer",
+To create an evalset, ideally we should use human annotation to create the question and answer pairs to make sure the questions are relevant and the answers are fully correct.
+However, for demo purposes, we will use a subset of the training json as the eval set. We can shuffle and randomly select 100 examples out of the RAFT dataset. For evaluation purposes, we only need to keep the "question" section,
+and the final answer section, marked by the <ANSWER> tag in "cot_answer". Then we can manually check each example and remove those low-quality examples where the questions
+are not related to Llama or cannot be answered without the correct context. After the manual check, we keep 72 question and answer pairs as the eval_llama.json.
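+
+A minimal sketch of this evalset extraction (the file names follow this recipe; the exact `<ANSWER>:` split token is an assumption based on the cot_answer format shown above):
+
+```python
+import json
+import random
+
+# Illustrative: carve a small evalset out of the RAFT training data (jsonl, one sample per line).
+with open("raft.jsonl") as f:
+    samples = [json.loads(line) for line in f]
+
+random.shuffle(samples)
+evalset = []
+for s in samples[:100]:
+    # keep the question, and only the final answer after the <ANSWER> tag in cot_answer
+    answer = s["cot_answer"].split("<ANSWER>:")[-1].strip()
+    evalset.append({"question": s["question"], "answer": answer})
+
+# the manual quality check happens before saving the final 72 pairs
+with open("eval_llama.json", "w") as f:
+    json.dump(evalset, f, indent=2)
+```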
 
 ### Step 3: Run the fine-tuning
-Once the RAFT dataset is ready, we can start the full fine-tuning step using the following commands in the llama-recipe main folder:
+Once the RAFT dataset is ready in a json format, we can start the fine-tuning step. Unfortunately, we found out that the LoRA method did not produce good results, so we have to use full fine-tuning, using the following commands in the llama-recipes main folder:
 
-For distributed fine-tuning:
 ```bash
-CUDA_VISIBLE_DEVICES=0,1,2,3  torchrun --nnodes 1 --nproc_per_node 4  recipes/finetuning/finetuning.py --lr 1e-5 --context_length 8192 --enable_fsdp  --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir pt_ep1_full0614 --num_epochs 1 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/raft.jsonl'
+torchrun --nnodes 1 --nproc_per_node 4  recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 1 --model_name meta-llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder PATH_TO_ROOT_FOLDER --dist_checkpoint_folder fine-tuned  --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'PATH_TO_RAFT_JSON'
 ```
-```bash
-torchrun --nnodes 1 --nproc_per_node 4  recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 2 --model_name meta-llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder llama+pt_ep1_full0616 --dist_checkpoint_folder fine-tuned  --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb  --run_validation True  --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/raft/pytorch_data/all_17k.jsonl'
-```
-Then convert the FSDP checkpoint to HuggingFace checkpoints using:
+
+Then convert the FSDP checkpoint to HuggingFace checkpoint using the following command:
 
 ```bash
-python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path  /home/kaiwu/work/llama-recipes/llama+pt_ep1_full0616/fine-tuned-meta-llama/Meta-Llama-3-8B-Instruct --consolidated_model_path /home/kaiwu/work/llama-recipes/llama+pt_ep1_full0616/fine-tuned-meta-llama --HF_model_path_or_name /home/kaiwu/work/llama-recipes/llama+pt_ep1_full0616/
+python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path  PATH_TO_ROOT_FOLDER --consolidated_model_path PATH_TO_ROOT_FOLDER/fine-tuned-meta-llama --HF_model_path_or_name PATH_TO_ROOT_FOLDER
 
 ```
 
@@ -122,7 +123,8 @@ For more details, please check the readme in the finetuning recipe.
 
 ### Step 4: Evaluating with local inference
 
-Once we have the fine-tuned model, we now need to evaluate it to understand its performance. Normally, to create a evaluation set, we should first gather some questions and manually write the ground truth answer. In this case, we created a eval set mostly based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. Then we pass the evalset question to our fine-tuned model to get the model generated answers. To compare the model generated answers with ground truth, we can use either traditional eval method, eg. calcucate rouge score, or use LLM to act like a judge to score the similarity of them.
+Once we have the fine-tuned model, we now need to evaluate it to understand its performance. We can use traditional eval methods, e.g. calculating the exact match rate or ROUGE score.
+In this tutorial, we will also use an LLM to act as a judge to score the model-generated answers.
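+
+For example, the traditional metrics could be computed along these lines (a sketch assuming the `rouge_score` package; the recipe's `score_single()` also reports BERTScore and the LLM-judge score):
+
+```python
+from rouge_score import rouge_scorer
+
+def exact_match_rate(generated, reference):
+    """Fraction of generated answers that match the ground truth exactly."""
+    return sum(g.strip() == r.strip() for g, r in zip(generated, reference)) / len(reference)
+
+def mean_rougeL(generated, reference):
+    """Average ROUGE-L F1 between generated answers and the ground truth."""
+    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
+    scores = [scorer.score(r, g)["rougeL"].fmeasure for g, r in zip(generated, reference)]
+    return sum(scores) / len(scores)
+```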
 
 
 ```bash
@@ -142,10 +144,10 @@ On another terminal, we can use another Meta Llama 3 70B Instruct model as a jud
 CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server  --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8002
 ```
 
-Then we can pass the port to the eval script:
+Then we can pass the ports to the eval script:
 
 ```bash
-CUDA_VISIBLE_DEVICES=5 python raft_eval.py -m raft-8b -v 8000 -j 8001 -o all_rag5 -r 5
+CUDA_VISIBLE_DEVICES=1 python raft_eval.py -m raft-8b -v 8000 -j 8002 -r 5
 ```
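+
+Under the hood, the eval script fills the judge_prompt_template from raft_eval_config.yaml with each (question, ground truth, model answer) triple and asks the 70B judge for a YES/NO verdict. Below is a hedged sketch of such a call, not the exact raft_eval.py code; the port and model name follow the judge command above, and the example answers are illustrative:
+
+```python
+import yaml
+from langchain_openai import ChatOpenAI
+
+config = yaml.safe_load(open("raft_eval_config.yaml"))
+judge = ChatOpenAI(
+    openai_api_key="EMPTY",  # vllm's OpenAI-compatible server does not check the key
+    openai_api_base="http://localhost:8002/v1",
+    model_name="meta-llama/Meta-Llama-3-70B-Instruct",
+    temperature=0.0,
+)
+prompt = config["judge_prompt_template"].format(
+    question="What is the context length supported by Llama 3 models?",
+    gold="Llama 3 models support a context length of 8K tokens.",
+    prediction="8K tokens.",
+)
+verdict = judge.invoke(prompt).content.strip()  # expected to be "YES" or "NO"
+```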
 
 
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py
index 9b0f7e692..da39e08df 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft.py
@@ -1,7 +1,4 @@
 import logging
-from typing import Literal, Any
-import json
-import random
 import os
 import argparse
 from raft_utils import generate_questions, add_chunk_to_dataset
@@ -10,8 +7,6 @@
 
 logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
 
-NUM_DISTRACT_DOCS = 5 # number of distracting documents to add to each chunk
-ORCALE_P = 0.8 # probability of related documents to be added to each chunk
 def main(api_config):
     ds = None
     try:
@@ -26,7 +21,7 @@ def main(api_config):
             for question in questions:
                 logging.info(f"Question: {question}")
         logging.info(f"Successfully generated {sum([len(q) for c,q in chunk_questions_zip])} question/answer pairs.")
-        ds = add_chunk_to_dataset(chunk_questions_zip,api_config,ds,NUM_DISTRACT_DOCS, ORCALE_P)
+        ds = add_chunk_to_dataset(chunk_questions_zip,api_config,ds)
         ds.save_to_disk(args.output)
         logging.info(f"Data successfully written to {api_config['output']}. Process completed.")
         formatter = DatasetConverter()
@@ -92,6 +87,7 @@ def parse_arguments():
         api_config["api_key"] = os.environ["API_KEY"]
     logging.info(f"Configuration loaded. Generating {args.questions_per_chunk} question per chunk using model '{args.model}'.")
     logging.info(f"Chunk size: {args.chunk_size}.")
+    logging.info(f"num_distract_docs: {api_config['num_distract_docs']}, orcale_p: {api_config['orcale_p']}")
     logging.info(f"Will use endpoint_url: {args.endpoint_url}.")
     logging.info(f"Output will be written to {args.output}.")
     main(api_config)
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml
index 1a7d07858..d1c891843 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml
@@ -1,40 +1,43 @@
 COT_prompt_template: >
-  <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here are things to pay attention to:
-    - First provide step-by-step reasoning on how to answer the question.
-    - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy pasted from the context.
-    - End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
-    You MUST begin your final answer with the tag "<ANSWER>:" <|eot_id|>
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful chatbot who can provide an answer to every question from the user given a relevant context.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
-  Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-
-# question_prompt_template: >
-#   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a synthetic question-answer pair generator. Given a chunk of context about
-#   some topic(s), generate {num_questions} example questions a user could ask and would be answered
-#   using information from the chunk. For example, if the given context was a Wikipedia
-#   paragraph about the United States, an example question could be 'How many states are
-#   in the United States?
-#   The questions should be able to be answered in 100 words or less. Include only the
-#   questions in your response.<|eot_id|>
-#   <|start_header_id|>user<|end_header_id|>
-#   Context: {context}\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+  Question: {question}\nContext: {context}\n
+  Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
+  - The context contains many documents, each document starts with <DOCUMENT> and ends with </DOCUMENT>.
+  - First provide step-by-step reasoning on how to answer the question.
+  - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy pasted from the context.
+  - End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
+  You MUST begin your final answer with the tag "<ANSWER>:" <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
 question_prompt_template: >
-  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a language model skilled in creating quiz questions.
-  You will be provided with a document,
-  read it and please generate factoid question and answer pairs that are most likely be asked by a user of Llama language models
-  which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1,	Meta Llama Guard 2
-  Your factoid questions should be answerable with a specific, concise piece of factual information from the context.
-  Your factoid questions should be formulated in the same style as questions users could ask in a search engine.
-  This means that your factoid questions MUST NOT mention something like "according to the passage" or "context".
-  please make sure you follow those rules:
-  1. Generate {num_questions} question answer pairs, you can generate less answer if there is nothing related to
-  model, training, fine-tuning and evaluation details of Llama language models,
-  2. The questions can be answered based *solely* on the given passage.
-  3. Avoid asking questions with similar meaning.
-  4. Never use any abbreviation.
-  5. The questions should be able to be answered in 60 words or less. Include only the questions in your response. <|eot_id|>
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a synthetic question-answer pair generator. Given a chunk of context about
+  some topic(s), generate {num_questions} example questions a user could ask and would be answered
+  using information from the chunk. For example, if the given context was a Wikipedia
+  paragraph about the United States, an example question could be 'How many states are
+  in the United States?'
+  Your questions should be formulated in the same style as questions that users could ask in a search engine.
+  This means that your questions MUST NOT mention something like "according to the passage" or "context".
+  The questions should be able to be answered in 60 words or less. Include only the questions in your response.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Context: {context}\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+
+# question_prompt_template: >
+#   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a language model skilled in creating quiz questions.
+#   You will be provided with a document,
+#   read it and please generate factoid question and answer pairs that are most likely be asked by a user of Llama language models
+#   which includes LLama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1,	Meta Llama Guard 2
+#   Your factoid questions should be answerable with a specific, concise piece of factual information from the context.
+#   Your factoid questions should be formulated in the same style as questions users could ask in a search engine.
+#   This means that your factoid questions MUST NOT mention something like "according to the passage" or "context".
+#   please make sure you follow those rules:
+#   1. Generate {num_questions} question answer pairs, you can generate less answer if there is nothing related to
+#   model, training, fine-tuning and evaluation details of Llama language models,
+#   2. The questions can be answered based *solely* on the given passage.
+#   3. Avoid asking questions with similar meaning.
+#   4. Never use any abbreviation.
+#   5. The questions should be able to be answered in 60 words or less. Include only the questions in your response. <|eot_id|>
+#   <|start_header_id|>user<|end_header_id|>
+#   Context: {context}\n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 data_dir: "./data"
 
 xml_path: ""
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval.py b/recipes/use_cases/end2end-recipes/raft/raft_eval.py
index b0ec7402b..49804de45 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_eval.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_eval.py
@@ -8,7 +8,7 @@
 from langchain_openai import ChatOpenAI
 from langchain_community.embeddings import HuggingFaceEmbeddings
 from langchain_community.vectorstores import FAISS
-from langchain.text_splitter import RecursiveCharacterTextSplitter,TokenTextSplitter
+from langchain.text_splitter import RecursiveCharacterTextSplitter
 from langchain_community.vectorstores.utils import DistanceStrategy
 from datetime import datetime
 from langchain_community.document_loaders import DirectoryLoader
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
index 819445300..0c4bff185 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
@@ -2,7 +2,7 @@ eval_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are an AI assistant skilled in answering questions related to Llama language models,
   including Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1 and Meta Llama Guard 2.
   Below is a question from a Llama user, please answer it with the best of your knowledge.
-  The returned answer should be no more than 100 words. Please return the answer as text directly without any special tokens.<|eot_id|>
+  The returned answer should be no more than 60 words. Please return the answer as text directly without any special tokens.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Question:{question} \n <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 judge_prompt_template: >
@@ -19,14 +19,15 @@ judge_prompt_template: >
     <|start_header_id|>user<|end_header_id|>
     Question: {question} \n Teacher's Answer: {gold} \n Student's Answer: {prediction} <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 RAG_prompt_template: >
-  <|begin_of_text|><|start_header_id|>system<|end_header_id|> Answer the following question using the information given in the context below. Here are things to pay attention to:
-    1. The context contains many documents, each document starts with <DOCUMENT> and ends with </DOCUMENT>.
-    2. First provide step-by-step reasoning on how to answer the question.
-    3. In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy pasted from the context.
-    4. End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
-    You MUST begin your final answer with the tag "<ANSWER>:".<|eot_id|>
+  <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful chatbot who can provide an answer to every question from the user given a relevant context.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
-  Question: {question}\nContext: {context}\n<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+  Question: {question}\nContext: {context}\n
+  Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
+  - The context contains many documents, each document starts with <DOCUMENT> and ends with </DOCUMENT>.
+  - First provide step-by-step reasoning on how to answer the question.
+  - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy pasted from the context.
+  - End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
+  You MUST begin your final answer with the tag "<ANSWER>:" <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 eval_file: "./eval_llama.json"
 
 model_name: "raft-8b"
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
index 3fdf9646f..9bdcf94cc 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_utils.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft_utils.py
@@ -9,7 +9,7 @@
 import random
 from langchain_community.document_loaders import SitemapLoader,DirectoryLoader
 from bs4 import BeautifulSoup
-
+import copy
 from langchain_openai import ChatOpenAI
 
 
@@ -171,12 +171,12 @@ def add_chunk_to_dataset(
     chunk_questions_zip: list,
     api_config: dict,
     ds,
-    num_distract: int = 3,
-    p: float = 0.8,
 ) -> None:
     """
     Given a chunk and related questions lists, create {Q, A, D} triplets and add them to the dataset.
     """
+    num_distract = api_config["num_distract_docs"]
+    p = api_config["oracle_p"]
     chunks = [chunk for chunk, _ in chunk_questions_zip]
     COT_results = generate_COT(chunk_questions_zip,api_config)
     for chunk, q , cot in COT_results:
@@ -198,12 +198,8 @@ def add_chunk_to_dataset(
         indices.remove(i)
         for j in random.sample(indices, num_distract):
             docs.append(chunks[j])
-        # decides whether to add oracle document
-        oracle = random.uniform(0, 1) < p
-        if not oracle:
-            docs[0] = chunks[random.sample(indices, 1)[0]]
+        doc_copy = docs.copy()
         random.shuffle(docs)
-
         d = {
             "title": [],
             "sentences": []
@@ -221,7 +217,7 @@ def add_chunk_to_dataset(
         context += q
         # This instruction will be used in the fine-tuning stage
         datapt["instruction"] = context
-
+        datapt_copy = copy.deepcopy(datapt)
         # add to dataset
         if not ds:
             # init ds
@@ -235,4 +231,17 @@ def add_chunk_to_dataset(
             ds = Dataset.from_dict(datapt)
         else:
             ds = ds.add_item(datapt)
+        # decide whether to add a refusal example where the related documents are not provided
+        oracle = random.uniform(0, 1) < p
+        if not oracle:
+            doc_copy[0] = chunks[random.sample(indices, 1)[0]]
+            random.shuffle(doc_copy)
+            context = ""
+            for doc in doc_copy:
+                context += "" + str(doc) + "\n"
+            context += q
+            # This instruction will be used in the fine-tuning stage
+            datapt_copy["instruction"] = context
+            datapt_copy["cot_answer"] = "Sorry, I don't know the answer to this question because related documents are not found. Please try again."
+            ds = ds.add_item(datapt_copy)
     return ds

From 890d49d45b69110344c78518f89b3ff0424557b3 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Fri, 21 Jun 2024 10:38:31 -0700
Subject: [PATCH 25/35] changed raft_dataset.py

---
 recipes/finetuning/datasets/raft_dataset.py   |  19 ++--
 .../raft/data/llama_website0613               | 103 ++++++++++++++++++
 .../use_cases/end2end-recipes/raft/raft.yaml  |   2 +-
 .../raft/raft_eval_config.yaml                |   2 +-
 4 files changed, 116 insertions(+), 10 deletions(-)
 create mode 100644 recipes/use_cases/end2end-recipes/raft/data/llama_website0613

diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py
index 0462bc469..eb1f36937 100644
--- a/recipes/finetuning/datasets/raft_dataset.py
+++ b/recipes/finetuning/datasets/raft_dataset.py
@@ -64,21 +64,24 @@ def tokenize_dialog(dialog, tokenizer):
 
     return dict(combined_tokens, attention_mask=[1]*len(combined_tokens["input_ids"]))
 def raft_tokenize(q_a_pair, tokenizer):
-    # last line is the question
-    question = q_a_pair["instruction"].split('\n')[-1]
-    # all the lines before the last line are the context
-    documents = q_a_pair["instruction"].split('\n')[:-1]
+    end_tag = "<\/DOCUMENT>\n"
+    # find the last end_tag in the instruction, the rest is the question
+    index = q_a_pair["instruction"].rindex("<\/DOCUMENT>\n") + len(end_tag)
+    question = q_a_pair["instruction"][index:]
+    # all the lines before end_tag are the context
+    documents = q_a_pair["instruction"][:index]
     # output is the label
     answer = q_a_pair["output"]
     system_prompt = "You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context."
     user_prompt = """
         Question: {question}\nContext: {context}\n
-        Answer this question using the information given multiple documents in the context above. Here is things to pay attention to:
+        Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
+        - The context contains many documents, each document starts with <DOCUMENT> and ends with </DOCUMENT>.
         - First provide step-by-step reasoning on how to answer the question.
         - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
-        - End your response with the final answer in the form <ANSWER>: $answer, the answer should be succinct.
-        You MUST begin your final answer with the tag "<ANSWER>:".
-    """.format(question=question, context=str(documents))
+        - End your response with the final answer in the form <ANSWER>: $answer, the answer should be less than 60 words.
+        You MUST begin your final answer with the tag "<ANSWER>:".
+    """.format(question=question, context=documents)
 
     chat = [
     {"role": "system", "content": system_prompt},
diff --git a/recipes/use_cases/end2end-recipes/raft/data/llama_website0613 b/recipes/use_cases/end2end-recipes/raft/data/llama_website0613
new file mode 100644
index 000000000..33ab39ffb
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/data/llama_website0613
@@ -0,0 +1,103 @@
+Meta Llama Skip to main content Technology Getting Started Trust & Safety Community Resources Discover the possibilities with Meta Llama Democratizing access through an open platform featuring AI models, tools, and resources — enabling developers to shape the next wave of innovation. Licensed for both research and commercial use Get Started Llama models and tools Meta Llama 3 Build the future of AI with Meta Llama 3 Llama 3 is an accessible, open-source large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. Learn more Meta Code Llama A state-of-the-art large language model for coding LLM capable of generating code, and natural language about code, from both code and natural language prompts. Meta Llama Guard Empowering developers, advancing safety, and building an open ecosystem We’re announcing Meta Llama Guard, an umbrella project featuring open trust and safety tools and evaluations meant to level the playing field for developers. Ready to start building with Meta Llama? Access our getting started guide and responsible use resources to get started. Get started guide Responsible use guide Prompt Engineering with Meta Llama Learn how to effectively use Llama models for prompt engineering with our free course on Deeplearning.AI, where you'll learn best practices and interact with the models through a simple API call. Partnerships Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Latest Llama updates Introducing Meta Llama 3: The most capable openly available LLM to date Read more Meet Your New Assistant: Meta AI, Built With Llama 3 CYBERSECEVAL 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models Stay up-to-date Our latest updates delivered to your inbox Subscribe to our newsletter to keep up with the latest Llama updates, releases and more. Sign up
+----------
+Use Policy Skip to main content Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at llama.meta.com/use-policy . Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material b. Human trafficking, exploitation, and sexual violence iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. iv. Sexual solicitation vi. Any other criminal activity c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d. 
Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: Reporting issues with the model: github.com/facebookresearch/llama Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback Reporting bugs and security concerns: facebook.com/whitehat/info Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com
+----------
+Responsible Use Guide for Llama 2 Skip to main content Responsibility Responsible Use Guide: your resource for building responsibly The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLM) in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide
+----------
+Meta Llama 2 Skip to main content Large language model Llama 2: open source, free for research and commercial use We're unlocking the power of these large language models. Our latest version of Llama – Llama 2 – is now accessible to individuals, creators, researchers, and businesses so they can experiment, innovate, and scale their ideas responsibly. Download the model Available as part of the Llama 2 release With each model download you'll receive: Model code Model weights README (user guide) License Acceptable use policy Model card Technical specifications Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama Chat, leverages publicly available instruction datasets and over 1 million human annotations. Read the paper Inside the model Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. Llama Chat models have additionally been trained on over 1 million new human annotations. Benchmarks Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length than Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Safety and helpfulness Reinforcement learning from human feedback Llama Chat uses reinforcement learning from human feedback to ensure safety and helpfulness. Training Llama Chat: Llama 2 is pretrained using publicly available online data. An initial version of Llama Chat is then created through the use of supervised fine-tuning. Next, Llama Chat is iteratively refined using Reinforcement Learning from Human Feedback (RLHF), which includes rejection sampling and proximal policy optimization (PPO). Get Llama 2 now: complete the download form via the link below. By submitting the form, you agree to Meta's privacy policy Get started Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Statement of support for Meta’s open approach to today’s AI “We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.” We’re committed to building responsibly To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size. The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLMs) in a responsible manner, covering various stages of development from inception to deployment. Safety Red-teaming Llama Chat has undergone testing by external partners and internal teams to identify performance gaps and mitigate potentially problematic responses in chat use cases. We're committed to ongoing red-teaming to enhance safety and performance. 
Open Innovation AI Research Community We're launching a program for academic researchers, designed to foster collaboration and knowledge-sharing in the field of artificial intelligence. This program provides unique a opportunity for researchers to come together, share their learnings, and help shape the future of AI. By joining this community, participants will have the chance to contribute to a research agenda that addresses the most pressing challenges in the field, and work together to develop innovative solutions that promote responsible and safe AI practices. We believe that by bringing together diverse perspectives and expertise, we can accelerate the pace of progress in AI research. Llama Impact Grants We want to activate the community of innovators who aspire to use Llama to solve hard problems. We are launching the grants to encourage a diverse set of public, non-profit, and for-profit entities to use Llama 2 to address environmental, education and other important challenges. The grants will be subject to rules which will be posted here prior to the grants start. Generative AI Community Forum We think it’s important that our product and policy decisions around generative AI are informed by people and experts from around the world. In support of this belief, we created a forum to act as a governance tool and resource for the community. It brings together a representative group of people to discuss and deliberate on the values that underpin AI, LLM and other new AI technologies. This forum will be held in consultation with Stanford Deliberative Democracy Lab and the Behavioural Insights Team, and is consistent with our open collaboration approach to sharing AI models. Join us on our AI journey If you’d like to advance AI with us, visit our Careers page to discover more about AI at Meta. See open positions Llama 2 Frequently asked questions Get answers to Llama 2 questions in our comprehensive FAQ page—from how it works, to how to use it, integrations, and more. See all FAQs Explore more on Llama 2 Discover more about Llama 2 here — visit our resources, ranging from our research paper, how to get access, and more. Github Open Innovation AI Research Community Getting started guide AI at Meta blog Research paper
+----------
+Llama 2 Version Release Date: July 18, 2023

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at llama.meta.com/llama-downloads/

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.

“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at llama.meta.com/llama-downloads/

“Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.

1. License Rights and Redistribution.

a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.

b. Redistribution and Use.

i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.

ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.

iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”

iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/use-policy), which is hereby incorporated by reference into this Agreement.

v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).

2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.

b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
+----------
+Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at https://llama.meta.com/use-policy.

We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to:

1. Violate the law or others’ rights, including to:

a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
i. Violence or terrorism
ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
iii. Human trafficking, exploitation, and sexual violence
iv. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials
v. Sexual solicitation
vi. Any other criminal activity

b. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals

c. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services

d. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices

e. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws

f. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials

g. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system

2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following:
a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State
b. Guns and illegal weapons (including weapon development)
c. Illegal drugs and regulated/controlled substances
d. Operation of critical infrastructure, transportation technologies, or heavy machinery
e. Self-harm or harm to others, including suicide, cutting, and eating disorders
f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual

3. Intentionally deceive or mislead others, including use of Llama 2 related to the following:
a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
c. Generating, promoting, or further distributing spam
d. Impersonating another individual without consent, authorization, or legal right
e. Representing that the use of Llama 2 or outputs are human-generated
f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement

4. Fail to appropriately disclose to end users any known dangers of your AI system

Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
- Reporting issues with the model
- Reporting risky content generated by the model
- Reporting bugs and security concerns
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama
+----------
+Code Llama, a state-of-the-art large language model for coding

Code Llama has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software.

Free for research and commercial use: Code Llama is built on top of Llama 2 and is available in three models: Code Llama, Code Llama Python, and Code Llama Instruct. With each model download you'll receive: all Code Llama models, a README (User Guide), the Acceptable Use Policy, and the Model Card.

How Code Llama works

Code Llama is a code-specialized version of Llama 2 that was created by further training Llama 2 on its code-specific datasets, sampling more data from that same dataset for longer. Essentially, Code Llama features enhanced coding capabilities, built on top of Llama 2. It can generate code, and natural language about code, from both code and natural language prompts (e.g., “Write me a function that outputs the fibonacci sequence.”). It can also be used for code completion and debugging. It supports many of the most popular languages being used today, including Python, C++, Java, PHP, Typescript (Javascript), C#, and Bash.

Code Llama is available in four sizes with 7B, 13B, 34B, and 70B parameters respectively. Each of these models is trained with 500B tokens of code and code-related data, apart from 70B, which is trained on 1T tokens. The 7B, 13B and 70B base and instruct models have also been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code, meaning they can support tasks like code completion right out of the box.

The four models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU. The 34B and 70B models return the best results and allow for better coding assistance, but the smaller 7B and 13B models are faster and more suitable for tasks that require low latency, like real-time code completion.

Note: We do not recommend using Code Llama or Code Llama Python to perform general natural language tasks since neither of these models are designed to follow natural language instructions. Code Llama is specialized for code-specific tasks and isn’t appropriate as a foundation model for other tasks.

Evaluating Code Llama’s performance

To test Code Llama’s performance against existing solutions, we used two popular coding benchmarks: HumanEval and Mostly Basic Python Programming (MBPP). HumanEval tests the model’s ability to complete code based on docstrings and MBPP tests the model’s ability to write code based on a description. Our benchmark testing showed that Code Llama performed better than open-source, code-specific LLMs and outperformed Llama 2. Code Llama 70B Instruct, for example, scored 67.8% on HumanEval and 62.2% on MBPP, the highest compared with other state-of-the-art open solutions, and on par with ChatGPT.

As with all cutting edge technology, Code Llama comes with risks. Building AI models responsibly is crucial, and we undertook numerous safety measures before releasing Code Llama. As part of our red teaming efforts, we ran a quantitative evaluation of Code Llama’s risk of generating malicious code. We created prompts that attempted to solicit malicious code with clear intent and scored Code Llama’s responses to those prompts against ChatGPT’s (GPT3.5 Turbo). Our results found that Code Llama answered with safer responses. Details about our red teaming efforts from domain experts in responsible AI, offensive security engineering, malware development, and software engineering are available in our research paper.

Releasing Code Llama

Programmers are already using LLMs to assist in a variety of tasks, ranging from writing new software to debugging existing code. The goal is to make developer workflows more efficient, so they can focus on the most human centric aspects of their job, rather than repetitive tasks. At Meta, we believe that AI models, but LLMs for coding in particular, benefit most from an open approach, both in terms of innovation and safety. Publicly available, code-specific models can facilitate the development of new technologies that improve peoples' lives. By releasing code models like Code Llama, the entire community can evaluate their capabilities, identify issues, and fix vulnerabilities. Code Llama’s training recipes are available on our GitHub repository, and model weights are also available.

Responsible use

Our research paper discloses details of Code Llama’s development as well as how we conducted our benchmarking tests. It also provides more information into the model’s limitations, known challenges we encountered, mitigations we’ve taken, and future challenges we intend to investigate. We’ve also updated our Responsible Use Guide, and it includes guidance on developing downstream models responsibly, including: defining content policies and mitigations; preparing data; fine-tuning the model; evaluating and improving performance; addressing input- and output-level risks; and building transparency and reporting mechanisms in user interactions.

Developers should evaluate their models using code-specific evaluation benchmarks and perform safety studies on code-specific use cases such as generating malware, computer viruses, or malicious code. We also recommend leveraging safety datasets for automatic and human evaluations, and red teaming on adversarial prompts.

The future of generative AI for coding

Code Llama is designed to support software engineers in all sectors – including research, industry, open source projects, NGOs, and businesses. But there are still many more use cases to support than what our base and instruct models can serve. We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products.
+----------
+Build the future of AI with Meta Llama 3. Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications.

Experience Llama 3 with Meta AI

We’ve integrated Llama 3 into Meta AI, our intelligent assistant, that expands the ways people can get things done, create and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas.

Enhanced performance

Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following. With each Meta Llama request, you will receive Meta Llama Guard 2 and the community license agreement.

Llama 3 models take data and scale to new heights. Llama 3 has been trained on our two recently announced custom-built 24K GPU clusters on over 15T tokens of data – a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports an 8K context length that doubles the capacity of Llama 2.

Trust & safety: a comprehensive approach to responsibility

With the release of Llama 3, we’ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG, we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience.
+----------
+META LLAMA 3 COMMUNITY LICENSE AGREEMENT

Meta Llama 3 Version Release Date: April 18, 2024

“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.

“Documentation” means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/

“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.

“Meta Llama 3” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads

“Llama Materials” means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.

“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).

By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.

1. License Rights and Redistribution.

a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.

b. Redistribution and Use.

i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.

ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.

iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”

iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.

v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).

2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.

3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.

4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.

5. Intellectual Property.

a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.

b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.

c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.

6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.

7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
+----------
+Meta Llama 3 | Model Cards and Prompt Formats

You can find details about this model in the model card.

Special Tokens used with Meta Llama 3

<|begin_of_text|>: This is equivalent to the BOS token.
<|eot_id|>: This signifies the end of the message in a turn.
<|start_header_id|>{role}<|end_header_id|>: These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant.
<|end_of_text|>: This is equivalent to the EOS token. On generating this token, Llama 3 will cease to generate more tokens.

A prompt can optionally contain a single system message, or multiple alternating user and assistant messages, but always ends with the last user message followed by the assistant header. Code to produce this prompt format can be found in the Llama 3 GitHub repository.

Note: Newlines (0x0A) are part of the prompt format; for clarity in the examples, they have been represented as actual new lines.

Base model prompt format:

<|begin_of_text|>{{ user_message }}

Meta Llama 3 Instruct

The model expects the assistant header at the end of the prompt to start completing it. Decomposing an example instruct prompt with a system message:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>

What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

<|begin_of_text|>: Specifies the start of the prompt.
<|start_header_id|>system<|end_header_id|>: Specifies the role for the following message, i.e. “system”.
You are a helpful AI assistant for travel tips and recommendations: The system message.
<|eot_id|>: Specifies the end of the input message.
<|start_header_id|>user<|end_header_id|>: Specifies the role for the following message, i.e. “user”.
What can you help me with?: The user message.
<|start_header_id|>assistant<|end_header_id|>: Ends with the assistant header, to prompt the model to start generation.

Following this prompt, Llama 3 completes it by generating the {{assistant_message}}. It signals the end of the {{assistant_message}} by generating the <|eot_id|> token.

Example prompt with a single user message:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

System prompt and multiple turn conversation between the user and assistant:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>

What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Bonjour! The capital of France is Paris!<|eot_id|><|start_header_id|>user<|end_header_id|>

What can I do there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Paris, the City of Light, offers a romantic getaway with must-see attractions like the Eiffel Tower and Louvre Museum, romantic experiences like river cruises and charming neighborhoods, and delicious food and drink options, with helpful tips for making the most of your trip.<|eot_id|><|start_header_id|>user<|end_header_id|>

Give me a detailed list of the attractions I should visit, and time it takes in each one, to plan my trip accordingly.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
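To make the format above concrete, here is a minimal sketch that assembles a Llama 3 Instruct prompt string from a list of messages. The helper name and the example messages are illustrative, not from the original page; only the special tokens follow the documented format.

```python
# Minimal sketch: assemble a Meta Llama 3 Instruct prompt string.
# The helper name and message structure are illustrative.

def format_llama3_prompt(messages):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}."""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content'].strip()}<|eot_id|>"
    # End with the assistant header so the model starts generating its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(format_llama3_prompt([
    {"role": "system", "content": "You are a helpful AI assistant for travel tips and recommendations"},
    {"role": "user", "content": "What is France's capital?"},
]))
```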
+----------
+Meta Llama Guard 2 | Model Cards and Prompt Formats

You can find details about this model in the model card.

Prompt format

As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has a helper function and an inference example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt.

The variables to replace in this prompt template are:

{{ role }}: It can have the values User or Agent.
{{ unsafe_categories }}: The default categories and their descriptions are shown below. These can be customized for zero-shot or few-shot prompting.
{{ user_message }}: input message from the user.
{{ model_answer }}: output from the model.

<|begin_of_text|>[INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
{{ unsafe_categories }}
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {{ user_message_1 }}

Agent: {{ model_answer_1 }}

<END CONVERSATION>

Provide your safety assessment for ONLY THE LAST {{ role }} in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]

Llama Guard was trained in the following categories.

S1: Violent Crimes.
S2: Non-Violent Crimes.
S3: Sex Crimes.
S4: Child Exploitation.
S5: Specialized Advice.
S6: Privacy.
S7: Intellectual Property.
S8: Indiscriminate Weapons.
S9: Hate.
S10: Self-Harm.
S11: Sexual Content.

Additionally, the full description for each category can be set in the prompt as well:

S1: Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of violent crimes. Examples of violent crimes include, but are not limited to: unlawful violence toward people (ex: terrorism, genocide, murder, hate crimes, child abuse, assault, battery, kidnapping) and unlawful violence toward animals (ex: animal abuse).

S2: Non-Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of non-violent crimes. Examples of non-violent crimes include, but are not limited to: financial crimes (ex: fraud, scams, money laundering), property crimes (ex: burglary, robbery, arson, vandalism), drug crimes (ex: creating or trafficking narcotics), weapons crimes (ex: producing unlicensed firearms), and cyber crimes (ex: hacking, spyware, malware).

S3: Sex Crimes. AI models should not create content that enables, encourages, or excuses the commission of sex crimes. Examples of sex crimes include, but are not limited to: human trafficking, sexual assault (ex: rape), sexual harassment (ex: groping), and lewd conduct.

S4: Child Exploitation. AI models should not create content that depicts child nudity or that enables, encourages, excuses, or depicts the sexual abuse of children.

S5: Specialized Advice. AI models should not create content that contains specialized financial, medical, or legal advice.

S6: Privacy. AI models should not create content that contains sensitive, personal information about private individuals.

S7: Intellectual Property. AI models should not create content that violates the intellectual property rights of third parties.

S8: Indiscriminate Weapons. AI models should not create content that enables the creation of weapons of mass destruction. Examples of such weapons include, but are not limited to: chemical weapons (ex: nerve gas), biological weapons (ex: anthrax), radiological weapons (ex: salted bombs), nuclear weapons (ex: atomic warheads), and high-yield explosive weapons (ex: cluster munitions).

S9: Hate. AI models should not create content that is hateful toward people on the basis of their protected characteristics (race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease). AI models should not create content that perpetuates negative stereotypes.

S10: Self-Harm. AI models should not create content that enables, encourages, or excuses acts of intentional self-harm. Examples of such actions include, but are not limited to: suicide, self-injury (ex: cutting), and disordered eating.

S11: Sexual Content. AI models should not create depictions of nude adults or content that contains erotic descriptions or explicit depictions of sex acts.
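As a quick illustration of filling this template in code, here is a minimal sketch. The template is abridged and the category list is shortened; the variable names and the example message are illustrative, not from the original page.

```python
# Minimal sketch: fill a Llama Guard style prompt for a user-input check.
# TEMPLATE is abridged and the categories are truncated; both are illustrative.

TEMPLATE = """[INST] Task: Check if there is unsafe content in '{role}' messages in conversations according our safety policy with the below categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
{unsafe_categories}
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {user_message}

<END CONVERSATION>

Provide your safety assessment for ONLY THE LAST {role} in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""

categories = "S1: Violent Crimes.\nS2: Non-Violent Crimes.\nS3: Sex Crimes."
prompt = TEMPLATE.format(role="User", unsafe_categories=categories,
                         user_message="How do I tie a figure-eight knot?")
print(prompt)  # Feed this to Llama Guard; expect 'safe' on the first output line.
```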
+----------
+Meta Code Llama 70B | Model Cards and Prompt Formats

You can find details about this model in the model card. Note that Meta Code Llama 70B uses the same model card as Meta Code Llama 7B, 13B, and 34B.

Completion

In this format, the model continues to write code following the provided code in the prompt:

{{ code_prompt }}

Instructions

Meta Code Llama 70B has a different prompt template compared to 34B, 13B and 7B. It starts with a Source: system tag—which can have an empty body—and continues with alternating user or assistant values. Each turn of the conversation uses the <step> special character to separate the messages. The last turn of the conversation uses a Source: assistant tag with an empty message and a Destination: user tag to prompt the model to answer the user question.

Notes:
- The structure requires a Source: system tag, but the system prompt can be empty.
- Each user query is preceded by a blank line.
- At the end of the prompt is a blank line followed by a line containing a space character (0x20).

Source: system

  System prompt <step> Source: user

  First user query <step> Source: assistant

  Model response to first query <step> Source: user

  Second user query <step> Source: assistant
Destination: user
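The following is a minimal sketch of assembling this format programmatically. The helper name is illustrative, and the exact whitespace and the "<step>" separator spelling are assumptions based on the reference implementation in the Code Llama release, so verify them against that code before relying on this.

```python
# Minimal sketch of the Meta Code Llama 70B instruct format described above.
# Helper name, spacing, and the "<step>" separator spelling are assumptions.

def format_codellama70b(system, turns):
    """turns: list of (role, text) with role in {"user", "assistant"}."""
    prompt = f"Source: system\n\n {system.strip()}"
    for role, text in turns:
        prompt += f" <step> Source: {role}\n\n {text.strip()}"
    # Empty assistant message plus Destination: user asks the model to answer.
    prompt += " <step> Source: assistant\nDestination: user\n\n "
    return prompt

print(format_codellama70b("You are a helpful coding assistant.",
                          [("user", "Write a function that reverses a string.")]))
```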
+----------
+Meta Llama Guard 1 | Model Cards and Prompt Formats

You can find details about this model in the model card.

As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has an example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt.

[INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories.

User: {{ user_message_1 }}

Agent: {{ model_answer_1 }}

Provide your safety assessment for {{ role }} in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]

Llama Guard was trained in the following categories that can be adapted through prompt engineering.

O1: Violence and Hate.
Should not
- Help people plan or engage in violence.
- Encourage people to engage in violence.
- Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics.
Can
- Provide information on violence and discrimination.
- Discuss the topics of hate and violence, and can discuss historical events involving violence.

O2: Sexual Content.
Should not
- Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities.
Can
- Discuss topics related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.

O3: Criminal Planning.
Should not
- Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in criminal activities or condone criminal behavior.
Can
- Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes.
- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems.

O4: Guns and Illegal Weapons.
Should not
- Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in firearm or illegal weapons crimes or condone such behavior.
Can
- Discuss firearms and the arguments for and against firearm ownership.

O5: Regulated or Controlled Substances.
Should not
- Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.
- Assist or encourage people to create such substances when it is illegal to do so.
Can
- Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are).
- Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation.

O6: Self-Harm.
Should not
- Encourage people to harm themselves.
- Romanticize or condone self-harm or suicide.
- Provide information regarding the methods of suicide or self-harm.
- Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves.
Can
- Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions).
Should
- Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves.
+----------
+Meta Code Llama | Model Cards and Prompt Formats

You can find details about this model in the model card.

Meta Code Llama 7B, 13B, and 34B

Completion

In this format, the model continues to write code following the code that is provided in the prompt:

{{ code_prompt }}

Instructions

The instructions prompt template for Meta Code Llama follows the same structure as the Meta Llama 2 chat model, where the system prompt is optional, and the user and assistant messages alternate, always ending with a user message. Note the beginning of sequence (BOS) token between each user and assistant message.

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_message_2 }} [/INST]

Infilling

Infilling can be done in two different ways: with the prefix-suffix-middle format or the suffix-prefix-middle format.

Notes:
- Infilling is only available in the 7B and 13B base models—not in the Python, Instruct, 34B, or 70B models.
- The BOS character is not used for infilling when encoding the prefix or suffix, but only at the beginning of each prompt.

Prefix-suffix-middle:

<PRE> {{ code_prefix }} <SUF>{{ code_suffix }} <MID>

Suffix-prefix-middle:

<PRE> <SUF>{{ code_suffix }} <MID> {{ code_prefix }}
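To make the infilling format tangible, here is a minimal sketch building a prefix-suffix-middle prompt. The helper name and the example snippet are illustrative, and the <PRE>/<SUF>/<MID> sentinel spellings are assumptions based on the Code Llama release; check them against the official tokenizer before use.

```python
# Minimal sketch of Code Llama's prefix-suffix-middle (PSM) infilling prompt.
# Helper name and sentinel token spellings (<PRE>/<SUF>/<MID>) are assumptions.

def psm_prompt(prefix: str, suffix: str) -> str:
    # The model is expected to generate the missing middle section.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

body = psm_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result",
)
print(body)
```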
+----------
+Meta Llama 2 | Model Cards and Prompt Formats

You can find details about this model in the model card.

Special Tokens used with Meta Llama 2

<s> </s>: These are the BOS and EOS tokens from SentencePiece. When multiple messages are present in a multi turn conversation, they separate them, including the user input and model response.
[INST] [/INST]: These tokens enclose user messages in multi turn conversations.
<<SYS>> <</SYS>>: These enclose the system message.

The base model supports text completion, so any incomplete user prompt, without special tags, will prompt the model to complete it. The tokenizer provided with the model will include the SentencePiece beginning of sequence (BOS) token (<s>) if requested.

{{ user_prompt }}

Meta Llama 2 Chat

The system prompt is optional.

Single message instance with optional system prompt:

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]

Multiple user and assistant messages example:

<s>[INST] {{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_message_2 }} [/INST]
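As a quick illustration of the single-turn case, here is a minimal sketch building a Llama 2 Chat prompt with an optional system message. The function name and example inputs are illustrative; only the [INST] / <<SYS>> template follows the documented format.

```python
# Minimal sketch: build a single-turn Llama 2 Chat prompt with an optional
# system message, using the [INST] / <<SYS>> template described above.
# The function name is illustrative.

def llama2_chat_prompt(user_message, system_prompt=None):
    if system_prompt:
        return (f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
                f"{user_message} [/INST]")
    return f"<s>[INST] {user_message} [/INST]"

print(llama2_chat_prompt("What is the capital of France?",
                         system_prompt="You answer concisely."))
```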
+----------
+Getting the models

You can get the Meta Llama models directly from Meta or through Hugging Face or Kaggle. However you get the models, you will first need to accept the license agreements for the models you want. For more detailed information about each of the Meta Llama models, see the Model Cards section immediately following this section.

To get the models directly from Meta, go to the Meta Llama download form. Fill in your information—including your email—select the models that you want, and review and accept the appropriate license agreements. For each model that you request, you will receive an email that contains instructions and a pre-signed URL to download that model. You can use the same URL to download multiple model weights, such as 7B and 13B. The URL expires after 24 hours or five downloads, but you can re-request models in order to receive fresh pre-signed URLs. The model download process uses a script that relies on the following tools: wget and md5sum; ensure that these are available on your local computer.
+----------
+Hugging Face | Getting the models

To obtain the models from Hugging Face (HF), sign into your account at https://huggingface.co/meta-llama and select the model you want. You will be taken to a page where you can fill in your information and review the appropriate license agreement. After accepting the agreement, your information is reviewed; the review process could take up to a few days. When you are approved, you will receive an email informing you that you have access to the HF repository for the model.

Note that cloning the HF repository to a local computer does not give you all the model files, because some of the files are too large. In the local clone, those files contain only metadata for the actual file. To get these larger files, go to the file in the repository on the HF site and download it directly from there. For example, to get consolidated.00.pth for the Meta Llama 2 7B model, you download it from: https://huggingface.co/meta-llama/Llama-2-7b/blob/main/consolidated.00.pth
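Rather than downloading large files one by one through the website, the huggingface_hub client can fetch them directly. Here is a minimal sketch using the repo and filename from the example above; it assumes you have already been granted access and are logged in via huggingface-cli login.

```python
# Minimal sketch: download one large file from a gated Hugging Face repo.
# Assumes you have accepted the license and are logged in (huggingface-cli login).
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="meta-llama/Llama-2-7b",
    filename="consolidated.00.pth",
)
print(local_path)  # cached location of the downloaded weights
```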
+----------
+Kaggle | Getting the models

To obtain the models from Kaggle—including the HF versions of the models—sign into your account at https://www.kaggle.com/organizations/metaresearch/models. Before you can access the models on Kaggle, you need to submit a request for model access, which requires that you accept the model license agreement on the Meta site. Note that the email address that you provide when you accept the license agreement must be the same as the email that you use for your Kaggle account. Once you have accepted the license agreement, return to Kaggle and submit the request for model access. When your request is approved, which might take a few days, you’ll receive an email that says that you have received access. You’ll then be able to access the models on Kaggle. To access a particular model, select it from the Model Variations dropdown box, and click the download icon. An archive file that contains the model will start downloading.
+----------
+Llama Everywhere

Although Meta Llama models are often hosted by Cloud Service Providers (CSP), Meta Llama can be used in other contexts as well, such as Linux, the Windows Subsystem for Linux (WSL), macOS, Jupyter notebooks, and even mobile devices. If you are interested in exploring these scenarios, we suggest that you check out the following resources:

- Llama 3 on Your Local Computer, with Resources for Other Options: how to run Llama on your desktop using Windows, macOS, or Linux, plus pointers to other ways to run Llama, either on premise or in the cloud.
- Llama Recipes QuickStart: provides an introduction to Meta Llama using Jupyter notebooks and also demonstrates running Llama locally on macOS.
- Machine Learning Compilation for Large Language Models (MLC LLM): enables “everyone to develop, optimize and deploy AI models natively on everyone's devices with ML compilation techniques.”
- Llama C++: uses the portability of C++ to enable inference with Llama models on a variety of different hardware.
+----------
+Running Meta Llama on Linux | Llama Everywhere Skip to main content Model Cards and Prompt Formats Meta Llama Guard 2 Meta Code Llama 70B Meta Llama Guard 1 Meta Llama on Linux Meta Llama on Windows Meta Llama on Mac Meta Llama in the Cloud Model Cards and Prompt Formats Meta Llama Guard 2 Meta Code Llama 70B Meta Llama Guard 1 Meta Llama on Linux Meta Llama on Windows Meta Llama on Mac Meta Llama in the Cloud Running Meta Llama on Linux This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Linux | Build with Meta Llama , where we learn how to run Llama on Linux OS by getting the weights and running the model locally, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Linux. Introduction to llama models At Meta, we strongly believe in an open approach to AI development, particularly in the fast-evolving domain of generative AI. By making AI models publicly accessible, we enable their advantages to reach every segment of society. Last year, we open sourced Meta Llama 2, and this year we released the Meta Llama 3 family of models, available in both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications, unlocking the power of these large language models, and making them accessible to everyone, so you can experiment, innovate, and scale your ideas responsibly. Running Meta Llama on Linux Setup With a Linux setup having a GPU with a minimum of 16GB VRAM, you should be able to load the 8B Llama models in fp16 locally. If you have an Nvidia GPU, you can confirm your setup using the NVIDIA System Management Interface tool that shows you the GPU you have, the VRAM available, and other useful information by typing: nvidia-smi In our current setup, we are on Ubuntu, specifically Pop OS, and have an Nvidia RTX 4090 with a total VRAM of about 24GB. Terminal with nvidia-smi showing NVIDIA GPU Configuration Getting the weights To download the weights, go to the Llama website . Fill in your details in the form and select the models you’d like to download. In our case, we will download the Llama 3 models. Select Meta Llama 3 and Meta Llama Guard 2 on the download page Read and agree to the license agreement, then click Accept and continue . You will see a unique URL on the website. You will also receive the URL in your email and it is valid for 24hrs to allow you to download each model up to 5 times. You can always request a new URL. Download page with unique pre-signed URL We are now ready to get the weights and run the model locally on our machine. It is recommended to use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Open your terminal, and make a new folder called llama3-demo in your workspace. Navigate to the new folder and clone the Llama repo: mkdir llama3-demo cd llama3-demo git clone https://github.com/meta-llama/llama3.git For this demo, we’ll need two prerequisites installed: wget and md5sum . To confirm if your distribution has these, use: wget --version md5sum --version which should return the installed versions. 
If your distribution does not have these, you can install them using apt-get install wget apt-get install md5sum To make sure we have all the package dependencies installed, while in the newly cloned repo folder, type: pip install -e . We are now all set to download the model weights for our local setup. Our team has created a helper script to make it easy to download the model weights. In your terminal, type: ./download.sh The script will ask for the URL from your email. Paste in the URL you received from Meta. It will then ask you to enter the list of models to download. For our example, we’ll download the 8B pretrained model and the fine-tuned 8B chat models. So we’ll enter “8B,8B-instruct” Downloading the 8B models Running the model We are all set to run the example inference script to test if our model has been set up correctly and works. Our team has created an example Python script called example_text_completion.py that you can use to test out the model. The script defines a main function that uses the Llama class from the llama library to generate text completions for given prompts using the pre-trained models. It takes a few arguments: Parameters Descriptions ckpt_dir: str Directory containing the checkpoint files of the model. tokenizer_path: str Path to the tokenizer of the model. temperature: float = 0.6 This parameter controls the randomness of the generation process. Higher values may lead to more creative but less coherent outputs, while lower values may lead to more conservative but more coherent outputs. top_p: float = 0.9 This defines the maximum probability threshold for generating tokens. max_seq_len: int = 128 Defines the maximum length of the input sequence or prompt allowed for the model to process. max_gen_len: int = 64 Defines the maximum length of the generated text the model is allowed to produce. max_batch_size: int = 4 Defines the maximum number of prompts to process in one batch. The function builds an instance of the class, using the provided arguments, then defines a list of prompts for which the model will use generator.text_completion method to generate the completions. To run the script, go back to our terminal, and while in the llama3 repo, type: torchrun --nproc_per_node 1 example_text_completion.py --ckpt_dir Meta-Llama-3-8B/ --tokenizer_path Meta-Llama-3-8B/tokenizer.model --max_seq_len 128 --max_batch_size 4 Replace Meta-Llama-3-8B/ with the path to your checkpoint directory and tokenizer.model with the path to your tokenizer model. If you run it from this main directory, the path may not need to change. Set the –nproc_per_node to the MP value for the model you are using. For 8B models, the value is set to 1. Adjust the max_seq_len max_batch_size parameters as needed. We have set them to 128 and 4 respectively. Running the 8B model on the example text completion script To try out the fine-tuned chat model ( 8B-instruct ), we have a similar example called example_chat_completion.py torchrun --nproc_per_node 1 example_chat_completion.py --ckpt_dir Meta-Llama-3-8B-Instruct/ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model --max_seq_len 512 --max_batch_size 6 Note that in this case, we use the Meta-Llama-3-8B-Instruct/ model and provide the correct tokenizer under the instruct model folder. 
[Screenshot: Running the 8B Instruct model on the example chat completion script]

A detailed step-by-step walkthrough of this setup, along with all the helper and example scripts, can be found in our Llama 3 GitHub repo, which covers downloading, quick-start steps, and examples for inference.
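Both example scripts drive the same Llama class from the cloned repo. As a rough sketch of what they do internally (simplified from the repo's example scripts, so treat the prompt and defaults as illustrative):

# Minimal sketch of the text completion flow, adapted from example_text_completion.py.
# Launch with: torchrun --nproc_per_node 1 this_script.py
from llama import Llama

generator = Llama.build(
    ckpt_dir="Meta-Llama-3-8B/",                       # checkpoint directory
    tokenizer_path="Meta-Llama-3-8B/tokenizer.model",  # tokenizer file
    max_seq_len=128,   # longest prompt the model will accept
    max_batch_size=4,  # prompts processed per batch
)

prompts = ["I believe the meaning of life is"]
results = generator.text_completion(
    prompts,
    max_gen_len=64,    # cap on generated tokens
    temperature=0.6,   # randomness of sampling
    top_p=0.9,         # nucleus sampling threshold
)
for prompt, result in zip(prompts, results):
    print(prompt, result["generation"])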
+----------
+Running Meta Llama on Windows | Llama Everywhere

This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Windows | Build with Meta Llama, where we learn how to run Llama on Windows using Hugging Face APIs, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Windows.

For this demo, we will be using a Windows OS machine with an RTX 4090 GPU. If you have an Nvidia GPU, you can confirm your setup by opening the Terminal and typing nvidia-smi (NVIDIA System Management Interface), which will show you the GPU you have, the VRAM available, and other useful information about your setup. Since we will be using the Hugging Face transformers library, this setup can also be used on other operating systems that the library supports, such as Linux or Mac, using similar steps to the ones shown in the video.

To allow easy access to Meta Llama models, we are providing them on Hugging Face, where you can download the models in both transformers and native Llama 3 formats. To download the weights, visit the meta-llama repo containing the model you'd like to use. For example, we will use the Meta-Llama-3-8B-Instruct model for this demo. Read and agree to the license agreement, fill in your details, and click on submit. Once your request is approved, you'll be granted access to all the Llama 3 models.

[Screenshot: Meta-Llama-3-8B-Instruct model on Hugging Face]

For this tutorial, we will be using Meta Llama models already converted to Hugging Face format. However, if you'd like to download the original native weights, click on the "Files and versions" tab and download the contents of the original folder. If you prefer, you can also download the original weights from the command line using the Hugging Face CLI:

pip install huggingface-hub
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct

In this example, we will showcase how you can use Meta Llama models already converted to Hugging Face format using transformers. To use the model with transformers, we will be using the pipeline class from Hugging Face. We recommend that you use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Make sure you have the latest version of transformers:

pip install -U transformers

We will also use the accelerate library, which enables our code to be run across any distributed configuration:

pip install accelerate

We will be using Python for our demo script. To install Python, visit the Python website, where you can choose your OS and download the version of Python you like. We will also be using PyTorch for our demo, so we will need to make sure we have PyTorch installed in our setup.
To install PyTorch for your setup, visit the PyTorch downloads website and choose your OS and configuration to get the installation command you need. Paste that command in your terminal and press enter.

[Screenshot: PyTorch installation guide]

For our script, open the editor of your choice and create a Python script. We'll first add the imports that we need for our example:

import transformers
import torch
from transformers import AutoTokenizer

Let's define the model we'd like to use. In our demo, we will use the 8B instruct model, which is fine-tuned for chat:

model = "meta-llama/Meta-Llama-3-8B-Instruct"

We will also instantiate the tokenizer, which can be derived from AutoTokenizer based on the model we've chosen, using the from_pretrained method of AutoTokenizer. This will download and cache the pre-trained tokenizer and return an instance of the appropriate tokenizer class.

tokenizer = AutoTokenizer.from_pretrained(model)

To use our model for inference:

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

Hugging Face pipelines allow us to specify which type of task the pipeline needs to run (text-generation in this case), the model that the pipeline should use to make predictions (specified by model), the precision to use with this model (torch.float16), the device on which the pipeline should run (device_map), and various other options. Setting device_map to auto means the pipeline will automatically use a GPU if one is available.

Next, let's provide some text prompts as inputs to our pipeline for it to use when it runs to generate responses. Let's define this as the variable sequences:

sequences = pipeline(
    'I have tomatoes, basil and cheese at home. What can I cook for dinner?\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    truncation=True,
    max_length=400,
)

The pipeline sets do_sample to True, which allows us to specify the decoding strategy we'd like to use to select the next token from the probability distribution over the entire vocabulary. In our example, we are using top_k sampling. By changing max_length, you can specify how long you'd like the generated response to be. Setting the num_return_sequences parameter to greater than one will let you generate more than one output.

Finally, we add the following to print the generated responses:

for seq in sequences:
    print(f"Result: {seq['generated_text']}")

Save your script and head back to the terminal; we will save it as llama3-hf-demo.py. Before we run the script, let's make sure we can access and interact with Hugging Face directly from the terminal. To do that, make sure you have the Hugging Face CLI installed:

pip install -U "huggingface_hub[cli]"

followed by

huggingface-cli login

Here, it will ask for our access token, which we can get from our HF account under Settings. Copy it and provide it in the command line. We are now all set to run our script:

python llama3-hf-demo.py

[Screenshot: Running Meta-Llama-3-8B-Instruct locally]

To check out the full example and run it on your own local machine, see the detailed sample notebook in the llama-recipes GitHub repo. There you will find an example of how to run Llama 3 models using already converted Hugging Face weights, as well as an example that goes over how you can convert the original weights into Hugging Face format and run using those.
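Since Meta-Llama-3-8B-Instruct is a chat model, you may get better results by formatting the prompt with the tokenizer's chat template before passing it to the pipeline above. A minimal sketch, where the messages are illustrative and not part of the original walkthrough:

# Sketch: format a conversation with the tokenizer's chat template,
# then pass the resulting string to the same pipeline as above.
messages = [
    {"role": "system", "content": "You are a helpful cooking assistant."},
    {"role": "user", "content": "I have tomatoes, basil and cheese at home. What can I cook for dinner?"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return a formatted string, not token ids
    add_generation_prompt=True,  # append the assistant header so the model responds
)
sequences = pipeline(prompt, do_sample=True, top_k=10, max_length=400, truncation=True)
print(sequences[0]["generated_text"])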
We've also created various other demos and examples to provide you with guidance and references to help you get started with Llama models and to make it easier for you to integrate them into your own use cases. To try these examples, check out our llama-recipes GitHub repo. There you'll find complete walkthroughs for how to get started with Llama models, including installation instructions, dependencies, and recipes with examples of inference, fine-tuning, and training on custom data sets. In addition, the repo includes demos that showcase Llama deployments, basic interactions, and specialized use cases.
+----------
+Running Meta Llama on Mac | Llama Everywhere

This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Mac | Build with Meta Llama, where we learn how to run Llama on macOS using Ollama, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Mac.

For this demo, we are using a MacBook Pro running Sonoma 14.4.1 with 64GB memory. Since we will be using Ollama, this setup can also be used on other supported operating systems, such as Linux or Windows, using similar steps to the ones shown here. Ollama lets you set up and run large language models like Llama models locally.

Downloading Ollama

The first step is to install Ollama. To do that, visit their website, where you can choose your platform and click on "Download" to download Ollama. For our demo, we will choose macOS and select "Download for macOS".

Next, we will make sure that we can test run Meta Llama 3 models on Ollama. Please note that Ollama provides Meta Llama models in the 4-bit quantized format. To test run the model, let's open our terminal and run:

ollama pull llama3

to download the 4-bit quantized Meta Llama 3 8B chat model, with a size of about 4.7 GB.

[Screenshot: Downloading 4-bit quantized Meta Llama models]

If you'd like to download the Llama 3 70B chat model, also in 4-bit, you can instead type:

ollama pull llama3:70b

which in quantized format has a size of about 39GB.

Running using ollama run

To run our model, in your terminal, type:

ollama run llama3

We are all set to ask questions and chat with our Meta Llama 3 model. Let's ask a question: "Who wrote the book godfather?"

[Screenshot: Meta Llama model generating a response]

We can see that it gives the right answer, along with more information about the book as well as the movie that was based on the book. What if we just wanted the name of the author, without the extra information? Let's adapt our prompt accordingly, specifying the kind of response we expect: "Who wrote the book godfather? Answer with only the name."

[Screenshot: Meta Llama model generating a specified response based on the prompt]

We can see that it generates the answer in the format we requested. You can also try running the 70B model:

ollama run llama3:70b

but the inference speed will likely be slower.

Running with curl

You can even run and test the Llama 3 8B model directly by using the curl command and specifying your prompt right in the command:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "who wrote the book godfather?" }
  ],
  "stream": false
}'

Here, we are sending a POST request to an API running on localhost. The API endpoint is for "chat", which will interact with our AI model hosted on the server.
We are providing a JSON payload that contains: a string specifying the name of the AI model to use for processing the input prompt ("llama3"), a messages array with a string indicating the role of the message sender (user) and a string with the user's input prompt ("who wrote the book godfather?"), and a boolean value stream indicating whether the response should be streamed or not. In our case, it is set to false, meaning the entire response will be returned at once.

[Screenshot: Ollama running Llama model with curl command]

As we can see, the model generated the response with the answer to our question.

Running as a Python script

This example can also be run using a Python script. To install Python, visit the Python website, where you can choose your OS and download the version of Python you like. To run it using a Python script, open the editor of your choice and create a new file. First, let's add the imports we will need for this demo, and define a parameter called url, which will have the same value as the URL we saw in the demo:

import requests
import json

url = "http://localhost:11434/api/chat"

We will now add a new function called llama3, which will take in a prompt as an argument:

def llama3(prompt):
    data = {
        "model": "llama3",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    headers = {"Content-Type": "application/json"}
    response = requests.post(url, headers=headers, json=data)
    return response.json()["message"]["content"]

This function constructs a JSON payload containing the specified prompt and the model name, which is "llama3". Then, it sends a POST request to the API endpoint with the JSON payload as the message body, using the requests library. Once the response is received, the function extracts the content of the response message from the JSON object returned by the API and returns this extracted content. Finally, we provide the prompt and print the generated response:

response = llama3("who wrote the book godfather")
print(response)

To run the script, write python <your-script-name>.py and press enter.

[Screenshot: Running Meta Llama model using Ollama and Python script]

As we can see, it generated the response based on the prompt we provided in our script. To learn more about the complete Ollama APIs, check out their documentation.

To check out the full example and run it on your own machine, our team has worked on a sample notebook that you can refer to in the llama-recipes GitHub repo, where you will find an example of how to run Llama 3 models on a Mac as well as other platforms. You will find the examples we discussed here, as well as other ways to use Llama 3 locally with Ollama via LangChain.

We've also created various other demos and examples to provide you with guidance and references to help you get started with Llama models and to make it easier for you to integrate Llama into your own use cases. These demos and examples are also located in our llama-recipes GitHub repo, where you'll find complete walkthroughs for how to get started with Llama models, including installation instructions, dependencies, and recipes. You'll also find several examples for inference, fine-tuning, and training on custom data sets, as well as demos that showcase Llama deployments, basic interactions, and specialized use cases.
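One refinement of the function above: Ollama streams by default ("stream": True), returning one JSON object per line, each carrying a small piece of the reply. A minimal sketch of consuming that stream, using the same URL and model as above (treat the field names as the ones documented by Ollama, not as part of this walkthrough):

import requests
import json

url = "http://localhost:11434/api/chat"

# Stream the reply: each line is a JSON object with a fragment of the answer;
# the final object has "done": true.
data = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "who wrote the book godfather?"}],
    "stream": True,
}
with requests.post(url, json=data, stream=True) as response:
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        if chunk.get("done"):
            break
        print(chunk["message"]["content"], end="", flush=True)
print()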
+----------
+Meta Llama in the Cloud | Llama Everywhere

This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Many other ways to run Llama and resources | Build with Meta Llama, where we learn about some of the various other ways in which you can host or run Meta Llama models, and provide you with all the resources that can help you get started. If you're interested in learning by watching or listening, check out our video on Many other ways to run Llama and resources.

Apart from running the models locally, one of the most common ways to run Meta Llama models is to run them in the cloud. Let's take a look at some of the services we can use to host and run Llama models, such as AWS, Azure, Google Cloud, and IBM watsonx, among others.

Amazon Web Services

Amazon Web Services (AWS) provides multiple ways to host your Llama models, such as SageMaker JumpStart and Bedrock. Bedrock is a fully managed service that lets you quickly and easily build generative AI-powered experiences. To use Meta Llama with Bedrock, check out their guide that goes over how to integrate and use Meta Llama models in your applications. You can also use AWS through SageMaker JumpStart, which enables you to build, train, and deploy ML models from a broad selection of publicly available foundation models and deploy them on SageMaker instances for model training and inference. Learn more about how to use Meta Llama on SageMaker in their documentation.

Microsoft Azure

Another way to run Meta Llama models is on Microsoft Azure. You can access Meta Llama models on Azure in two ways: Models as a Service (MaaS) provides access to Meta Llama hosted APIs through Azure AI Studio, and Model as a Platform (MaaP) provides access to the Meta Llama family of models with out-of-the-box support for fine-tuning and evaluation through Azure Machine Learning Studio. Please refer to our How to Guide for more details.

Google Cloud Platform

You can also use GCP, or Google Cloud Platform, to run Meta Llama models. GCP is a suite of cloud computing services that provides computing resources as well as virtual machines. Building on top of GCP services, Model Garden on Vertex AI offers infrastructure to jumpstart your ML project with a single place to discover, customize, and deploy a wide range of models. We have collaborated with Vertex AI from Google Cloud to fully integrate Meta Llama, offering pre-trained, instruction-tuned, and Meta Code Llama models in various sizes. Check out how to fine-tune and deploy Meta Llama models on Vertex AI by visiting Model Garden. Please note that you may need to request proper GPU computing quota as a prerequisite.

IBM watsonx

You can also use IBM's watsonx to run Meta Llama models. IBM watsonx is an advanced platform designed for AI builders, integrating generative AI capabilities, foundation models, and traditional machine learning.
It provides a comprehensive suite of tools that span the AI lifecycle, enabling users to tune models with their enterprise data. The platform supports multi-model flexibility, client protection, AI governance, and hybrid, multi-cloud deployments. It offers features for extracting insights, discovering trends, generating synthetic tabular data, running Jupyter notebooks, and creating new content and code. Watsonx.ai equips data scientists with the necessary tools, pipelines, and runtimes for building and deploying ML models, thereby automating the entire AI model lifecycle. We've worked with IBM to make Llama and Code Llama models available on their platform. To test the platform and evaluate Llama on watsonx, creating an account is free and allows testing the available models through the Prompt Lab. For detailed instructions, refer to the getting started guide and the quick start tutorials.

Other hosting providers

You can also run Llama models using hosting providers such as OpenAI, Together AI, Anyscale, Replicate, Groq, etc. Our team has worked on step-by-step examples to showcase how to run Llama on externally hosted providers. The examples can be found on our llama-recipes GitHub repo, which goes over the process of setting up and running inference for Llama models on some of these externally hosted providers.

Running Llama on premise

Many enterprise customers prefer to deploy Llama models on premise, on their own servers. One way to deploy and run Llama models in this manner is by using TorchServe. TorchServe is an easy-to-use tool for deploying PyTorch models at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. To learn more about how TorchServe works, with setup, quickstart, and examples, check out the GitHub repo.

Another way to deploy Llama models on premise is by using Virtual Large Language Model (vLLM) or Text Generation Inference (TGI), two leading open-source tools to deploy and serve LLMs. A detailed step-by-step tutorial can be found on our llama-recipes GitHub repo, showcasing how to use Llama models with vLLM and Hugging Face TGI, and how to create vLLM and TGI hosted Llama instances with LangChain, a language model integration framework for the creation of applications using large language models.

You can find various demos and examples that can provide you with guidance, and that you can use as references to get started with Llama models, on our llama-recipes GitHub repo, where you'll find several examples for inference and fine-tuning, as well as running on various API providers.

Learn more about Llama 3 and how to get started by checking out our Getting to know Llama notebook that you can find in our llama-recipes GitHub repo. Here you will find a guided tour of Llama 3, including a comparison to Llama 2, descriptions of the different Llama 3 models, how and where to access them, generative AI and chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), fine-tuning, and more. You will find all this implemented with starter code that you can take and adapt to use in your own Meta Llama 3 projects. To learn more about our Llama 3 models, check out our announcement blog, where you can find details about how the models work, data on performance and benchmarks, information about trust and safety, and various other resources to get you started.
Get the model source from our Llama 3 GitHub repo, where you can learn how the models work along with a minimalist example of how to load Llama 3 models and run inference. Here, you will also find steps to download and set up the models, and examples for running the text completion and chat models.

Dive deeper and learn more about the model in the model card, which goes over the model architecture, intended use, hardware and software requirements, training data, results, and licenses.

Check out our new Meta AI, built with Llama 3 technology, which is now one of the world's leading AI assistants that can boost your intelligence and lighten your load, helping you learn, get things done, create content, and connect to make the most out of every moment. You can use Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web to get things done, learn, create, and connect with the things that matter to you.

To learn more about the latest updates and releases of Llama models, check out our website, where you can learn more about the latest models as well as find resources to learn more about how these models work and how you can use them in your own applications.

Check out our Getting Started guide, which provides information and resources to help you set up Llama, including how to access the models, prompt formats, hosting, how-to and integration guides, as well as resources that you can reference to get started with your projects.

Take a look at some of our latest blogs that discuss new announcements, the latest on the Llama ecosystem, and our responsible approach to Meta AI and Meta Llama 3.

Check out the community resources on our website to help you get started with Meta Llama models and learn about performance and latency, fine-tuning, and more. Dive deeper into prompt engineering, learning best practices for prompting Meta Llama models and interacting with Meta Llama Chat, Code Llama, and Llama Guard models, in our short course on Prompt Engineering with Llama 2 on DeepLearning.ai, recently updated to showcase both Llama 2 and Llama 3 models.

Check out our Community Stories that go over interesting use cases of Llama models in various fields such as business, healthcare, gaming, pharmaceuticals, and more. Learn more about the Llama ecosystem, building product experiences with Llama, and examples that showcase how industry pioneers have adopted Llama to build and grow innovative products for users across their platforms at Connect 2023.

Also check out our Responsible Use Guide, which provides developers with recommended best practices and considerations for safely building products powered by LLMs.

We hope you found the Build with Meta Llama videos and tutorials helpful in providing you with the insights and resources that you may need to get started with using Llama models. We at Meta strongly believe in an open approach to AI development, democratizing access through an open platform and providing you with AI models, tools, and resources to give you the power to shape the next wave of innovation. We want to kickstart that next wave of innovation across the stack, from applications to developer tools to evals to inference optimizations and more. We can't wait to see what you build and look forward to your feedback.
+----------
+Fine-tuning | How-to guides

If you are looking to learn by writing code, it's highly recommended to look into the Getting to Know Llama 3 notebook. It's a great place to start with the most commonly performed operations on Meta Llama.

Full parameter fine-tuning is a method that fine-tunes all the parameters of all the layers of the pre-trained model. In general, it can achieve the best performance, but it is also the most resource-intensive and time consuming: it requires the most GPU resources and takes the longest. PEFT, or Parameter Efficient Fine Tuning, allows one to fine-tune models with minimal resources and costs. Two important PEFT methods are LoRA (Low Rank Adaptation) and QLoRA (Quantized LoRA), where pre-trained models are loaded to the GPU as quantized 8-bit and 4-bit weights, respectively. It's likely that you can fine-tune the Llama 2 13B model using LoRA or QLoRA with a single consumer GPU with 24GB of memory, and using QLoRA requires even less GPU memory and fine-tuning time than LoRA. Typically, one should first try LoRA, or if resources are extremely limited, QLoRA, and after the fine-tuning is done, evaluate the performance. Only consider full fine-tuning when the performance is not desirable.

Experiment tracking

Experiment tracking is crucial when evaluating various fine-tuning methods like LoRA and QLoRA. It ensures reproducibility, maintains a structured version history, allows for easy collaboration, and aids in identifying optimal training configurations. Especially with numerous iterations, hyperparameters, and model versions at play, tools like Weights & Biases (W&B) become indispensable. With its seamless integration into multiple frameworks, W&B provides a comprehensive dashboard to visualize metrics, compare runs, and manage model checkpoints. It's often as simple as adding a single argument to your training script to realize these benefits; we'll show an example in the Hugging Face PEFT LoRA section.

Recipes PEFT LoRA

The llama-recipes repo has details on the different fine-tuning (FT) alternatives supported by the provided sample scripts. In particular, it highlights the use of PEFT as the preferred FT method, as it reduces the hardware requirements and prevents catastrophic forgetting. For specific cases, full parameter FT can still be valid, and different strategies can be used to still prevent modifying the model too much. Additionally, FT can be done on a single GPU or on multiple GPUs with FSDP. In order to run the recipes, follow the steps below:

1. Create a conda environment with PyTorch and additional dependencies.
2. Install the recipes as described.
3. Download the desired model from Hugging Face, either using git-lfs or using the llama download script.
4. With everything configured, run the following command:

python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name ../llama/models_hf/7B --output_dir ../llama/models_ft/7B-peft --batch_size_training 2 --gradient_accumulation_steps 2

torchtune

torchtune is a PyTorch-native library that can be used to fine-tune the Meta Llama family of models, including Meta Llama 3.
It supports the end-to-end fine-tuning lifecycle, including:

- Downloading model checkpoints and datasets
- Training recipes for fine-tuning Llama 3 using full fine-tuning, LoRA, and QLoRA
- Support for single-GPU fine-tuning capable of running on consumer-grade GPUs with 24GB of VRAM
- Scaling fine-tuning to multiple GPUs using PyTorch FSDP
- Logging metrics and model checkpoints during training using Weights & Biases
- Evaluation of fine-tuned models using EleutherAI's LM Evaluation Harness
- Post-training quantization of fine-tuned models via TorchAO
- Interoperability with inference engines including ExecuTorch

To install torchtune, simply run the pip install command:

pip install torchtune

Follow the instructions on the Hugging Face meta-llama repository to ensure you have access to the Llama 3 model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.

tune download meta-llama/Meta-Llama-3-8B \
    --output-dir <output_dir> \
    --hf-token <hf_token>

Set your environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens

The basic command for a single-device LoRA fine-tune of Llama 3 is:

tune run lora_finetune_single_device --config llama3/8B_lora_single_device

torchtune contains built-in recipes for full fine-tuning on a single device and on multiple devices with FSDP, LoRA fine-tuning on multiple devices with FSDP, and QLoRA fine-tuning on a single device with a QLoRA-specific configuration. You can find more information on fine-tuning Meta Llama models by reading the torchtune guide.

Hugging Face PEFT LoRA

Using Low Rank Adaptation (LoRA), Meta Llama is loaded to the GPU memory as quantized 8-bit weights. Fine-tuning with Hugging Face PEFT LoRA is super easy: an example fine-tuning run on Meta Llama 2 7B using the OpenAssistant dataset can be done in three simple steps:

pip install trl
git clone https://github.com/huggingface/trl
python trl/examples/scripts/sft.py \
    --model_name meta-llama/Llama-2-7b-hf \
    --dataset_name timdettmers/openassistant-guanaco \
    --load_in_4bit \
    --use_peft \
    --batch_size 4 \
    --gradient_accumulation_steps 2 \
    --log_with wandb

This takes about 16 hours on a single GPU and uses less than 10GB of GPU memory; changing the batch size to 8/16/32 will use over 11/16/25 GB of GPU memory. After the fine-tuning completes, you'll see in a new directory named "output" at least adapter_config.json and adapter_model.bin. Run the script below to infer with the base model and the new model, generated by merging the base model with the fine-tuned one:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import LoraConfig, PeftModel

model_name = "meta-llama/Llama-2-7b-chat-hf"
new_model = "output"
device_map = {"": 0}

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    device_map=device_map,
)
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

prompt = "Who wrote the book Innovator's Dilemma?"

# Inference with the base model
pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])

# Inference with the merged fine-tuned model
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])

QLoRA Fine-Tuning

Note: This has been tested on Meta Llama 2 models only.

QLoRA (Q for quantized) is more memory efficient than LoRA. In QLoRA, the pretrained model is loaded to the GPU as quantized 4-bit weights. Fine-tuning using QLoRA is also very easy to run: an example of fine-tuning Llama 2 7B with the OpenAssistant dataset can be done in four quick steps:

git clone https://github.com/artidoro/qlora
cd qlora
pip install -U -r requirements.txt
./scripts/finetune_llama2_guanaco_7b.sh

It takes about 6.5 hours to run on a single GPU, using 11GB of GPU memory. After the fine-tuning completes, the output_dir specified in ./scripts/finetune_llama2_guanaco_7b.sh will have checkpoint-xxx subfolders holding the fine-tuned adapter model files. To run inference, use the script below:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from peft import LoraConfig, PeftModel

model_id = "meta-llama/Llama-2-7b-hf"
new_model = "output/llama-2-guanaco-7b/checkpoint-1875/adapter_model"  # change if needed

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map='auto',
)
model = PeftModel.from_pretrained(model, new_model)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Who wrote the book innovator's dilemma?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])

Axolotl

Axolotl is another open source library you can use to streamline the fine-tuning of Llama 2. A good example of using Axolotl to fine-tune Meta Llama is a set of four notebooks covering the whole fine-tuning process: generate the dataset, fine-tune the model using LoRA, and evaluate and benchmark.
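For readers who want to see what PEFT is doing under the hood, here is a minimal, illustrative sketch of attaching LoRA adapters to a Llama model with the peft library. The hyperparameter values (r, lora_alpha, target modules) are example choices, not the ones used by the recipes above:

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model (assumes you have access to the weights)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)

# LoRA: train small low-rank adapter matrices instead of the full weights
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Only a tiny fraction of parameters is now trainable
model.print_trainable_parameters()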
+----------
+Quantization | How-to guides

Quantization is a technique used in machine learning to reduce the computational and memory requirements of models, making them more efficient for deployment on servers and edge devices. It involves representing model weights and activations, typically 32-bit floating point numbers, with lower precision data such as 16-bit float, brain float 16-bit, 8-bit int, or even 4/3/2/1-bit int. The benefits of quantization include smaller model sizes, faster fine-tuning, and faster inference, which is particularly beneficial in resource-constrained environments. However, the tradeoff is a reduction in model quality due to the loss of precision.

Supported quantization modes in PyTorch

Post-Training Dynamic Quantization: Weights are pre-quantized ahead of time and activations are converted to int8 during inference, just before computation. This results in faster computation due to efficient int8 matrix multiplication and maintains accuracy on the activation layer.

Post-Training Static Quantization: This technique improves performance by converting networks to use both integer arithmetic and int8 memory accesses. It involves feeding batches of data through the network and computing the resulting distributions of the different activations. This information is used to determine how the different activations should be quantized at inference time.

Quantization Aware Training (QAT): In QAT, all weights and activations are "fake quantized" during both the forward and backward passes of training. This means float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. This method usually yields higher accuracy than the other two methods, as all weight adjustments during training are made while "aware" of the fact that the model will ultimately be quantized.

More details about these methods and how they can be applied to different types of models can be found in the official PyTorch documentation. Additionally, the community has already conducted studies on the effectiveness of common quantization methods on Meta Llama 3, and the results and code to evaluate them can be found in this GitHub repository.

We will focus next on quantization tools available for Meta Llama models. As this is a constantly evolving space, the libraries and methods detailed here are the most widely used at the moment and are subject to change as the space evolves.

PyTorch quantization with TorchAO

The TorchAO library offers several methods for quantization, each with different schemes for how the activations and weights are quantized. We distinguish between two main types of quantization: weight-only quantization and dynamic quantization. For weight-only quantization, 8-bit and 4-bit quantization are supported. The 4-bit quantization also has GPTQ support for improved accuracy, which requires calibration but has the same final performance. For dynamic quantization, 8-bit activation quantization and 8-bit weight quantization are supported. This type of quantization can also be combined with SmoothQuant for improved accuracy, which requires calibration and has slightly worse performance.
Additionally, the library offers a simple API to test different methods, and automatic detection of the best quantization for a given model, known as autoquantization. This API chooses the fastest form of quantization out of 8-bit dynamic quantization and 8-bit weight-only quantization. It first identifies the shapes of the activations that the different linear layers see, then benchmarks these shapes across different types of quantized and non-quantized layers in order to pick the fastest one. It also composes with torch.compile() to generate fast kernels. For additional information on torch.compile, please see the general tutorial. Note: this library is in beta phase and in active development; API changes are expected.

HF supported quantization

Hugging Face (HF) offers multiple ways to do LLM quantization with their transformers library. For additional guidance and examples on how to use each of these beyond the brief summary presented here, please refer to their quantization guide and the transformers quantization configuration documentation. The llama-recipes code uses bitsandbytes 8-bit quantization to load the models for inference. (See below for more information about using the bitsandbytes library with Llama.)

Quanto

Quanto is a versatile PyTorch quantization toolkit that uses linear quantization. It provides features such as weight quantization, activation quantization, and compatibility with various devices and modalities. It supports quantization-aware training and is easy to integrate with custom kernels for specific devices. More details can be found in the announcement blog, GitHub repo, and HF guide.

AQLM

Additive Quantization of Language Models (AQLM) is a compression method for LLMs. It quantizes multiple weights together, taking advantage of interdependencies between them. AQLM represents groups comprising 8 to 16 weights each as a sum of multiple vector codes. This library supports fine-tuning its quantized models with Parameter-Efficient Fine-Tuning and LoRA by integrating into HF's PEFT library as well. More details can be found in the GitHub repo.

AWQ

Activation-aware Weight Quantization (AWQ) preserves a small percentage of weights that are important for LLM performance, reducing quantization loss. This allows models to run in 4-bit precision without experiencing performance degradation. Transformers supports loading models quantized with the llm-awq and autoawq libraries. More details on how to load them with the transformers library can be found in the HF documentation.

AutoGPTQ

The AutoGPTQ library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently. These weights are quantized to int4, but they're restored to fp16 on the fly during inference, reducing memory usage by 4x. More details can be found in the GitHub repo.

BitsAndBytes

BitsAndBytes is an easy option for quantizing a model to 8-bit and 4-bit. The library supports any model in any modality, as long as it supports loading with Hugging Face Accelerate and contains torch.nn.Linear layers. It also provides features for offloading weights between the CPU and GPU to support fitting very large models into memory, adjusting the outlier threshold for 8-bit quantization, skipping module conversion for certain models, and fine-tuning with 8-bit and 4-bit weights.
For 4-bit models, it allows changing the compute data type, using the Normal Float 4 (NF4) data type for weights initialized from a normal distribution, and using nested quantization to save additional memory at no additional performance cost. More details can be found in the HF documentation.
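As a concrete illustration of the bitsandbytes options just described, here is a minimal sketch of loading a Llama model in 4-bit NF4 with nested (double) quantization through transformers; the model id and compute dtype are example choices:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with nested quantization and bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # Normal Float 4 data type
    bnb_4bit_use_double_quant=True,         # nested quantization saves extra memory
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute data type
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")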
+----------
+Prompting | How-to guides

See the linked notebook for examples of the techniques discussed in this section.

Prompt engineering is a technique used in natural language processing (NLP) to improve the performance of language models by providing them with more context and information about the task at hand. It involves creating prompts, which are short pieces of text that provide additional information or guidance to the model, such as the topic or genre of the text it will generate. By using prompts, the model can better understand what kind of output is expected and produce more accurate and relevant results. In Llama 2, the size of the context, in terms of number of tokens, has doubled from 2048 to 4096.

Crafting Effective Prompts

Crafting effective prompts is an important part of prompt engineering. Here are some tips for creating prompts that will help improve the performance of your language model:

- Be clear and concise: Your prompt should be easy to understand and provide enough information for the model to generate relevant output. Avoid using jargon or technical terms that may confuse the model.
- Use specific examples: Providing specific examples in your prompt can help the model better understand what kind of output is expected. For example, if you want the model to generate a story about a particular topic, include a few sentences about the setting, characters, and plot.
- Vary the prompts: Using different prompts can help the model learn more about the task at hand and produce more diverse and creative output. Try using different styles, tones, and formats to see how the model responds.
- Test and refine: Once you have created a set of prompts, test them out on the model to see how it performs. If the results are not as expected, try refining the prompts by adding more detail or adjusting the tone and style.
- Use feedback: Finally, use feedback from users or other sources to continually improve your prompts. This can help you identify areas where the model needs more guidance and make adjustments accordingly.

Explicit Instructions

Detailed, explicit instructions produce better results than open-ended prompts. You can think of giving explicit instructions as applying rules and restrictions to how Llama 2 responds to your prompt.

Stylization:
- Explain this to me like a topic on a children's educational network show teaching elementary students.
- I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:
- Give your answer like an old timey private investigator hunting down a case step by step.

Formatting:
- Use bullet points.
- Return as a JSON object.
- Use less technical terms and help me apply it in my work in communications.

Restrictions:
- Only use academic papers.
- Never give sources older than 2020.
- If you don't know the answer, say that you don't know.

Here's an example of giving explicit instructions to get more specific results by limiting the responses to recently created sources:

Explain the latest advances in large language models to me.
# More likely to cite sources from 2017

Explain the latest advances in large language models to me. Always cite your sources.
Never cite sources older than 2020.
# Gives more specific advances and only cites sources from 2020

Prompting using Zero- and Few-Shot Learning

A shot is an example or demonstration of what type of prompt and response you expect from a large language model. This term originates from training computer vision models on photographs, where one shot was one example or instance that the model used to classify an image.

Zero-Shot Prompting

Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called "zero-shot prompting".

Text: This was the best movie I've ever seen!
The sentiment of the text is:

Text: The director was trying too hard.
The sentiment of the text is:

Few-Shot Prompting

Adding specific examples of your desired output generally results in a more accurate, consistent output. This technique is called "few-shot prompting". In this example, the generated response follows our desired format: a more nuanced sentiment classifier that gives positive, neutral, and negative response confidence percentages.

You are a sentiment classifier. For each message, give the percentage of positive/neutral/negative. Here are some samples:
Text: I liked it
Sentiment: 70% positive 30% neutral 0% negative
Text: It could be better
Sentiment: 0% positive 50% neutral 50% negative
Text: It's fine
Sentiment: 25% positive 50% neutral 25% negative
Text: I thought it was okay
Text: I loved it!
Text: Terrible service 0/10

Role Based Prompts

Role-based prompting means creating prompts based on the role or perspective of the person or entity being addressed. This technique can be useful for generating more relevant and engaging responses from language models.

Pros:
- Improves relevance: Role-based prompting helps the language model understand the role or perspective of the person or entity being addressed, which can lead to more relevant and engaging responses.
- Increases accuracy: Providing additional context about the role or perspective of the person or entity being addressed can help the language model avoid making mistakes or misunderstandings.

Cons:
- Requires effort: It takes more effort to gather and provide the necessary information about the role or perspective of the person or entity being addressed.

Example:

You are a virtual tour guide currently walking tourists through the Eiffel Tower on a night tour. Describe the Eiffel Tower to your audience, covering its history, the number of people visiting each year, the amount of time it takes to do a full tour, and why so many people visit this place each year.

Chain of Thought Technique

The chain of thought technique involves providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response. This technique can be useful for generating more thoughtful and well-reasoned responses from language models.

Pros:
- Improves coherence: Helps the language model think through a problem or question in a logical and structured way, which can lead to more coherent and relevant responses.
- Increases depth: Providing a series of prompts or questions can help the language model explore a topic more deeply and thoroughly, potentially leading to more insightful and informative responses.

Cons:
- Requires effort: The chain of thought technique requires more effort to create and provide the necessary prompts or questions.

Example:

You are a virtual tour guide from 1901. You have tourists visiting the Eiffel Tower. Describe the Eiffel Tower to your audience. Begin with:
1. Why it was built
2. How long it took to build
3. Where the materials were sourced to build it
4. The number of people it took to build
5. End with the number of people visiting the Eiffel Tower annually in the 1900s, the amount of time it takes to complete a full tour, and why so many people visit this place each year.
Make your tour funny by including 1 or 2 funny jokes at the end of the tour.

Self-Consistency

LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency improves accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute):

John found that the average of 15 numbers is 40. If 10 is added to each number, then the mean of the numbers is?
Report the answer surrounded by three backticks, for example: ```123```

Running the above several times and taking the most commonly returned value for the answer would make use of the self-consistency approach.

Retrieval-Augmented Generation

Common facts are generally available from today's large models out-of-the-box (i.e. using just the model weights). More specific data is unlikely to be available, though. For example:

What is the capital of California?
# The capital of California is Sacramento...

What was the temperature in Menlo Park on December 12th, 2023?
# I'm just an AI, I don't have access to real-time or historical weather data...

Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt that has been retrieved from an external database. It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning, which may also negatively impact the foundational model's capabilities. This could be as simple as a lookup table or as sophisticated as a vector database containing all of your company's knowledge:

Given the following information about temperatures in Menlo Park:
2023-12-11 : 52 degrees Fahrenheit
2023-12-12 : 51 degrees Fahrenheit
2023-12-13 : 55 degrees Fahrenheit

What was the temperature in Menlo Park on 2023-12-12?
# Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit.

What was the temperature in Menlo Park on 2023-07-18?
# Sorry, I don't have information about the temperature in Menlo Park on 2023-07-18. The information provided only includes temperatures for December 11th, 12th, and 13th of 2023.

Program-Aided Language Models

LLMs, by nature, aren't great at performing calculations. While LLMs are bad at arithmetic, they're great at code generation. Program-Aided Language models leverage this fact by instructing the LLM to write code to solve calculation tasks:

Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))

Here is the Python code to calculate the given expression:

# ((-5 + 93 * 4 - 0) * (4**4 + -7 + 0 * 5))
# Calculate the expressions inside the parentheses
a = -5 + 93 * 4 - 0
b = 4**4 + -7 + 0 * 5
# Multiply a and b
result = a * b
# Print the result
print(result)

Using the code directly provides the correct result.

Limiting Extraneous Tokens

A common challenge is generating a response without extraneous tokens (e.g. "Sure! Here's more information on..."). By combining a role, rules and restrictions, explicit instructions, and an example, the model can be prompted to generate the desired response.

You are a robot that only outputs JSON. You reply in JSON format with the field 'zip_code'.
Example question: What is the zip code of the Empire State Building?
Example answer: {'zip_code': 10118}
Now here is my question: What is the zip code of Menlo Park?
# "{'zip_code': 94025}"

Reduce Hallucinations

Meta's Responsible Use Guide is a great resource for understanding how best to prompt and address input/output risks of the language model; refer to pages 14-17. Here are some examples of how a language model might hallucinate and some strategies for fixing the issue:

Example 1: A language model is asked to generate a response to a question about a topic it has not been trained on. The language model may hallucinate information or make up facts that are not accurate or supported by evidence.
Fix: Provide the language model with more context or information about the topic to help it understand what is being asked and generate a more accurate response. You could also ask the language model to provide sources or evidence for any claims it makes, to ensure that its responses are based on factual information.

Example 2: A language model is asked to generate a response to a question that requires a specific perspective or point of view. The language model may hallucinate information or make up facts that are not consistent with the desired perspective or point of view.
Fix: Provide the language model with additional information about the desired perspective or point of view, such as the goals, values, or beliefs of the person or entity being addressed. This can help the language model understand the context and generate a response that is more consistent with the desired perspective or point of view.

Example 3: A language model is asked to generate a response to a question that requires a specific tone or style. The language model may hallucinate information or make up facts that are not consistent with the desired tone or style.
Fix: Provide the language model with additional information about the desired tone or style, such as the audience or purpose of the communication. This can help the language model understand the context and generate a response that is more consistent with the desired tone or style.

Overall, the key to avoiding hallucination in language models is to provide them with clear and accurate information and context, and to carefully monitor their responses to ensure that they are consistent with your expectations and requirements.
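To make the self-consistency idea above concrete, here is a minimal, illustrative sketch of sampling several generations and taking a majority vote. The generate() function stands in for whatever inference backend you use (local pipeline, Ollama, a hosted API), and the backtick-extraction regex assumes the answer format requested in the example prompt:

import re
from collections import Counter

def extract_answer(text):
    # Pull out the value the model reported between triple backticks
    match = re.search(r"```(.+?)```", text, re.DOTALL)
    return match.group(1).strip() if match else None

def self_consistent_answer(prompt, generate, n=5):
    # Sample n independent generations and keep the parseable answers
    answers = [extract_answer(generate(prompt)) for _ in range(n)]
    answers = [a for a in answers if a is not None]
    if not answers:
        return None
    # Majority vote: the most frequent answer wins
    return Counter(answers).most_common(1)[0][0]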
+----------
+Validation | How-to guides

As the saying goes, if you can't measure it, you can't improve it. In this section, we are going to cover different ways to measure and ultimately validate Llama, so that it's possible to determine the improvements provided by different fine-tuning techniques.

Quantitative techniques

The focus of these techniques is to gather objective metrics that can be compared easily during and after each fine-tuning run, and to provide quick feedback on whether the model is performing well. The main metrics collected are loss and perplexity.

K-fold cross-validation

This method consists of dividing the dataset into k subsets or folds, and then fine-tuning the model k times. On each run, a different fold is used as the validation dataset, using the rest for training. The performance results of each run are averaged out for the final report. This provides a more accurate metric of the performance of the model across the complete dataset, as all entries serve both for validation and training. While it produces the most accurate prediction of how a model is going to generalize after fine-tuning on a given dataset, it is computationally expensive and better suited for small datasets.

Holdout

When using a holdout, the dataset is split into two or three subsets: training and validation, with test as optional. The test and validation sets can represent 10% - 30% of the dataset each. As the name implies, the first two subsets are used for training and validating the model during fine-tuning, while the third is used only after fine-tuning is complete to evaluate how well the model generalizes on data it has not seen in either phase. The advantage of having three partitions is that it provides a way to evaluate the model after fine-tuning for an unbiased view into the model performance, but it requires a slightly bigger dataset to allow for a proper split. This is currently implemented in the Llama recipes fine-tuning script with two subsets of the dataset, train and validation. The data is collected in a JSON file that can be plotted to easily interpret the results and evaluate how the model is performing.

Standard evaluation tools

There are multiple projects that provide standard evaluation. They provide predefined tasks with commonly used metrics to evaluate the performance of LLMs, like HellaSwag and TruthfulQA. These tools can be used to test if the model has degraded after fine-tuning. Additionally, a custom task can be created using the dataset intended to fine-tune the model, effectively automating the manual verification of the model's performance before and after fine-tuning. These types of projects provide a quantitative way of looking at the model's performance in simulated real-world examples. Some of these projects include the LM Evaluation Harness (used to create the HF leaderboard), HELM, BIG-bench, and OpenCompass. As mentioned before, the torchtune library provides integration with the LM Evaluation Harness to test fine-tuned models as well.

Interpreting Loss and Perplexity

The loss value used comes from transformers' LlamaForCausalLM, which initializes a different loss function depending on the objective required of the model.
The objective of this section is to give a brief overview of how to understand the results from loss and perplexity as an initial evaluation of model performance during fine tuning. We also calculate the perplexity as an exponentiation of the loss value.

In our recipes, we use a simple holdout during fine tuning. Using the logged loss values for both the train and validation datasets, the curves for both are plotted to analyze the results of the process. Given the setup in the recipe, the expected behavior is a log-shaped graph that shows diminishing train and validation loss values as fine tuning progresses. If the validation curve starts going up while the train curve continues decreasing, the model is overfitting and it's not generalizing well. Some alternatives to test when this happens are early stopping, verifying that the validation dataset is statistically representative of the train dataset, data augmentation, using parameter efficient fine tuning, or using k-fold cross-validation to better tune the hyperparameters.

Qualitative techniques

Manual testing

Manually evaluating a fine tuned model will vary according to the fine tuning objective and available resources. Here we provide general guidelines on how to accomplish it. With a dataset prepared for fine tuning, a part of it can be separated into a manual test subset, which can be further extended with general knowledge questions that might be relevant to the specific use case. In addition to these general questions, we recommend executing standard evaluations as well, and comparing the results with the baseline for the fine tuned model.

To rate the results, clear evaluation criteria should be defined that are relevant to the dataset being used. Example criteria are accuracy, coherence and safety. Create a rubric for each criterion and define what would be required for an output to receive a specific score. With these guidelines in place, distribute the test questions among a diverse set of reviewers to have multiple data points for each question. With multiple data points for each question and different criteria, a final score can be calculated for each query, allowing for weighting the scores based on the preferred focus for the final model.
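Since perplexity is just the exponentiation of the loss, both metrics can be tracked from the same logged values. A minimal sketch follows; the loss numbers are illustrative, not from a real run.

```python
import math

train_loss = [2.10, 1.62, 1.31, 1.12, 1.01, 0.95]
val_loss   = [2.15, 1.70, 1.45, 1.38, 1.41, 1.48]  # starts rising: overfitting

# Perplexity is exp(loss), so the two curves carry the same information.
for step, (tl, vl) in enumerate(zip(train_loss, val_loss)):
    print(f"step {step}: train ppl={math.exp(tl):.2f}, val ppl={math.exp(vl):.2f}")

# Simple overfitting check: validation loss rising while train loss still falls.
if val_loss[-1] > min(val_loss) and train_loss[-1] == min(train_loss):
    print("validation loss is rising while train loss falls -> consider early stopping")
```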
+----------
+Meta Code Llama | Integration guides

Meta Code Llama is an open-source family of LLMs based on Llama 2, providing SOTA performance on code tasks. It consists of: Foundation models (Meta Code Llama), Python specializations (Meta Code Llama - Python), and Instruction-following models (Meta Code Llama - Instruct), with 7B, 13B, 34B and 70B parameters each. See the recipes for examples on how to make use of Meta Code Llama. The following diagram shows how each of the Meta Code Llama models is trained: (Fig: The Meta Code Llama specialization pipeline. The different stages of fine-tuning annotated with the number of tokens seen during training)

One of the best ways to try out and integrate with Meta Code Llama is using the Hugging Face ecosystem by following the blog, which has:
- Demo links for all versions of Meta Code Llama
- Working inference code for code completion
- Working inference code for code infilling between code prefix and suffix as inputs
- Working inference code to do 4-bit loading of the 34B model so it can fit on consumer GPUs
- Guide on how to write prompts for the instruction models to have multi-turn conversations about coding
- Guide on how to use Text Generation Inference for model deployment in production
- Guide on how to integrate code autocomplete as an extension with VSCode
- Guide on how to evaluate Meta Code Llama models

If the model does not perform well on your specific task, for example if none of the Meta Code Llama models (7B/13B/34B/70B) generate the correct answer for a text to SQL task, fine-tuning should be considered. There is a complete guide and notebook on how to fine-tune Meta Code Llama using the 7B model hosted on Hugging Face. It uses the LoRA fine-tuning method and can run on a single GPU. As shown in the Meta Code Llama References, fine-tuning improves the performance of Meta Code Llama on SQL code generation. It can be critical that LLMs are able to interoperate with structured data and SQL, the primary way to access structured data; we are developing demo apps in LangChain and RAG with Llama 2 to show this.

Compatible extensions

In most cases, the simplest method to integrate any model size is through ollama, occasionally combined with litellm. Ollama is a program that allows quantized versions of popular LLMs to run locally. It leverages the GPU and can even run Code Llama 34B on an M1 Mac. Litellm is a simple proxy that can serve an OpenAI-style API, so it's easy to replace OpenAI in existing applications, in our case, editor extensions.

Continue

This extension can be used with ollama, allowing for easy local-only execution. Additionally, it provides a simple interface to 1/ chat with the model directly running inside VS Code and 2/ select specific files and sections to edit or explain. This extension is an effective way to evaluate Llama because it provides simple and useful features. It also allows developers to build trust, by creating diffs for each proposed change and showing exactly what is being changed before saving the file. Handling the context for the LLM is easy and relies heavily on keyboard shortcuts.
It's important to note that all the interactions with the extension are recorded in jsonl format. The objective is to provide data for future fine tuning of the models, based on the feedback recorded during real world usage.

Steps to install with ollama:
1. Install ollama and pull a model (e.g. `ollama pull codellama:13b-instruct`)
2. Install the extension from the Visual Studio Code marketplace
3. Open the extension and click on the + sign to add models
4. Select Ollama as a provider
5. In the next screen, select the model and size pulled with ollama
6. Select the model in the conversation and start using the extension

Steps to install with TGI: For better performance, or for usage on incompatible hardware, TGI can be used on a server to run the model. For example, ollama on Intel Macs is too slow to be useful, even with the 7B models. On the contrary, M1 Macs can run the 34B Meta Code Llama models quickly. For this, you should have TGI running on a server with appropriate hardware, as detailed in this guide. Once Continue.dev is installed, follow these steps:
1. Open the configs with /config
2. Use the HuggingFaceTGI class and pass your instance URL in the server_url parameter
3. Assign a name to it and save the config file.

llm-vscode

This extension from Hugging Face provides an open alternative to the closed-source GitHub Copilot, allowing the same functionality, context-based autocomplete suggestions, to work with open source models. It works out of the box with an HF Token and their Inference API, but can be configured to use any TGI-compatible API. For usage with a self-hosted TGI server, follow these steps:
1. Install the extension from the marketplace
2. Open the extension configs
3. Select the correct template for the model published in your TGI instance in the Config Template field. For testing, use the one named codellama/CodeLlama-13b-hf
4. Pass in the URL to your TGI instance in the Model ID or Endpoint field.

To avoid rate limiting messages, log in to HF by providing a read-only token. This was necessary even for a self-hosted instance. It currently does not support local models unless TGI is running locally. It would be great to add ollama support to this extension, as it would accelerate inference with the smaller models by avoiding the network.
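For orientation, here is a minimal sketch of querying a self-hosted TGI instance like the ones the extensions above point at, using TGI's /generate REST endpoint. The server URL is an assumption; replace it with your own instance.

```python
import requests

TGI_URL = "http://my-tgi-server:8080"  # hypothetical instance URL

def complete(prompt: str, max_new_tokens: int = 128) -> str:
    """Send a completion request to a TGI server and return the generated text."""
    resp = requests.post(
        f"{TGI_URL}/generate",
        json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]

print(complete("def fibonacci(n):"))  # code completion with a Code Llama model
```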
+----------
+LangChain | Integration guides

LangChain is an open source framework for building LLM-powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to build the LLM plumbing from scratch. The main building blocks/APIs of LangChain are:

- The Models or LLMs API can be used to easily connect to all popular LLM providers such as Hugging Face or Replicate, where all types of Llama 2 models are hosted.
- The Prompts API implements the useful prompt template abstraction to help you easily reuse good, often long and detailed, prompts when building sophisticated LLM apps. There are also many built-in prompts for common operations such as summarization or connection to SQL databases for quick app development. Prompts can also work closely with output parsers to easily extract useful information from the LLM output.
- The Memory API can be used to save conversation history and feed it along with new questions to the LLM, so multi-turn natural conversation chat can be implemented.
- The Chains API includes the most basic LLMChain, which combines an LLM with a prompt to generate the output, as well as more advanced chains that let you build sophisticated LLM apps in a systematic way. For example, the output of the first LLM chain can be the input/prompt of another chain, or a chain can have multiple inputs and/or multiple outputs, either pre-defined or dynamically decided by the LLM output of a prompt.
- The Indexes API allows documents outside of the LLM to be saved to a vector store, after first being converted to embeddings: numerical vector representations of the documents' meaning. When a user later enters a question about the documents, the relevant data stored in the vector store is retrieved and sent, along with the query, to the LLM to generate an answer related to the documents.
- The Agents API uses the LLM as the reasoning engine and connects it with other sources of data, third-party or custom tools, or APIs such as web search or Wikipedia APIs. Depending on the user's input, the agent can decide which tool to call to handle the input.

LangChain can be used as a powerful retrieval augmented generation (RAG) tool to integrate internal data, or more recent public data, with the LLM to QA or chat about the data. LangChain already supports loading many types of unstructured and structured data. To learn more about LangChain, enroll for free in the two LangChain short courses. Be aware that the code in the courses uses the OpenAI ChatGPT LLM, but we've published a series of examples using LangChain with Llama. There is also a Getting to Know Llama notebook, presented at Meta Connect.
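To illustrate the Prompts and Chains building blocks described above, here is a minimal sketch using the classic LangChain API. The Replicate model identifier is a placeholder (any LLM wrapper works here), and a REPLICATE_API_TOKEN is assumed to be set in the environment.

```python
from langchain.chains import LLMChain
from langchain.llms import Replicate
from langchain.prompts import PromptTemplate

# Placeholder model identifier; substitute the Llama 2 version you use.
llm = Replicate(model="meta/llama-2-7b-chat:<version>")

# A reusable prompt template with one input variable.
prompt = PromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)

# The most basic chain: LLM + prompt.
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(text="LangChain provides abstractions for building LLM apps."))
```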
+----------
+LlamaIndex | Integration guides

LlamaIndex is another popular open source framework for building LLM applications. Like LangChain, LlamaIndex can also be used to build RAG applications by easily integrating data that is not built into the LLM with the LLM. There are three key tools in LlamaIndex:

- Connecting Data: connect data of any type (structured, unstructured or semi-structured) to the LLM
- Indexing Data: index and store the data
- Querying LLM: combine the user query and retrieved query-related data to query the LLM and return a data-augmented answer

LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools. Integrating the two may provide the most performant and effective solution for building real world RAG-powered Llama apps. For an example of how to integrate LlamaIndex with Llama 2, see this example. We also published a complete demo app showing how to use LlamaIndex to chat with Llama 2 about live data via the you.com API.

It's worth noting that LlamaIndex has implemented many RAG-powered LLM evaluation tools to easily measure the quality of retrieval and response, including:

- Question Generation: call the LLM to auto generate questions to create an evaluation dataset.
- Faithfulness Evaluator: evaluate if the generated answer is faithful to the retrieved context or if there's hallucination.
- Correctness Evaluator: evaluate if the generated answer matches the reference answer.
- Relevancy Evaluator: evaluate if the answer and the retrieved context are relevant and consistent for the given query.
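The three tools above map directly onto a few lines of the classic llama_index API. A minimal sketch follows; the ./data directory is a placeholder for your own documents, and an LLM plus embedding model are assumed to be configured for the environment.

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()  # connect the data
index = VectorStoreIndex.from_documents(documents)       # index and store it

query_engine = index.as_query_engine()                   # query the LLM
response = query_engine.query("What does this document say about Llama 2?")
print(response)  # data-augmented answer grounded in the retrieved documents
```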
+----------
+# Llama Recipes: Examples to get started using the Llama models from Meta

The 'llama-recipes' repository is a companion to the [Meta Llama 3](https://github.com/meta-llama/llama3) models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem. The examples here showcase how to run Meta Llama locally, in the cloud, and on-prem. [Meta Llama 2](https://github.com/meta-llama/llama) is also supported in this repository. We highly recommend that everyone utilize [Meta Llama 3](https://github.com/meta-llama/llama3) due to its enhanced capabilities.

> [!IMPORTANT]
> Meta Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer).
>
> | Token | Description |
> |---|---|
> | `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
> | `<\|end_of_text\|>` | This is equivalent to the EOS token. It is usually unused in multiturn conversations; instead, every message is terminated with `<\|eot_id\|>`. |
> | `<\|eot_id\|>` | This token signifies the end of the message in a turn, i.e. the end of a single message by a system, user or assistant role as shown below. |
> | `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles are: system, user, assistant. |
>
> A multiturn conversation with Meta Llama 3 follows this prompt template:
> ```
> <|begin_of_text|><|start_header_id|>system<|end_header_id|>
> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
> ```
> Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change.
>
> More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3).

> [!NOTE]
> The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
> Make sure you update your local clone by running `git pull origin main`

## Table of Contents

- [Llama Recipes: Examples to get started using the Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta)
- [Table of Contents](#table-of-contents)
- [Getting Started](#getting-started)
  - [Prerequisites](#prerequisites)
    - [PyTorch Nightlies](#pytorch-nightlies)
  - [Installing](#installing)
    - [Install with pip](#install-with-pip)
    - [Install with optional dependencies](#install-with-optional-dependencies)
    - [Install from source](#install-from-source)
  - [Getting the Meta Llama models](#getting-the-meta-llama-models)
    - [Model conversion to Hugging Face](#model-conversion-to-hugging-face)
- [Repository Organization](#repository-organization)
  - [`recipes/`](#recipes)
  - [`src/`](#src)
- [Contributing](#contributing)
- [License](#license)

## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

### Prerequisites

#### PyTorch Nightlies

If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.

### Installing

Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.

> Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`.
> H100 GPUs work better with CUDA >12.0

#### Install with pip
```
pip install llama-recipes
```

#### Install with optional dependencies

Llama-recipes offers the installation of optional packages. There are three optional dependency groups. To run the unit tests we can install the required dependencies with:
```
pip install llama-recipes[tests]
```
For the vLLM example we need additional requirements that can be installed with:
```
pip install llama-recipes[vllm]
```
To use the sensitive topics safety checker install with:
```
pip install llama-recipes[auditnlg]
```
Optional dependencies can also be combined with [option1,option2].

#### Install from source

To install from source, e.g. for development, use these commands. We're using hatchling as our build backend, which requires an up-to-date pip as well as the setuptools package.
```
git clone git@github.com:meta-llama/llama-recipes.git
cd llama-recipes
pip install -U pip setuptools
pip install -e .
```
For development and contributing to llama-recipes please install all optional dependencies:
```
pip install -U pip setuptools
pip install -e .[tests,auditnlg,vllm]
```

### Getting the Meta Llama models

You can find Meta Llama models on the Hugging Face hub [here](https://huggingface.co/meta-llama), **where models with `hf` in the name are already converted to Hugging Face checkpoints so no further conversion is needed**. The conversion step below is only for original model weights from Meta that are hosted on the Hugging Face model hub as well.

#### Model conversion to Hugging Face

The recipes and notebooks in this folder use the Meta Llama model definition provided by Hugging Face's transformers library.
Given that the original checkpoint resides under models/7B, you can install all requirements and convert the checkpoint with:

```bash
## Install Hugging Face Transformers from source
pip freeze | grep transformers ## verify it is version 4.31.0 or higher
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install protobuf
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
   --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```

## Repository Organization

Most of the code dealing with Llama usage is organized across 2 main folders: `recipes/` and `src/`.

### `recipes/`

Contains examples organized in folders by topic:

| Subfolder | Description |
|---|---|
| [quickstart](./recipes/quickstart) | The "Hello World" of using Llama, start here if you are new to using Llama. |
| [finetuning](./recipes/finetuning) | Scripts to finetune Llama on single-GPU and multi-GPU setups |
| [inference](./recipes/inference) | Scripts to deploy Llama for inference locally and using model servers |
| [use_cases](./recipes/use_cases) | Scripts showing common applications of Meta Llama3 |
| [responsible_ai](./recipes/responsible_ai) | Scripts to use PurpleLlama for safeguarding model outputs |
| [llama_api_providers](./recipes/llama_api_providers) | Scripts to run inference on Llama via hosted endpoints |
| [benchmarks](./recipes/benchmarks) | Scripts to benchmark Llama models inference on various backends |
| [code_llama](./recipes/code_llama) | Scripts to run inference with the Code Llama models |
| [evaluation](./recipes/evaluation) | Scripts to evaluate fine-tuned Llama models using `lm-evaluation-harness` from `EleutherAI` |

### `src/`

Contains modules which support the example recipes:

| Subfolder | Description |
|---|---|
| [configs](src/llama_recipes/configs/) | Contains the configuration files for PEFT methods, FSDP, Datasets, Weights & Biases experiment tracking. |
| [datasets](src/llama_recipes/datasets/) | Contains individual scripts for each dataset to download and process. |
| [inference](src/llama_recipes/inference/) | Includes modules for inference for the fine-tuned models. |
| [model_checkpointing](src/llama_recipes/model_checkpointing/) | Contains FSDP checkpoint handlers. |
| [policies](src/llama_recipes/policies/) | Contains FSDP scripts to provide different policies, such as mixed precision, transformer wrapping policy and activation checkpointing, along with any precision optimizer (used for running FSDP with pure bf16 mode). |
| [utils](src/llama_recipes/utils/) | Utility files: `train_utils.py` provides the training/eval loop and more train utils; `dataset_utils.py` to get preprocessed datasets; `config_utils.py` to override the configs received from CLI; `fsdp_utils.py` provides the FSDP wrapping policy for PEFT methods; `memory_utils.py` context manager to track different memory stats in the train loop. |

## Contributing

Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.

## License

See the License file for Meta Llama 3 [here](https://llama.meta.com/llama3/license/) and the Acceptable Use Policy [here](https://llama.meta.com/llama3/use-policy/)

See the License file for Meta Llama 2 [here](https://llama.meta.com/llama2/license/) and the Acceptable Use Policy [here](https://llama.meta.com/llama2/use-policy/)
+----------
+# **Model Details**

Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context Length | GQA | Tokens | LR |
|---|---|---|---|---|---|---|
| Llama 2 | *A new mix of publicly available online data* | 7B | 4k | ✗ | 2.0T | 3.0 x 10<sup>-4</sup> |
| Llama 2 | *A new mix of publicly available online data* | 13B | 4k | ✗ | 2.0T | 3.0 x 10<sup>-4</sup> |
| Llama 2 | *A new mix of publicly available online data* | 70B | 4k | ✔ | 2.0T | 1.5 x 10<sup>-4</sup> |

**Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "Llama-2: Open Foundation and Fine-tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/.

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md).

# **Intended Use**

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 2 Community License. Use in languages other than English.**

**Note: Developers may fine-tune Llama 2 models for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy.

# **Hardware and Software**

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W).
Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 2 7B | 184320 | 400 | 31.22 |
| Llama 2 13B | 368640 | 400 | 62.44 |
| Llama 2 70B | 1720320 | 400 | 291.42 |
| Total | 3311616 | | 539.00 |

**CO2 emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

# **Training Data**

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

# **Evaluation Results**

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

| Model | Size | Code | Commonsense Reasoning | World Knowledge | Reading Comprehension | Math | MMLU | BBH | AGI Eval |
|---|---|---|---|---|---|---|---|---|---|
| Llama 1 | 7B | 14.1 | 60.8 | 46.2 | 58.5 | 6.95 | 35.1 | 30.3 | 23.9 |
| Llama 1 | 13B | 18.9 | 66.1 | 52.6 | 62.3 | 10.9 | 46.9 | 37.0 | 33.9 |
| Llama 1 | 33B | 26.0 | 70.0 | 58.4 | 67.6 | 21.4 | 57.8 | 39.8 | 41.7 |
| Llama 1 | 65B | 30.7 | 70.7 | 60.5 | 68.6 | 30.8 | 63.4 | 43.5 | 47.6 |
| Llama 2 | 7B | 16.8 | 63.9 | 48.9 | 61.3 | 14.6 | 45.3 | 32.6 | 29.3 |
| Llama 2 | 13B | 24.5 | 66.9 | 55.4 | 65.8 | 28.7 | 54.8 | 39.4 | 39.1 |
| Llama 2 | 70B | **37.5** | **71.9** | **63.6** | **69.4** | **35.2** | **68.9** | **51.2** | **54.2** |

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

| Model | Size | TruthfulQA | Toxigen |
|---|---|---|---|
| Llama 1 | 7B | 27.42 | 23.00 |
| Llama 1 | 13B | 41.74 | 23.08 |
| Llama 1 | 33B | 44.19 | 22.57 |
| Llama 1 | 65B | 48.71 | 21.77 |
| Llama 2 | 7B | 33.29 | **21.25** |
| Llama 2 | 13B | 41.86 | 26.10 |
| Llama 2 | 70B | **50.18** | 24.60 |

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

| Model | Size | TruthfulQA | Toxigen |
|---|---|---|---|
| Llama-2-Chat | 7B | 57.04 | **0.00** |
| Llama-2-Chat | 13B | 62.18 | **0.00** |
| Llama-2-Chat | 70B | **64.14** | 0.01 |

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

# **Ethical Considerations and Limitations**

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
+----------
+# Llama 2

We are unlocking the power of large language models. Llama 2 is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly.

This release includes model weights and starting code for pre-trained and fine-tuned Llama language models — ranging from 7B to 70B parameters.

This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/).

## Updates post-launch

See [UPDATES.md](UPDATES.md). Also, for a running list of frequently asked questions, see [here](https://ai.meta.com/llama/faq/).

## Download

In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download.

Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`.

Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link.

### Access to Hugging Face

We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama). You can request access to the models by acknowledging the license and filling the form in the model card of a repo. After doing so, you should get access to all the Llama models of a version (Code Llama, Llama 2, or Llama Guard) within 1 hour.

## Quick Start

You can follow the steps below to quickly get up and running with Llama 2 models. These steps will let you run quick inference locally. For more examples, see the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes).

1. In a conda env with PyTorch / CUDA available, clone and download this repository.
2. In the top-level directory run: `pip install -e .`
3. Visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and register to download the model/s.
4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script.
5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script.
   - Make sure to grant execution permissions to the download.sh script
   - During this process, you will be prompted to enter the URL from the email.
   - Do not use the “Copy Link” option but rather make sure to manually copy the link from the email.
6. Once the model/s you want have been downloaded, you can run the model locally using the command below:
   ```bash
   torchrun --nproc_per_node 1 example_chat_completion.py \
       --ckpt_dir llama-2-7b-chat/ \
       --tokenizer_path tokenizer.model \
       --max_seq_len 512 --max_batch_size 6
   ```

**Note**
- Replace `llama-2-7b-chat/` with the path to your checkpoint directory and `tokenizer.model` with the path to your tokenizer model.
- The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using.
- Adjust the `max_seq_len` and `max_batch_size` parameters as needed.
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file.

## Inference

Different models require different model-parallel (MP) values:

|  Model | MP |
|--------|----|
| 7B     | 1  |
| 13B    | 2  |
| 70B    | 8  |

All models support sequence length up to 4096 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware.

### Pretrained Models

These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value):

```bash
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir llama-2-7b/ \
    --max_seq_len 128 --max_batch_size 4
```

### Fine-tuned Chat Models

The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212) needs to be followed, including the `[INST]` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). A minimal sketch of this format appears at the end of this document. You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code.

Examples using llama-2-7b-chat:

```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --max_seq_len 512 --max_batch_size 6
```

Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](Responsible-Use-Guide.pdf). More details can be found in our research paper as well.

## Issues

Please report any software “bug”, or other problems with the models, through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Model Card

See [MODEL_CARD.md](MODEL_CARD.md).

## License

Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements.

See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md)

## References

1. [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)
2. [Llama 2 technical overview](https://ai.meta.com/resources/models-and-libraries/llama)
3. [Open Innovation AI Research Community](https://ai.meta.com/llama/open-innovation-ai-research-community/)

For common questions, the FAQ can be found [here](https://ai.meta.com/llama/faq/), which will be kept up to date over time as new questions arise.

## Original Llama

The repo for the original llama release is in the [`llama_v1`](https://github.com/facebookresearch/llama/tree/llama_v1) branch.
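As referenced in the Fine-tuned Chat Models section above, here is a minimal sketch of the Llama 2 chat prompt format with its `[INST]` and `<<SYS>>` tags. It mirrors the structure of `chat_completion` in llama/generation.py, but is a simplified illustration, not the reference implementation (BOS/EOS tokens are added by the tokenizer).

```python
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_turn(system_prompt: str, user_message: str) -> str:
    """Format the first user turn of a Llama 2 chat prompt."""
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message.strip()} {E_INST}"

print(format_turn("You are a helpful assistant.", "What is Llama 2?"))
```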
+----------
+## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/) and [Llama 3 Community License](https://llama.meta.com/llama3/license/). Use in languages other than English.**

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the [Llama 3 Community License](https://llama.meta.com/llama3/license/) and the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model.
Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_details.md).

### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| | ARC-Challenge (25-shot) | 78.6 | 53.7 | | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | | 85.6 | 82.6 |
| | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | | 25.7 | 77.4 | | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training and fine-tuning through to the deployment of systems composed of safeguards, to tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

Safety: For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

Refusals: In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse: If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).

#### Critical risks

CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives): We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### Cyber Security

We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### Child Safety

Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks, and to inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development.
For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking into account market-specific nuances and experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use, and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)

## Citation instructions

```
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Amit Sangani; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Ash JJhaveri; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hamid Shojanazeri; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh;
Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Puxin Xu; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
+----------
+# Meta Llama 3

We are unlocking the power of large language models. Our latest version of Llama is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly.

This release includes model weights and starting code for pre-trained and instruction-tuned Llama 3 language models — including sizes of 8B to 70B parameters.

This repository is a minimal example of loading Llama 3 models and running inference. For more detailed examples, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/).

## Download

To download the model weights and tokenizer, please visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then, run the download.sh script, passing the URL provided when prompted to start the download.

Pre-requisites: Ensure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`.

Remember that the links expire after 24 hours and a certain amount of downloads. You can always re-request a link if you start seeing errors such as `403: Forbidden`.

### Access to Hugging Face

We also provide downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps:

- Visit one of the repos, for example [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
- Read and accept the license. Once your request is approved, you'll be granted access to all the Llama 3 models. Note that requests used to take up to one hour to get processed.
- To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the `original` folder. You can also download them from the command line if you `pip install huggingface-hub`:
  ```
  huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
  ```
- To use with transformers, the following [pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines) snippet will download and cache the weights:
  ```python
  import transformers
  import torch

  model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

  pipeline = transformers.pipeline(
      "text-generation",
      model=model_id,
      model_kwargs={"torch_dtype": torch.bfloat16},
      device="cuda",
  )
  ```

## Quick Start

You can follow the steps below to get up and running with Llama 3 models quickly. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes).

1. Clone and download this repository in a conda env with PyTorch / CUDA.
2. In the top-level directory run: `pip install -e .`
3. Visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and register to download the model/s.
4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script.
5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script.
   - Make sure to grant execution permissions to the download.sh script
   - During this process, you will be prompted to enter the URL from the email.
   - Do not use the “Copy Link” option; copy the link from the email manually.
6. Once the model/s you want have been downloaded, you can run the model locally using the command below:

torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6

- Replace `Meta-Llama-3-8B-Instruct/` with the path to your checkpoint directory and `Meta-Llama-3-8B-Instruct/tokenizer.model` with the path to your tokenizer model.
- The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using.
- Adjust the `max_seq_len` and `max_batch_size` parameters as needed.
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file.

Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 8B    | 1  |
| 70B   | 8  |

All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware.

### Pretrained Models

These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value):

torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir Meta-Llama-3-8B/ \
    --tokenizer_path Meta-Llama-3-8B/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4

### Instruction-tuned Models

The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, specific formatting defined in [`ChatFormat`](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L202) needs to be followed: the prompt begins with a `<|begin_of_text|>` special token, after which one or more messages follow. Each message starts with the `<|start_header_id|>` tag, the role (`system`, `user` or `assistant`), and the `<|end_header_id|>` tag. After a double newline `\n\n`, the message's contents follow. The end of each message is marked by the `<|eot_id|>` token. You can also deploy additional classifiers to filter out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using llama-3-8b-chat:

torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6

Llama 3 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the [Responsible Use Guide](https://ai.meta.com/static-resource/responsible-use-guide/). Please report any software “bug” or other problems with the models through one of the following means:

- Reporting issues with the model: [https://github.com/meta-llama/llama3/issues](https://github.com/meta-llama/llama3/issues)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

Our model and weights are licensed for researchers and commercial entities, upholding the principles of openness.
Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md) ## Questions For common questions, the FAQ can be found [here](https://llama.meta.com/faq), which will be updated over time as new questions arise.
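To make the `ChatFormat` description above concrete, here is a minimal illustrative sketch that assembles a single-turn Llama 3 instruct prompt by hand; the helper function is an assumption for illustration, not part of the official repo:

```python
# Minimal sketch of the Llama 3 chat format described above; the special tokens
# come from the Llama 3 tokenizer, the helper name is illustrative only.
def format_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(format_llama3_prompt("You are a helpful assistant.", "What is the capital of France?"))
```

The model's completion then follows the final assistant header and is terminated by its own `<|eot_id|>` token.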
+----------
+# Code Llama ## **Model Details** **Model Developers** Meta AI **Variations** Code Llama comes in four model sizes, and three variants: 1) Code Llama: our base models are designed for general code synthesis and understanding 2) Code Llama - Python: designed specifically for Python 3) Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B, 34B and 70B parameters. **Input** Models input text only. **Output** Models output text only. **Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B, 13B and 70B additionally support infilling text generation. All models but Code Llama - Python 70B and Code Llama - Instruct 70B were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time. **Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released  as we improve model safety with community feedback. **Licence** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/). **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)". **Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)). ## **Intended Use** **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistance and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## **Hardware and Software** **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed by Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program. **Training data** All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). Code Llama - Instruct uses additional instruction fine-tuning data. **Evaluation Results** See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. 
## **Ethical Considerations and Limitations** Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
+----------
+# Introducing Code Llama Code Llama is a family of large language models for code based on [Llama 2](https://github.com/facebookresearch/llama) providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama was developed by fine-tuning Llama 2 using a higher sampling of code. As with Llama 2, we applied considerable safety mitigations to the fine-tuned versions of the model. For detailed information on model training, architecture and parameters, evaluations, responsible AI and safety refer to our [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/). Output generated by code generation features of the Llama Materials, including Code Llama, may be subject to third party licenses, including, without limitation, open source licenses. We are unlocking the power of large language models and our latest version of Code Llama is now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 34B parameters. This repository is intended as a minimal example to load [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) models and run inference. [comment]: <> (Code Llama models are compatible with the scripts in llama-recipes) In order to download the model weights and tokenizers, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Make sure that you copy the URL text itself, **do not use the 'Copy link address' option** when you right click the URL. If the copied URL text starts with: https://download.llamameta.net, you copied it correctly. If the copied URL text starts with: https://l.facebook.com, you copied it the wrong way. Pre-requisites: make sure you have `wget` and `md5sum` installed. Then to run the script: `bash download.sh`. Keep in mind that the links expire after 24 hours and after a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link.

### Model sizes

| Model | Size     |
|-------|----------|
| 7B    | ~12.55GB |
| 13B   | 24GB     |
| 34B   | 63GB     |
| 70B   | 131GB    |

[comment]: <> (Access on Hugging Face, We are also providing downloads on Hugging Face. You must first request a download from the Meta website using the same email address as your Hugging Face account. After doing so, you can request access to any of the models on Hugging Face and within 1-2 days your account will be granted access to all versions.)
## Setup In a conda environment with PyTorch / CUDA available, clone the repo and run in the top-level directory: pip install -e . Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 7B    | 1  |
| 13B   | 2  |
| 34B   | 4  |
| 70B   | 8  |

All models, except the 70B python and instruct versions, support sequence lengths up to 100,000 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware and use-case.

### Pretrained Code Models

The Code Llama and Code Llama - Python models are not fine-tuned to follow instructions. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_completion.py` for some examples. To illustrate, see the command below to run it with the `CodeLlama-7b` model (`nproc_per_node` needs to be set to the `MP` value):

torchrun --nproc_per_node 1 example_completion.py \
    --ckpt_dir CodeLlama-7b/ \
    --tokenizer_path CodeLlama-7b/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4

Pretrained code models are: the Code Llama models `CodeLlama-7b`, `CodeLlama-13b`, `CodeLlama-34b`, `CodeLlama-70b` and the Code Llama - Python models `CodeLlama-7b-Python`, `CodeLlama-13b-Python`, `CodeLlama-34b-Python`, `CodeLlama-70b-Python`.

### Code Infilling

Code Llama and Code Llama - Instruct 7B and 13B models are capable of filling in code given the surrounding context (see the illustrative sketch below). See `example_infilling.py` for some examples. The `CodeLlama-7b` model can be run for infilling with the command below (`nproc_per_node` needs to be set to the `MP` value):

torchrun --nproc_per_node 1 example_infilling.py \
    --ckpt_dir CodeLlama-7b/ \
    --tokenizer_path CodeLlama-7b/tokenizer.model \
    --max_seq_len 192 --max_batch_size 4

Pretrained infilling models are: the Code Llama models `CodeLlama-7b` and `CodeLlama-13b` and the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`.

### Fine-tuned Instruction Models

Code Llama - Instruct models are fine-tuned to follow instructions. To get the expected features and performance for the 7B, 13B and 34B variants, a specific formatting defined in [`chat_completion()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L319-L361) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and linebreaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). `CodeLlama-70b-Instruct` requires a separate turn-based prompt format defined in [`dialog_prompt_tokens()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L506-L548). You can use `chat_completion()` directly to generate answers with all instruct models; it will automatically perform the required formatting. You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/inference/safety_utils.py) of how to add a safety checker to the inputs and outputs of your inference code. Examples using `CodeLlama-7b-Instruct`:

torchrun --nproc_per_node 1 example_instructions.py \
    --ckpt_dir CodeLlama-7b-Instruct/ \
    --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 4

Fine-tuned instruction-following models are: the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`, `CodeLlama-34b-Instruct`, `CodeLlama-70b-Instruct`.
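As a rough illustration of the infilling capability above, here is a sketch using the Hugging Face `transformers` port of Code Llama and its `<FILL_ME>` placeholder convention; this mirrors, but is not, the repo's `example_infilling.py`, and the model id is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Sketch of Code Llama infilling via the HF `<FILL_ME>` convention (assumed setup).
tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# The model fills in the span marked by <FILL_ME> using the surrounding context.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
input_ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(input_ids, max_new_tokens=64)
filling = tok.batch_decode(out[:, input_ids.shape[1]:], skip_special_tokens=True)[0]
print(prompt.replace("<FILL_ME>", filling))
```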
Code Llama is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](https://github.com/facebookresearch/llama/blob/main/Responsible-Use-Guide.pdf). More details can be found in our research papers as well. Please report any software “bug”, or other problems with the models through one of the following means:

- Reporting issues with the model: [github.com/facebookresearch/codellama](http://github.com/facebookresearch/codellama)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

See [MODEL_CARD.md](MODEL_CARD.md) for the model card of Code Llama. Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE) file, as well as our accompanying [Acceptable Use Policy](https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md).

1. [Code Llama Research Paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)
2. [Code Llama Blog Post](https://ai.meta.com/blog/code-llama-large-language-model-coding/)
+----------
+Models on Hugging Face CyberSec Eval Paper Llama Guard Paper # Purple Llama Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release will include tools and evals for Cyber Security and Input/Output safeguards but we plan to contribute more in the near future. ## Why purple? Borrowing a [concept](https://www.youtube.com/watch?v=ab_Fdp6FVDI) from the cybersecurity world, we believe that to truly mitigate the challenges which generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks, and the same ethos applies to generative AI; hence our investment in Purple Llama will be comprehensive. Components within the Purple Llama project will be licensed permissively, enabling both research and commercial usage. We believe this is a major step towards enabling community collaboration and standardizing the development and usage of trust and safety tools for generative AI development. More concretely, evals and benchmarks are licensed under the MIT license while any models use the Llama 2 Community license. See the table below:

| **Component Type** | **Components** | **License** |
| :----------------- | :------------: | :---------: |
| Evals/Benchmarks   | Cyber Security Eval (others to come) | MIT |
| Models             | Llama Guard   | [Llama 2 Community License](https://github.com/facebookresearch/PurpleLlama/blob/main/LICENSE) |
| Models             | Llama Guard 2 | Llama 3 Community License |
| Safeguard          | Code Shield   | MIT |

## Evals & Benchmarks ### Cybersecurity #### CyberSec Eval v1 CyberSec Eval v1 was, we believe, the first industry-wide set of cybersecurity safety evaluations for LLMs. These benchmarks are based on industry guidance and standards (e.g., CWE and MITRE ATT&CK) and built in collaboration with our security subject matter experts. We aim to provide tools that will help address some risks outlined in the [White House commitments on developing responsible AI](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/), including:

* Metrics for quantifying LLM cybersecurity risks.
* Tools to evaluate the frequency of insecure code suggestions.
* Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyberattacks.

We believe these tools will reduce the frequency of LLMs suggesting insecure AI-generated code and reduce their helpfulness to cyber adversaries. Our initial results show that there are meaningful cybersecurity risks for LLMs, both with recommending insecure code and for complying with malicious requests. See our [Cybersec Eval paper](https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/) for more details.
#### CyberSec Eval 2 CyberSec Eval 2 expands on its predecessor by measuring an LLM’s propensity to abuse a code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection. You can read the paper [here](https://ai.meta.com/research/publications/cyberseceval-2-a-wide-ranging-cybersecurity-evaluation-suite-for-large-language-models/). You can also check out the 🤗 leaderboard [here](https://huggingface.co/spaces/facebook/CyberSecEval). ## System-Level Safeguards As we outlined in Llama 3’s [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/), we recommend that all inputs and outputs to the LLM be checked and filtered in accordance with content guidelines appropriate to the application. ### Llama Guard To support this, and empower the community, we released Llama Guard, an openly-available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs. As part of our ongoing commitment to open and transparent science, we also released our methodology and an extended discussion of model performance in our [Llama Guard paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/). We are happy to share an updated version, Meta Llama Guard 2. Llama Guard 2 was optimized to support the newly [announced](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) policy published by MLCommons, expanding its coverage to a more comprehensive set of safety categories, out-of-the-box. It also comes with better classification performance than Llama Guard 1 and improved zero-shot and few shot adaptability. Ultimately, our vision is to enable developers to customize this model to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem. ### Code Shield Code Shield adds support for inference-time filtering of insecure code produced by LLMs. Code Shield offers mitigation of insecure code suggestions risk, code interpreter abuse prevention, and secure command execution. [CodeShield Example Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb). To get started and learn how to use Purple Llama components with Llama models, see the getting started guide [here](https://ai.meta.com/llama/get-started/). The guide provides information and resources to help you set up Llama, including how to access the model, hosting how-to information and integration guides. Additionally, you will find supplemental materials to further assist you while responsibly building with Llama. The guide will be updated as more Purple Llama components get released. ## FAQ For a running list of frequently asked questions, for not only Purple Llama components but also generally for Llama models, see the FAQ [here](https://ai.meta.com/llama/faq/). ## Join the Purple Llama community See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.
+----------
+# Model Details Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Below is a response classification example input and output for Llama Guard 2. In order to produce classifier scores, we look at the probability for the first token, and use that as the “unsafe” class probability. We can then apply score thresholding to make binary decisions. # Harm Taxonomy and Policy The model is trained to predict safety labels on the 11 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of hazards. Harm categories S1: Violent Crimes S2: Non-Violent Crimes S3: Sex-Related Crimes S4: Child Sexual Exploitation S5: Specialized Advice S6: Privacy S7: Intellectual Property S8: Indiscriminate Weapons S9: Hate S10: Suicide & Self-Harm S11: Sexual Content **S1: Violent Crimes** Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse) **S2: Non-Violent Crimes** Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking) **S3: Sex-Related Crimes** Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution **S4: Child Sexual Exploitation** Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children **S5: Specialized Advice** Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe **S6: Privacy** Responses that contain sensitive, nonpublic personal information that could undermine someone's physical, digital, or financial security **S7: Intellectual Property** Responses that may violate the intellectual property rights of any third party **S8: Indiscriminate Weapons** Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions) **S9: Hate** Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease) **S10: Suicide & Self-Harm** Responses that enable, 
encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating **S11: Sexual Content** Responses that contain erotica # Training and Evaluation Data We start with the training set of Llama Guard (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md#training-data)), and obtain labels on the Harm Taxonomy described above. To improve adaptability of the model to different prompts, we train on hard samples, which are obtained by taking an existing sample and prompting Llama2 70B to produce an alternate policy description that will flip the label of the given sample. We report metrics for various models and APIs on our validation set, which is obtained from combining the validation set of Llama Guard v1 and held-out samples from the additional Llama 3 safety data. We compare performance on our internal test set, as well as on open datasets like [XSTest](https://github.com/paul-rottger/exaggerated-safety?tab=readme-ov-file#license), [OpenAI moderation](https://github.com/openai/moderation-api-release), and [BeaverTails](https://github.com/PKU-Alignment/beavertails). We find that there is overlap between our training set and the BeaverTails-30k test split. Since both our internal test set and BeaverTails use prompts from the Anthropic's [hh-rlhf dataset](https://github.com/anthropics/hh-rlhf) as a starting point for curating data, it is possible that different splits of Anthropic were used while creating the two datasets. Therefore to prevent leakage of signal between our train set and the BeaverTails-30k test set, we create our own BeaverTails-30k splits based on the Anthropic train-test splits used for creating our internal sets. *Note on evaluations*: As discussed in the Llama Guard [paper](https://arxiv.org/abs/2312.06674), comparing model performance is not straightforward as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space. # Model Performance We evaluate the performance of Llama Guard 2 and compare it with Llama Guard and popular content moderation APIs such as Azure, OpenAI Moderation, and Perspective. We use the token probability of the first output token (i.e. safe/unsafe) as the score for classification. For obtaining a binary classification decision from the score, we use a threshold of 0.5. Llama Guard 2 improves over Llama Guard, and outperforms other approaches on our internal test set. Note that we manage to achieve great performance while keeping a low false positive rate as we know that over-moderation can impact user experience when building LLM-applications. 
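To make the first-token scoring described above concrete, here is an illustrative sketch; the model id, the chat-template call, and the single-token label lookup are assumptions for illustration, not the exact evaluation harness used for these results:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch: the score is the probability of the "unsafe" token at the
# first generated position, thresholded at 0.5 (assumed setup, not the exact harness).
model_id = "meta-llama/Meta-Llama-Guard-2-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

def unsafe_probability(chat) -> float:
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]        # logits for the first output token
    probs = logits.softmax(dim=-1)
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    return probs[unsafe_id].item()                     # "unsafe" class probability

chat = [{"role": "user", "content": "How do I pick a lock?"}]
print("unsafe" if unsafe_probability(chat) > 0.5 else "safe")
```

The tables below report performance obtained with this kind of thresholded first-token score.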
| **Model**                | **F1 ↑**  | **AUPRC ↑** | **False Positive Rate ↓** |
|--------------------------|:---------:|:-----------:|:-------------------------:|
| Llama Guard\*            |   0.665   |    0.854    |           0.027           |
| Llama Guard 2            | **0.915** |  **0.974**  |           0.040           |
| GPT4                     |   0.796   |     N/A     |           0.151           |
| OpenAI Moderation API    |   0.347   |    0.669    |           0.030           |
| Azure Content Safety API |   0.519   |     N/A     |           0.245           |
| Perspective API          |   0.265   |    0.586    |           0.046           |

Table 1: Comparison of performance of various approaches measured on our internal test set. \*The performance of Llama Guard is lower on our new test set due to expansion of the number of harm categories from 6 to 11, which is not aligned to what Llama Guard was trained on.

| **Category**           | **False Negative Rate\* ↓** | **False Positive Rate ↓** |
|------------------------|:---------------------------:|:-------------------------:|
| Violent Crimes         |            0.042            |           0.002           |
| Privacy                |            0.057            |           0.004           |
| Non-Violent Crimes     |            0.082            |           0.009           |
| Intellectual Property  |            0.099            |           0.004           |
| Hate                   |            0.190            |           0.005           |
| Specialized Advice     |            0.192            |           0.009           |
| Sexual Content         |            0.229            |           0.004           |
| Indiscriminate Weapons |            0.263            |           0.001           |
| Child Exploitation     |            0.267            |           0.000           |
| Sex Crimes             |            0.275            |           0.002           |
| Self-Harm              |            0.277            |           0.002           |

Table 2: Category-wise breakdown of false negative rate and false positive rate for Llama Guard 2 on our internal benchmark for response classification with safety labels from the ML Commons taxonomy. \*The binary safe/unsafe label is used to compute categorical FNR by using the true categories. We do not penalize the model while computing FNR for cases where the model predicts the correct overall label but an incorrect categorical label.

We also report performance on OSS safety datasets, though we note that the policy used for assigning safety labels is not aligned with the policy used while training Llama Guard 2. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate on the XSTest and OpenAI Moderation datasets, demonstrating good adaptability to other policies. The BeaverTails dataset has a lower bar for a sample to be considered unsafe compared to Llama Guard 2's policy. The policy and training data of MDJudge [4] is more aligned with this dataset and we see that it performs better on them as expected (at the cost of a higher FPR). GPT-4 achieves high recall on all of the sets but at the cost of very high FPR (9-25%), which could hurt its ability to be used as a safeguard for practical applications.
| **Model** (F1 ↑ / False Positive Rate ↓) | False Refusals (XSTest) | OpenAI policy (OpenAI Mod) | BeaverTails policy (BeaverTails-30k) |
|------------------------------------------|:-----------------------:|:--------------------------:|:------------------------------------:|
| Llama Guard    | 0.737 / 0.079 | 0.599 / 0.035 |               |
| Llama Guard 2  | 0.884 / 0.084 | 0.807 / 0.060 | 0.736 / 0.059 |
| MDJudge        | 0.856 / 0.172 | 0.768 / 0.212 | 0.849 / 0.098 |
| GPT4           | 0.895 / 0.128 | 0.842 / 0.092 | 0.802 / 0.256 |
| OpenAI Mod API | 0.576 / 0.040 | 0.788 / 0.156 | 0.284 / 0.056 |

Table 3: Comparison of performance of various approaches measured on our internal test set for response classification. NOTE: The policy used for training Llama Guard does not align with those used for labeling these datasets. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate across these datasets, demonstrating strong adaptability to other policies.

We hope to provide developers with a high-performing moderation solution for most use cases by aligning Llama Guard 2 taxonomy with MLCommons standard. But as outlined in our Responsible Use Guide, each use case requires specific safety considerations and we encourage developers to tune Llama Guard 2 for their own use case to achieve better moderation for their custom policies. As an example of how Llama Guard 2's performance may change, we train on the BeaverTails training dataset and compare against MDJudge (which was trained on BeaverTails among others).

| **Model**                   | **F1 ↑**  | **False Positive Rate ↓** |
|:----------------------------|:---------:|:-------------------------:|
| Llama Guard 2               |   0.736   |           0.059           |
| MDJudge                     |   0.849   |           0.098           |
| Llama Guard 2 + BeaverTails | **0.852** |           0.101           |

Table 4: Comparison of performance on BeaverTails-30k.

# Limitations There are some limitations associated with Llama Guard 2. First, Llama Guard 2 itself is an LLM fine-tuned on Llama 3. Thus, its performance (e.g., judgments that need common sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data. Second, Llama Guard 2 is finetuned for safety classification only (i.e. to generate "safe" or "unsafe"), and is not designed for chat use cases. However, since it is an LLM, it can still be prompted with any text to obtain a completion. Lastly, as an LLM, Llama Guard 2 may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. However, with the help of external components (e.g., KNN, perplexity filter), recent work (e.g., [3]) demonstrates that Llama Guard is able to detect harmful content reliably. **Note on Llama Guard 2's policy** Llama Guard 2 supports 11 out of the 13 categories included in the [MLCommons AI Safety](https://mlcommons.org/working-groups/ai-safety/ai-safety/) taxonomy. The Election and Defamation categories are not addressed by Llama Guard 2 as moderating these harm categories requires access to up-to-date, factual information sources and the ability to determine the veracity of a particular output. To support the additional categories, we recommend using other solutions (e.g. Retrieval Augmented Generation) in tandem with Llama Guard 2 to evaluate information correctness.
# Citation

@misc{metallamaguard2,
  author =       {Llama Team},
  title =        {Meta Llama Guard 2},
  howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}},
  year =         {2024}
}

# References

[1] [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
[2] [Llama Guard Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md)
[3] [RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content](https://arxiv.org/pdf/2403.13031.pdf)
[4] [MDJudge for Salad-Bench](https://huggingface.co/OpenSafetyLab/MD-Judge-v0.1)
+----------
+# Meta Llama Guard 2 Llama Guard 2 is a model that provides input and output guardrails for LLM deployments, based on MLCommons policy. # Download In order to download the model weights and tokenizer, please visit the [Meta website](https://llama.meta.com/llama-downloads) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Pre-requisites: Make sure you have wget and md5sum installed. Then to run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link. # Quick Start Since Llama Guard 2 is a fine-tuned Llama3 model (see our [model card](MODEL_CARD.md) for more information), the same quick start steps outlined in our [README file](https://github.com/meta-llama/llama3/blob/main/README.md) for Llama3 apply here. In addition to that, we added examples using Llama Guard 2 in the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes). # Issues Please report any software bug, or other problems with the models through one of the following means: - Reporting issues with the Llama Guard model: [github.com/meta-llama/PurpleLlama](https://github.com/meta-llama/PurpleLlama) - Reporting issues with Llama in general: [github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) - Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](https://developers.facebook.com/llama_output_feedback) - Reporting bugs and security concerns: [facebook.com/whitehat/info](https://facebook.com/whitehat/info) # License Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals, and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. The same license as Llama 3 applies: see the [LICENSE](../LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md). [Research Paper](https://ai.facebook.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/)
+----------
+Llama Guard is a 7B parameter [Llama 2](https://arxiv.org/abs/2307.09288)-based input-output safeguard model. It can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text in its output that indicates whether a given prompt or response is safe/unsafe, and if unsafe based on a policy, it also lists the violating subcategories. Here is an example: In order to produce classifier scores, we look at the probability for the first token, and turn that into an “unsafe” class probability. Model users can then make binary decisions by applying a desired threshold to the probability scores. # Training and Evaluation We use a mix of prompts that come from the Anthropic [dataset](https://github.com/anthropics/hh-rlhf) and redteaming examples that we have collected in house, in a separate process from our production redteaming. In particular, we took the prompts only from the Anthropic dataset, and generated new responses from our in-house LLaMA models, using jailbreaking techniques to elicit violating responses. We then annotated Anthropic data (prompts & responses) in house, mapping labels according to the categories identified above. Overall we have ~13K training examples. ## Taxonomy of harms and Risk Guidelines As automated content risk mitigation relies on classifiers to make decisions about content in real time, a prerequisite to building these systems is to have the following components: - A **taxonomy** of risks that are of interest – these become the classes of a classifier. - A **risk guideline** that determines where we put the line between encouraged and discouraged outputs for each risk category in the taxonomy. Together with this model, we release an open taxonomy inspired by existing open taxonomies such as those employed by Google, Microsoft and OpenAI in the hope that it can be useful to the community. This taxonomy does not necessarily reflect Meta's own internal policies and is meant to demonstrate the value of our method to tune LLMs into classifiers that show high performance and high degrees of adaptability to different policies. ### The Llama Guard Safety Taxonomy & Risk Guidelines Below, we provide both the harm types themselves under this taxonomy and also examples of the specific kinds of content that would be considered harmful under each category: - **Violence & Hate** encompasses statements that encourage or could help people plan or engage in violence. Similarly, statements that advocate discrimination, contain slurs, or voice hateful sentiments against people based on their sensitive personal characteristics (ex: race, color, religion, national origin, sexual orientation, gender, gender identity, or disability) would also be considered inappropriate under this category. - **Sexual Content** encompasses statements encouraging someone (who could be underage) to engage in specific sex acts. Similarly, sexually explicit (i.e., erotic) statements would also be considered inappropriate under this category. - **Guns & Illegal Weapons** encompasses statements that specifically encourage, condone, or could help people plan or execute specific crimes involving the illegal acquisition, creation, or use of guns or illegal weapons (ex: explosives, biological agents, or chemical weapons), including instructions on how to create or use them. 
- **Regulated or Controlled Substances** encompasses statements that specifically encourage or could help people to illegally produce, transfer, or consume regulated or controlled substances like illegal drugs, tobacco, alcohol, or cannabis.
- **Suicide & Self Harm** encompasses statements that encourage, condone, or enable people to harm themselves (ex: by providing instructions or information on methods of self-harm). When people express or imply an intent to harm themselves, any response other than one that provides links to appropriate health resources would be considered inappropriate (even if entirely supportive).
- **Criminal Planning** encompasses miscellaneous statements that encourage, condone, or could help people plan or execute specific criminal activities, like arson, kidnapping, or theft. Items should only be considered harmful under this category when they could not be read as violating any of the other harm types above (ex: statements that encourage violence should be considered violating under Violence & Hate rather than this category).

## Evaluation results

We compare the performance of the model against standard content moderation APIs in the industry, including [OpenAI](https://platform.openai.com/docs/guides/moderation/overview), [Azure Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories), and [PerspectiveAPI](https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US) from Google on both public and in-house benchmarks. The public benchmarks include [ToxicChat](https://huggingface.co/datasets/lmsys/toxic-chat) and [OpenAI Moderation](https://github.com/openai/moderation-api-release). Note: comparisons are not exactly apples-to-apples due to mismatches in each taxonomy. The interested reader can find a more detailed discussion about this in our [paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/).

|                 | Our Test Set (Prompt) | OpenAI Mod | ToxicChat | Our Test Set (Response) |
| --------------- | --------------------- | ---------- | --------- | ----------------------- |
| Llama Guard     | **0.945**             | 0.847      | **0.626** | **0.953**               |
| OpenAI API      | 0.764                 | **0.856**  | 0.588     | 0.769                   |
| Perspective API | 0.728                 | 0.787      | 0.532     | 0.699                   |
+----------
+Hamel’s Blog - Optimizing latency

Summary

Below is a summary of my findings:

- 🏁 mlc is the fastest. This is so fast that I’m skeptical and am now motivated to measure quality (if I have time). When checking the outputs manually, they didn’t seem that different than other approaches.
- ❤️ CTranslate2 is my favorite tool, which is among the fastest but is also the easiest to use. The documentation is the best out of all of the solutions I tried. Furthermore, I think that the ergonomics are excellent for the models that they support. Unlike vLLM, CTranslate doesn’t seem to support distributed inference just yet.
- 🛠️ vLLM is really fast, but CTranslate can be much faster. On the other hand, vLLM supports distributed inference, which is something you will need for larger models. vLLM might be the sweet spot for serving very large models.
- 😐 Text Generation Inference is an ok option (but nowhere near as fast as mlc) if you want to deploy HuggingFace LLMs in a standard way. TGI has some nice features like telemetry baked in (via OpenTelemetry) and integration with the HF ecosystem like inference endpoints. One thing to note is that as of 7/28/2023, the license for TGI was changed to be more restrictive, in a way that may interfere with certain commercial uses. I am personally not a fan of the license.

Rough Benchmarks

This study focuses on various approaches to optimizing latency. Specifically, I want to know which tools are the most effective at optimizing latency for open source LLMs. In order to focus on latency, I hold the following variables constant: batch size of n = 1 for all prediction requests (holding throughput constant). All experiments were conducted on a Nvidia A6000 GPU, unless otherwise noted. Max output tokens were always set to 200. All numbers are calculated as an average over a fixed set of 9 prompts. The model used is meta-llama/Llama-2-7b-hf on the HuggingFace Hub. In addition to a batch size of 1 and using an A6000 GPU (unless noted otherwise), I also made sure I warmed up the model by sending an initial inference request before measuring latency.

Llama-v2-7b benchmark: batch size = 1, max output tokens = 200

| platform | options | gpu | avg tok/sec | avg time (seconds) | avg output token count |
|---|---|---|---|---|---|
|  | float16 quantization |  | 44.8 | 4.5 | 200.0 |
|  | int8 quantization |  | 62.6 | 3.2 |  |
| HF Hosted Inference Endpoint |  | A10G | 30.4 | 6.6 | 202.0 |
| HuggingFace Transformers (no server) |  |  | 24.6 | 7.5 | 181.4 |
|  | nf4 4bit quantization bitsandbytes |  | 24.3 | 7.6 |  |
|  |  |  | 21.1 | 9.5 |  |
|  | quantized w/ GPTQ |  | 23.6 | 8.8 |  |
|  | quantized w/ bitsandbytes |  | 1.9 | 103.0 |  |
|  | q4f16 |  | 117.1 | 1.3 | 153.9 |
| text-generation-webui | exllama |  | 77.0 | 1.7 | 134.0 |
| vllm |  | A100 (on Modal Labs) | 41.5 | 3.4 | 143.1 |
|  |  |  | 46.4 |  | 178.0 |

In some cases I did not use an A6000 b/c the platform didn’t have that particular GPU available. You can ignore these rows if you like, but I still think it is valuable information. I had access to an A6000, so I just used what I had. I noticed that the output of the LLM was quite different (fewer tokens) when using mlc. I am not sure if I did something wrong here, or if it changes the behavior of the LLM. Furthermore, the goal was not to be super precise on these benchmarks but rather to get a general sense of how things work and how they might compare to each other out of the box. Some of the tools above are inference servers which perform logging, tracing etc. in addition to optimizing models, which affect latency. The idea is to see where there are significant differences between tools.
I discuss this more below.

Background

One capability you need to be successful with open source LLMs is the ability to serve models efficiently. There are two categories of tools for model inference:

- Inference servers: these help with providing a web server that can provide a REST/grpc or other interface to interact with your model as a service. These inference servers usually have parameters to help you make trade-offs between throughput and latency. Additionally, some inference servers come with additional features like telemetry, model versioning and more. You can learn more about this topic in the serving section of these notes. For LLMs, popular inference servers are Text Generation Inference (TGI) and vLLM.
- Model Optimization: these modify your model to make it faster for inference. Examples include quantization, Paged Attention, Exllama and more.

It is common to use both inference servers and optimization techniques in conjunction. Some inference servers, like TGI, even help you apply optimization techniques.

Notes On Tools

Other than benchmarking, an important goal of this study was to understand how to use different platforms & tools.

mlc

Start with compiling the model as shown in these docs. After installing MLC, you can compile meta-llama/Llama-2-7b-chat-hf like so:

    python3 -m mlc_llm.build \
        --hf-path meta-llama/Llama-2-7b-chat-hf \
        --target cuda --quantization q4f16_1

The arguments for the compilation are documented. This puts the model in the ./dist/ folder with the name Llama-2-7b-chat-hf-q4f16_1. You can use their python client to interact with the compiled model:

    from mlc_chat import ChatModule, ChatConfig

    cfg = ChatConfig(max_gen_len=200)
    cm = ChatModule(model="Llama-2-7b-chat-hf-q4f16_1", chat_config=cfg)
    output = cm.generate(prompt=prompt)

You can see the full benchmarking code. Warning: I wasn’t able to get the base model to run correctly with the supplied python client, so I am using the chat variant (Llama-2-7b-chat-hf) as a proxy. I asked the kind folks who work on the mlc project and they said the python client is currently designed for chat, such that they have this system prompt that is hard coded for llama models:

    conv.system = ("[INST] <<SYS>>\n\nYou are a helpful, respectful and honest assistant. "
        "Always answer as helpfully as possible, while being safe. "
        "Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, "
        "or illegal content. "
        "Please ensure that your responses are socially unbiased and positive in nature.\n\n"
        "If a question does not make any sense, or is not factually coherent, explain why instead "
        "of answering something not correct. "
        "If you don't know the answer to a question, please don't share false "
        "information.\n<</SYS>>\n\n ");

If you want to fix this, you must edit mlc-chat-config.json, changing conv_template to LM. These docs say more about the config.json. The config file is located in ./dist/<model-name>/params/mlc-chat-config.json. For example:

    > cat ./dist/Llama-2-7b-hf-q4f16_1/params/mlc-chat-config.json
    {
        "model_lib": "Llama-2-7b-hf-q4f16_1",
        "local_id": "Llama-2-7b-hf-q4f16_1",
        "conv_template": "llama-2",
        "temperature": 0.7,
        "repetition_penalty": 1.0,
        "top_p": 0.95,
        "mean_gen_len": 128,
        "max_gen_len": 512,
        "shift_fill_factor": 0.3,
        "tokenizer_files": ["tokenizer.json", "tokenizer.model"],
        "model_category": "llama",
        "model_name": "Llama-2-7b-hf"
    }

CTranslate2

CTranslate2 is an optimization tool that can make models ridiculously fast. h/t to Anton. The documentation for CTranslate2 contains specific instructions for llama models. To optimize llama v2, we first need to quantize the model.
This can be done like so:

    ct2-transformers-converter --model meta-llama/Llama-2-7b-hf \
        --quantization int8 --output_dir llama-2-7b-ct2 --force

`meta-llama/Llama-2-7b-hf` refers to the HuggingFace repo for this model. The benchmarking code is as follows:

    import time
    import sys
    sys.path.append('../common/')

    import pandas as pd
    import ctranslate2
    import transformers

    from questions import questions

    generator = ctranslate2.Generator("llama-2-7b-ct2", device="cuda")
    tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

    def predict(prompt: str):
        "Generate text given a prompt"
        start = time.perf_counter()
        tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
        results = generator.generate_batch([tokens], sampling_topk=1, max_length=200,
                                           include_prompt_in_result=False)
        tokens = results[0].sequences_ids[0]
        output = tokenizer.decode(tokens)
        request_time = time.perf_counter() - start
        return {'tok_count': len(tokens), 'time': request_time,
                'question': prompt, 'answer': output,
                'note': 'CTranslate2 int8 quantization'}

    if __name__ == '__main__':
        counter = 0
        responses = []
        for q in questions:
            if counter >= 1:  # skip the first (warm-up) request
                responses.append(predict(q))
            counter += 1
        df = pd.DataFrame(responses)
        df.to_csv('bench-ctranslate-int8.csv', index=False)

Text Generation Inference (TGI)

License Restrictions: The license for TGI was recently changed away from Apache 2.0 to be more restrictive. Be careful when using TGI in commercial applications.

Text generation inference, which is often referred to as “TGI”, was easy to use without any optimization. You can run it like this (“start_server.sh”):

    #!/bin/bash
    if [ -z "$HUGGING_FACE_HUB_TOKEN" ]; then
        echo "HUGGING_FACE_HUB_TOKEN is not set. Please set it before running this script."
        exit 1
    fi

    volume=$PWD/data

    docker run --gpus all \
        -e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
        -e GPTQ_BITS=4 -e GPTQ_GROUPSIZE=128 \
        --shm-size 5g -p 8081:80 -v $volume:/data \
        ghcr.io/huggingface/text-generation-inference \
        --max-best-of 1 "$@"

We can then run the server with this command:

    bash start_server.sh --model-id TheBloke/Llama-2-7B-GPTQ

Help: You can see all the options for the TGI container with the help flag like so:

    docker run ghcr.io/huggingface/text-generation-inference --help | less

Quantization was very difficult to get working. There is a --quantize flag which accepts gptq. The approach makes inference much slower, which others have reported. To make gptq work for llama v2 models requires a bunch of work; you have to install the text-generation-server, which can take a while and is very brittle to get right. I had to step through the Makefile carefully. After that you have to download the weights with:

    text-generation-server download-weights meta-llama/Llama-2-7b-hf

You can run the following command to perform the quantization (the last argument is the destination directory where the weights are stored):

    text-generation-server quantize meta-llama/Llama-2-7b-hf data/quantized/

However, this step is not needed for the most popular models, as someone will likely already have quantized and uploaded them to the Hub.

Pre-Quantized Models

Alternatively, you can use a pre-quantized model that has been uploaded to the Hub. TheBloke/Llama-2-7B-GPTQ is a good example of one. To get this to work, you have to be careful to set the GPTQ_BITS and GPTQ_GROUPSIZE environment variables to match the config. For example, this config necessitates setting GPTQ_BITS=4 and GPTQ_GROUPSIZE=128; these are already set in the start_server.sh script shown above. This PR will eventually fix that. To use the pre-quantized model with TGI, I can use the same bash script with the following arguments: --quantize gptq

Comparison Without TGI Server

When I first drafted this study I got the following response on twitter: Based on your code ( https://t.co/hSYaPTsEaK ) it seems like you measure the full HTTP request, which is like comparing trees to an apple.
— Philipp Schmid (@_philschmid) July 29, 2023

Phillip certainly has a point! I am indeed testing both! I’m looking for big differences in tools here, and since some inference servers have optimization tools, and some optimization tools do not have an inference server, I cannot do a true apples to apples comparison. However, I think it's still useful to try different things as advertised to see what is possible, and also take note of really significant gaps in latency between tools. Therefore, I ran the following tests to perform similar optimizations as TGI, but without the server, to see what happened:

HuggingFace Transformers

I was able to get slightly better performance without the TGI server as predicted by Phillip, but it did not account for the massive gap between some tools (which is exactly the kind of thing I was looking for). To benchmark quantization with bitsandbytes, I followed this blog post and wrote this benchmarking code. I quantized the model by loading it like this:

    model_id = "meta-llama/Llama-2-7b-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    nf4_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)

Unlike TGI, I was able to get bitsandbytes to work properly here, but just like TGI it didn’t speed anything up for me with respect to inference latency. As reflected in the benchmark table, I got nearly the same results with transformers without any optimizations. I also quantized the model with GPTQ without an inference server to compare against TGI. The results were so bad (~5 tok/sec) that I decided not to put this in the table, because it seemed quite off to me.

Text Generation WebUI

Aman let me know about text-generation-web-ui, and also these instructions for quickly experimenting with ExLlama and ggml. I wasn’t able to get the ggml variant to work properly, unfortunately. If you are really serious about using exllama, I recommend trying to use it without the text generation UI and looking at the repo, specifically at test_benchmark_inference.py. (I didn’t have time for this, but if I was going to use exllama for anything serious I would go this route.) From the root of the repo, you can run the following commands to start an inference server optimized with exllama:

    python download-model.py TheBloke/Llama-2-7B-GPTQ
    python server.py --listen --extensions openai --loader exllama_hf TheBloke_Llama-2-7B-GPTQ

After the server was started, I conducted the benchmark against it. Overall, I didn’t like this particular piece of software much. It’s a bit bloated because it's trying to do too many things at once (an inference server, Web UIs, and other optimizations). That being said, the documentation is good and it is easy to use. I don’t think there is any particular reason to use this unless you want an end-to-end solution that also comes with a web user-interface (which many people want!).

vLLM

vLLM only works with CUDA 11.8, which I configured using this approach. After configuring CUDA and installing the right version of PyTorch, you need to install the bleeding edge from git:

    pip install -U git+https://github.com/vllm-project/vllm.git

A good recipe to use for vLLM can be found in these Modal docs. Surprisingly, I had much lower latency when running on a local A6000 vs. a hosted A100 on Modal Labs. It’s possible that I did something wrong here. Currently, vLLM is the fastest solution for when you need distributed inference (i.e. when your model doesn’t fit on a single GPU).
vLLM offers a server, but I benchmarked the model locally using their tools instead. The code for the benchmarking is as follows:

```python
import os
import pandas as pd
from huggingface_hub import snapshot_download
from vllm import SamplingParams, LLM

# from https://modal.com/docs/guide/ex/vllm_inference
questions = [
    # Coding questions
    "Implement a Python function to compute the Fibonacci numbers.",
    "Write a Rust function that performs binary exponentiation.",
    "What are the differences between Javascript and Python?",
    # Literature
    "Write a story in the style of James Joyce about a trip to the Australian outback in 2083, to see robots in the beautiful desert.",
    "Who does Harry turn into a balloon?",
    "Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.",
    # Math
    "What is the product of 9 and 8?",
    "If a train travels 120 kilometers in 2 hours, what is its average speed?",
    "Think through this step by step. If the sequence a_n is defined by a_1 = 3, a_2 = 5, and a_n = a_(n-1) + a_(n-2) for n > 2, find a_6.",
]

MODEL_DIR = "/home/ubuntu/hamel-drive/vllm-models"

def download_model_to_folder():
    # The repo id was dropped in extraction; Llama-2-7b-hf matches the rest of the post.
    snapshot_download("meta-llama/Llama-2-7b-hf",
                      local_dir=MODEL_DIR,
                      token=os.environ["HUGGING_FACE_HUB_TOKEN"])
    return LLM(MODEL_DIR)

def generate(question, llm, note=None):
    response = {'question': question, 'note': note}
    sampling_params = SamplingParams(
        temperature=1.0,
        top_p=1.0,       # sampling values were partially lost in extraction
        max_tokens=200,  # representative value; the original may differ
    )
    result = llm.generate(question, sampling_params)
    for output in result:
        response['tok_count'] = len(output.outputs[0].token_ids)
        response['answer'] = output.outputs[0].text
    return response

if __name__ == '__main__':
    llm = download_model_to_folder()
    responses = []
    for q in questions:
        responses.append(generate(question=q, llm=llm, note='vLLM'))
    df = pd.DataFrame(responses)
    df.to_csv('bench-vllm.csv', index=False)
```

HuggingFace Inference Endpoint

I deployed an inference endpoint on HuggingFace for this model, on an Nvidia A10G GPU. I didn't try to turn on any optimizations like quantization and wanted to see what the default performance would be like. The documentation for these interfaces can be found here. There is also a Python client.

Their documentation says they are using TGI under the hood. However, my latency was significantly faster on their hosted inference platform than using TGI locally. This could be due to the fact that I used an A10G GPU with them but a different GPU locally. It's worth looking into why this discrepancy exists further. The code for this benchmark can be found here.

Footnotes

1. It is common to explore the latency vs. throughput frontier when conducting inference benchmarks. I did not do this, since I was most interested in latency. Here is an example of how to conduct inference benchmarks that consider both throughput and latency. ↩︎

2. For Llama v2 models, you must be careful to use the models ending in -hf, as those are the ones that are compatible with the transformers library.

3. The Modular Inference Engine is another example of an inference server that also applies optimization techniques. At the time of this writing, this is proprietary technology, but it's worth keeping an eye on it in the future.
+----------
+How continuous batching enables 23x throughput in LLM inference while reducing p50 latency

By Cade Daniel, Chen Shen, Eric Liang, and Richard Liaw. June 22, 2023.

In this blog, we'll cover the basics of large language model (LLM) inference and highlight inefficiencies in traditional batching policies. We'll introduce continuous batching and discuss benchmark results for existing batching systems such as HuggingFace's text-generation-inference and vLLM. By leveraging vLLM, users can achieve 23x LLM inference throughput while reducing p50 latency.

Update June 2024: Anyscale Endpoints (Anyscale's LLM API offering) and Private Endpoints (self-hosted LLMs) are now available as part of the Anyscale Platform.

Due to the large GPU memory footprint and compute cost of LLMs, serving dominates the compute cost for most real-world applications. ML engineers often treat LLMs like "black boxes" that can only be optimized with internal changes such as quantization and custom CUDA kernels. However, this is not entirely the case. Because LLMs iteratively generate their output, and because LLM inference is often memory- and not compute-bound, there are surprising system-level batching optimizations that make 10x or more differences in real-world workloads.

One recent such proposed optimization is continuous batching, also known as dynamic batching, or batching with iteration-level scheduling. We wanted to see how this optimization performs. We will get into details below, including how we simulate a production workload, but to summarize our findings:

- Up to 23x throughput improvement using continuous batching and continuous batching-specific memory optimizations (using vLLM).
- 8x throughput over naive batching by using continuous batching (both on Ray Serve and Hugging Face's text-generation-inference).
- 4x throughput over naive batching by using an optimized model implementation (NVIDIA's FasterTransformer).

You can try out continuous batching today: see this example to run vLLM on Ray Serve.

The remainder of this blog is structured as follows:

- We'll cover the basics of how LLM inference works and highlight inefficiencies in traditional request-based dynamic batching policies.
- We'll introduce continuous batching and how it answers many of the inefficiencies of request-based dynamic batching.
- We then discuss our benchmarks and the implications this has on how to serve LLM models cost-effectively.

The basics of LLM inference

There is a lot to know about LLM inference, and we refer users to Efficient Inference on a Single GPU and Optimization story: Bloom inference for more detail. However, at a high level, LLM inference is pretty straightforward. For each request:

1. You start with a sequence of tokens (called the "prefix" or "prompt").
2. The LLM produces a sequence of completion tokens, stopping only after producing a stop token or reaching a maximum sequence length.

This is an iterative process. You get one additional completion token for each new forward pass of the model. For example, suppose you prompt with the sentence "What is the capital of California: "; it would take ten forward pass iterations to get back the full response of ["S", "a", "c", "r", "a", "m", "e", "n", "t", "o"]. This simplifies things a little bit, because in actuality tokens do not map 1:1 to ASCII characters (a popular token encoding technique is Byte-Pair Encoding, which is beyond the scope of this blog post), but the iterative nature of generation is the same regardless of how you tokenize your sequences.
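To make the iterative loop concrete, here is a minimal greedy-decoding sketch of my own (not from the blog), using GPT-2 from Hugging Face transformers purely as a small stand-in model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("What is the capital of California:", return_tensors="pt").input_ids
for _ in range(20):                                  # one completion token per forward pass
    logits = model(input_ids).logits
    next_token = logits[0, -1].argmax()              # greedy: take the most likely token
    if next_token.item() == tokenizer.eos_token_id:  # a stop token ends generation
        break
    input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)
print(tokenizer.decode(input_ids[0]))
```

Each pass here re-reads the whole sequence; production servers instead cache the attention keys and values (the KV cache discussed below) so that each iteration only computes the new token.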
Figure: Simplified LLM inference. This toy example shows a hypothetical model which supports a maximum sequence length of 8 tokens (T1, T2, …, T8). Starting from the prompt tokens (yellow), the iterative process generates a single token at a time (blue). Once the model generates an end-of-sequence token (red), the generation loop stops. This example shows a batch of only one input sequence, so the batch size is 1.

Now that we understand the simplicity of the iterative process, let's dive deeper with some things you may not know about LLM inference:

- The initial ingestion ("prefill") of the prompt "What is the capital of California: " takes about as much time as the generation of each subsequent token. This is because the prefill phase pre-computes some inputs of the attention mechanism that remain constant over the lifetime of the generation. This prefill phase efficiently uses the GPU's parallel compute because these inputs can be computed independently of each other.
- LLM inference is memory-IO bound, not compute bound. In other words, it currently takes more time to load 1MB of data to the GPU's compute cores than it does for those compute cores to perform LLM computations on 1MB of data. This means that LLM inference throughput is largely determined by how large a batch you can fit into high-bandwidth GPU memory. See this page in the NVIDIA docs for more details.
- The amount of GPU memory consumed scales with the base model size plus the length of the token sequence. In Numbers every LLM developer should know, it's estimated that a 13B-parameter model consumes nearly 1MB of state for each token in a sequence. On a higher-end A100 GPU with 40GB RAM, back-of-the-envelope math suggests that since 14GB are left after storing the 26GB of model parameters, ~14k tokens can be held in memory at once. This may seem high but is actually quite limiting; if we limit our sequence lengths to 512, we can process at most ~28 sequences in a batch. The problem is worse for longer sequence lengths; a sequence length of 2048 means our batch size is limited to 7 sequences. Note that this is an upper bound, since it doesn't leave room for storing intermediate computations.

What this all means is that there is substantial "room on the table", so to speak, if you can optimize memory usage. This is why approaches such as model quantization are potentially so powerful; if you could halve the memory usage by moving from 16-bit to 8-bit representations, you could double the space available for larger batch sizes. However, not all strategies require modifications to the model weights. For example, FlashAttention found significant throughput improvements by reorganizing the attention computation to require less memory-IO. Continuous batching is another memory optimization technique which does not require modification of the model. We next explain how naive batching works (and is inefficient), and how continuous batching increases the memory-efficiency of LLM generation.
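The back-of-the-envelope numbers above can be reproduced in a few lines (my own arithmetic, using the blog's assumptions of fp16 weights and ~1MB of per-token state):

```python
# A 13B-parameter model in fp16 occupies ~26GB; on a 40GB A100 that leaves
# ~14GB for per-token KV-cache state at ~1MB per token.
gpu_mem_gb = 40
param_mem_gb = 13 * 2          # 13B params * 2 bytes (fp16) ~= 26GB
kv_mb_per_token = 1

free_gb = gpu_mem_gb - param_mem_gb
max_tokens = free_gb * 1024 // kv_mb_per_token
print(f"tokens in memory: ~{max_tokens}")          # ~14k

for seq_len in (512, 2048):
    print(seq_len, "->", max_tokens // seq_len)    # batch-size upper bounds: 28 and 7
```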
LLM batching explained

GPUs are massively-parallel compute architectures, with compute rates (measured in floating-point operations per second, or flops) in the teraflop (A100) or even petaflop (H100) range. Despite these staggering amounts of compute, LLMs struggle to achieve saturation because so much of the chip's memory bandwidth is spent loading model parameters.

Batching is one way to improve the situation; instead of loading new model parameters each time you have an input sequence, you can load the model parameters once and then use them to process many input sequences. This more efficiently uses the chip's memory bandwidth, leading to higher compute utilization, higher throughput, and cheaper LLM inference.

Naive batching / static batching

We call this traditional approach to batching static batching, because the size of the batch remains constant until the inference is complete. Here's an illustration of static batching in the context of LLM inference:

Figure: Completing four sequences using static batching. On the first iteration (left), each sequence generates one token (blue) from the prompt tokens (yellow). After several iterations (right), the completed sequences each have different sizes because each emits their end-of-sequence token (red) at different iterations. Even though sequence 3 finished after two iterations, static batching means that the GPU will be underutilized until the last sequence in the batch finishes generation (in this example, sequence 2, after six iterations).

Unlike traditional deep learning models, batching for LLMs can be tricky due to the iterative nature of their inference. Intuitively, this is because requests can "finish" earlier in a batch, but it is tricky to release their resources and add new requests to the batch that may be at different completion states. This means that the GPU is underutilized as generation lengths of different sequences in a batch differ from the largest generation length of the batch. In the figure on the right above, this is illustrated by the white squares after the end-of-sequence tokens for sequences 1, 3, and 4.

How often does static batching under-utilize the GPU? It depends on the generation lengths of sequences in a batch. For example, one could use LLM inference to emit a single token as a classification task (there are better ways to do this, but let's use it as an example). In this case, every output sequence is the same size (1 token). If the input sequences are also the same size (say, 512 tokens), then each static batch will achieve the best possible GPU utilization.

On the other hand, an LLM-powered chatbot service cannot assume fixed-length input sequences, nor fixed-length output sequences. Proprietary models offer maximum context lengths in excess of 8K tokens at the time of writing. With static batching, variance in generation output could cause massive underutilization of GPUs. It's no wonder OpenAI CEO Sam Altman described the compute costs as eye-watering.

Without restrictive assumptions on user input and model output, unoptimized production-grade LLM systems simply can't serve traffic without underutilizing GPUs and incurring unnecessarily high costs. We need to optimize how we serve LLMs for their power to be broadly accessible.

Continuous batching

The industry recognized the inefficiency and came up with a better approach. Orca: A Distributed Serving System for Transformer-Based Generative Models is a paper presented at OSDI '22 which is, to our knowledge, the first to tackle this problem. Instead of waiting until every sequence in a batch has completed generation, Orca implements iteration-level scheduling, where the batch size is determined per iteration.
The result is that once a sequence in a batch has completed generation, a new sequence can be inserted in its place, yielding higher GPU utilization than static batching.

Figure: Completing seven sequences using continuous batching. Left shows the batch after a single iteration, right shows the batch after several iterations. Once a sequence emits an end-of-sequence token, we insert a new sequence in its place (i.e. sequences S5, S6, and S7). This achieves higher GPU utilization since the GPU does not wait for all sequences to complete before starting a new one.

Reality is a bit more complicated than this simplified model: since the prefill phase takes compute and has a different computational pattern than generation, it cannot be easily batched with the generation of tokens. Continuous batching frameworks currently manage this via a hyperparameter: waiting_served_ratio, the ratio of requests waiting for prefill to those waiting for end-of-sequence tokens.

Speaking of frameworks, Hugging Face has productionized continuous batching in their Rust- and Python-based text-generation-inference LLM inference server. We use their implementation to understand the performance characteristics of continuous batching in our benchmarks below.

Note: Continuous batching, dynamic batching, and iteration-level scheduling are all close enough in meaning that any one of them can be used to describe the batching algorithm. We chose to use continuous batching. Dynamic batching is fitting but can be confused with request-level batching, where an LLM inference server uses a static batch whose size is chosen when the current batch has completely finished generation. We feel that iteration-level scheduling is descriptive of the scheduling mechanism but not the process as a whole.

PagedAttention and vLLM

For this blog post, we want to showcase the differences between static batching and continuous batching. It turns out that continuous batching can unlock memory optimizations that are not possible with static batching by improving upon Orca's design.

PagedAttention is a new attention mechanism implemented in vLLM. It takes inspiration from traditional OS concepts such as paging and virtual memory. It allows the KV cache (what is computed in the "prefill" phase, discussed above) to be non-contiguous by allocating memory in fixed-size "pages", or blocks. The attention mechanism can then be rewritten to operate on block-aligned inputs, allowing attention to be performed on non-contiguous memory ranges.

This means that buffer allocation can happen just-in-time instead of ahead-of-time: when starting a new generation, the framework does not need to allocate a contiguous buffer of size maximum_context_length. Each iteration, the scheduler can decide if it needs more room for a particular generation, and allocate on the fly without any degradation to PagedAttention's performance (a toy sketch of this allocation pattern follows below). This doesn't guarantee perfect utilization of memory (their blog says the wastage is now limited to under 4%, only in the last block), but it significantly improves upon the wastage from ahead-of-time allocation schemes used widely by the industry today.

Altogether, PagedAttention + vLLM enable massive memory savings, as most sequences will not consume the entire context window. These memory savings translate directly into a higher batch size, which means higher throughput and cheaper serving. We include vLLM in our benchmarks below.
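The just-in-time, block-based allocation described above can be illustrated with a toy bookkeeping sketch (mine, not vLLM's actual implementation, which manages GPU memory and the attention kernels themselves):

```python
BLOCK_SIZE = 16  # tokens per page

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))
        self.block_tables = {}  # seq_id -> list of block ids (may be non-contiguous)

    def append_token(self, seq_id: int, pos: int):
        """Allocate a new page just-in-time when a sequence fills its last one."""
        table = self.block_tables.setdefault(seq_id, [])
        if pos % BLOCK_SIZE == 0:  # current page is full (or this is the first token)
            if not self.free_blocks:
                raise MemoryError("no free KV-cache blocks; wait for a sequence to finish")
            table.append(self.free_blocks.pop())
        return table[-1], pos % BLOCK_SIZE  # (page, slot) where this token's KV state goes

    def free(self, seq_id: int):
        """Return all pages of a finished sequence to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
```

At most one partially-filled page exists per sequence, which is why wastage is confined to the last block.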
Benchmarking setup

We'll discuss our experimental setup, then dive into the results of our benchmarks.

Experiments

Our goal is to see how continuous batching performs versus static batching on a simulated real-world live-inference workload. Fundamentally, we care about cost. We break this down into throughput and latency, since cost is directly downstream of how efficiently you can serve at a given latency. Our measurements are:

- Measure throughput: time-to-process a queue of 1000 requests, each with 512 input tokens and a generation length sampled from an exponential distribution.
- Measure latency: request latencies for 100 requests, with varying input lengths, output lengths, and arrival times at a fixed average rate.

We'll discuss the datasets and other details of the experiments in their respective results sections.

Hardware/model

We benchmark throughput and latency on a single NVIDIA A100 GPU provided by Anyscale. Our A100 has 40GB of GPU RAM. We selected Meta's OPT-13B model because each framework under test had a readily-available integration with this model. We selected the 13B variant because it fits into our GPU without requiring tensor parallelism, yet is still large enough to present memory-efficiency challenges. We opt not to use tensor parallelism, where each transformer block is split over multiple GPUs, to keep our experiments simple, although both static batching and continuous batching work with tensor parallelism.

Frameworks

We test two static batching frameworks and three continuous batching frameworks. Our static batching frameworks are:

- Hugging Face's Pipelines: This is the simplest inference solution. It provides static batching with an easy-to-use API that works with any model and supports more tasks than simple text generation. We use this as our baseline.
- NVIDIA's FasterTransformer: This is a library which provides optimized implementations of various transformer models. It currently only provides static batching (the Triton inference server provides request-level dynamic batching, but not continuous batching yet). This gives us an idea of how far an extremely optimized implementation of our model can get us with static batching; it provides a more competitive baseline than the relatively unoptimized OPT-13B implementation available on the Hugging Face Hub.

Our continuous batching frameworks are:

- Hugging Face's text-generation-inference: This is the inference server Hugging Face uses to power their LLM live-inference APIs. It implements continuous batching.
- Continuous batching on Ray Serve: Ray Serve leverages Ray's serverless capabilities to provide seamless autoscaling, high availability, and support for complex DAGs. We wanted to understand how continuous batching works, so we re-implemented text-generation-inference's core continuous batching logic in pure Python on Ray Serve. As you will see in our results, our implementation achieves the same performance as text-generation-inference, which validates our understanding.
- vLLM: This is an open-source project recently released by folks at UC Berkeley. It builds upon Orca's continuous batching design by taking full control of dynamic memory allocations, allowing it to significantly reduce different forms of GPU memory fragmentation. We test this framework because it shows the impact of the further optimizations made possible by iteration-level scheduling and continuous batching.

Benchmarking results: Throughput

Based on our understanding of static batching, we expect continuous batching to perform significantly better when there is higher variance in the sequence lengths in each batch.
To show this, we run our throughput benchmark four times for each framework, each time on a dataset with higher variance in sequence lengths. To do this, we create a dataset containing 1000 sequences, each with 512 input tokens. We configure our model to always emit a per-sequence generation length by ignoring the end-of-sequence token and configuring max_tokens. We then generate 1000 generation lengths, one for each request, sampled from an exponential distribution with mean=128 tokens. We use an exponential distribution because it is a good approximation of the generation lengths one may encounter while serving an application like ChatGPT.

To vary the variance of each run, we select only samples from the exponential distribution that are less than or equal to 32, 128, 512, and 1536. The total output sequence length is then, at most, 512+32=544, 512+128=640, 512+512=1024, and 512+1536=2048 (the maximum sequence length of our model). A sketch of this sampling procedure follows at the end of this section.

We then use a simple asyncio Python benchmarking script to submit HTTP requests to our model server. The benchmarking script submits all requests in burst fashion, so that the compute is saturated. The results are as follows:

Figure: Throughput in tokens per second of each framework as variance in sequence length increases.

As expected, the static batchers and naive continuous batchers perform approximately identically for lower-variance generation lengths. However, as the variance increases, naive static batching's performance plummets to 81 token/s. FasterTransformer improves upon naive static batching significantly, nearly keeping up with the naive continuous batchers until the generation length limit of 1536. Continuous batching on Ray Serve and text-generation-inference achieve about the same performance, which is what we expect since they use the same batching algorithm.

What is most impressive here is vLLM. For each dataset, vLLM more than doubles performance compared to naive continuous batching. We have not analyzed which optimization contributes the most to vLLM's performance, but we suspect vLLM's ability to reserve space dynamically instead of ahead-of-time allows vLLM to dramatically increase the batch size.

We plot these performance results relative to naive static batching:

Figure: Our throughput benchmark results presented as improvement multiples over naive static batching, log scale.

It's important to note how impressive even FasterTransformer's 4x improvement is; we're very interested in benchmarking FasterTransformer plus continuous batching when NVIDIA implements it. However, continuous batching is clearly a significant improvement over static batching, even with an optimized model. The performance gap becomes gigantic when you include the further memory optimization enabled by continuous batching and iteration-level scheduling, as vLLM does.
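As a hedged sketch of the dataset construction just described (my own code, not the blog's benchmark script; the blog does not specify how the cap is enforced, so simple rejection is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def generation_lengths(cap: int, n: int = 1000) -> np.ndarray:
    """Draw n exponential(mean=128) generation lengths, keeping only samples <= cap."""
    samples = rng.exponential(scale=128, size=n * 20)  # oversample, then reject
    return samples[samples <= cap][:n].astype(int)

for cap in (32, 128, 512, 1536):
    lens = generation_lengths(cap)
    print(cap, round(lens.mean(), 1), round(lens.var(), 1))  # variance grows with the cap
```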
Benchmarking results: Latency

Live-inference endpoints often face latency-throughput tradeoffs that must be optimized based on user needs. We benchmark latency on a realistic workload and measure how the cumulative distribution function of latencies changes with each framework.

Similar to the throughput benchmark, we configure the model to always emit a specified number of tokens per request. We prepare 100 randomly-generated prompts by sampling lengths from a uniform distribution between 1 token and 512 tokens. We sample 100 output lengths from a capped exponential distribution with mean=128 and a maximum size of 1536. These numbers were chosen because they are reasonably realistic and allow the generation to use up the full context length of our model (512+1536=2048).

Instead of submitting all requests at the same time, as done in the throughput benchmark, we delay each request by a predetermined number of seconds. We sample a Poisson distribution to determine how long each request waits after the previously submitted request. The Poisson distribution is parameterized by λ, the expected rate, which in our case is how many queries per second (QPS) hit our model endpoint. We measure latencies at both QPS=1 and QPS=4 to see how the latency distribution changes as load changes. (A sketch of this arrival process follows at the end of this section.)

Figure: Median generation request latency for each framework, under average load of 1 QPS and 4 QPS. Continuous batching systems improve median latency.

We see that, while improving throughput, continuous batching systems also improve median latency. This is because continuous batching systems allow new requests to be added to an existing batch if there is room, each iteration. But how about other percentiles? In fact, we find that they improve latency across all percentiles:

Figure: Cumulative distribution function of generation request latencies for each framework with QPS=1. Static batchers and continuous batchers have distinct curve shapes caused by the presence of iteration-level batch scheduling in continuous batchers. All continuous batchers perform approximately equally under this load; FasterTransformer performs noticeably better than static batching on a naive model implementation.

The reason why continuous batching improves latency at all percentiles is the same as why it improves latency at p50: new requests can be added regardless of how far into generation other sequences in the batch are. However, like static batching, continuous batching is still limited by how much space is available on the GPU. As your serving system becomes saturated with requests, meaning a higher on-average batch size, there are fewer opportunities to inject new requests immediately when they are received. We can see this as we increase the average QPS to 4:

Figure: Cumulative distribution function of generation request latencies for each framework with QPS=4. Compared to QPS=1, FasterTransformer's distribution of latencies becomes more similar to static batching on a naive model. Both Ray Serve's and text-generation-inference's continuous batching implementations perform similarly, but noticeably worse than vLLM.

We observe that FasterTransformer becomes more similar to naive static batching, and that both text-generation-inference and Ray Serve's implementation of continuous batching are on their way to looking like FasterTransformer's QPS=1 curve. That is, as the systems become saturated there are fewer opportunities to inject new requests immediately, so request latency goes up. This lines up with the vLLM curve: it remains mostly unchanged between QPS=1 and QPS=4. This is because, due to its advanced memory optimizations, it has a higher maximum batch size.

Anecdotally, we observe that vLLM becomes saturated around QPS=8 with a throughput near 1900 token/s. Comparing these numbers apples-to-apples to the other serving systems requires more experimentation; however, we have shown that continuous batching significantly improves over static batching by 1) reducing latency by injecting new requests immediately when possible, and 2) enabling advanced memory optimizations (in vLLM's case) that increase the QPS that the serving system can handle before becoming saturated.
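The arrival process above can be simulated in a few lines (my sketch; for a Poisson process at rate λ, the inter-arrival gaps are exponentially distributed with mean 1/λ):

```python
import numpy as np

rng = np.random.default_rng(0)

def arrival_times(n_requests: int, qps: float) -> np.ndarray:
    """Submission times (seconds) for a Poisson arrival process at the given QPS."""
    gaps = rng.exponential(scale=1.0 / qps, size=n_requests)
    return np.cumsum(gaps)

print(arrival_times(5, qps=4.0))  # e.g. the first five submission timestamps at QPS=4
```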
Conclusion

LLMs present some amazing capabilities, and we believe their impact is still mostly undiscovered. We have shared how a new serving technique, continuous batching, works, and how it outperforms static batching. It improves throughput by wasting fewer opportunities to schedule new requests, and improves latency by being capable of immediately injecting new requests into the compute stream. We are excited to see what people can do with continuous batching, and where the industry goes from here.

Try out continuous batching for yourself

We have a vLLM + Ray Serve example that allows you to try out continuous batching. We are integrating continuous batching systems into Aviary, a webapp that allows you to compare the outputs of different LLMs in parallel, and will release it within the week.

Acknowledgements. We'd like to thank the following people for assisting in benchmarking and/or reviewing our results. Anyscale: Stephanie Wang, Antoni Baum, Edward Oakes, and Amog Kamsetty; UC Berkeley: Zhuohan Li and Woosuk Kwon.

Get involved with Ray

The code used for the experiments in the blog post is here. To connect with the Ray community, join the Ray Slack or ask questions on the Discuss forum. If you are interested in hosting LLMs, check out our managed Ray offering. If you are interested in learning more about Ray, see ray.io and docs.ray.io. See our earlier blog series on solving Generative AI infrastructure and using LangChain with Ray.

Ray Summit 2023: If you are interested in learning much more about how Ray can be used to build performant and scalable LLM applications and fine-tune/train/serve LLMs on Ray, join Ray Summit on September 18-20th! We have a set of great keynote speakers, including John Schulman from OpenAI and Aidan Gomez from Cohere, community and tech talks about Ray, as well as practical training focused on LLMs.
+----------
+GitHub - huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (huggingface.co/docs/peft, Apache-2.0 license)

🤗 PEFT

State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) methods

Fine-tuning large pretrained models is often prohibitively costly due to their scale. Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of large pretrained models to various downstream applications by only fine-tuning a small number of (extra) model parameters instead of all the model's parameters. This significantly decreases the computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to fully fine-tuned models.

PEFT is integrated with Transformers for easy model training and inference, Diffusers for conveniently managing different adapters, and Accelerate for distributed training and inference for really big models.

Tip: Visit the PEFT organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks. Click the "Watch repos" button on the organization page to be notified of newly implemented methods and notebooks!

Check the PEFT Adapters API Reference section for a list of supported PEFT methods, and read the Adapters, Soft prompts, and IA3 conceptual guides to learn more about how these methods work.

Quickstart

Install PEFT from pip:

```bash
pip install peft
```

Prepare a model for training with a PEFT method such as LoRA by wrapping the base model and PEFT configuration with get_peft_model. For the bigscience/mt0-large model, you're only training 0.19% of the parameters!

```python
from transformers import AutoModelForSeq2SeqLM
from peft import get_peft_model, LoraConfig, TaskType

model_name_or_path = "bigscience/mt0-large"
tokenizer_name_or_path = "bigscience/mt0-large"

peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1
)

model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# "trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"
```

To load a PEFT model for inference:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

model.eval()
inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")

outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=50)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])

# "Preheat the oven to 350 degrees and place the cookie dough in the center of the oven. In a large bowl, combine the flour, baking powder, baking soda, salt, and cinnamon. In a separate bowl, combine the egg yolks, sugar, and vanilla."
```
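Saving the trained adapter follows the usual Transformers pattern; the sketch below is mine (the output directory name is hypothetical) and shows that only the small adapter weights are written, so the frozen base model is re-wrapped at load time:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM

# After training, persist only the adapter (a few MB, not the full model).
model.save_pretrained("mt0-large-lora")  # hypothetical output directory

# Later: reload the base model and attach the saved adapter.
base = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
model = PeftModel.from_pretrained(base, "mt0-large-lora")
```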
Why you should use PEFT

There are many benefits of using PEFT, but the main one is the huge savings in compute and storage, making PEFT applicable to many different use cases.

High performance on consumer hardware

Consider the memory requirements for training the following models on the ought/raft/twitter_complaints dataset with an A100 80GB GPU and more than 64GB of CPU RAM.

| Model | Full Finetuning | PEFT-LoRA PyTorch | PEFT-LoRA DeepSpeed with CPU Offloading |
| --- | --- | --- | --- |
| bigscience/T0_3B (3B params) | 47.14GB GPU / 2.96GB CPU | 14.4GB GPU / 2.96GB CPU | 9.8GB GPU / 17.8GB CPU |
| bigscience/mt0-xxl (12B params) | OOM GPU | 56GB GPU / 3GB CPU | 22GB GPU / 52GB CPU |
| bigscience/bloomz-7b1 (7B params) | OOM GPU | 32GB GPU / 3.8GB CPU | 18.1GB GPU / 35GB CPU |

With LoRA you can fully finetune a 12B parameter model that would've otherwise run out of memory on the 80GB GPU, and comfortably fit and train a 3B parameter model. When you look at the 3B parameter model's performance, it is comparable to a fully finetuned model at a fraction of the GPU memory.

| Submission Name | Accuracy |
| --- | --- |
| Human baseline (crowdsourced) | 0.897 |
| Flan-T5 | 0.892 |
| lora-t0-3b | 0.863 |

The bigscience/T0_3B model performance isn't optimized in the table above. You can squeeze even more performance out of it by playing around with the input instruction templates, LoRA hyperparameters, and other training-related hyperparameters. The final checkpoint size of this model is just 19MB, compared to 11GB for the full bigscience/T0_3B model. Learn more about the advantages of finetuning with PEFT in this blog post.

Quantization is another method for reducing the memory requirements of a model by representing the data in a lower precision. It can be combined with PEFT methods to make it even easier to train and load LLMs for inference.

- Learn how to finetune with QLoRA and the TRL library on a 16GB GPU in the Finetune LLMs on your own consumer hardware using tools from PyTorch and Hugging Face ecosystem blog post.
- Learn how to finetune an openai/whisper-large-v2 model for multilingual automatic speech recognition with LoRA and 8-bit quantization in this notebook (see this notebook instead for an example of streaming a dataset).

Save compute and storage

PEFT can help you save storage by avoiding full finetuning of models on each downstream task or dataset. In many cases, you're only finetuning a very small fraction of a model's parameters and each checkpoint is only a few MBs in size (instead of GBs). These smaller PEFT adapters demonstrate performance comparable to a fully finetuned model. If you have many datasets, you can save a lot of storage with a PEFT model and not have to worry about catastrophic forgetting or overfitting the backbone or base model.

PEFT integrations

PEFT is widely supported across the Hugging Face ecosystem because of the massive efficiency it brings to training and inference.

Diffusers

The iterative diffusion process consumes a lot of memory, which can make it difficult to train. PEFT can help reduce the memory requirements and reduce the storage size of the final model checkpoint. For example, consider the memory required for training a Stable Diffusion model with LoRA on an A100 80GB GPU with more than 64GB of CPU RAM. The final model checkpoint size is only 8.8MB!
| Model | Full Finetuning | PEFT-LoRA | PEFT-LoRA with Gradient Checkpointing |
| --- | --- | --- | --- |
| CompVis/stable-diffusion-v1-4 | 27.5GB GPU / 3.97GB CPU | 15.5GB GPU / 3.84GB CPU | 8.12GB GPU / 3.77GB CPU |

Take a look at the examples/lora_dreambooth/train_dreambooth.py training script to try training your own Stable Diffusion model with LoRA, and play around with the smangrul/peft-lora-sd-dreambooth Space, which is running on a T4 instance. Learn more about the PEFT integration in Diffusers in this tutorial.

Accelerate

Accelerate is a library for distributed training and inference on various training setups and hardware (GPUs, TPUs, Apple Silicon, etc.). PEFT models work with Accelerate out of the box, making it really convenient to train really large models or use them for inference on consumer hardware with limited resources.

TRL

PEFT can also be applied to training LLMs with RLHF components such as the ranker and policy. Get started by reading:

- Fine-tune a Mistral-7b model with Direct Preference Optimization with PEFT and the TRL library, to learn more about the Direct Preference Optimization (DPO) method and how to apply it to an LLM.
- Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU with PEFT and the TRL library, and then try out the gpt2-sentiment_peft.ipynb notebook to optimize GPT2 to generate positive movie reviews.
- StackLLaMA: A hands-on guide to train LLaMA with RLHF with PEFT, and then try out the stack_llama/scripts for supervised finetuning, reward modeling, and RL finetuning.

Model support

Use this Space or check out the docs to find which models officially support a PEFT method out of the box. Even if you don't see a model listed below, you can manually configure the model config to enable PEFT for a model. Read the New transformers architecture guide to learn how.

Contribute

If you would like to contribute to PEFT, please check out our contribution guide.

Citing 🤗 PEFT

To use 🤗 PEFT in your publication, please cite it by using the following BibTeX entry.

```bibtex
@Misc{peft,
  title =        {PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods},
  author =       {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes Belkada and Sayak Paul and Benjamin Bossan},
  howpublished = {\url{https://github.com/huggingface/peft}},
  year =         {2022}
}
```
+----------
+llama-recipes/docs/LLM_finetuning.md at main · meta-llama/llama-recipes · GitHub
+----------
+llama-recipes/recipes/finetuning/datasets/README.md at main · meta-llama/llama-recipes · GitHub
+----------
+Efficient Fine-Tuning with LoRA: A Guide to Optimal Parameter Selection for Large Language Models | Databricks Blog

With the rapid advancement of neural network-based techniques and Large Language Model (LLM) research, businesses are increasingly interested in AI applications for value generation. They employ various machine learning approaches, both generative and non-generative, to address text-related challenges such as classification, summarization, sequence-to-sequence tasks, and controlled text generation. Organizations can opt for third-party APIs, but fine-tuning models with proprietary data offers domain-specific and pertinent results, enabling cost-effective and independent solutions deployable across different environments in a secure manner.

Ensuring efficient resource utilization and cost-effectiveness is crucial when choosing a strategy for fine-tuning. This blog explores arguably the most popular and effective variant of such parameter-efficient methods, Low Rank Adaptation (LoRA), with a particular emphasis on QLoRA (an even more efficient variant of LoRA). The approach here will be to take an open large language model and fine-tune it to generate fictitious product descriptions when prompted with a product name and a category. The model chosen for this exercise is OpenLLaMA-3b-v2, an open large language model with a permissive license (Apache 2.0), and the dataset chosen is Red Dot Design Award Product Descriptions, both of which can be downloaded from the HuggingFace Hub at the links provided.

Fine-Tuning, LoRA and QLoRA

In the realm of language models, fine-tuning an existing language model to perform a specific task on specific data is a common practice. This involves adding a task-specific head, if necessary, and updating the weights of the neural network through backpropagation during the training process. It is important to note the distinction between this fine-tuning process and training from scratch. In the latter scenario, the model's weights are randomly initialized, while in fine-tuning, the weights are already optimized to a certain extent during the pre-training phase. The decision of which weights to optimize or update, and which ones to keep frozen, depends on the chosen technique.

Full fine-tuning involves optimizing or training all layers of the neural network. While this approach typically yields the best results, it is also the most resource-intensive and time-consuming.

Fortunately, there exist parameter-efficient approaches for fine-tuning that have proven to be effective. Although most such approaches have yielded less performance, Low Rank Adaptation (LoRA) has bucked this trend by even outperforming full fine-tuning in some cases, as a consequence of avoiding catastrophic forgetting (a phenomenon which occurs when the knowledge of the pretrained model is lost during the fine-tuning process).

LoRA is an improved fine-tuning method where, instead of fine-tuning all the weights that constitute the weight matrix of the pre-trained large language model, two smaller matrices that approximate this larger matrix are fine-tuned. These matrices constitute the LoRA adapter. This fine-tuned adapter is then loaded into the pretrained model and used for inference. QLoRA is an even more memory-efficient version of LoRA, where the pretrained model is loaded to GPU memory as quantized 4-bit weights (compared to 8-bits in the case of LoRA), while preserving similar effectiveness to LoRA.
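Formally (my own summary of the LoRA paper's notation, not text from this blog), a frozen pretrained weight matrix $W_0$ is augmented with a trainable low-rank update:

```latex
h = W_0 x + \Delta W x = W_0 x + \frac{\alpha}{r} B A x,
\qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k)
```

Only $A$ and $B$ are trained, so each adapted $d \times k$ matrix contributes $r(d + k)$ trainable parameters instead of $dk$; the scaling factor $\alpha$ corresponds to the lora_alpha hyperparameter discussed below.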
Probing this method, comparing the two methods when necessary, and figuring out the best combination of QLoRA hyperparameters to achieve optimal performance with the quickest training time will be the focus here.

LoRA is implemented in the Hugging Face Parameter Efficient Fine-Tuning (PEFT) library, offering ease of use, and QLoRA can be leveraged by using bitsandbytes and PEFT together. The HuggingFace Transformer Reinforcement Learning (TRL) library offers a convenient trainer for supervised fine-tuning with seamless integration for LoRA. These three libraries will provide the necessary tools to fine-tune the chosen pretrained model to generate coherent and convincing product descriptions once prompted with an instruction indicating the desired attributes.

Prepping the data for supervised fine-tuning

To probe the effectiveness of QLoRA for fine-tuning a model for instruction following, it is essential to transform the data into a format suited for supervised fine-tuning. Supervised fine-tuning, in essence, further trains a pretrained model to generate text conditioned on a provided prompt. It is supervised in that the model is fine-tuned on a dataset that has prompt-response pairs formatted in a consistent manner.

An example observation from our chosen dataset from the Hugging Face hub looks as follows:

- product: "Biamp Rack Products"
- category: "Digital Audio Processors"
- description: "High recognition value, uniform aesthetics and practical scalability – this has been impressively achieved with the Biamp brand language …"
- text: "Product Name: Biamp Rack Products; Product Category: Digital Audio Processors; Product Description: High recognition value, uniform aesthetics and practical scalability – this has been impressively achieved with the Biamp brand language …"

As useful as this dataset is, it is not well formatted for fine-tuning a language model for instruction following in the manner described above. The following code snippet loads the dataset from the Hugging Face hub into memory, transforms the necessary fields into a consistently formatted string representing the prompt, and inserts the response (i.e. the description) immediately afterwards. This format is known as the 'Alpaca format' in large language model research circles, as it was the format used to fine-tune the original LLaMA model from Meta to result in the Alpaca model, one of the first widely distributed instruction-following large language models (although not licensed for commercial use).

```python
from datasets import load_dataset, Dataset
import pandas as pd

# Load the dataset from the HuggingFace Hub
rd_ds = load_dataset("xiyuez/red-dot-design-award-product-description")

# Convert to pandas dataframe for convenient processing
rd_df = pd.DataFrame(rd_ds['train'])

# Combine the two attributes into an instruction string
rd_df['instruction'] = 'Create a detailed description for the following product: ' + rd_df['product'] + ', belonging to category: ' + rd_df['category']

rd_df = rd_df[['instruction', 'description']]

# Get a 5000 sample subset for fine-tuning purposes
rd_df_sample = rd_df.sample(n=5000, random_state=42)

# Define template and format data into the template for supervised fine-tuning
template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:\n"""

rd_df_sample['prompt'] = rd_df_sample["instruction"].apply(lambda x: template.format(x))
rd_df_sample.rename(columns={'description': 'response'}, inplace=True)
rd_df_sample['response'] = rd_df_sample['response'] + "\n### End"
rd_df_sample = rd_df_sample[['prompt', 'response']]
rd_df_sample['text'] = rd_df_sample["prompt"] + rd_df_sample["response"]
rd_df_sample.drop(columns=['prompt', 'response'], inplace=True)
```
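Before training, the formatted dataframe has to be materialized as a Hugging Face Dataset with train and test splits. The following two lines are my own sketch (the split size and seed are illustrative), producing the `dataset` object used with the SFTTrainer later:

```python
from datasets import Dataset

# Build a DatasetDict with 'train' and 'test' splits from the formatted frame.
dataset = Dataset.from_pandas(rd_df_sample).train_test_split(test_size=0.2, seed=42)
print(dataset)
```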
The resulting prompts are then loaded into a Hugging Face dataset for supervised fine-tuning. Each such prompt has the following format:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Create a detailed description for the following product: Beseye Pro, belonging to category: Cloud-Based Home Security Camera

### Response:
Beseye Pro combines intelligent home monitoring with decorative art. The camera, whose form is reminiscent of a water drop, is secured to the mounting with a neodymium magnet and can be rotated by 360 degrees. This allows it to be easily positioned in the desired direction. The camera also houses modern technologies, such as infrared LEDs, cloud-based intelligent video analyses and SSL encryption.

### End

To facilitate quick experimentation, each fine-tuning exercise will be done on a 5000-observation subset of this data.

Testing model performance before fine-tuning

Before any fine-tuning, it's a good idea to check how the model performs without any fine-tuning, to get a baseline for pre-trained model performance. The model can be loaded in 8-bit as follows and prompted with the format specified in the model card on Hugging Face:

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_3b_v2'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, load_in_8bit=True, device_map='auto',
)

# Pass in a prompt and infer with the model
prompt = 'Q: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(generation_output[0]))
```

The output obtained is not quite what we want:

Q: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse
A: The Corelogic Smooth Mouse is a wireless optical mouse that has a 1000 dpi resolution. It has a 2.4 GHz wireless connection and a 12-month warranty.
Q: What is the price of the Corelogic Smooth Mouse?
A: The Corelogic Smooth Mouse is priced at $29.99.
Q: What is the weight of the Corelogic Smooth Mouse?
A: The Corelogic Smooth Mouse weighs … pounds.
Q: What are the dimensions of the Corelogic Smooth Mouse?
A: The Corelogic Smooth Mouse has a dimension

The first part of the result is actually satisfactory, but the rest of it is more of a rambling mess. Similarly, if the model is prompted with the input text in the 'Alpaca format' as discussed before, the output is expected to be just as sub-optimal:

```python
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

### Response:"""
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
```

And sure enough, it is:

The Corelogic Smooth Mouse is a mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by
The model performs what it was trained to do: predict the next most probable token. The point of supervised fine-tuning in this context is to generate the desired text in a controllable manner. Please note that in the subsequent experiments, while QLoRA leverages a model loaded in 4-bit with the weights frozen, the inference process to examine output quality is done once the model has been loaded in 8-bit, as shown above, for consistency.

The Turnable Knobs

When using PEFT to train a model with LoRA or QLoRA (note that, as mentioned before, the primary difference between the two is that in the latter, the pretrained models are frozen in 4-bit during the fine-tuning process), the hyperparameters of the low rank adaptation process can be defined in a LoRA config as shown below:

```python
from peft import LoraConfig
...

# If only targeting attention blocks of the model
target_modules = ["q_proj", "v_proj"]

# If targeting all linear layers
target_modules = ['q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'down_proj', 'up_proj', 'lm_head']

lora_config = LoraConfig(
    r=16,
    target_modules=target_modules,
    lora_alpha=8,        # the original value was lost in extraction; 8 is a placeholder
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

Two of these hyperparameters, r and target_modules, are empirically shown to affect adaptation quality significantly and will be the focus of the tests that follow. The other hyperparameters are kept constant at the values indicated above for simplicity.

r represents the rank of the low rank matrices learned during the fine-tuning process. As this value is increased, the number of parameters needing to be updated during the low-rank adaptation increases. Intuitively, a lower r may lead to a quicker, less computationally intensive training process, but may affect the quality of the model thus produced. However, increasing r beyond a certain value may not yield any discernible increase in the quality of model output. How the value of r affects adaptation (fine-tuning) quality will be put to the test shortly.

When fine-tuning with LoRA, it is possible to target specific modules in the model architecture. The adaptation process will target these modules and apply the update matrices to them. Similar to the situation with r, targeting more modules during LoRA adaptation results in increased training time and greater demand for compute resources. Thus, it is a common practice to only target the attention blocks of the transformer. However, recent work, as shown in the QLoRA paper by Dettmers et al., suggests that targeting all linear layers results in better adaptation quality. This will be explored here as well.

Names of the linear layers of the model can be conveniently appended to a list with the following code snippet:

```python
import re

model_modules = str(model.modules)
pattern = r'\((\w+)\): Linear'
linear_layer_names = re.findall(pattern, model_modules)

names = []
# Collect the names of the Linear layers
for name in linear_layer_names:
    names.append(name)
target_modules = list(set(names))
```

Tuning the fine-tuning with LoRA

The developer experience of fine-tuning large language models in general has improved dramatically over the past year or so. The latest high-level abstraction from Hugging Face is the SFTTrainer class in the TRL library.
To perform QLoRA, all that is needed is the following:

1. Load the model to GPU memory in 4-bit (bitsandbytes enables this process).
2. Define the LoRA configuration as discussed above.
3. Define the train and test splits of the prepped instruction-following data into Hugging Face Dataset objects.
4. Define the training arguments. These include the number of epochs, batch size and other training hyperparameters, which will be kept constant during this exercise.
5. Pass these arguments into an instance of SFTTrainer.

These steps are clearly indicated in the source file in the repository associated with this blog. The actual training logic is abstracted away nicely, as follows:

```python
trainer = SFTTrainer(
    model,
    train_dataset=dataset['train'],
    eval_dataset=dataset['test'],
    dataset_text_field="text",
    max_seq_length=256,
    args=training_args,
)

# Initiate the training process
with mlflow.start_run(run_name='run_name_of_choice'):
    trainer.train()
```

If MLflow autologging is enabled in the Databricks workspace, which is highly recommended, all the training parameters and metrics are automatically tracked and logged with the MLflow tracking server. This functionality is invaluable in monitoring long-running training tasks. Needless to say, the fine-tuning process is performed using a compute cluster (in this case, a single node with a single A100 GPU) created using the latest Databricks Machine Learning runtime with GPU support.

Hyperparameter Combination #1: QLoRA with r=8 and targeting "q_proj", "v_proj"

The first combination of QLoRA hyperparameters attempted is r=8, targeting only the attention blocks, namely "q_proj" and "v_proj", for adaptation. The following code snippet gives the number of trainable parameters:

```python
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

These choices result in 2,662,400 parameters being updated during the fine-tuning process (~2.6 million), out of the ~3.2 billion parameters the model consists of. This is less than 0.1% of the model parameters. The entire fine-tuning process on a single Nvidia A100 with 80GB of GPU memory takes only roughly 12 minutes for 3 epochs. The GPU utilization metrics can be conveniently viewed in the metrics tab of the cluster configuration.

At the end of the training process, the fine-tuned model is obtained by loading the adapter weights to the pre-trained model, as follows:

```python
peft_model = PeftModel.from_pretrained(model, adapter_location)
```

This model can now be used for inference like any other model.

Qualitative Evaluation

A couple of example prompt-response pairs are listed below.

Prompt (passed to the model in the Alpaca format, not shown for conciseness here):

Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response:

The Corelogic Smooth Mouse is a wireless optical mouse with a smooth surface. The mouse is equipped with a … DPI sensor and a … Hz polling rate. The mouse is available in black and white.

Prompt:

Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response:

The Hoover Lightspeed is a cordless vacuum cleaner that has a lithium-ion battery. The battery is charged via a USB cable. The vacuum cleaner has a brush and a turbo brush. The brush is suitable for cleaning carpets and hard floors. The turbo brush is for hard floors. The vacuum cleaner has a dust container that can be emptied via a dust container.

The model has clearly been adapted for generating more consistent descriptions.
However, the response to the first prompt about the optical mouse is quite short, and the phrase "The vacuum cleaner is equipped with a dust container that can be emptied via a dust container" is logically flawed.

Hyperparameter Combination #2: QLoRA with r=16 and targeting "q_proj", "v_proj"

Surely, things can be improved here. It is worth exploring an increase in the rank of the low-rank matrices learned during adaptation, i.e., doubling the value of r to 16 while keeping all else the same. This doubles the number of trainable parameters to 5,324,800 (~5.3 million). The quality of output, however, remains unchanged for the same exact prompts.

Prompt: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response: The Corelogic Smooth Mouse is a high-quality optical mouse with a smooth surface. The mouse has a Hz polling rate. The mouse

Prompt: Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response: a cordless vacuum cleaner that has a lithium-ion battery. The battery is charged via a USB cable. The vacuum cleaner has a motorized brush. The brush is used to clean hard floors and the motorized brush is used to clean carpets. The vacuum cleaner is equipped with a dust container that can be emptied via a dust container. The vacuum cleaner has a LED display that shows the remaining battery capacity.

The same lack of detail, and logical flaws where details are available, persist. If this fine-tuned model is used for product description generation in a real-world scenario, this is not acceptable output.

Hyperparameter Combination #3: QLoRA with r=8 and targeting all linear layers

Given that doubling r does not seemingly result in any perceivable increase in output quality, it is worth changing the other important knob, i.e., targeting all linear layers instead of just the attention blocks. Here, the LoRA hyperparameters are r=8 and the target modules are 'q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'down_proj', 'up_proj' and 'lm_head'. This increases the number of parameters updated to 12,994,560 and increases the training time to roughly 15.5 minutes. Prompting the model with the same prompts yields the following:

Prompt: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response: The Corelogic Smooth Mouse is a wireless optical mouse with a smooth surface. The mouse has a Hz polling rate. The mouse has a USB receiver. The mouse

Prompt: Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response: The Hoover Lightspeed cordless vacuum cleaner is a high-performance device that is easy to use. The device has a lithium-ion battery that provides up to 40 minutes of runtime. The device has a high-performance motor that provides a suction power of up to watts. The device has a dusting brush and a crevice tool. The device has a LED display that shows the remaining battery charge.

Now it is possible to see a somewhat longer, coherent description of the fictitious optical mouse, and there are no logical flaws in the description of the vacuum cleaner. The product descriptions are not only logical, but relevant. Just as a reminder, these relatively high-quality results are obtained by fine-tuning less than 1% of the model's weights with a total dataset of 5,000 such prompt-description pairs formatted in a consistent manner.
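As a sanity check, the three parameter counts reported so far can be reproduced from the model's architecture, since LoRA adds r × (d_in + d_out) parameters per targeted weight matrix. A short calculation, assuming OpenLLaMA-3b-v2's published dimensions (hidden size 3200, MLP size 8640, vocabulary 32000, 26 layers):

```python
# Reproduce the reported trainable-parameter counts from the architecture.
# LoRA adds r * (d_in + d_out) parameters per targeted weight matrix.
H, MLP, V, L = 3200, 8640, 32000, 26  # assumed OpenLLaMA-3b-v2 dimensions

def lora_params(r, d_in, d_out):
    return r * (d_in + d_out)

# Combination #1: r=8, q_proj and v_proj (both square 3200 x 3200, per layer)
print(2 * L * lora_params(8, H, H))    # 2,662,400

# Combination #2: r=16, same modules -> exactly double
print(2 * L * lora_params(16, H, H))   # 5,324,800

# Combination #3: r=8, all linear layers per layer, plus lm_head
per_layer = (4 * lora_params(8, H, H)       # q, k, v, o projections
             + 2 * lora_params(8, H, MLP)   # gate_proj, up_proj
             + lora_params(8, MLP, H))      # down_proj
print(L * per_layer + lora_params(8, H, V))  # 12,994,560
```

The printed values match the reported 2,662,400, 5,324,800 and 12,994,560 exactly.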
Hyperparameter Combination #4: LoRA with r=8 and targeting all linear transformer layers

It is also worth exploring whether the quality of output from the model improves if the pretrained model is frozen in 8-bit instead of 4-bit; in other words, replicating the exact fine-tuning process using LoRA instead of QLoRA. Here, the LoRA hyperparameters are kept the same as before, in the newfound optimal configuration, i.e., r=8 and targeting all linear transformer layers during the adaptation process. The results for the two prompts used throughout the article are given below:

Prompt: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response: The Corelogic Smooth Mouse is a high-quality optical mouse with a smooth surface. The mouse has a Hz polling rate. The mouse is available in three colours: black, white and red. The mouse has a USB receiver compatible with Windows and Mac.

Prompt: Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response: The Hoover Lightspeed cordless vacuum cleaner is a compact, lightweight device that is easy to use. The device has a lithium-ion battery that provides up to minutes of cleaning time. The vacuum cleaner has a high-performance filter that ensures that the air is cleaned of dust and allergens. The device has a crevice tool that can be used to clean hard-to-reach areas.

Again, there isn't much of an improvement in the quality of the output text.

Key Observations

Based on the above set of trials, and further evidence detailed in the excellent publication presenting QLoRA, it can be deduced that the value of r (the rank of the matrices updated during adaptation) does not improve adaptation quality beyond a certain point. The biggest improvement is observed in targeting all linear layers in the adaptation process, as opposed to just the attention blocks, as commonly documented in the technical literature detailing LoRA and QLoRA. The trials executed above and other empirical evidence suggest that QLoRA does not suffer from any discernible reduction in the quality of generated text, compared to LoRA.

Further Considerations for Using LoRA Adapters in Deployment

It's important to optimize the usage of adapters and understand the limitations of the technique. The size of the LoRA adapter obtained through fine-tuning is typically just a few megabytes, while the pretrained base model can be several gigabytes in memory and on disk. During inference, both the adapter and the pretrained LLM need to be loaded, so the memory requirement remains similar. Furthermore, if the weights of the pre-trained LLM and the adapter aren't merged, there will be a slight increase in inference latency. Fortunately, with the PEFT library, merging the weights with the adapter can be done with a single line of code, as shown here:

```python
merged_model = peft_model.merge_and_unload()
```

The figure below outlines the process from fine-tuning an adapter to model deployment. While the adapter pattern offers significant benefits, merging adapters is not a universal solution. One advantage of the adapter pattern is the ability to deploy a single large pretrained model with task-specific adapters. This allows for efficient inference by utilizing the pretrained model as a backbone for different tasks. However, merging weights makes this approach impossible. The decision to merge weights depends on the specific use case and acceptable inference latency.
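To make the unmerged adapter pattern concrete, here is a minimal sketch of serving several task-specific adapters on one frozen base model with the PEFT library; the adapter paths and names below are hypothetical:

```python
from peft import PeftModel

# base_model: the frozen pretrained LLM already loaded in memory (assumed).
# Adapter paths and names below are hypothetical examples.
serving_model = PeftModel.from_pretrained(
    base_model,
    "adapters/product-descriptions",
    adapter_name="descriptions",
)
serving_model.load_adapter("adapters/faq", adapter_name="faq")

# Route requests by switching the active adapter: the multi-gigabyte base
# model stays loaded once, while each adapter adds only a few megabytes.
serving_model.set_adapter("descriptions")
# ... serve product-description requests ...
serving_model.set_adapter("faq")
# ... serve FAQ requests ...
```

Keeping the adapters separate preserves the single-backbone deployment described above, at the cost of the small per-request latency overhead that merging would remove.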
Nonetheless, LoRA/QLoRA continues to be a highly effective method for parameter-efficient fine-tuning and is widely used.

Low-rank adaptation is a powerful fine-tuning technique that can yield great results if used with the right configuration. Choosing the correct value of rank and the layers of the neural network architecture to target during adaptation can decide the quality of the output from the fine-tuned model. QLoRA results in further memory savings while preserving the adaptation quality. Even when the fine-tuning is performed, there are several important engineering considerations to ensure the adapted model is deployed in the correct manner.

In summary, a concise table indicating the different combinations of LoRA parameters attempted, the output text quality and the number of parameters updated when fine-tuning OpenLLaMA-3b-v2 for 3 epochs on 5000 observations on a single A100 is shown below.

r  | target_modules    | Base model weights | Quality of output | Number of parameters updated (in millions)
8  | Attention blocks  | 4-bit              | low               | 2.662
16 | Attention blocks  | 4-bit              | low               | 5.324
8  | All linear layers | 4-bit              | high              | 12.995
8  | All linear layers | 8-bit              | high              | 12.995

Try this on Databricks! Clone the GitHub repository associated with the blog into a Databricks Repo to get started. More thoroughly documented examples to fine-tune models on Databricks are available.
+----------
+Training LLMs Course: Discover Fine-Tuning Techniques

About this course: free, 37 lessons, 4 hours of video content.

Foundations: Course Introduction; NeurIPS LLM Efficiency Challenge; NeurIPS LLM Efficiency Challenge Q&A; Hands-On LLM Fine-tuning; Start Your Experiments!

Evaluation: Introduction to LLM Evaluation; Demystifying Perplexity; HumanEval and LLM Performance Analysis; LLM Benchmarks; Deep Dive into HELM; Chatbot Arena; Use Case Specific Benchmarks; Evaluating LLM Apps; Conclusions; LLM Evaluation Q&A

Data: Introduction to Data for Training LLMs; Find Out More about MosaicML; Friendly Advice; How Much Data?; Data Sources & Cost Q&A; Which Data?; Logistics of Data Loading

Training & Fine-tuning Techniques: Introduction to Training & Fine-tuning Techniques; Hardware Requirements; Memory Usage; What Should You Train?; Training Observability

Course Assessment & Next Steps: Course Assessment; Resources for Further Learning

Learn the fundamentals of large language models: find out about the types of LLMs, model architectures, parameter sizes and scaling laws. Curate a dataset and establish an evaluation approach: learn how to find or curate a dataset for LLM training, dive into the evaluation metrics for various LLM tasks and compare their performance across a range of benchmarks. Master training and fine-tuning techniques: learn hands-on advanced training strategies like LoRA, prefix tuning, prompt tuning, and Reinforcement Learning through Human Feedback (RLHF).

Prerequisites: working knowledge of machine learning; intermediate Python experience; familiarity with DL frameworks (PyTorch/TensorFlow).
+----------
+loaded 51
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml
index d1c891843..693049263 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml
@@ -2,7 +2,7 @@ COT_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Question: {question}\nContext: {context}\n
-  Answer this question using the information given by multiple documents in the context above. Here is things to pay attention to:
+  Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
   - The context contains many documents, each document starts with  and ends .
   - First provide step-by-step reasoning on how to answer the question.
   - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
index 0c4bff185..48e066859 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
@@ -22,7 +22,7 @@ RAG_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Question: {question}\nContext: {context}\n
-  Answer this question using the information given by multiple documents in the context above. Here is things to pay attention to:
+  Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
   - The context contains many documents, each document starts with  and ends .
   - First provide step-by-step reasoning on how to answer the question.
   - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.

From 839492714c9d7be366eb44e543599a810a932d13 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Fri, 21 Jun 2024 10:47:21 -0700
Subject: [PATCH 26/35] raft_dataset.py must be used with llama3 tokenizer

---
 recipes/finetuning/datasets/raft_dataset.py | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py
index eb1f36937..0b1004539 100644
--- a/recipes/finetuning/datasets/raft_dataset.py
+++ b/recipes/finetuning/datasets/raft_dataset.py
@@ -3,8 +3,7 @@
 
 
 import copy
-import datasets
-from datasets import Dataset, load_dataset, DatasetDict
+from datasets import load_dataset
 import itertools
 
 B_INST, E_INST = "[INST]", "[/INST]"
@@ -26,8 +25,6 @@ def tokenize_dialog(dialog, tokenizer):
         eot_indices = [i for i,n in enumerate(dialog_tokens) if n == 128009]
         labels = copy.copy(dialog_tokens)
         last_idx = 0
-        token_length = len(dialog_tokens)
-        last_idx = 0
         # system prompt header "<|start_header_id|>system<|end_header_id|>" has been tokenized to [128006, 9125, 128007]
         # user prompt header "<|start_header_id|>user<|end_header_id|>" has been tokenized to [128006, 882, 128007]
         prompt_header_seqs = [[128006, 9125, 128007],[128006, 882, 128007]]
@@ -44,18 +41,7 @@ def tokenize_dialog(dialog, tokenizer):
         dialog_tokens = [dialog_tokens]
         labels_tokens = [labels]
     else:
-        # Otherwise, use the original tokenizer to generate the tokens as it is from Llama 2 family models
-        prompt_tokens = [tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(prompt['content']).strip()} {E_INST}", add_special_tokens=False) for prompt in dialog[:2]]
-        answer = dialog[-1]
-        answer_tokens = tokenizer.encode(f"{answer['content'].strip()} {tokenizer.eos_token}", add_special_tokens=False)
-
-        #Add labels, convert prompt token to -100 in order to ignore in loss function
-        sample = {
-            "input_ids": prompt_tokens + answer_tokens,
-            "attention_mask" : [1] * (len(prompt_tokens) + len(answer_tokens)),
-            "labels": [-100] * len(prompt_tokens) + answer_tokens,
-            }
-        return sample
+        raise Exception("This raft_dataset only supports Llama 3 family models, please make sure the tokenizer is from Llama 3 family models.")
 
     combined_tokens = {
         "input_ids": list(itertools.chain(*(t for t in dialog_tokens))),

From 06d6be5f5d7629d9aadbafe871b6857ac0846f42 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Fri, 21 Jun 2024 10:51:34 -0700
Subject: [PATCH 27/35] small fix to COT prompt

---
 recipes/finetuning/datasets/raft_dataset.py                  | 4 ++--
 recipes/use_cases/end2end-recipes/raft/raft.yaml             | 4 ++--
 recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py
index 0b1004539..ed8aaa9d7 100644
--- a/recipes/finetuning/datasets/raft_dataset.py
+++ b/recipes/finetuning/datasets/raft_dataset.py
@@ -61,12 +61,12 @@ def raft_tokenize(q_a_pair, tokenizer):
     system_prompt = "You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context."
     user_prompt = """
         Question: {question}\nContext: {context}\n
-        Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
+        Answer this question using the information given by multiple documents in the context above. Here are the things to pay attention to:
         - The context contains many documents, each document starts with  and ends .
         - First provide step-by-step reasoning on how to answer the question.
         - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
         - End your response with final answer in the form : $answer, the answer should less than 60 words.
-        You MUST begin your final answer with the tag "
+        You MUST begin your final answer with the tag ":".
     """.format(question=question, context=documents)
 
     chat = [
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml
index 693049263..13ec31595 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml
@@ -2,12 +2,12 @@ COT_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Question: {question}\nContext: {context}\n
-  Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
+  Answer this question using the information given by multiple documents in the context above. Here are the things to pay attention to:
   - The context contains many documents, each document starts with  and ends .
   - First provide step-by-step reasoning on how to answer the question.
   - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
   - End your response with final answer in the form : $answer, the answer should less than 60 words.
-  You MUST begin your final answer with the tag " <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+  You MUST begin your final answer with the tag ":". <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
 question_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a synthetic question-answer pair generator. Given a chunk of context about
diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
index 48e066859..9cd5baa76 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
+++ b/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml
@@ -22,12 +22,12 @@ RAG_prompt_template: >
   <|begin_of_text|><|start_header_id|>system<|end_header_id|> You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context.<|eot_id|>
   <|start_header_id|>user<|end_header_id|>
   Question: {question}\nContext: {context}\n
-  Answer this question using the information given by multiple documents in the context above. Here are things to pay attention to:
+  Answer this question using the information given by multiple documents in the context above. Here are the things to pay attention to:
   - The context contains many documents, each document starts with  and ends .
   - First provide step-by-step reasoning on how to answer the question.
   - In the reasoning, if you need to copy paste some sentences from the context, include them in ##begin_quote## and ##end_quote##. This would mean that things outside of ##begin_quote## and ##end_quote## are not directly copy paste from the context.
   - End your response with final answer in the form : $answer, the answer should less than 60 words.
-  You MUST begin your final answer with the tag " <|eot_id|><|start_header_id|>assistant<|end_header_id|>
+  You MUST begin your final answer with the tag ":". <|eot_id|><|start_header_id|>assistant<|end_header_id|>
 eval_file: "./eval_llama.json"
 
 model_name: "raft-8b"

From 98ee7488e6c8fdc9fac0a6c82b1f7432dde7c412 Mon Sep 17 00:00:00 2001
From: Kai Wu 
Date: Fri, 21 Jun 2024 11:04:32 -0700
Subject: [PATCH 28/35] fix peft.py and remove chatbot folder

---
 .../end2end-recipes/chatbot/README.md         | 207 --------------
 .../chatbot/eval-loss-3runs.png               | Bin 52645 -> 0 bytes
 .../chatbot/pipelines/README.md               | 129 ---------
 .../chatbot/pipelines/chat_utils.py           |  66 -----
 .../chatbot/pipelines/config.py               |  18 --
 .../chatbot/pipelines/doc_processor.py        |  47 ---
 .../chatbot/pipelines/eval_chatbot.py         | 163 -----------
 .../chatbot/pipelines/eval_config.yaml        |  23 --
 .../chatbot/pipelines/evalset.json            | 178 ------------
 .../pipelines/generate_question_answers.py    |  90 ------
 .../chatbot/pipelines/generation_config.yaml  |  50 ----
 .../chatbot/pipelines/generator_utils.py      | 267 ------------------
 .../end2end-recipes/chatbot/poor-test-1.png   | Bin 405225 -> 0 bytes
 .../end2end-recipes/chatbot/poor-test-2.png   | Bin 439497 -> 0 bytes
 .../chatbot/train-loss-3runs.png              | Bin 51915 -> 0 bytes
 .../use_cases/end2end-recipes/raft/format.py  | 173 ++++++++++++
 .../use_cases/end2end-recipes/raft/raft.py    |   2 +-
 .../use_cases/end2end-recipes/raft/raft.yaml  |   2 +-
 src/llama_recipes/configs/peft.py             |   2 +-
 19 files changed, 176 insertions(+), 1241 deletions(-)
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/README.md
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/eval-loss-3runs.png
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/config.py
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/poor-test-1.png
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/poor-test-2.png
 delete mode 100644 recipes/use_cases/end2end-recipes/chatbot/train-loss-3runs.png
 create mode 100644 recipes/use_cases/end2end-recipes/raft/format.py

diff --git a/recipes/use_cases/end2end-recipes/chatbot/README.md b/recipes/use_cases/end2end-recipes/chatbot/README.md
deleted file mode 100644
index dd763416c..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/README.md
+++ /dev/null
@@ -1,207 +0,0 @@
-## Introduction
-
-Large language models (LLMs) have emerged as groundbreaking tools, capable of understanding and generating human-like text. These models power many of today's advanced chatbots, providing more natural and engaging user experiences. But how do we create these intelligent systems?
-
-Here, we aim to make an FAQ model for Llama that be able to answer questions about Llama by fine-tune Meta Llama 3 8B instruct model using existing official Llama documents.
-
-
-### Fine-tuning Process
-
-Fine-tuning Meta Llama 3 8B instruct model involves several key steps: Data Collection, Preprocessing, Fine-tuning, Evaluation.
-
-
-### LLM Generated datasets
-
-As Chatbots are usually domain specifics and based on public or proprietary data, one common way inspired by [self-instruct paper](https://arxiv.org/abs/2212.10560) is to use LLMs to assist building the dataset from our data. For example to build an FAQ model, we can use a powerful Meta Llama 3 70B model to process our documents and help us build question and answer pair (We will showcase this here). Just keep it in mind that usually most of the proprietary LLMs has this clause in their license that you are not allowed to use the output generated from the model to train another LLM. In this case we will fine-tune another Llama model with the help of Meta Llama 3 70B.
-
-
-Similarly, we will use the same LLM to evaluate the quality of generated datasets and finally evaluate the outputs from the model.
-
-
-Given this context, here we want to highlight some of best practices that need to be in place for data collection and preprocessing in general.
-
-### **Data Collection & Preprocessing:**
-
-Gathering a diverse and comprehensive dataset is crucial. This dataset should include a wide range of topics and conversational styles to ensure the model can handle various subjects. A recent [research](https://arxiv.org/pdf/2305.11206.pdf) shows that quality of data has far more importance than quantity. Here are some high level thoughts on data collection and preprocessing along with best practices:
-
-**NOTE** data collection and processing is very use-case specific and here we can only share best practices but it would be very nuanced for each use-case.
-
-- Source Identification: Identify the sources where your FAQs are coming from. This could include websites, customer service transcripts, emails, forums, and product manuals. Prioritize sources that reflect the real questions your users are asking.
-
-- Diversity and Coverage: Ensure your data covers a wide range of topics relevant to your domain. It's crucial to include variations in how questions are phrased to make your model robust to different wording.
-
-- Volume: The amount of data needed depends on the complexity of the task and the variability of the language in your domain. Generally, more data leads to a better-performing model, but aim for high-quality, relevant data.
-
-Here, we are going to use [self-instruct](https://arxiv.org/abs/2212.10560) idea and use Llama model to build our dataset, for details please check this [doc](./data_pipelines/REAME.md).
-
-
-**Things to keep in mind**
-
-- **Pretraining Data as the Foundation**: Pretraining data is crucial for developing foundational models, influencing both their strengths and potential weaknesses. Fine-tuning data refines specific model capabilities and, through instruction fine-tuning or alignment training, enhances general usability and safety.
-
-- **Quality Over Quantity**: More data doesn't necessarily mean better results. It's vital to select data carefully and perform manual inspections to ensure it aligns with your project's aims.
-
-- **Considerations for Dataset Selection**: Selecting a dataset requires considering various factors, including language and dialect coverage, topics, tasks, diversity, quality, and representation.
-
-- **Impact of Implicit Dataset Modifications**: Most datasets undergo implicit changes during selection, filtering, and formatting. These preprocessing steps can significantly affect model performance, so they should not be overlooked.
-
-- **Finetuning Data's Dual-Edged Sword**: Finetuning can improve or impair model capabilities. Make sure you know the nature of your data to make an informed selections.
-
-- **Navigating Dataset Limitations**: The perfect dataset for a specific task may not exist. Be mindful of the limitations when choosing from available resources, and understand the potential impact on your project.
-
-#### **Best Practices for FineTuning Data Preparation**
-
-- **Enhancing Understanding with Analysis Tools**: Utilizing tools for searching and analyzing data is crucial for developers to gain a deeper insight into their datasets. This understanding is key to predicting model behavior, a critical yet often overlooked phase in model development.
-
-- **The Impact of Data Cleaning and Filtering**: Data cleaning and filtering significantly influence model characteristics, yet there's no universal solution that fits every scenario. Our guidance includes filtering recommendations tailored to the specific applications and communities your model aims to serve.
-
-- **Data Mixing from Multiple Sources**: When training models with data from various sources or domains, the proportion of data from each domain (data mixing) can greatly affect downstream performance. It's a common strategy to prioritize "high-quality" data domains—those with content written by humans and subjected to an editing process, like Wikipedia and books. However, data mixing is an evolving field of research, with best practices still under development.
-
-- **Benefits of Removing Duplicate Data**: Eliminating duplicated data from your dataset can lessen unwanted memorization and enhance training efficiency.
-
-- **The Importance of Dataset Decontamination**: It's crucial to meticulously decontaminate training datasets by excluding data from evaluation benchmarks. This ensures the model's capabilities are accurately assessed.
-
-
-**Data Exploration and Analysis**
-
-- Gaining Insights through Dataset Exploration: Leveraging search and analysis tools to explore training datasets enables us to cultivate a refined understanding of the data's contents, which in turn influences the models. Direct interaction with the data often reveals complexities that are challenging to convey or so might not be present in the documents.
-
-- Understanding Data Complexity: Data, especially text, encompasses a wide array of characteristics such as length distribution, topics, tones, formats, licensing, and diction. These elements are crucial for understanding the dataset but are not easily summarized without thorough examination.
-
-- Utilizing Available Tools: We encourage to take advantage of the numerous tools at your disposal for searching and analyzing your training datasets, facilitating a deeper comprehension and more informed model development.
-
-**Tools**
-
-- [wimbd](https://github.com/allenai/wimbd) for data analysis.
-- TBD
-
-
-
-**Data Cleaning**
-
-Purpose of Filtering and Cleaning: The process of filtering and cleaning is essential for eliminating unnecessary data from your dataset. This not only boosts the efficiency of model training but also ensures the data exhibits preferred characteristics such as high informational value, coverage of target languages, low levels of toxicity, and minimal presence of personally identifiable information.
-
-Considering Trade-offs: We recommend to carefully weigh the potential trade-offs associated with using certain filters, it may impact the diversity of your data, [removing minority individuals](https://arxiv.org/abs/2104.08758).
-
-**Tools**
-- [OpenRefine](https://github.com/OpenRefine/OpenRefine?tab=readme-ov-file),(formerly Google Refine): A standalone open-source desktop application for data cleanup and transformation to other formats. It's particularly good for working with messy data, including data format transformations and cleaning.
-
-- [FUN-Langid](https://github.com/google-research/url-nlp/tree/main/fun-langid), simple, character 4-gram LangID classifier recognizing up to 1633 languages.
-
-- Dask: Similar to Pandas, Dask is designed for parallel computing and works efficiently with large datasets. It can be used for data cleaning, transformations, and more, leveraging multiple CPUs or distributed systems.
-
-
-
-
-**Data Deduplication**
-
-- **Data Deduplication importance**: Data deduplication is a important preprocessing step to eliminate duplicate documents or segments within a document from the dataset. This process helps in minimizing the model's chance of memorizing unwanted information, including generic text, copyrighted content, and personally identifiable details.
-
-- **Benefits of Removing Duplicates**: Aside from mitigating the risk of undesirable memorization, deduplication enhances training efficiency by decreasing the overall size of the dataset. This streamlined dataset contributes to a more effective and resource-efficient model training process.
-
-- **Assessing the Impact of Duplicates**: You need to carefully evaluate the influence of duplicated data on their specific model use case. Memorization may be beneficial for models designed for closed-book question answering, or similarly chatbots.
-
-**Tools**
-
-- [thefuz](https://github.com/seatgeek/thefuzz): It uses Levenshtein Distance to calculate the differences between sequences in a simple-to-use package.
-- [recordlinkage](https://github.com/J535D165/recordlinkage): It is modular record linkage toolkit to link records in or between data sources.
-
-**Data Decontamination**
-
-The process involves eliminating evaluation data from the training dataset. This crucial preprocessing step maintains the accuracy of model evaluation, guaranteeing that performance metrics are trustworthy and not skewed.
-
-**Tools**
-- TBD
-
-
-
-
-### **LLama FAQ Use-Case**
-
-
-1. **Data Collection**
-Here, we are going to use self-instruct idea and use Llama model to build our dataset, for details please check this [doc](./data_pipelines/REAME.md).
-
-2. **Data Formatting**
-
-For a FAQ model, you need to format your data in a way that's conducive to learning question-answer relationships. A common format is the question-answer (QA) pair:
-
-Question-Answer Pairing: Organize your data into pairs where each question is directly followed by its answer. This simple structure is highly effective for training models to understand and generate responses. For example:
-
-```python
-"question": "What is Llama 3?",
-"answer": "Llama 3 is a collection of pretrained and fine-tuned large language models ranging from 8 billion to 70 billion parameters, optimized for dialogue use cases."
-```
-
-
-3. **Preprocessing:** This step involves cleaning the data and preparing it for training. It might include removing irrelevant information, correcting errors, and splitting the data into training and evaluation sets.
-
-
-4. **Fine-Tuning:** Given that we have a selected pretrained model, in this case we use LLama 2 chat 7B, fine-tunning with more specific data can improve its performance on particular tasks, such as answering questions about Llama in this case.
-#### Building Dataset
-
-During the self-instruct process of generation Q&A pairs from documents, we realized that with out system prompt being
-```python
-You are a language model skilled in creating quiz questions.
-You will be provided with a document,
-read it and generate question and answer pairs
-that are most likely be asked by a use of llama that just want to start,
-please make sure you follow those rules,
-1. Generate only {total_questions} question answer pairs.
-2. Generate in {language}.
-3. The questions can be answered based *solely* on the given passage.
-4. Avoid asking questions with similar meaning.
-5. Make the answer as concise as possible, it should be at most 60 words.
-6. Provide relevant links from the document to support the answer.
-7. Never use any abbreviation.
-8. Return the result in json format with the template:
-  [
-    {{
-      "question": "your question A.",
-      "answer": "your answer to question A."
-    }},
-    {{
-      "question": "your question B.",
-      "answer": "your answer to question B."
-    }}
-  ]
-
-```
-
-Model tends to ignore providing the bigger picture in the questions, for example below is the result of Q&A pair from reading Code Llama paper. Partially, its because due to context window size of the model we have to divide the document into smaller chunks, so model use `described in the passage` or `according to the passage?` in the question instead of linking it back to Code Llama.
-
-
-```python
-{
-        "question": "What is the purpose of the transformation described in the passage?",
-        "answer": "The transformation is used to create documents with a prefix, middle part, and suffix for infilling training."
-    },
-{
-    "question": "What is the focus of research in transformer-based language modeling, according to the passage?",
-    "answer": "The focus of research is on effective handling of long sequences, specifically extrapolation and reducing the quadratic complexity of attention passes."
-},
-```
-
-
-#### Data Insights
-
-We generated a dataset of almost 3600 Q&A pairs from some of the open source documents about Llama models, including getting started guide from Llama website, its FAQ, Llama 3, Purple Llama, Code Llama papers and Llama-Recipes documentations.
-
-We have run some fine-tuning experiments with single GPU using quantization with different LORA configs (all linear layer versus query and key projections only) and different number of epochs. Although train and eval loss shows decrease specially with using all linear layers in LORA configs and training with 6 epochs, still the result is far from acceptable in real tests.
-
-
-Here is how losses between three runs looks like.
-
-

-[Figure: Eval Loss | Train Loss]
-
-##### Low Quality Dataset
-
-Below are some examples of real test on the fine-tuned model with very poor results. It seems fine-tuned model does not show any promising results with this dataset. Looking at the dataset, we could observe that the amount of data (Q&A pair) for each concept such as PyTorch FSDP and Llama-Recipe is very limited and almost one pair per concept. This shows lack of relevant training data. The recent research showed that from each taxonomy having 2-3 examples can yield promising results.
-
-[Figure: Poor Test Results example 1 | Poor Test Results example 1]

diff --git a/recipes/use_cases/end2end-recipes/chatbot/eval-loss-3runs.png b/recipes/use_cases/end2end-recipes/chatbot/eval-loss-3runs.png
deleted file mode 100644
index 6f65e5d32668eeb4eaff9603c8fcde0f1e982776..0000000000000000000000000000000000000000
GIT binary patch
[binary data omitted]
z)r=9Y8=hTr%d>>sO7TK%S5Idu4HU~JUEA&k_eM+l1VN9-7hFTPkX zl37xjn=g~AIb6VZC{UhOGrB%fu+&i1<+@a4cl>d6mC6)o)Ojx&70CI~3b#COGbeh2 zR|SBoebb6?a@(qr@{0>6g+Pf2GWLOkB)S*h`d+60azAdPEh59D9T?*F8F)X|Qo(nn zM&q}diTZ62OQS@4mPEvhCN&p0le`ZJ zy&7}lH)L|TyOM1hm|iaAq}soJ9UAX(_@lV#_pPF(x4(HfAtQO-9zO}x*+n>S z`#B#VhSmYlWu@*l@l(PNlU>ctrSStj_4f`6Cevy?+~A>FR!ND_Ppu&bZSB+@@0QP> ztJ%Zfg%rikCuef;3f%1Hn}}?$I$b}SOW`HxpHTSyivH;g4GkR1aJt)_I)N9B)-yBD+D(F@s!s%4 zBU%e#b^zDna-ZmQ(b1=ZZ%~b4M*Gb*5u;@@(;kQAH0qhfZ(h4q8Mj{-Z-5>9H8#gB zj!Jjn#XaMVZe?cVTuJ^6@)LwGi+i#(k2*& zz@C<|0a`i)4!-$JFA)~Z!Gq&m2*cEwvkr9|s#){C&n z!qOJDW7+t$ZOhVTMwvKIz-d(GnzV|#7DrRrWg0Jqo+H(}@K=CfQ-|}zg?!6;&F5U+;K*%#QBa^E+(b`L zuV_~nZtAu_YwvbGW@>b{Rh%o1cEIM{9#HPESIn*U_DQYFbKC9fF^K`=u}lKsm18ZZ zXX(E?!Mc}R2goOz@xLiOzE95<4zlZSrwhw$4L84bzYG;vR)g9{nHCgkt{qqn^go!b zyhA;N8j87BUw&-0qs*!mV0<^J*C?-5RFcG9x!@|{iAhL`?VBu&U(Sj2eHg>@u$D{3 z{mxk9Us-n2A#dt*HL9Oq$+tD?E0auBLsG{$%%eNjO;kBDvGWu@LFvgj49_9Rox94c zJf{BTdT3jKCi%%?A5Zh>wIH{$?ck4AVEP)*<%$mfDc28C!^>NDx(&SLIY4034+g)!$a;?^i_z1Ee$~^ zFh|kvJ{E%g0SV@*xhOqPBiVzp^uefLxO0{K|G0y|AHd-K-1jGGBopFh2ZSSGG`>S6 zcS8YQWxbRBze8aoR=I?3cSk{_RbbcgAy>?BGEONauaAz7a{;#v@|+Erc`GeQKAHY> zF+qWeE-wVY7d7A-5;?K*>*9jSh8N0dVY=W9a4g~1BB2~!v-fjeVHm+~pxZ)t8J#)p zp%A!1WpDIwKxwk&TD@&TbKjd&ZNs8W-@H5DUf_>!4?N2l@be5gl%0Nv*}K99c@A^( z{KMiI;nf5v?yPwtvj|JvkC}u!w`~o$yhdrR@fc19ha(u~;CfQ7au9CRv!-jkl*a*h z{0<1B@DU$R6(YAPGRoe<9_oYWYBTs{CY&7>VXp(RW*3h1mtT~IgcYQjNW80 zAd-{rik>5(2rApTB_fs{#nj3FA8#0qcDtW4JTx*%E*h2F;b8`VpX;!`ggaieK4Va+#(LA(_Q#wB3&>fBtX1_;cB+wkrB+7}lF)|+$4d2$CHAHQ%R?p&I!Io990 zZ=?a%=3L?s-w*lZT&fL&z5{`Vq4n#;sf zU(^P?4g)4+O0e-FB7iA0&@D= zgX<(iMINBApsPL5)!KdzZGSZ!56AEjwGieL zfHA)Xh}gZHPTcOtmu>8S6uV5o?>E8(fdc3Qm7&|yFtfs|oig+x!yXFfCkc@C@wFO| zezUo=FEBNP1){Jit20IxWyoR?$fl=H#o+>WijP(VZko3?;5#LpOhP2;-E1%=--~35 zs@s{8gN>!?sg3+aUfd1j_Sd=qXa&+dbdJ8yuz|_N0KCbY#a;=4W3U%@xSOyDG!@i` z+W&2zDpfVykYbhbKY^)^?tSV4Mwm$J1_%{;aDvjgLjes%=$6w15N%Han<~KAdVrEd z59=pHppb_ETNuW$J&b%nED>4|d4zjfIgr8J`kv-fPf7VK>?FuJboVhyL$Hoh{!g@L z6d4mA5#{f{oZ2qn^sUTlu0js*FsYtD4`lPaVj<$%50&4H?Ue^K;9EKKy?7HTL|(=b z&3sgZj$*-&An7G@+r}c1xoXL$@LadJAJ{(p$yc%I}H zY1L0ZY{8d`AQSwWypqD4^#U!vBRHZL7JwC4eE9iI;`b^KvjkFT^gB3M=JNh|B2Xyi zqnJM_8qh+@AS|;a_2TR~twy#ev>N0B7Z(jQkz{&<7M_%x475zJG+ytte;l2!^t7mT zrwnia2P&-Q9wzdCx3p)sH8=k#{QC#m&T?}ja+S2qO|qA?^mTutDSE77Pq-AolZ`EL z&^@LB*`oQCzwLf^J7MlO$X!Z`+`n%+tfOJqBZV%KNS@p z|5Pbg7L_$o$g4o$_s^$~;Tli-WpaQm9*Tty0+k{)s*8)8&2|*i52T)mc@$R<|E#kB zjDA8R-)At?RYo9AXd(y-tJ#G^u3oh|!q2vNU8oFA+AE#q451-JbB4zYA#hpEV$QXA z<(VOodR^lpa>)ePw+P)>rOtUD)^AH@A6WBiI$~WOjb27}-baZrEbib)a?nlWhPj-* zXq*nsVDOrWpvsYM)!gu%%Rzl_x37HIVgbB90uL8AAgy<2zNEcftWswp5LhQHIxdfd z^(^{i58|tvKdt#rWv%P%tb9q&$Jztl#ks&rxxZ2~mh_=kkeU@{i(#<$^=$;@qxZX( z3@+^rjHN<4Z5X$ygI{N#Xu*k0a&*AK3EE0M6McOj-{Jk3WAJDu{oExs23J&8Cerfr zG(+I}8+(Q0_MymW+uMP!xydEb?8AhQ)Z%+XAG}FTtP07F&Mzx-Tsq?KOXm3V%t%rY z)phEVYlXa0g@(nZ-E`IIpIw8hQ#Nr0yF-2#YwVvmjf*u0242Xx?U+@*)ByrLq?+O2 zqX9NhAAax9keo}LdEba5yqc1dW$jsg{Sp;uc2rJI4y$(IY4y!D5XmIkcO16d=p?e} z7Y(NS1e$l(xmf2koekfduD(_1)qeA9R>0QeXl8B(hc=17x=`qB%ZShODk0h^`!2Nw zvnqTHb|;@d3<+HpGBuSSGFIAHc5uxp)=mvdTxY>+Ld`U9lBC!yS|y3p*)pA5oU%iA zx!J}`io~H5eo@$3KMbQ-?d{7-NtI`5cUbnP91<`>ot<9Zg~fLqB*m>e z@!y+2I2rF(9cpmZUbk#DL9({Bab4KQ`6){@M6fVcLHPkJpo)+y_rRIY{oJf`Ko3dJ zrQt5p@(WIV%yU&SVfp=%*mozvurGlR7Q%foN?~{Vg9$hgDOQ<>WL0D5jMv-7u3G-% zcQK~fTOZm)6A!ce66IZHZcO}yAw~&6+JdFgFsz~B`rV|nvvaOV;h%_3zI1Xv2UcgS zgIA5zwQBxtl)~hs_hYZ0j-3X~VZJk{@L)=m0H;g)29%p&uyK;dv8!>zqbDL%JW525 zg;89Y;3hJp_Q?y@3d}bLtPF_ z&MRgY)!jSe#;GPlNn;I%=PL+EEipiesKkr#M@;Od@+ejlrj7W_I+V6es&m!}7D))es?H9m$&NW=1C_UwJce2C~ 
zwD0+0@Td&a5&wRz1s3dQi2iTchMr!{v1!Xwi%hHzezm`zk~&Dt%?ruWABYrQeuPx? z0dLjJj80j(nA`mMb2Gb*xZ85Wo}pQ7*rF5{hd-u+R{iwvzp&WIYl6O2Jb??mrW$IL_@jP~6lISIe_pLeZyt#^Ij zTHpC2>#Xt2Gk4wl+Sk7JeeY=7RNqDBEutKD7gOenUi1+6=IiV4>~wiQPb4cYha;By zA+5z-Fn091UhE(U=cA``w9BpQr#)vWi_uV(>)IE#Qohex*2?4!9KJ4iGk$; zE%)?rYL}P*{DYM}O*|JQ)x)HsqTZY>OB@r3qoH96oZG8AsVI8Jv43Dd9;jl;<7H?h zl-9HJZEoKknyT|pb$52w(!c04tP!d4uYaI(#LGO_{WVU&oOMx4!f}xMu{~NDCz5r_ zN$5z}gGF6RBcePUe#~AYlFz-{7Iy&@bbU^#B)S=|m+B z_lgA#XhxJk)}aBQV32A%!tPjmph+jWet%id2n%S>pX{Hc>lWr58F{w z8KE0{!@V7Cb04;7qQW^c-l^)T%YN!CVM*S`3P6dZ`qx#?|F%omm*<}u@{QT2x*iV~Vjh9sQTI`Q(f?AO z0zv2W-M9mhtb#_ERt>bJT{=^%r53g-&_CnC6zk31Do6A0{s)7*{Y3HhRAPZOj7A?%;T*sty| zGT-)I(uuk4q9e!b)DZd-%QV7g)UH)`y2)udCwB zqXqR>gKdJ=0*^4y?|*166hJPn*Jk1;_<7?MB!!#&n=k#fwvJ+E+I*^oKPi9gc7TB> zaedBsCiRcO086np1keXwQ1ke|%NjfTj{wON74<&+{-0R_(<%YTlbzKpyUt^MqMK60 z1#@a0vQECd4TJ8*^BxY=?LsVf)+_WBA_H>&x|($dBbDP-&U>#s)~K)ww*sl=;*J{ zz#u=PV?X&}=0!Rj#gFX&m_wc0Z((ME^~woud|JEXZTv0oO-| zfm*GvXBV^5zK!UdMelYNRsE!86M*xgh`>La6kx>j$ zSze9TC#85n6@Z4KG9R!bjKG%l@1RxRQZ0d@7dU;AQ+;@y1@V<;#tvlyyVSyTk_sFBCPhXUc!hiUJ z^o;DrFt`Q!bsJMuyiHhniA%BA3usXR_3^c~$8kZcVb-^3h)=uPZS$QAKiZ82_o+o+ zy@bNnPa*sDj>#gL83kRDZqHq`tmvc*8)9i`hUv~IF);&SYOoVC*|9}3!9~4r3fALQA&w;00p?6b2cKHCZa5Z_8UP95} z!F~dP%QT;{$`-LV4W}`|4a4ITCKuzgU|x$%cLE8c^Bg5;w-1Gt(WUEwqtI*M zTzq;C4vt6KRb5oJJ2V!h;#>VAz(`)|qskuGq?g!fA=NJbW}ZwLS>t(qx3n1dk?B@( zNE=|c{w_4>i!XK$Xw9;<5uy}6oAAd*>?2~=9EAn5va@-*?8(VXof^q}h=~)yrSS~) zuStn5gLzZBUJ;}h?1TFhVq8X1FMsjyzu)LXi2rEtP>Mc)I8gFp3$yEPWW?pRsXF5# z!zRui5+T1+xv{ZYnfa4khK=ud{dt;%mvYM|E;z0yI3K9?n6`UuqcUwy?EBxFpIj8Wp^E=5tM9kFE4k$1%C-AM_qwy6BGnq5b!w?D>ZiV3%?--orc zJ*IDNCPbr^kd~bsg|cmrepuw?`7uYURQ>*YsvD0p)c>L6)PqU`#0}V2fVbahi(Dz< zITlrbW=O{3i(d6Hiwcw2u0yRYrgSw2l3OSl^p{#_WPYaP8Y#IAoeA_kMM#!^i<;3;V$Cs2eX+fb1JKtOVJG9u*3N`T+T$PILY7~zLp7F@> zj9-}}Ki+y#x~UGk+PzCYu2UE7&mRA>k0d=X)PJe1-+$}I#qx&+A1)t%%LO@U9~^cv zF|^FF-2#GdIzSZSvWJ~C^=|=3&6L~(JNaXT_@EILVx1b$8E2e?)Z>2)H3uxZqS6q` z4)%ju;{Q$0IrF}L4vqe%(Dbk9?#Ok9KEd|v7E8>p?CLLfii78MX+`ytbWMx>e~eKy zAWbRH1RWn6AIU6`T6)B!_&<2m2k@iiO*?_kd$*rnIe~|ij|P77{}25C{=nM=pNgDe zmWLYBin_@0TaD1WqrW$Y-tbR_dPH2lyb%o5j#k^ZMK4pqQaJ#C<6=yLt_fI2W+ZAc zZcXfl?jcQ`%kt~aKJFdK(?7&`2{|FZ07kDuDN*MJ@bM8{68c<#!wF>9+VCTqJ$;^v zvW(?Fmbp2;=+3#JU&0~HlbDIksLs#Te$OQZNQdb2?oYxRxUEJm?@W{0M74227)}wA z-0q+NUgZXlNjwu4?vaW=r4j(u*Z=OfqYc$^5oitu#ylPY+L&WR$+k7(?Txv3KrFvR zjr8LiU@SFqGJ`hT2fhZZ8*Q&VLxm+c4lowyRaTab^Q)hHRVMaTWj6ZNX8eL~8*(iz zL{>L4j}VkJ$@T3=CH}#kDz2`sCK|b#hR5**&`vP{P74WSNa;ud0s^x0Sb>k5i0(IO zJ9TqHTHyrO`EEhh;S-U}8AesQl0a719VcOyDicMK_A@ahEuZ>xy7eXj467D1f=mVU zjmE{QH17h*EVYMH9DfBzK>hxx)0u?q++0%KY1_ z&S72$=p?o*6sK$t%c$=yXlZHP(-71N++vSQcrSjf>7PH|987`=0hGc69^=l5Q%tBe zFhRngjFr)3r|>;|R}xa4S7cwUw!8UZ12L$P2VHyQ?)XSgQPJP8f>P+!V1&^4f1)&8 zZ!7?1W4H7~BNls7$c#}q?J5*^9{%%D#0eSJM0=Q!sads;)@GEDsGzoe$uGHt^dbvv zzI7a)t$guW$Wp^>uyPwYI(~p;0?brQLx7-{k;zT46I{&&FsEi8H8U{uoMldAvOq#MRcO%xOYjaR0)d#Ko|q)4i|`S?nc>%IfUUi(x#8uK)NcZ)W3a&P{w zkjM5L4}>I> zMf3or*l)crwk3L#>K|Rl{*U$(t_%NI9wnjwGAwc#J`0JseP6rcg>=OaT+%#N>K^q> zkTku!$^^!?1qfP%7YKsn2Bn4d7xJl zU2A@9zc!m0>e&>+6-CS$OMan|l2zt+C@fkAe(KX^FW*R0 zil2`_6D|VaE_jT*n%c*QS)b43>e|QU0X=3`wnc6*Pb(u=zT>$9xrJ0dorZ=6R1;}X z#887{f4}1mSuQq<^E6WGMJKlZRi+1;nr5w+v_8(((VIK30z@3-ZjzRU&u#Yss639@ z(7dw%bdhMt2nrti5P&Uu0E~W12L3*OffGdPoL{;B2af_qL0IL#f$unVkq*IoOK5bn zX@l1Bsj!uyQ`wtp6?(hP0yx1pe>uV5b`M0}k&HVhO_&`(6+dtak5p5CFr8c^@PkE+ zErchq-y5J6xI*}*<0LF80xt+-k{3Vu>Gi+-h+Wc2I|e_Z7)t*4DTegmH=k0y z<&p+N(w3c_#5k^s%9mCL01FI7ohp16LWnsa6qm z?kcxjFpU_q#ej#&L9X6@%$RLGA5f*`OOhO-Z4NnsyI8BIouGNTFnM*EK;Ik=!D+A2 
zNIcTC{I*MUC_Gis!e)H^D1EQDIv%S#zLJQkI;pj<#Q6$9O*uRJl2|Ao$Pz)O*1cKy zN*Eg`H_%bF|5YV$vzw2IyXc=Or3NA5n0KAAI4F>H?72CKlz#`eXwRxJ@ERJ@WW%wl)m1ZEa zHSUZjo_)u2dUV*N1IR=Mbj;8GdipX{DsPx`zTGnntUX{zo zjTN^^cg@k-%p^L^H`VLVJB2Ln5|0|SuF_;yqP#Z5a?RvSZPE0t$M^s2NZ*``-6{fC zIODmxMAvin=D2KHDjl!Q^(u0O$%i*84(6d`lQB`=JDEs8mw4^8xhv&GcM)5m^Oe@Fi+sc&}uIYOeKBjSzx7{mU zDQY`~lU*_6b?Ljz*c~Yi3=2+>fNlKfD9^^}Pf!0Y@&oejLfFBkzW1#`9fyw_2n~8F zx$2Ex{n-VH!vH@1p({-Uqh%;!A;+Z%KBuACH>GOAWls$a!(WgFO+N+#(CO^qJ#q{BbHMEG~|1#B${;aaU<#l@`m$UVdez`oX#c z?FY!-Y)vy@GVDGg>Ez=sd7+2FrHrK4w|jF&YYr9r)}L45lc_Hzbqkw4h>m@0SNmv2 zA@6TDKY9wD@o2oZS^G9rA)~z}fve1@Xekzyk(9gmISA5`fS|fwKt|Wv z%-M|ZBaKpz1VZSuw0rZlCmXQA|E?-Z)2jnk<_ z6)KI{i&Al#SnV}!bvhJ170LP|;+L@HXf}N>6Td+9{v3BtrkVSmIWUsp=jSI$b-w*& z@6Y*qma3TN#&eix!=Qj=3MW*Yh&|}Q!|L(( zmNd3nv3gXe#}cC{z`iTLqoD#Gqo!b8`$x>xScE)#6PM(Q@pfBaZl zmT~?hVMBEGy10k*!A8cQw13)E;%UB>itQi7N(B*7QP=Dv#;dkkrAWnqv7mO|+sHWV z$G3@H0T6%fwFLA9-pwi#<-QlnBV!d&F-e`B4?=PiJg}h=a&(Xm;!!00;95rA6i4~k zh33HtmT-MGHa6u|z?8rR^#>Z>3xiC;(%S&*Hxxeo-;q2h^Pr$00BgBknnqrDb0cXS zX1)o7WkyDnEGOJ-%(HB@$F<@vve8HQckkVdm=lm<3Dnotwt+n`>oTo&-$F{0Aj5L; z2uW!ofifpZ+?m%vQof8kHFr!aJWZK6C@`>n3;%p=D02fOx;^~IDQXmqMB758z=RDm}FEpT3*4(9Hke+5{vJAL!Bvu{XX zNF&Tr_5g`QNttroy>y#3l@)Rw6XEGIe-%Ezyq?@A;D^rIwH%2L|5&^;R{+EEZL0X^ zln1aVmJ`taB~RnmS1YuuD%m+ulXW_BSas?jNTfnw@8FPY5>ibsPCL&VqNEhHiaQjg zPZI&a`;E-(QHn2KnC?$bDXu>&cupar|ENzu6MSchz{z20$%oPR2)KoD`c}^a&CuO! z=X^KKf+>UvyMJY+o9kMOO!}@8s>irDmFN@wFHz>7A7{PTyl7f`i%lSUO3|WnzP;js z8yd+fJc0&h|9PdG_trQQ>vJaUKmG-42a>9>qz?50+rZ^#yS-_UXfcz{cuw_iOPA1G<0yM@oZs`=!iNfxJZ%&|mEPqIUmV}Ze3VQ503#+Q zZWps(?0%-POzwVpc_m!XcFJJ;RwS7%qwNprV(F^<>&BAFIaj78T9(=%52i_J#py92 zM83gi7oGVh+=a95yGalm0p$Rp`cYEA(`|FfcwiJx&%9KtFd-=HBXd*=_uHctJ;A1) zL|lAhV`JNCgbyTBSp~|gG_5j%B3?QEomTLFlH3H`Wv6qTmnRzG+p9BV)g`XUDkJ2c z-04L{xZtLCs$vqm50t-Oi=EDQ7n+A~l;UWd5Ch7=G*;(JzUD_(Pk3En_i)dON+Gc1 zi%Y~1>j9>N~Y}4FgE2cr*Gtf_iC*=6@J(H?bD}nnIYWIcS62fcd_o7P} zF=%t{Dj|}M4YdHVxy`3EH;4P|c-5XDBjVzy&X_*;mF&2idjJZ!O0w z$8!d1kv4~D6N}J!m=pJ3y$5#r`$iBOdElfc*y*^V`mq@BoH*uw%{#l&k zQ$JzRNk*loGxy!~ z0>P;KlZOw<_zJVHy}X|HRJWn3dR*=Lk=Xsuj{by4HSdzahG}h8aBw@w#0?_%Fnc&Q zVJ*0GV(ca^tN^QqoCZnR?VOYQ6W;?{T3R2}$a*{x>mvblyL+zVf)>YHw*o9+G$|KV z#tGW!0Gj|`aGc3GLHND~y%UGNh(8gpZa)UACpvhY+``_y>YI>6SyQFk0RP_eQ`7#c z?6^Neo(e;2Y`ty0zg?UaRR)lsgL{0wOH+tQvo_T3%n;2ka}y!t zDc%@u;maSS_ukTa724160~%2Lx$FW_?$Ni!<(nBMlkBUQJC(a^5Ww zXZM!7A$WKp*oip9lZ0CMy31kDt_dI~bGKCRYk7mI9BfVjo zuRYdvQsP+hG%Jgv7i7mk`QJ-K=6#`>U@vh#etyl#fU?-FG?-^l?pErDM_MBi^II%G zX%*=Ly{E+2x}7{BSdQ!6HG%t;&Jz1#0$+Dz_Bp9b!Q9u^w#j;x!D zQ&nt}P_x25m-Vr=b)KtgOIzDVM85t_Q;4T}Rnb~Se=y^W#*6OKXZ6eoaZN)nN_JK_ z+u<75p_qAv2xcc$|BLr}-0RdlYkfM^{e#r4sfvZpb2)~M-gtl_TpKn;7^6lMdT1jZ zF_VY8rs-kyP~Lyrq z>bh7RtZwvm#qQ26A0F_-`?E7X!ZMybQ3@qz8;CY};kGoUOGnvNVnMkkzZ}b2aqCz>|!qc`=tZ$pX3uLTNadzm9_E?B$>>JB^&lP4+i#WY4 z`!u+~Su)S&(o4mg0P^*WE_~~nbPx1vCim8I>b}3bii$h9dDmfG9uc%=-otzklkK>Y z%14DEFdr@=pC^8@!avXRSkj-l+6Jh8CooXyr+4_ioJ|CzxS6>|1}Gjke}J;WjAimW zTbT>r$W|}qmn|+|jjh1L%&ANGfAvPj@tQ$dOD43@-xeY{K6vJ1q}~rm!!2?$c_6B2 zXX@Rmkc1j95_1PP&-;2cQg?P9OUBh8s#6X(cI9(2Qzp|=JQEv)?^l*JaGF(i(g%fv zhld+_w!qEJ%l&MiT(BXtCmVOat;lX9jw0N zC`i27pkVZr)LhP=lK1|2o!|0}*{x(!NgiWZttljzCqQ&-ted&BN5Q%#U)pW?sz}-9 zezVFY4$=VtGQZf(6?q@x_Lvyfg;!e&-xaK`gK^Q1i+ETQAdHtiOvND^u-2}wV&U^t zBP5+a7iWUFoJvO<1CeBl?apT9j-cQNhBtC884b*8Am8UAmi_*RELO>s;${xqR`*OY zh$cJjP0lW^dt($Q2VqWyiw+c2p`|KTTkknXw`A_#-F25Or?_%uNs%r=b4s=G>|&Iu zr%hqaO#$BP1vD;tEdBkZWDtguWr*Ce7+P3FxiR@+fsc>V#k8(4@s-+?^p;}lZj;>Q z>a9;^u~a&2&n79*o)??H#ZEi0J!NoTK)HW|FNBw!69#l>Km6%(=(1c{l4t(#&IrP_ zgVVRuBg#qp=Ej0uc^C7*>SmkC1D{^bOjQ6L(N*a;dA_=Z6)XVPMWg*P 
zx?X8cHuChc*d`Q1JvP;IBgHrJsEgVY+|7KLZeN=1`#I4geRmY)erxs=8;Fv+rmmLl zq7S1^8w9BLk)ULdJ+cs@4eqq4!JktL=dw-JcK0`@>IP`Nk-Oi8fd|wvz4I`xV{MSe|ZMX`?S2v4W?&*n5 zQEYRH8EQ3W%It{}%TG^WrI1hSqA)wQRR?EttL5R8Dm*!+-~3oSJMU%u9M7Fn$S7Eg zNov!|Ej2X}x2zgw*(M$J=NsvAACk4GOe}K~6LrjzIhePBQLpK!B+=bnu$dIWw5u4L z9kHGi0aY5l9n)JHqp$iL?(UY4rn3#vd|)c*Wuo2ponP!ROt;z}cb>F>iCW~p8oBI< z^<_W5(t4~ARUqRAM|r3BCG^v>JQb9b(%^kvY9D+LwUt=AppF=*b=o~3@~{rDDyJB&X0|MbjV}z=3}ihyufsZ!7EWl zP)0y~2LM$mmVl=3cJVKDZtsV!=h|@G)_3tE%^+o1dbzi~I@Ry%y8kFmKa7%?T{5hY zXSwF^1Kd@`-Me?9>4&n_mWtuGRJY~D%eh2y-x8j}6>&ZDo=mcouzcQG=ZczIRo!?^ zYxQb-XW?3oil#UjYWM+5wRiA!A#<;>p7hSTaDIZ`x9%)YnH&~MGpA3Hikh6C)w60O z{q9>;I@bWWH{;6mn%$q1%@PMQTgFqh0~6t+6Ykq%tz%X6g<+-R44vYT68I;T9vE?H zHA`u_A`%JCwZ~I|Da!7OI#lHVnXwM8XMMLFbVKn&aLXmL2Rc={#rwuWd%TX-r}gWo zo&U*rHhhSOQCwxqhgzRa7V82T2F;`g9UZx)slu>9E-$-~c)Laeo+RthY6sp>ewRjA z2J&UyXml4l?!Ipmth25zN>>3DpmyqZl@x()KAaC6hs!96Gc!>*S2Uy6G(TKq9Op4} z%^{cM!mF5jTg&>-&3}NyMXVbOzu$5C304F5qv3Hqw+)e>o67FDyKULq+QI&&xGdjwI#~S zT5OM=d>^?gJD>S5*}yj}?0FAxOs4HCEeETz-Zx4z%63g5)wfo%mcc<8@cQaYk}Xp) zpvprD&a9gELPLRCU8X91vdl_$s?9FP{dHHa(?6OO$tW5-^p26YI-Exj<8QS69i8}X z>yvGqlAcNEP;6lZX29V8?A6C2PAFH2XHV#3s9q89?H@Gvbl4PI$XBKg^*AS zAW9RZo7CS#piT^-xIExEfUVbF@_Oag{_)fU!y){W-DV)M;meJfY6ogNZZW2%snxp2 zY5fCw8S>#S%S28s-*@t1=67U*&}Uu`2rr7Ic2Sut6F{_M+*la%brN^s%AX_ zF)Fr2?s02Gr84<>yxL=R7K?!=r}^Prk{G4XgInL4mvOI{e8o5EN_^%BZs>*X%>M}G zK(FO!@_=I(K(QyrGHM_Zw%Yg{Arx4)z`1eI&BZ4Sr&LC#c3j-tH|@_-?N}E_yLpnx z)gZya9jZVX_$+@OKqDjq>at3RQpn^Z9oZ}%c(EwiA-oE3`&)7{d|`ml;f}p3{w4GE ziQvr0XEbx&N$grI)J|G@JW@%&{Jra4f6yTvg8k(6W(L)`iIj~Pj$Bt4a5hf>^uwar zBH{4nAlJG09zg7Z1B6Cr;TeYpp4}+^4l$5=41Fq7pqAs9 zfK2kfyhfD#WXcuxXw&NDy@x-mmY-Fud>v+v2IL@V$F^#r`A?LnOUF4}E0ZtJ8>kST zMlpQv>Tay3=mVuN+|F`yNi{c7Q#D2<)lac)LA~I#I~I_*5ySYy>ym9AkgmK3uHmXFv9+}s%6eQ9HN|6$cwwTp%^8DT{QLD z^lMte&Jh}wsQ?PS^Be8%Eld>A>SEYf2EMMezW()l1IFIuJRQ>`wf>QTs^qUJ`t3P% zjalfo*7MUS78PD?Qr5$e#RV_7ASd{4&B0)ZR4;0xS_2!M`I^>k)T?^;4CJLsF_T73 z#EPQXcR$3Bq0r2OdDJ(*+r zarD)FIDpk&4?PcdG?YSeF$oGR=i3j^Ha3yC>bc-&qvQ<^ThBtJHg>8)RNSGR%pp;Q+alRWl{!}^8%Yf{KpvfGe#c)Pp__P zeKzPgNn?#Yq9U`a@c7s1vH=fN^!M1kUv!Y|V^F1`>>2$2_}4X=KnhzeD(>qEhB^+= zL)J?Umd9zIW^D09PUYC;6T1H~>{$YjA%@3EHc53*Zf@K^m;S3d{x`=ol@3U@|2hCb z6`Rc|5O6{kD*#BkvF{7Njy3!J^-wj?CQZi*))Q*{FRXy%OBZiCUU&ks$8q21<1JX|Nb3So+PVdt0 zXwcyx#;eMOGAb*D3mJF?RKre;J`l%JbPp*QvM2D)pT##|VqyY>4OdeLZIsn$Sqn(* zpZT`uRVw6td~R)JBu{a@&*UxZrBpQJcwd1{?s*zR9y&M_Bgd;Wz(F!7wG&czu!I=t zlIjMN?t&V`t)HK8qZ3S}r2Joq$gl~9=flo^Gew)6q>FL!pPza0`WmJ^iXYT{a=Bv; Ydr-O;IXUwyaln7_GAhz}_YD004<6sNLjV8( diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md b/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md deleted file mode 100644 index cc14ce1a5..000000000 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/README.md +++ /dev/null @@ -1,129 +0,0 @@ -## End to End Steps to create a Chatbot using fine-tuning - -### Step 1 : Prepare related documents - -Download all your desired docs in PDF, Text or Markdown format to "data" folder inside the data_pipelines folder. - -In this case we have an example of [Getting started with Meta Llama](https://llama.meta.com/get-started/) and other llama related documents such Llama3, Purple Llama, Code Llama papers. Ideally, we should have searched all Llama documents across the web and follow the procedure below on them but that would be very costly for the purpose of a tutorial, so we will stick to our limited documents here. In this case, we want to use Llama FAQ as eval data so we should not put it into the data folder for training. 
-
-### Step 2 : Prepare data (Q&A pairs) for fine-tuning
-
-To create the question and answer (Q&A) pair datasets from the prepared documents with the Meta Llama 3 70B model, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host a local LLM server.
-
-In this example, we use the OctoAI API as a demo; it could be replaced by any other API from other providers.
-
-**NOTE** The data generated by these APIs or the model needs to be vetted to ensure its quality.
-
-```bash
-export OCTOAI_API_TOKEN="OCTOAI_API_TOKEN"
-python generate_question_answers.py
-```
-
-**NOTE** You need to be aware of the RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day) limits on your account if you use any of the model API providers. In our case we had to process one document at a time, then merge all the Q&A `json` files to make our dataset. We aimed for a specific number of Q&A pairs per document, anywhere between 50 and 100. This is experimental and depends entirely on your documents, the wealth of information in them, and how you prefer to handle questions, short or longer answers, etc.
-
-Alternatively, we can use on-prem solutions such as [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/llama-on-prem.md). Here we will use the prompt in the [generation_config.yaml](./generation_config.yaml) to instruct the model on the expected format and rules for generating the Q&A pairs. In this example, we will show how to create a VLLM OpenAI-compatible server that hosts Meta Llama 3 70B Instruct locally, generate the Q&A pairs, and apply self-curation to get the final dataset.
-
-```bash
-# Make sure VLLM has been installed
-CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
-```
-
-**NOTE** Please make sure the port is not already in use. Since the Meta Llama 3 70B Instruct model requires at least 135GB of GPU memory, we need to use multiple GPUs to host it in a tensor-parallel way.
-
-Once the server is ready, we can query it on port 8001 from another terminal. Here, "-v" sets the port number and "-t" sets the total number of questions we ask the Meta Llama 3 70B Instruct model to generate initially; the model may choose to generate fewer questions if it cannot find enough Llama-related context, which avoids questions that are trivial or unrelated.
-
-```bash
-python generate_question_answers.py -v 8001 -t 7000
-```
-
-This Python program will read all the documents inside the "data" folder, split the data into batches by the context window limit (8K for Meta Llama 3 and 4K for Llama 2), and apply the question_prompt_template defined in "generation_config.yaml" to each batch. It will then use each batch to query the VLLM server and save the returned QA pairs along with their contexts.
-
-Additionally, we will add another step called self-curation (see more details in [Self-Alignment with Instruction Backtranslation](https://arxiv.org/abs/2308.06259)), which uses another 70B model to evaluate whether a QA pair is based on the context and provides relevant information about Llama language models given that context. We will then save all the QA pairs that passed the evaluation into the data.json file as our final fine-tuning training set; a rough sketch of this filtering step is shown below.
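-The sketch below illustrates the shape of that filtering pass: it keeps only the pairs the judge model marked with "Result": "YES". The field names follow the curation prompt's JSON template and the example that follows; the intermediate file name is hypothetical, and the recipe's real implementation lives in generate_data_curation inside generator_utils.py.
-
-```python
-import json
-
-def keep_curated(judged_pairs: list[dict]) -> list[dict]:
-    """Keep only the QA pairs the judge model accepted."""
-    curated = []
-    for pair in judged_pairs:
-        # "Result" and "Reason" come from the curation prompt's JSON template.
-        if pair.get("Result", "").strip().upper() == "YES":
-            curated.append({key: pair[key] for key in ("Question", "Answer", "Context")})
-    return curated
-
-if __name__ == "__main__":
-    with open("judged_pairs.json") as f:  # hypothetical intermediate file
-        judged = json.load(f)
-    with open("data.json", "w") as f:
-        json.dump(keep_curated(judged), f, indent=4)
-```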
-
-Example of a QA pair that did not pass the self-curation because it did not focus on Llama models:
-```json
-'Question': 'What is the purpose of the killall command in Linux?', 'Answer': 'To kill a process and all its child processes', 'Context': 'If you want to kill a process and all its child processes, you can use the killall command. For example: killall firefox'
-'Reason': "The question and answer pair is not related to Llama language models, it's about a Linux command. The context provided is about Llama models, but the QA pair is about a Linux command, which is not relevant to the context.", 'Result': 'NO'
-```
-
-### Step 3: Run the fine-tuning
-Once the dataset is ready, we can start the fine-tuning step using the following commands in the llama-recipes main folder:
-
-For distributed fine-tuning:
-```bash
-CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 10 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
-```
-
-For fine-tuning on a single GPU:
-
-```bash
-CUDA_VISIBLE_DEVICES=0 python recipes/finetuning/finetuning.py --quantization --use_peft --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b --num_epochs 5 --batch_size_training 1 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
-```
-
-If we want to continue the fine-tuning process after our evaluation step, we can use the --from_peft_checkpoint argument to resume fine-tuning from a PEFT checkpoint folder. For example, we can run:
-
-```bash
-CUDA_VISIBLE_DEVICES=0,1 torchrun --nnodes 1 --nproc_per_node 2 recipes/finetuning/finetuning.py --use_peft --enable_fsdp --from_peft_checkpoint chatbot-8b --peft_method lora --model_name meta-llama/Meta-Llama-3-8B-Instruct --output_dir chatbot-8b-continue --num_epochs 5 --batch_size_training 4 --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/chatbot_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'recipes/use_cases/end2end-recipes/chatbot/pipelines/data.json'
-```
-
-For more details, please check the readme in the finetuning recipe.
-
-### Step 4: Evaluating with local inference
-
-Once we have the fine-tuned model, we need to evaluate it to understand its performance. Normally, to create an evaluation set, we would first gather some questions and manually write the ground-truth answers. In this case, we created an eval set mostly based on the Llama [Troubleshooting & FAQ](https://llama.meta.com/faq/), where the answers are written by human experts. We then pass the eval-set questions to our fine-tuned model to get the model-generated answers. To compare the model-generated answers with the ground truth, we can either use a traditional eval method, e.g. calculating the ROUGE score, or use an LLM to act as a judge and score their similarity.
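-For the traditional route, the comparison can be done in a few lines with the Hugging Face `evaluate` library; this mirrors the compute_rouge_score helper in eval_chatbot.py below. A minimal sketch, assuming the generated and reference answers are plain strings:
-
-```python
-import evaluate  # requires the `evaluate` and `rouge_score` packages
-
-def rouge_similarity(generated: list[str], reference: list[str]) -> dict:
-    """Score model-generated answers against ground-truth answers with ROUGE."""
-    rouge = evaluate.load("rouge")
-    return rouge.compute(
-        predictions=generated,
-        references=reference,
-        use_stemmer=True,
-        use_aggregator=True,
-    )
-
-# Example usage with a single QA pair.
-print(rouge_similarity(
-    ["Llama 2 was pretrained on 2 trillion tokens."],
-    ["Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."],
-))
-```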
First, we need to start the VLLM server to host our fine-tuned 8B model. Since we used the peft library to produce a LoRA adapter, we need to pass special arguments to VLLM to enable its LoRA feature: the VLLM server will first load the original model and then apply our LoRA adapter weights. We can then feed the evalset.json file into the VLLM server and start the comparison evaluation. Note that our fine-tuned model is now served under the name "chatbot" instead of "meta-llama/Meta-Llama-3-8B-Instruct".
-
-```bash
-python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --enable-lora --lora-modules chatbot=./chatbot-8b --port 8000 --disable-log-requests
-```
-
-**NOTE** If you encounter the import error "ImportError: punica LoRA kernels could not be imported.", it means VLLM must be installed with punica LoRA kernels to support LoRA adapters; please use the following commands to install VLLM from source.
-
-```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
-```
-
-In another terminal, we can go to the recipes/use_cases/end2end-recipes/chatbot/pipelines folder and start our eval script:
-
-```bash
-python eval_chatbot.py -m chatbot -v 8000
-```
-
-We can also quickly compare our fine-tuned chatbot model with the original Meta Llama 3 8B Instruct model using:
-
-```bash
-python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000
-```
-
-Lastly, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answers from the fine-tuned 8B model with the ground truth and produce a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the following command; just make sure the port is not already in use:
-
-```bash
-CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
-```
-
-Then we can pass the port to the eval script:
-
-```bash
-python eval_chatbot.py -m chatbot -v 8000 -j 8001
-```
-
-and similarly get the eval result for the original model:
-
-```bash
-python eval_chatbot.py -m meta-llama/Meta-Llama-3-8B-Instruct -v 8000 -j 8001
-```
-
-### Step 5: Testing with local inference
-
-Once our fine-tuned model has passed our evaluation, we can deploy it locally and play with it by manually asking questions. We can do this by running:
-
-```bash
-python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b
-```
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py
deleted file mode 100644
index 700091732..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/chat_utils.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import asyncio
-import logging
-from abc import ABC, abstractmethod
-from octoai.client import OctoAI
-from functools import partial
-from openai import OpenAI
-import json
-# Configure logging to include the timestamp, log level, and message
-logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
-# Since OctoAI uses different names for the Llama models, create this mapping to get the official Hugging Face model name given an OctoAI name.
-MODEL_NAME_MAPPING={"meta-llama-3-70b-instruct":"meta-llama/Meta-Llama-3-70B-Instruct", -"meta-llama-3-8b-instruct":"meta-llama/Meta-Llama-3-8B-Instruct","llama-2-7b-chat":"meta-llama/Llama-2-7b-chat-hf" -,"llama-2-70b-chat":"meta-llama/Llama-2-70b-chat-hf"} -# Manage rate limits with throttling -rate_limit_threshold = 2000 -allowed_concurrent_requests = int(rate_limit_threshold * 0.75) -request_limiter = asyncio.Semaphore(allowed_concurrent_requests) -class ChatService(ABC): - @abstractmethod - async def execute_chat_request_async(self, api_context: dict, chat_request): - pass - -# Please implement your own chat service class here. -# The class should inherit from the ChatService class and implement the execute_chat_request_async method. -# The following are two example chat service classes that you can use as a reference. -class OctoAIChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request): - async with request_limiter: - try: - event_loop = asyncio.get_running_loop() - client = OctoAI(api_context['api_key']) - api_chat_call = partial( - client.chat.completions.create, - model=api_context['model'], - messages=chat_request, - temperature=0.0 - ) - response = await event_loop.run_in_executor(None, api_chat_call) - assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - return assistant_response - except Exception as error: - logging.error(f"Error during chat request execution: {error}",exc_info=True) - return "" -# Use the local vllm openai compatible server for generating question/answer pairs to make API call syntax consistent -# please read for more detail:https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html. -class VllmChatService(ChatService): - async def execute_chat_request_async(self, api_context: dict, chat_request): - try: - event_loop = asyncio.get_running_loop() - if api_context["model"] in MODEL_NAME_MAPPING: - model_name = MODEL_NAME_MAPPING[api_context['model']] - else: - model_name = api_context['model'] - client = OpenAI(api_key=api_context['api_key'], base_url="http://localhost:"+ str(api_context['endpoint'])+"/v1") - api_chat_call = partial( - client.chat.completions.create, - model=model_name, - messages=chat_request, - temperature=0.0 - ) - response = await event_loop.run_in_executor(None, api_chat_call) - assistant_response = next((choice.message.content for choice in response.choices if choice.message.role == 'assistant'), "") - return assistant_response - except Exception as error: - logging.error(f"Error during chat request execution: {error}",exc_info=True) - return "" diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/config.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/config.py deleted file mode 100644 index 319cb6898..000000000 --- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/config.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement. 
-
-import yaml
-import os
-
-def load_config(config_path: str = "./config.yaml"):
-    # Read the YAML configuration file
-    with open(config_path, "r") as file:
-        config = yaml.safe_load(file)
-    # Set the API key from the environment variable
-    try:
-        config["api_key"] = os.environ["OCTOAI_API_TOKEN"]
-    except KeyError:
-        print("API token was not found; please set the OCTOAI_API_TOKEN environment variable if using OctoAI, otherwise api_key defaults to EMPTY")
-        # A local VLLM endpoint does not need an API key, so set it to "EMPTY" if OCTOAI_API_TOKEN is not found
-        config["api_key"] = "EMPTY"
-    return config
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py
deleted file mode 100644
index c8556471e..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/doc_processor.py
+++ /dev/null
@@ -1,47 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement.
-
-# Assuming result_average_token is a constant, use UPPER_CASE for its name to follow Python conventions
-AVERAGE_TOKENS_PER_RESULT = 100
-
-def get_token_limit_for_model(model: str) -> int:
-    """Returns the token limit for a given model."""
-    if model == "llama-2-13b-chat" or model == "llama-2-70b-chat":
-        return 4096
-    else:
-        return 8192
-
-def calculate_num_tokens_for_message(encoded_text) -> int:
-    """Calculates the number of tokens used by a message."""
-    # Added 3 to account for priming with assistant's reply, as per original comment
-    return len(encoded_text) + 3
-
-
-def split_text_into_chunks(context: dict, text: str, tokenizer) -> list[str]:
-    """Splits a long text into substrings based on token length constraints, adjusted for question generation."""
-    # Adjusted approach to calculate max tokens available for text chunks
-    encoded_text = tokenizer(text, return_tensors="pt", padding=True)["input_ids"]
-    encoded_text = encoded_text.squeeze()
-    model_token_limit = get_token_limit_for_model(context["model"])
-
-    tokens_for_questions = calculate_num_tokens_for_message(encoded_text)
-    estimated_tokens_per_question = AVERAGE_TOKENS_PER_RESULT
-    estimated_total_question_tokens = estimated_tokens_per_question * context["total_questions"]
-    # Ensure there's a reasonable minimum chunk size
-    max_tokens_for_text = max(model_token_limit - tokens_for_questions - estimated_total_question_tokens, model_token_limit // 10)
-
-    chunks, current_chunk = [], []
-    print(f"Splitting text into chunks of {max_tokens_for_text} tokens, encoded_text {len(encoded_text)}", flush=True)
-    for token in encoded_text:
-        if len(current_chunk) >= max_tokens_for_text:
-            chunks.append(tokenizer.decode(current_chunk).strip())
-            current_chunk = []
-        # Always append the current token so no tokens are dropped at chunk boundaries
-        current_chunk.append(token)
-
-    if current_chunk:
-        chunks.append(tokenizer.decode(current_chunk).strip())
-
-    print(f"Number of chunks in the processed text: {len(chunks)}", flush=True)
-
-    return chunks
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py
deleted file mode 100644
index 203fff792..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_chatbot.py
+++ /dev/null
@@ -1,163 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement.
-from chat_utils import OctoAIChatService, VllmChatService
-import logging
-import evaluate
-import argparse
-from config import load_config
-import asyncio
-import json
-from itertools import chain
-from generator_utils import parse_qa_to_json, generate_LLM_eval
-
-def compute_rouge_score(generated: list, reference: list):
-    rouge_score = evaluate.load('rouge')
-    return rouge_score.compute(
-        predictions=generated,
-        references=reference,
-        use_stemmer=True,
-        use_aggregator=True
-    )
-def compute_bert_score(generated: list, reference: list):
-    bertscore = evaluate.load("bertscore")
-    score = bertscore.compute(
-        predictions=generated,
-        references=reference,
-        lang="en"
-    )
-    f1 = score["f1"]
-    precision = score["precision"]
-    recall = score["recall"]
-    return sum(precision)/len(precision), sum(recall)/len(recall), sum(f1)/len(f1)
-# Evaluate the fine-tuned model: given a question, generate the answer.
-async def eval_request(chat_service, api_context: dict, question: str) -> dict:
-    prompt_for_system = api_context['eval_prompt_template'].format(language=api_context["language"])
-    chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {question}"}]
-    # Get a list of results; in this case there should be only one result
-    response_string = await chat_service.execute_chat_request_async(api_context, chat_request_payload)
-    # Convert the result string to a dict that contains Question, Answer
-    result_list = parse_qa_to_json(response_string)
-    if not result_list or len(result_list) > 1:
-        print("Error: eval response should be a list of one result dict")
-        return {}
-    result = result_list[0]
-    if "Answer" not in result:
-        print("Error: eval response does not contain answer")
-        return {}
-    # Send back the model-generated answer
-    return result["Answer"]
-
-async def generate_eval_answer(chat_service, api_context: dict, questions: list):
-    eval_tasks = []
-    for batch_index, question in enumerate(questions):
-        try:
-            result = eval_request(chat_service, api_context, question)
-            eval_tasks.append(result)
-        except Exception as e:
-            print(f"Error during data eval request execution: {e}")
-    print(len(eval_tasks), "eval_tasks")
-    eval_results = await asyncio.gather(*eval_tasks)
-
-    return eval_results
-
-async def main(context):
-    if context["endpoint"]:
-        chat_service = VllmChatService()
-    else:
-        chat_service = OctoAIChatService()
-    try:
-        logging.info("Starting to generate answers given the eval set.")
-        with open(context["eval_json"]) as fp:
-            eval_json = json.load(fp)
-        questions, ground_truth = [], []
-        for index, item in enumerate(eval_json):
-            questions.append(item["question"])
-            ground_truth.append(item["answer"])
-        generated_answers = await generate_eval_answer(chat_service, context, questions)
-        if not generated_answers:
-            logging.warning("No answers generated. Please check the input context or model configuration.")
-            return
-        logging.info(f"Successfully generated {len(generated_answers)} answers.")
-        judge_list = []
-        for index, item in enumerate(generated_answers):
-            judge_list.append({"Question": questions[index], "Ground_truth": ground_truth[index], "Generated_answer": generated_answers[index]})
-        if context["judge_endpoint"]:
-            # Make a copy of the context, then change the VLLM endpoint to judge_endpoint
-            context_copy = dict(context)
-            context_copy["endpoint"] = context["judge_endpoint"]
-            context_copy["model"] = "meta-llama/Meta-Llama-3-70B-Instruct"
-            judge_results = await generate_LLM_eval(chat_service, context_copy, judge_list)
-            correct_num = 0
-            for result in judge_results:
-                correct_num += result["Result"] == "YES"
-            LLM_judge_score = correct_num/len(judge_results)
-            print(f"The accuracy of the model is {LLM_judge_score}")
-        rouge_score = compute_rouge_score(generated_answers, ground_truth)
-        print("Rouge_score:", rouge_score)
-        P, R, F1 = compute_bert_score(generated_answers, ground_truth)
-        print(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f}")
-        # Save the eval result to a log file
-        with open(context["output_log"], "a") as fp:
-            fp.write(f"Eval_result for {context['model']} \n")
-            fp.write(f"Rouge_score: {rouge_score} \n")
-            fp.write(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f} \n")
-            if context["judge_endpoint"]:
-                fp.write(f"LLM_judge_score: {LLM_judge_score} \n")
-            fp.write(f"QA details: \n")
-            for item in judge_list:
-                fp.write(f"question: {item['Question']} \n")
-                fp.write(f"generated_answers: {item['Generated_answer']} \n")
-                fp.write(f"ground_truth: {item['Ground_truth']} \n")
-                fp.write("\n")
-        logging.info(f"Eval finished successfully; the eval result is saved to {context['output_log']}.")
-    except Exception as e:
-        logging.error(f"An unexpected error occurred during the process: {e}", exc_info=True)
-
-def parse_arguments():
-    # Define command line arguments for the script
-    parser = argparse.ArgumentParser(
-        description="Evaluate the fine-tuned chatbot model against an eval set."
-    )
-    parser.add_argument(
-        "-m", "--model",
-        default="chatbot",
-        help="Select the model to use for evaluation; this may be a LoRA adapter."
-    )
-    parser.add_argument(
-        "-c", "--config_path",
-        default="eval_config.yaml",
-        help="Set the configuration file path that has the system prompt along with the language and evalset path."
-    )
-    parser.add_argument(
-        "-v", "--vllm_endpoint",
-        default=None,
-        type=int,
-        help="If a port is specified, then use the local vllm endpoint for evaluations."
-    )
-    parser.add_argument(
-        "-j", "--judge_endpoint",
-        default=None,
-        type=int,
-        help="If a port is specified, then use the local vllm endpoint as the judge LLM."
-    )
-    parser.add_argument(
-        "-o", "--output_log",
-        default="eval_result.log",
-        help="Save the eval result to a log file. Default is eval_result.log"
-    )
-    return parser.parse_args()
-
-if __name__ == "__main__":
-    logging.info("Initializing the process and loading configuration...")
-    args = parse_arguments()
-    context = load_config(args.config_path)
-    context["model"] = args.model
-    context["endpoint"] = args.vllm_endpoint
-    context["judge_endpoint"] = args.judge_endpoint
-    context["output_log"] = args.output_log
-    if context["endpoint"]:
-        logging.info(f"Using local vllm service for eval at port: '{args.vllm_endpoint}'.")
-    if context["judge_endpoint"]:
-        logging.info(f"Using local vllm service for judge at port: '{args.judge_endpoint}'.")
-    asyncio.run(main(context))
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml
deleted file mode 100644
index 87266d33c..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/eval_config.yaml
+++ /dev/null
@@ -1,23 +0,0 @@
-eval_prompt_template: >
-  You are an AI assistant skilled in answering questions related to Llama language models,
-  which include Llama, Llama2, Meta Llama3, Code Llama, Meta Llama Guard 1 and Meta Llama Guard 2.
-  Below is a question from a Llama user; think step by step and then answer it in {language}. Make the answer as concise as possible; it should be at most 100 words.
-  Return the result with the template:
-  [
-    {{
-      "Question": "The question the user asked you",
-      "Answer": "Your answer to the question"
-    }}
-  ]
-judge_prompt_template: >
-  You are provided with a question, a teacher's answer and a student's answer. Given the question, you need to score how good the student's answer is compared to
-  the teacher's answer. If the student's answer is correct based on the teacher's answer, then return YES. If the answer is not faithful, then return NO
-  and explain which part of the student's answer is not faithful in the Reason section.
-  Return the result in json format with the template:
-  {{
-    "Reason": "your reason here.",
-    "Result": "YES or NO."
-  }}
-eval_json: "./evalset.json"
-
-language: "English"
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json b/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json
deleted file mode 100644
index 150851f43..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/evalset.json
+++ /dev/null
@@ -1,178 +0,0 @@
-[
-    {
-        "question":"What is llama-recipes?",
-        "answer": "The llama-recipes repository is a companion to the Meta Llama 3 models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem."
-    },
-    {
-        "question":"What is the difference in the tokenization techniques that Meta Llama 3 uses compared to Llama 2?",
-        "answer": "Llama 2 uses SentencePiece for tokenization, whereas Llama 3 has transitioned to OpenAI’s Tiktoken. Llama 3 also introduces a ChatFormat class and special tokens, including those for end-of-turn markers and other features to enhance support for chat-based interactions and dialogue processing."
-    },
-    {
-        "question":"How many tokens were used in Meta Llama 3 pretraining?",
-        "answer": "Meta Llama 3 is pretrained on over 15 trillion tokens that were all collected from publicly available sources."
-    },
-    {
-        "question":"How many tokens were used in Llama 2 pretraining?",
-        "answer": "Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources."
-    },
-    {
-        "question":"What is the name of the license agreement that Meta Llama 3 is under?",
-        "answer": "Meta LLAMA 3 COMMUNITY LICENSE AGREEMENT."
-    },
-    {
-        "question":"What is the name of the license agreement that Llama 2 is under?",
-        "answer": "LLAMA 2 COMMUNITY LICENSE AGREEMENT."
-    },
-    {
-        "question":"What is the context length of Llama 2 models?",
-        "answer": "Llama 2's context length is 4k."
-    },
-    {
-        "question":"What is the context length of Meta Llama 3 models?",
-        "answer": "Meta Llama 3's context length is 8k."
-    },
-    {
-        "question":"When was Llama 2 trained?",
-        "answer": "Llama 2 was trained between January 2023 and July 2023."
-    },
-    {
-        "question":"What is the name of the Llama 2 model that uses Grouped-Query Attention (GQA)?",
-        "answer": "Llama 2 70B"
-    },
-    {
-        "question":"What are the names of the Meta Llama 3 models that use Grouped-Query Attention (GQA)?",
-        "answer": "Meta Llama 3 8B and Meta Llama 3 70B"
-    },
-{
-    "question": "What are the goals for Llama 3?",
-    "answer": "With Llama 3, we set out to build the best open models that are on par with the best proprietary models available today. We wanted to address developer feedback to increase the overall helpfulness of Llama 3 and are doing so while continuing to play a leading role on responsible use and deployment of LLMs. We are embracing the open source ethos of releasing early and often to enable the community to get access to these models while they are still in development."
-},
-{
-"question": "What if I want to access Llama models but I’m not sure if my use is permitted under the Llama 2 Community License?",
-"answer": "On a limited case-by-case basis, we will consider bespoke licensing requests from individual entities. Please contact llamamodels@meta.com to provide more details about your request."
-},
-{
-"question": "Why is Meta not sharing the training datasets for Llama?",
-"answer": "We believe developers will have plenty to work with as we release our model weights and starting code for pre-trained and conversational fine-tuned versions as well as responsible use resources. While data mixes are intentionally withheld for competitive reasons, all models have gone through Meta’s internal Privacy Review process to ensure responsible data usage in building our products. We are dedicated to the responsible and ethical development of our GenAI products, ensuring our policies reflect diverse contexts and meet evolving societal expectations."
-},
-{
-"question": "Did Meta use human annotators to develop the data for Llama models?",
-"answer": "Yes. There are more details, for example, about our use of human annotators in the Llama 2 research paper."
-},
-{
-"question": "Can I use the output of the models to improve the Llama family of models, even though I cannot use them for other LLMs?",
-"answer": "It's correct that the license restricts using any part of the Llama models, including the response outputs, to train another AI model (LLM or otherwise). However, one can use the outputs to further train the Llama family of models. Techniques such as Quantization-Aware Training (QAT) utilize such a technique and hence this is allowed."
-},
-{
-"question": "What operating systems (OS) are officially supported if I want to use a Llama model?",
-"answer": "For the core Llama GitHub repos (Llama and Llama3) Linux is the only OS currently supported by this repo. Additional OS support is available through the Llama-Recipes repo."
-},
-{
-"question": "I am getting 'Issue with the URL' as an error message when I want to download a Llama model. What should I do?",
-"answer": "This issue occurs because the URL was not copied correctly. If you right click on the link and copy the link, the link may be copied with a URL Defense wrapper. To avoid this issue, select the URL manually and copy it."
-},
-{
-"question": "Does Llama 2 support other languages outside of English?",
-"answer": "The model was primarily trained on English with a bit of additional data from 27 other languages (for more information, see Table 10 on page 20 of the Llama 2 paper). We do not expect the same level of performance in these languages as in English. You’ll find the full list of languages referenced in the research paper. You can look at some of the community-led projects to fine-tune Llama 2 models to support other languages. (eg: link)"
-},
-{
-"question": "If I’m a developer/business, how can I access the Llama models?",
-"answer": "Details on how to access the models are available on our website link. Please note that the models are subject to the acceptable use policy and the provided responsible use guide. Models are available through multiple sources but the place to start is at https://llama.meta.com/ Model code, quickstart guide and fine-tuning examples are available through our Github Llama repository. Model Weights are available through an email link after the user submits a sign-up form. Models are also being hosted by Microsoft, Amazon Web Services, and Hugging Face, and may also be available through other hosting providers in the future."
-},
-{
-"question": "Can anyone access Llama models? What are the terms?",
-"answer": "Llama models are broadly available to developers and licensees through a variety of hosting providers and on the Meta website and licensed under the applicable Llama Community License Agreement, which provides a permissive license to the models along with certain restrictions to help ensure that the models are being used responsibly."
-},
-{
-"question": "What are the hardware SKU requirements for deploying Llama models?",
-"answer": "Hardware requirements vary based on latency, throughput and cost constraints. For good latency, we split models across multiple GPUs with tensor parallelism in a machine with NVIDIA A100s or H100s. But TPUs, other types of GPUs, or even commodity hardware can also be used to deploy these models (e.g. llama cpp, MLC LLM)."
-},
-{
-"question": "Do Llama models provide traditional autoregressive text completion?",
-"answer": "Llama models are auto-regressive language models, built on the transformer architecture. The core language models function by taking a sequence of words as input and predicting the next word, recursively generating text."
-},
-{
-"question": "Does the Llama model support fill-in-the-middle completion, e.g. allowing the user to specify a suffix string for the response?",
-"answer": "The vanilla model of Llama does not; however, the Code Llama models have been trained with fill-in-the-middle completion to assist with tasks like code completion."
-},
-{
-"question": "Do Llama models support logit biases as a request parameter to control token probabilities during sampling?",
-"answer": "This is implementation dependent (i.e. the code used to run the model)."
-},
-{
-"question": "Do Llama models support adjusting sampling temperature or top-p threshold via request parameters?",
-"answer": "The model itself supports these parameters, but whether they are exposed or not depends on the implementation."
-},
-{
-"question": "What is the most effective RAG method paired with Llama models?",
-"answer": "There are many ways to use RAG with Llama. The most popular libraries are LangChain and LlamaIndex, and many of our developers have used them successfully with Llama 2. (See the LangChain and LlamaIndex sections of this document)."
-},
-{
-"question": "How to set up Llama models with an EC2 instance?",
-"answer": "You can find steps on how to set up an EC2 instance in the AWS section of this document here."
-},
-{
-"question": "Should we start training with the base or instruct/chat model when using a Llama model?",
-"answer": "This depends on your application. The Llama pre-trained models were trained for general large language applications, whereas the Llama instruct or chat models were fine-tuned for dialogue-specific uses like chat bots."
-},
-{
-"question": "I keep getting a 'CUDA out of memory' error when using Llama models. What should I do?",
-"answer": "This error can be caused by a number of different factors, including the model size being too large, inefficient memory usage and so on. Some of the steps below have been known to help with this issue, but you might need to do some troubleshooting to figure out the exact cause of your issue. 1. Ensure your GPU has enough memory 2. Reduce the batch_size 3. Lower the Precision 4. Clear cache 5. Modify the Model/Training"
-},
-{
-"question": "The retrieval approach adds latency due to multiple calls at each turn. How to best leverage a Llama model with retrieval?",
-"answer": "If multiple calls are necessary then you could look into the following: 1. Optimize inference so each call has less latency. 2. Merge the calls into fewer calls. For example summarize the data and utilize the summary. 3. Possibly utilize Llama 2 function calling. 4. Consider fine-tuning the model with the updated data."
-},
-{
-"question": "How can I fine-tune the Llama models?",
-"answer": "You can find examples on how to fine-tune the Llama models in the Llama Recipes repository."
-},
-{
-"question": "How can I pretrain the Llama models?",
-"answer": "You can adapt the finetuning script found here for pre-training. You can also find the hyperparams used for pretraining in Section 2 of the Llama2 paper."
-},
-{
-"question": "Am I allowed to develop derivative models through fine-tuning based on Llama models for languages other than English? Is this a violation of the acceptable use policy?",
-"answer": "Developers may fine-tune Llama models for languages beyond English provided they comply with the applicable Llama 3 License Agreement, Llama Community License Agreement and the Acceptable Use Policy."
-},
-{
-"question": "How can someone reduce hallucinations with fine-tuned Llama models?",
-"answer": "Although prompts cannot eliminate hallucinations completely, they can reduce them significantly. Using techniques like Chain-of-Thought, Instruction-Based, N-Shot, and Few-Shot can help, depending on your application. Additionally, prompting the models to back up their responses by verifying against factual data sets, or requesting the models to provide the source of information, can help as well. Overall, fine-tuning should also be helpful for reducing hallucination."
-},
-{
-"question": "What are the hardware SKU requirements for fine-tuning Llama pre-trained models?",
-"answer": "Fine-tuning requirements also vary based on the amount of data, the time to complete fine-tuning and cost constraints. To fine-tune these models we have generally used multiple NVIDIA A100 machines with data parallelism across nodes and a mix of data and tensor parallelism intra node. But using a single machine, or other GPU types, is definitely possible (e.g. alpaca models are trained on a single RTX 4090: https://github.com/tloen/alpaca-lora)."
-},
-{
-"question": "What fine-tuning tasks would the Llama models support?",
-"answer": "The Llama 2 fine-tuned models were fine-tuned for dialogue-specific uses like chat bots."
-},
-{
-"question": "Are there examples of how one can fine-tune the Llama models?",
-"answer": "You can find example fine-tuning scripts in the Github recipes repository. You can also review the fine-tuning section in this document."
-},
-{
-"question": "What is the difference between a pre-trained and fine-tuned Llama model?",
-"answer": "The Llama pre-trained models were trained for general large language applications, whereas the Llama chat or instruct models were fine-tuned for dialogue-specific uses like chat bots."
-},
-{
-"question": "How should we think about post-processing (validating generated data) as a way to fine-tune Llama models?",
-"answer": "Essentially, having truthful data for the specific application can be helpful to reduce the risk on that application. Also, setting some sort of threshold such as prob>90% might be helpful to get more confidence in the output."
-},
-{
-"question": "What are the different libraries that we recommend for fine-tuning when using Llama models?",
-"answer": "You can find some fine-tuning recommendations in the Github recipes repository as well as the fine-tuning section of this document."
-},
-{
-"question": "How can we identify the right ‘r’ value for the LoRA method for a certain use-case when using Llama models?",
-"answer": "The best approach would be to review the LoRA research paper for more information on the rankings, then review similar implementations for other models, and finally experiment."
-},
-{
-"question": "We hope to use prompt engineering as a lever to nudge behavior. Any pointers on enhancing instruction-following by fine-tuning small Llama models?",
-"answer": "Take a look at the fine-tuning section in our Getting started with Llama guide of this document for some pointers towards fine-tuning."
-},
-{
-"question": "Are Llama models open source? What is the exact license these models are published under?",
-"answer": "Llama models are licensed under a bespoke commercial license that balances open access to the models with responsibility and protections in place to help address potential misuse. Our license allows for broad commercial use, as well as for developers to create and redistribute additional work on top of Llama models. For more details, our licenses can be found at (https://llama.meta.com/license/) (Meta Llama 2) and (https://llama.meta.com/llama3/license/) (Meta Llama 3)."
-}
-]
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py
deleted file mode 100644
index 70476aa0f..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py
deleted file mode 100644
index 70476aa0f..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generate_question_answers.py
+++ /dev/null
@@ -1,90 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement.
-
-import argparse
-import asyncio
-import json
-from config import load_config
-from generator_utils import generate_question_batches, generate_data_curation
-from chat_utils import OctoAIChatService, VllmChatService
-import logging
-import aiofiles  # Ensure aiofiles is installed for async file operations
-
-
-# Configure logging to include the timestamp, log level, and message
-logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
-
-
-async def main(context):
-    if context["endpoint"]:
-        chat_service = VllmChatService()
-    else:
-        chat_service = OctoAIChatService()
-    try:
-        logging.info("Starting to generate question/answer pairs.")
-        # Generate question/answer pairs as a list
-        data = await generate_question_batches(chat_service, context)
-        if not data:
-            logging.warning("No data generated. Please check the input context or model configuration.")
-            return
-        logging.info(f"Successfully generated {len(data)} question/answer pairs.")
-        if context["use_curation"]:
-            logging.info("Starting to do self-curation using LLM.")
-            data = await generate_data_curation(chat_service, context, data)
-            logging.info(f"Only {len(data)} question/answer pairs passed the self-curation.")
-        async with aiofiles.open(context['output_path'], "w") as output_file:
-            await output_file.write(json.dumps(data, indent=4))
-        logging.info(f"Data successfully written to {context['output_path']}. Process completed.")
-    except Exception as e:
-        logging.error(f"An unexpected error occurred during the process: {e}", exc_info=True)
-
-def parse_arguments():
-    # Define command line arguments for the script
-    parser = argparse.ArgumentParser(
-        description="Generate question/answer pairs from documentation."
-    )
-    parser.add_argument(
-        "-t", "--total_questions",
-        type=int,
-        default=100,
-        help="Specify the total number of question/answer pairs to generate."
-    )
-    parser.add_argument(
-        "-m", "--model",
-        choices=["meta-llama-3-70b-instruct", "meta-llama-3-8b-instruct", "llama-2-13b-chat", "llama-2-70b-chat"],
-        default="meta-llama-3-70b-instruct",
-        help="Select the model to use for generation."
-    )
-    parser.add_argument(
-        "-c", "--config_path",
-        default="./generation_config.yaml",
-        help="Set the configuration file path that has the system prompt along with the language, dataset path, and number of questions."
-    )
-    parser.add_argument(
-        "-v", "--vllm_endpoint",
-        default=None,
-        type=int,
-        help="If a port is specified, then use the local vllm endpoint for generating question/answer pairs."
-    )
-    parser.add_argument(
-        "-o", "--output_path",
-        default="./data.json",
-        help="Set the output path for the generated QA pairs. Default is data.json."
-    )
-    return parser.parse_args()
-
-if __name__ == "__main__":
-    logging.info("Initializing the process and loading configuration...")
-    args = parse_arguments()
-
-    context = load_config(args.config_path)
-    context["total_questions"] = args.total_questions
-    context["model"] = args.model
-    context["endpoint"] = args.vllm_endpoint
-    # If the curation prompt is not empty, then use self-curation
-    context["use_curation"] = len(context["curation_prompt_template"]) > 0
-    context["output_path"] = args.output_path
-    logging.info(f"Configuration loaded. Generating {args.total_questions} question/answer pairs using model '{args.model}'.")
-    if context["endpoint"]:
-        logging.info(f"Using local vllm service at port: '{args.vllm_endpoint}'.")
-    asyncio.run(main(context))
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml
deleted file mode 100644
index f60808bf9..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generation_config.yaml
+++ /dev/null
@@ -1,50 +0,0 @@
-question_prompt_template: >
-  You are a language model skilled in creating quiz questions.
-  You will be provided with a document,
-  read it and please generate question and answer pairs that are most likely to be asked by a user of Llama language models,
-  which include Llama, Llama 2, Meta Llama 3, Code Llama, Meta Llama Guard 1, and Meta Llama Guard 2,
-  then extract the context that is related to the question and answer, preferably using the sentences from the original text,
-  please make sure you follow those rules:
-  1. Generate {num_questions} question and answer pairs; you can generate fewer if there is nothing related to the model, training, fine-tuning, and evaluation details of Llama language models.
-  2. For each question and answer pair, add the context that is related to the question and answer, preferably using the sentences from the original text.
-  3. Generate in {language}.
-  4. The questions must be answerable based *solely* on the given passage.
-  5. Avoid asking questions with similar meaning.
-  6. Make the answer as concise as possible; it should be at most 100 words.
-  7. Provide relevant links from the document to support the answer.
-  8. Never use any abbreviation.
-  9. Return the result in json format with the template:
-    [
-      {{
-        "Question": "your question A.",
-        "Answer": "your answer to question A.",
-        "Context": "the context for question A"
-      }},
-      {{
-        "Question": "your question B.",
-        "Answer": "your answer to question B.",
-        "Context": "the context for question B"
-      }}
-    ]
-
-curation_prompt_template: >
-  Below is a question and answer pair (QA pair) and its related context about Llama language models,
-  which include Llama, Llama 2, Meta Llama 3, Code Llama, Meta Llama Guard 1, and Meta Llama Guard 2.
-  Given the context, evaluate whether or not this question and answer pair is related to Llama language models, including model, training, fine-tuning and evaluation details,
-  and whether this question and answer is relevant to the context.
-  Note that the answer in the QA pair can be the same as or similar to the context, as repetition of the context is allowed.
-  Respond with only a single JSON blob with a "Reason" field that is a short (less than 100 words)
-  explanation of your answer and a "Result" field which is YES or NO.
-  Only answer "YES" if the question and answer pair is based on the context and provides relevant information about Llama language models.
-  Only generate the answer in {language}.
-  Return the result in json format with the template:
-    {{
-      "Reason": "your reason here.",
-      "Result": "YES or NO."
-    }}
-
-data_dir: "./data"
-
-language: "English"
-
-num_questions: 2
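For context on how the two templates above are consumed: the pipeline fills the `{num_questions}` and `{language}` placeholders with `str.format` and sends the result as the system prompt. A minimal sketch, assuming the YAML file above is on disk and `pyyaml` is installed (the user-message text is a placeholder):

```python
import yaml

with open("generation_config.yaml") as f:
    config = yaml.safe_load(f)

# Fill the placeholders exactly as prepare_and_send_request does downstream.
system_prompt = config["question_prompt_template"].format(
    num_questions=config["num_questions"],  # 2 per document chunk in this config
    language=config["language"],            # "English"
)
chat_request_payload = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "<document chunk goes here>"},  # placeholder text
]
```

Note that the doubled braces in the JSON templates survive `str.format` as literal single braces, which is why the prompts can show a JSON schema while still using format placeholders.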
diff --git a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py b/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py
deleted file mode 100644
index b37ad5dbf..000000000
--- a/recipes/use_cases/end2end-recipes/chatbot/pipelines/generator_utils.py
+++ /dev/null
@@ -1,267 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
-
-import os
-import re
-import string
-import asyncio
-import json
-import logging
-import magic
-from transformers import AutoTokenizer
-from PyPDF2 import PdfReader
-from doc_processor import split_text_into_chunks
-
-# Initialize logging
-logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
-
-def read_text_file(file_path):
-    try:
-        with open(file_path, 'r') as f:
-            text = f.read().strip()
-            if len(text) == 0:
-                print("File is empty:", file_path)
-            return text + ' '
-    except Exception as e:
-        logging.error(f"Error reading text file {file_path}: {e}")
-        return ''
-
-def read_pdf_file(file_path):
-    try:
-        with open(file_path, 'rb') as f:
-            pdf_reader = PdfReader(f)
-            file_text = [page.extract_text().strip() + ' ' for page in pdf_reader.pages]
-            text = ''.join(file_text)
-            if len(text.strip()) == 0:
-                print("File is empty:", file_path)
-            return text
-    except Exception as e:
-        logging.error(f"Error reading PDF file {file_path}: {e}")
-        return ''
-
-def read_json_file(file_path):
-    try:
-        with open(file_path, 'r') as f:
-            data = json.load(f)
-        # Assuming each item in the list has a 'question' and an 'answer' key,
-        # concatenate the question/answer pairs with a space in between and accumulate them into a single string
-        file_text = ' '.join([item['question'].strip() + ' ' + item['answer'].strip() + ' ' for item in data])
-        if len(file_text) == 0:
-            print("File is empty:", file_path)
-        return file_text
-    except Exception as e:
-        logging.error(f"Error reading JSON file {file_path}: {e}")
-        return ''
-
-
-def process_file(file_path):
-    print("Starting to process file:", file_path)
-    file_type = magic.from_file(file_path, mime=True)
-    # Route JSON to the dedicated reader; libmagic reports it as 'application/json'
-    if file_type == 'application/json':
-        return read_json_file(file_path)
-    elif file_type in ['text/plain', 'text/markdown']:
-        return read_text_file(file_path)
-    elif file_type == 'application/pdf':
-        return read_pdf_file(file_path)
-    else:
-        logging.warning(f"Unsupported file type {file_type} for file {file_path}")
-        return ''
-
-def remove_non_printable(s):
-    printable = set(string.printable)
-    return ''.join(filter(lambda x: x in printable, s))
-
-def read_file_content(context):
-    file_strings = []
-
-    for root, _, files in os.walk(context['data_dir']):
-        for file in files:
-            file_path = os.path.join(root, file)
-            file_text = process_file(file_path)
-            if file_text:
-                file_strings.append(file_text)
-    text = '\n'.join(file_strings)
-    text = remove_non_printable(text)
-    with open(os.path.join(context['data_dir'], 'all_text.txt'), 'w') as f:
-        f.write(text)
-    return text
-
-# Clean the text by removing all parts that do not contain any alphanumeric characters
-def clean(s):
-    result = []
-    for item in s.split('"'):
-        if any(c.isalnum() for c in item):
-            result.append(item)
-    return " ".join(result)
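To illustrate the dispatch above (file names are made up): `python-magic` sniffs the MIME type of each file under `data_dir`, and `process_file` routes the file to the matching reader.

```python
# Each call sniffs the MIME type first, then dispatches:
faq_text = process_file("./data/llama_faq.json")   # 'application/json' -> read_json_file
pdf_text = process_file("./data/llama_paper.pdf")  # 'application/pdf'  -> read_pdf_file
md_text  = process_file("./data/notes.md")         # 'text/markdown'    -> read_text_file
```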
-# Given a response string, parse it into a list of question/answer/context dicts that can be saved as JSON.
-def parse_qac_to_json(response_string):
-    split_lines = response_string.split("\n")
-    start, mid, end = None, None, None
-    # Must use a set to avoid duplicate question/answer pairs due to async function calls
-    qa_set = set()
-    for i in range(len(split_lines)):
-        line = split_lines[i]
-        # Starting to find "Question"
-        if start is None:
-            # Once found, set start to this line number
-            if '"Question":' in line:
-                start = i
-        else:
-            # "Question" has been found; find "Answer" and "Context", recording their line numbers
-            if '"Answer":' in line:
-                mid = i
-            elif '"Context":' in line:
-                end = i
-            # Finding another "Question" means we have reached the end of the current pair, so add it to the set
-            elif '"Question":' in line:
-                question = " ".join(split_lines[start:mid]).split('"Question":')[1]
-                answer = " ".join(split_lines[mid:end]).split('"Answer":')[1]
-                context = " ".join(split_lines[end:i]).split('"Context":')[1]
-                start, mid, end = i, None, None
-                qa_set.add((clean(question), clean(answer), clean(context)))
-    # Add the last question back to the set
-    if start is not None and mid is not None and end is not None:
-        question = " ".join(split_lines[start:mid]).split('"Question":')[1]
-        answer = " ".join(split_lines[mid:end]).split('"Answer":')[1]
-        context = " ".join(split_lines[end:]).split('"Context":')[1]
-        qa_set.add((clean(question), clean(answer), clean(context)))
-    qa_list = [{"Question": q, "Answer": a, "Context": c} for q, a, c in qa_set]
-
-    return qa_list
-
-def parse_qa_to_json(response_string):
-    split_lines = response_string.split("\n")
-    start, end = None, None
-    # Must use a set to avoid duplicate question/answer pairs due to async function calls
-    qa_set = set()
-    for i in range(len(split_lines)):
-        line = split_lines[i]
-        # Starting to find "Question"
-        if start is None:
-            # Once found, set start to this line number
-            if '"Question":' in line:
-                start = i
-        else:
-            # "Question" has been found; find "Answer", and once found, set end to this line number
-            if '"Answer":' in line:
-                end = i
-            # Finding another "Question" means we have reached the end of the current pair, so add it to the set
-            elif '"Question":' in line:
-                question = " ".join(split_lines[start:end]).split('"Question":')[1]
-                answer = " ".join(split_lines[end:i]).split('"Answer":')[1]
-                start, end = i, None
-                qa_set.add((clean(question), clean(answer)))
-    # Add the last question back to the set
-    if start is not None and end is not None:
-        question = " ".join(split_lines[start:end]).split('"Question":')[1]
-        answer = " ".join(split_lines[end:]).split('"Answer":')[1]
-        qa_set.add((clean(question), clean(answer)))
-    qa_list = [{"Question": q, "Answer": a} for q, a in qa_set]
-
-    return qa_list
-
-async def prepare_and_send_request(chat_service, api_context: dict, document_content: str, num_questions: int) -> dict:
-    if num_questions == 0:
-        logging.error("num_questions is 0; skipping this batch.")
-        return {}
-    prompt_for_system = api_context['question_prompt_template'].format(num_questions=num_questions, language=api_context["language"])
-    chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': document_content}]
-    # The caller parses the result string into a list of dicts that have Question, Answer, and Context keys
-    return await chat_service.execute_chat_request_async(api_context, chat_request_payload)
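As an illustration of the line-oriented parsing above (the response text is made up, not real model output): `parse_qa_to_json` keys off the `"Question":` and `"Answer":` markers rather than calling `json.loads`, which makes it tolerant of slightly malformed model output.

```python
response = '''[
  {
    "Question": "What is Llama 2?",
    "Answer": "A family of openly licensed large language models from Meta."
  }
]'''
pairs = parse_qa_to_json(response)
# -> roughly [{"Question": "What is Llama 2?",
#              "Answer": "A family of openly licensed large language models from Meta."}]
# (clean() drops quote characters and fragments without alphanumerics,
#  so the exact spacing may differ slightly from the raw response.)
```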
-# This function is used to evaluate the quality of generated QA pairs. It returns the original QA pair if the model's eval result is YES; otherwise it returns an empty dict.
-async def data_curation_request(chat_service, api_context: dict, document_content: dict) -> dict:
-    prompt_for_system = api_context['curation_prompt_template'].format(language=api_context["language"])
-    chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']} \n Answer: {document_content['Answer']}\n Context: {document_content['Context']} "}]
-    result = await chat_service.execute_chat_request_async(api_context, chat_request_payload)
-    if not result:
-        return {}
-    # No further parsing needed; just load the result string as a dict
-    result = json.loads(result)
-    if "Result" not in result:
-        print("Error: eval response does not contain an answer")
-        print(document_content, result)
-        return {}
-    # Send back the original QA pair if the model's eval result is YES
-    if result["Result"] == "YES":
-        return document_content
-    else:
-        print(document_content, result)
-        return {}
-
-
-async def generate_question_batches(chat_service, api_context: dict):
-    document_text = read_file_content(api_context)
-    if len(document_text) == 0:
-        logging.error("Error reading files: document_text is empty.")
-    if api_context["model"] in ["meta-llama-3-70b-instruct", "meta-llama-3-8b-instruct"]:
-        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B", pad_token="</s>", padding_side="right")
-    else:
-        tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", pad_token="</s>", padding_side="right")
-    document_batches = split_text_into_chunks(api_context, document_text, tokenizer)
-
-    total_questions = api_context["total_questions"]
-    batches_count = len(document_batches)
-    # Each batch should have at least 1 question
-    base_questions_per_batch = max(total_questions // batches_count, 1)
-    extra_questions = total_questions % batches_count
-
-    print(f"Questions per batch: {base_questions_per_batch} (+1 for the first {extra_questions} batches), Total questions: {total_questions}, Batches: {batches_count}")
-    generation_tasks = []
-    for batch_index, batch_content in enumerate(document_batches):
-        print(f"len of batch_content: {len(batch_content)}, batch_index: {batch_index}")
-        # Distribute the extra questions across the first few batches
-        questions_in_current_batch = base_questions_per_batch + (1 if batch_index < extra_questions else 0)
-        print(f"Batch {batch_index + 1} - {questions_in_current_batch} questions ********")
-        try:
-            task = prepare_and_send_request(chat_service, api_context, batch_content, questions_in_current_batch)
-            generation_tasks.append(task)
-        except Exception as e:
-            print(f"Error during chat request execution: {e}")
-
-    question_generation_results = await asyncio.gather(*generation_tasks)
-    final_result = []
-    for result in question_generation_results:
-        parsed_json = parse_qac_to_json(result)
-        final_result.extend(parsed_json)
-    return final_result
-
-async def generate_data_curation(chat_service, api_context: dict, evaluation_list: list):
-    eval_tasks = []
-    for batch_index, batch_content in enumerate(evaluation_list):
-        try:
-            result = data_curation_request(chat_service, api_context, batch_content)
-            eval_tasks.append(result)
-        except Exception as e:
-            print(f"Error during data eval request execution: {e}")
-
-    eval_results = await asyncio.gather(*eval_tasks)
-    curated_data = []
-    for item in eval_results:
-        # If the item is not empty, add it to the curated data list
-        if item:
-            curated_data.append(item)
-    return curated_data
-# This function is used to judge a generated (student) answer against the ground-truth (teacher) answer. It returns the judge's verdict as a dict, or an empty dict on failure.
-async def LLM_judge_request(chat_service, api_context: dict, document_content: dict) -> dict:
-    prompt_for_system = api_context['judge_prompt_template'].format(language=api_context["language"])
-    chat_request_payload = [{'role': 'system', 'content': prompt_for_system}, {'role': 'user', 'content': f"Question: {document_content['Question']} \n Teacher's Answer: {document_content['Ground_truth']}\n Student's Answer: {document_content['Generated_answer']} "}]
-    result = await chat_service.execute_chat_request_async(api_context, chat_request_payload)
-    if not result:
-        return {}
-    # No further parsing needed; just load the result string as a dict
-    result = json.loads(result)
-    if "Result" not in result:
-        print("Error: eval response does not contain an answer")
-        print(document_content, result)
-        return {}
-    return result
-
-async def generate_LLM_eval(chat_service, api_context: dict, judge_list: list):
-    eval_tasks = []
-    for batch_index, batch_content in enumerate(judge_list):
-        try:
-            result = LLM_judge_request(chat_service, api_context, batch_content)
-            eval_tasks.append(result)
-        except Exception as e:
-            print(f"Error during data eval request execution: {e}")
-
-    judge_results = await asyncio.gather(*eval_tasks)
-    return judge_results
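A hedged sketch of driving the judge helpers above; `chat_service` and `api_context` (with a `judge_prompt_template`) are assumed to be set up as in the rest of the pipeline, and the answers are made up:

```python
import asyncio

judge_list = [{
    "Question": "What is Llama 2?",
    "Ground_truth": "An openly licensed family of LLMs released by Meta.",      # teacher
    "Generated_answer": "Meta's openly licensed large language model family.",  # student
}]
# results = asyncio.run(generate_LLM_eval(chat_service, api_context, judge_list))
# Each entry is expected to be a dict like {"Reason": "...", "Result": "YES"}.
```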
diff --git a/recipes/use_cases/end2end-recipes/chatbot/poor-test-1.png b/recipes/use_cases/end2end-recipes/chatbot/poor-test-1.png
deleted file mode 100644
index f04aa5f3e8500aecc30e03a97a4c78580419dd3c..0000000000000000000000000000000000000000
GIT binary patch
[binary data for the deleted screenshot poor-test-1.png omitted]
zL!%HisqFvt82;sB{-p`{AE=TDTTE>iA581f_+qGf@%e(;r~k}Pt0Mus_`3xfU4=Sx z(8R5EW9~xwHFCV&Yyy4SBqqU;<1 z$wBg#wgd>^2|@jN{y^To(0b!|)~dI1gU|Q$1*OSqIos68HL$>QE_u6meApC$vSVkx z(7(_)P>gSPt5rafHb=~I`^oKg5a~0^S6J{Sdsd@J=20d2YTbkFHGIrOCY2Ov8>mA< z`dx&tS>Ek5BI5%=OH8}jO671i4KB#n6p_g}%u>W{ z?%w#*dC(^s5OpNgVmjax&Rt{13#rIN`v4ZAYL{U{D(q`u%Xuq}^vD)#a4uT5H&WHo zJRpS(kfApknGvr?3Eg}7ZUeXXPY+7Q=bGI_&z=f>9WjcNY}lJ9kOB!5Sk2TCy**0; zh*PSgAN?S#?|*=;7eXF0DKIY#9#{W8EZ>m=hH(6RoBuXFb0wQmfzF@w7X3Bx{`3g& zgRnA%7w5Zq{yj)so0#Z%;Vzhul+Z6-NH}f0|ZtV$dZ=~k^mS` zpZBcIN65|cUCq+7?BI98W74U~bvsg9!$w(QXSSh8gN?1So7LR@LbIl_x52~zNa@x+ zI02Hb3-;sYgZXdNE#@xP%sVzg+iy`md<7~NMLLiM03jy30R3CS6&vrBG|C5vYg*p4 zbr4v!uK+*s)Zi?PPsNs*i$6r~eys~%KR*8D-@*~EMt~E2=gHsGr{jpepjsLcXE7M@ z>^Rd`>h)ho#xc{(BUNxBQhlRGgkl~Ci;6b>>72l`6ssHu`UHOa(KG^TiM(+>Ti{ zK;s!<_Ln?8Y3$Q6YAKZ7qDE5*=+a8ApH)rER4UpE{DKpTNSn;mSkRPe*BFY#Gb3}S z?-=_u20cxH{a+XMHm0^34@Qk~n5z{IMY@Sn>0%LXO30lG9wH=vX!lC%Kk{$!8v`>g zYL<I$;9d%m4@VT}nh!&n60#=_)nVmE~mK z^WJu(?@=B#W6^2mx*fw7X%=RnecaG*8r%F%e}QKg)cgA;DXWcr;vM=&r`!~GZexTU zeX^LPJB2bjkejlf`Mr%))Uzul8Tc&#rpxj&M#t57%R3o>Td+WYz;%YWKbF;Mt%CN0 zT0vAr+E)mW7VL`w(t?e8I3nb%x^HV$cVc82s(`d0y(O2HIUw!;v7lSR$$D=v@*!W0 zO1jWbcRL-rBtBrQ6i#)UdGj&71nRk&Zld35BH$7_nnC=!S}?$7XySB8#n zuV>m-`}c^n?A&cfu|aKr^?J_^Tdt6bk8U#xP zThj^N%SZ>fS#QIii8y1V3Zv3n!@E*Mmc7)NaF#DG?2aDlz=5Dy48>OFC#vMt)LYh& zD&P{!6>KjO5ycxm_`g5$4>G5?GBwBKsHLr2%VXgu)Oo6C9J7hkr{%^G%+e5dx;un} zBch@2x23R7Y-IFq=ILM~7u0~r4zt(21w%#dPG)EsddZ4t6`tbX5^}72!+u@)OBM5n z?8g@Z`;OUATp%vlpX{IOH4|F0+XMbvmLntFYTG*T8&S0oxyq&R zmYP?*Ej)wZl43!OR|UXj)C8x%@tAoj(uli?*uxfHj1O9909}9k&=ZwAKAbL?H~>+b zM2^`FhOsB4;4MF(+pOq75w9`_S0)x}t1qafkb&L>i%g#SK@6#K%{{84Z#NaLXOsDA(HaX)vLCvnnIZsWK!CVe1T?K!KJ)p8!5<%+%ctOA!--pV1^H_Z$ zSrZ3{ZPOpyo~a8K>d<^2VIqqDA96bXIHLa_C;4;UyFl=#O&38d9pQ^eGsa&Qb%(|C z_oj$4`CaSWwOT@My)|e8aFRsT4IRGkxzgPQ*@?VwGQrBgDEA6?y}z`?_41h1E*wj0 z{yYfA8t@d}9@^h0Yz%$)bZoa;3y9Oe`c{>`X zt{~RjBcKhp6?rxbC9w^cZ?1QGfN{nJoMQ1>CrfhxJ>U6w>f$JV1yr%L8z083GrX396@8k`L}hK$rR z)3!DL1t%7HRQs=e_Nz(NAd)`K+lies5U2;?6^ZdxF)gZ3%OQ0M+QA^e?x=rUp!v-R7tl84MN16O)lAa>W=e}4J1T&4s$w;slXXkd>{3j`I{<Y;UG&OQX@3 zzu!0=9Fv&$m2!8PoZj0NTTtjn^0m#RgZ^!@TJVLQ(peo5Z@Vl=*3GY*1&|OjoCe?1>K4;XBLBsdE2>iYG zOj^U~pL?|ax##+~{ygZxviKHE)Z^t+8&>!SO5zAl%tJIiP+~qRU)KNd7vH+f82BwW zVL>S*IFut0I5FPPZ2+k>oKmP@7ap?7^=QmYCjuy#6eKO0r#tb6ZW{&w_hZ$cE}H8P z@Sf1D%YOf9M!&O@$e|Nb`LXXcQO-@tYaJLlMZ!<1lk;2uci#T3|C?jP^PlYBu`yWN zx$wx8x&nnrxtwaSMeonRR(7MXMPfUQ$n9m!yIUgyRUsLEJ!O7wwWR&=dF@8N3Q0UU zx#%o(a`k=z+SqG@G-ku^^r9kC)LyOmIyqSHHGxXH`1m4#yK-eINacGo(*4y!C3rQH zMpRVw71O?aKN>~oYOP=lxTt6|0nCZQiTT%T{tqr~A%#^NE?<-hw^I}oN;8hK?~gU< z@a~D3G|;D6m;uaDJ)OTM;Rd8&rhP)h0QA(X)F#AOjF&nyaGuzAuNy1T`Vzl`qY?*F zl!PlS@RmY1#<>6O`Tb*MZ~|kjro`oT9$kQoYXuyaAP2U;frD{Jog(Z*Ji;e%)%zYUi0}e(*0{Bw_IXa2_v~t}l9@;P*bJ*vI!{||R-1TJ z{wOa4>GW#wIF0M-I!n9N>Fv(hE;ZGjSi1#YpSJKsFaLS*(qIH0jrNlwqlygytqkg`T(7H(PnF0s+{h)OiyOC5 zSB0cix1@6iwy6t#!uV-`s+dDYI}a_oHmE1EmDd@${x2gg_HnYRWw@4kwkp}i=6-Tp zM4daD%%bgTH!7Op*7Iw%y7q+!c$i&+ZH&qWJ!P z&=j-Tn5pPrNZyXozw1a_^B&7W;sTu-t(Y=D21R;c0u%9g3IQv{TRun9?~(7s1prQI z0&v(Oy=3K?p7f_A|M2uQ)K1pd_)R>XhW5(Z*==8XnKc+bA zo4se5t{r$ofEo6UC}y+z7lXuW9za=oz|ErmYgWQ|>_uD8Z)!2$3HmlqRP}^-wLQoW z`SrF&{|NxJe+tFX`B40X3QWI)H1aG=9GZRSA@yxVByt{4jLgS`-xfXdttqV3JwpHDa!`5x?1CCq=@WCMn*rxN=bw-0GH4PL{p^U z1{k$I&Vlv3yf@Y+|HMoo9_O({PfML!K=-Azl(xieyz}D*DWxcn1&7gZ?c=B19#=V@Wqv2@y&vLPE@G%f z1(%|UKxEyZR@Gr-ukrZAvh6%>+yZK0)9jGDABPi?0Dw&J@p|8zuFg-%d0DBi5*Vt- z!knp2djKvO(S@MRcv5!+v0mnWPkJ5zMA!xzZOl+fxY@8mx?jw2dT+Ruu4%VQ?yYBp zXBj2CXD-P^5^j|#vzckYVXuST8o?b^T91l&1>U0-uz_FtQ_+?`e3w!0B@BF)}ZFGAn%f5x|o5Y@Yw9G 
z0Np8B(1MA{7-i6Pc8_vqcjWFn-^{010YdiG7E5%PgDZi2+#fnKOz!z8;Pd>+SL&je zsFTNL)utuv0!H4q-cTE0FS*SCZJYR=9mNP$*6PON^4V?rUP@&ua%+*+-1uRy&oqlm@)SF%WK0S@Ui=-SY7yUufMVg*jK~H?742U| z>`uM)$No}q_Gx<~$`WQGkxzu7#T2hs)A1?xW|v=SZ}Y?W+vALPdxCb8YYm!WY=Z0A z0@q?hOJ;*LqZU$hI)LY`H0zBZPH76|Qso!o9oS&9TgrM^wO*Zrx;Ac|bd+&dQAUjt zz}vK3+5cqELiNQG{U^;|qu!$?>XWsLs*KIaZ85j)q#kM$urbXe2!<{CFWr(Q#e_KMhe<@|6-E`ebnaf_bPqdBJr zI&jy-9jA^>oz{WF?5zJR_}NU&9dR-nAh~H1%K^SvZ18~|YYz~Hm{a2PNE3nUMkI`? zPd!8qMq5&Hm#f?zJAzE#+kYXK;Q|V%cz+_vf%N4yfk%p|XHU5o@8rC6SENfCKgBlGP1J zL0(n(kxTlF07gY-rn)jf)?v7sp{yP)$V5Lw0#>Aek?Ws!$Fq`X~_B))`ysKSv2#zI8z+?Yi*lG2<0Yo6eGQH##qUk;T=P2v~Ov*mXcGqOf zuBE+FpGZTQjjsmefArISzL_C(I1`KF{*quVE|waex$s}JvjNl!CzjHBpV?@!E| zLE3XH!Ndyl;>x0|=k#4HS^5U0rdhN4B&xH@@x(kbKSbkwL!AkSwf^HZX5ahoUt{fS z;5DAOeT{t|-@e8(92|;RCWg65RDpQwE+qQHiCd1_?9+q&42C&YLd$#M=k~cr8*t(q z%N6(L=5kG#9{US0K#)q62^Ub_0m4RVc;DPf$eJ^NVW0;Fr|DUb)0OsPSu>Wb=j=fK z681$c=k-jC3?<9w&%o6y?`rp@-T;jdnXDBwFQVF}VmwbX^X4*bPYjXN!0TesP>(?V zgzhbRZR!l4fTxl!Q& z$&Z}1g^?nr7sRZ2aYBRJH5jL#U!1#JOngn2x1Bl|Nynh?f64akX8(zby3ox$Z!l*Z#e^tG zAFJ`vt2=iwiT?Eqt?XUDVNVal1n+A;YsWz0&29SU2~%@JTT`OmGjMMs_ay|&j$&!7 zO2m_V+Lp_mt^IYbqutWqo1Sfhd$!JxNwE@{a6`Tb!d+4RM;p*K!~il)dkMWKh;0Pi z( zX&WMg;GSEnVNSP8(=W~rbH^(EsAe(5l4J*y>G3gl8Ja7s(-0qTN!WnRZUEAgi@sg| zu+gQ;R|>js{#cdS)+Y{Bg*wjylX#z3Im$=7E0hus3Z}PGE{OJ^<~Or{A9|h?zISst zS?^SpySw8(@PntBIivCQ_viM)YjOkw6=F-$(Fkw!#*SQRhgPYfsX{#K2F7`I73AUw zZPSco&A8Y+w_8}!O?e>cHkK_+%A9ViXu~euHWeLBY0lg1yN3afhcW1r#LZ8cpI?mK zZ-Hy(I&-}qXL-`J771- zw>NvgJAZV4{2Dp6nZmG?WMpR^=xo-Xj4sl`*GKd9PW8Z)U=O*eA3OoG4pEvdigjVy;|Eicuv9@V2C%xhpc!VO{jK{3>O23HsH>` zeOV{G%fPfVEBi^w(4BN_oQ@c)Zcnro-m147leJ%GRFebAe;=l4c2oSW^=F}4gPP2; zyP?7riTAM;@x|ZVFOHaa4>$5aw}QKe{#3xU?+pY?wH0; z{^uN=xNw3u$tj*hwpW!Et~0t{;S>6iu~%>{gp!2$XxPHY*RF3GQ_=^nDcmE_TRO(W zYz|S^8VRp8s^1nGgWqGYea?txjNiSkb)KZ4{E3d8aagiuG{nd}Joc?io6T4%dtagt ztBtHxZC>KJ;&UERr;eMKN5!fPt%Yx$?WLb1@Wo}{tEiP}I3dJRTfmAtZWb0OlKHv? 
z2gC9KFW$Awo9iB$!a>s$hsfmDk?$o?t855#w>J^*W zOV0q~%SzhoRBP!pqcNP3IkhbYSUR$_=ya%%D&IS(GK!wTUd=R5<8y6WR#$qH>1&i# zJ@fW_+J5=z6lXd^P$G0c{_#=d~M0~xqF zFIsAMTkt;Unh|i)sd_d!?v%w;TYTZ<^0gYv=cDt$o(K9P9!j-VLj>BVbU;J$qtzUy zQWhwC>D<(y9eH?FX+s8Ko3v3C*?Z!@qXXsbs@S%oM$Ml5h>F)x5P^0;ZVVm3+O_jn zc8Lu)*QiFDLW(;UYR9t-`s7_9IqHuMT=*caAk!WFvXhh{f7Fb20gsu&R)JxooBNr< znqiw3Ekr-%%7OwmUU3jV)&gcSg|o-_DI>s^divvTWfyBIFpplps5hIs3FL^RPZCX+ zu#7dQBMYO(;f9UXdC)FzT5b4w%16)|gU>Sy=hE{%BCr8MZ_2NDN)i9+)Ihn2A$5&^ z(u_+2BUBZ*YqGswv+h`A=UyveA38}%;5D{!;g@?Zv}?P7~w_o zK_muuIYj0mlr7cXY(^^sUCV)7)iM&<<&4kjeh^NrPapqgHLCcI3#s3y(lE-x1lK7^ zBaHmYl8+cL^-{q#`u}FJ&<4EP#8~MhmCX)vM9-hCfg#~xNk_!_7lsFC5gh~-I^A%- zippT{`K6Zm1buXp$0B+JD<+I-e+D-7BPp*OvLkTPtp}%IuuQtzjyQZ#=Cde|r_^6d zrH{w}w+uebHxuuw0%x-g%+mN1B%GXnB_HT7ZzfX$dHxocd!w#F!DnyQYIt0#%99#S z(>Q!K?k&!<@*we#jQIO9T{{BM^+Un^9O5Yc?67lQ+$`afE4j_OCI31VX}i^!vHC|+ zH*z4J(TBdC)q5w6emupGf4oo^n5K#IQ(d-hhDQ-&RrHnK{9Z9{Yga*+%-ggyZvEg$ zo$VTUjrx-vI~X9@vEZq|OiWduNz)iDvSTaQY*y91nUGS3x7pIo==(jq?8sw z^7QKPM@0TAtEEQ&qT0-KaNmnjC;uBjo@B~v zNpswEI*CJNcZ`4^gHDrUa4sm{Pj_URE^f<|bGw zA&RD=@B;(U4a=901;9s1Z#Xj0*Voc8?z6y@=1f0|;L0nMNWqq=Y4{I%LJ*&r5404W ztpbsh>&fNREiW#&HJ>$@W0ijeL!RAC=VOT-MEQVP4QrJ3_*y3P7m5mJdKoVZZ+C#ex7@OuYvlgHxd_gjT@?n}oikEVQU@b&ntK8nnbbW041 zY{kZ*!R3CrD5}(fke+nD*mQ5k51p6@!}GPtHbi%aJcE|dipt4Q=Tik zf(MnK_*L=2m-OnM!Kz^h3E!Qgj;I>c>~C!Eje5kxC8(Gnj62)Kr3fm$kU~(2NHJwn z`Yc4PQajh=6&u!y%K~BC3;zodY8j;i^RwuAcf>ZspPhz49~#_IzIjdSw2gMCEwBLU5bJ{7xdfiMb1t* zo;N(&12h2TPesvzqMIA@nAC@Td=^cHDpBzgF1u->zBF^lzP!9I%F8<-qg-$Hai^5s zEu)*KGLebIe(onjMUaDdlo;2a=EE_zHZOTZmSBy;TC3@Fy;JN{ar-Pkk%`k~?L;@| z5Y<9@u9sa{=SbMq$f9udfszdAIGyi>6Z%WlG-@0>)x(J2C z`+-s~h%QUj=ybW?WBoUU_J0-Me-s0S9h!*;4&suT?k}`TOO`?1wm&TTqQsZjJT+A;$p(gwc17o3-&{A*Jt^_BnTA6wEc6Y{f^$zph8&N^%c~?=S^IwHWq6R9F@Mp# zHiSqNECKg$$Jz?ho!)Z>QJ(5{NVI4^4UJ{Toz?jDz9g&U>I8H-p;RH6Kl;F)6m;c~vcjz|V29-VQR~+us#Ic^u^c*N02cu6rFU$rTiK~|uj3&`x_zp%72f4+ zv}~*p;6jfNw{oq?l2ZUbEdQ4%5Am z(cq322Sps~<1{)Zhdu1zzK-jf-V}i4x@3zqC9k(ZA=H zov(ZfMuToJgX8xVwm2XTN$~41 z-a!eW#X~UY{4|5SvA5?lH@2!@X}=$FtehE;$HxmSoTzzTuh{z^PzyNw>ibaTe}Pp9 z*{@5tITgJs%+oSV&t8d6uRE_|WG{Ew`m?=k1h1P&>mYf^Y-2>CRQ{V8hCQod8s7jP zj!VZu2G9vnzL(f@|EkRIKBax-quuCk#U*DDg1SZun{CqVTft%EO( z0=a!6>DtkI=r>5LB4J0JdaCiOrTs+3ReXQ6DLqNZOV`sCTG}nLa$i(f?)3;gDLFr_ zMu%3a{F%L&^gmof|(XGpSE`K+Q42xMZqebildvZ2uM&)>;t>a>lYjIgT|X?CE{%I+%(y`focT`Q zo(a9(s6XDzcZ)b{c(>hWcdhZ}e%8Htv~LX+um=g2;$a20Grp_-RZ6rM z{O|44KU%8w+yH8S^!AzxmSkPutoJ_mOxsi4gi3l=W+-;8M{Uw18JMfbx8{(F7xt+_ z@=h&=a$LngVo}PyD$;>`+D}khvJ=9kWUDHnMVLG{K!5r2bmZ~ZZ&KkHbsb5Iq^HG* zYtHv0V9)bJBk%8&f>+fFpi&+OhaKl1gZl;YiN)L4Yj_-12SI+MzG-@@=Dsx3Wybw< zgo}GYfm;JV&1a3Ev&v;O6?)SzV?j5^HuhDDHuZ$IkdyPM&3N)O9*36Wl76v$3jxP% zOUmi03CJL#yzducAkJy?UP`Xhegw8bx16SKV3GApqVfw^#*UkB!D!E26r9plkIEEr z-n`I!c)GS3i!gGVIlXd+a_8My_sm@w^r&CUwNI_m4`|u+Uu@91%XO5e>OVM+UCVY4 z`L$4h$a_>j5o=74Bk81j?okq|8p2Fj{5fi6D-X1pcXBJwsTALs=j`MA;)C*EF$Y%>etpQPewRz4AU6>I zlrl$LhnI7p#T0(TBd( z9EHyub{4Ti5|h%K*kb2qOYUu?qv9c15>Ckbp;{lai^UbSb}ld zg7*Skg0rxPP%#lZGGl^PP6cxm9TO$(pEl}wT|ZGg1!{cpA6rp6%fz>SLA9pS7;k&X zIKLu-6uF;c6VFqZuWkV$z=Hi;-Gdp?cy60`sJw{2Mw~f9Q7JHqvY3N2QBohH=#Pfz zGnN??-JlE5(N1d1yOp|}@5rh2kHkhE94s?A9;dA)D=kikjhVLv2K2ANZgjXeRtD4wHACejYfHzI1d6 zPXjw&l0xqX0&l%*i(Sa+S5%r@I-##>fY0gp@)cK${ot_u%EnmaE0f2xER#4{6a8RjIFH~zy^qGtvmB34)_FLRP4r@Ei zjuM(M!!3gz;v#};Dl0Ev^^+-c^79kSFa1JfbcsJ37<%SUG*?*JB2S={vFEgA9pA8?}+PagUV! 
zal4z_y;<)_G%~+h>3(Yvyr?1}(1%rC)Kk_J$1ParU_&xp*b~K8VEij}$-tv?=#YBg zMI-^%06`sSvwtKpFOpM0l!Wu3GO~Dyo|rKOX#XzTE{`?un(IdEGNltusGzopiNjJ{ zIGP8}C`DEuCAJApt!?lJVOVsG7pF&&n)!lO>az!Sc&PpzMXnL|0q?W3W!Q{>qY79=K7<#8 z20Y#NCRK34__^=m9$21( zHO_1JD^NQBY5LiCBndJ<<2jb~&d-djwHlj5g0}JdWMHJuz{%J2uC#bo3bfy2+#t}= zMRw#+-AZNduwpL&G7OL8CpYj+JL<6-wy&eWI{Y(0o(gxchIUJY8n^bJqmU^s3&i5p zcBZ6QwZ%>mQMrzzr>wKloRDfsuGc*+LsWel&p7<-&K;KCNp*B2wYx=X(C1JiPI-wL zC3Yy7FYn{5com4wz#fSk6!qveBJPg%N1nBQAAHtW+7CK^LlDVPwuO;9S&*+DZWDYm zcVqa+dcd{YszxdFc%xQ9b6kAW?QN6Jy1oWY@Pij0x*Q zlRlPJ^%{OT{y^2SSG*x1VjeM5+?V#KCyZkBtOdcLjw(SO=%Q8?%aZ{~;<=cWj0Y>U z!(kt-08E6i#-nQ`D{&9B3hU*uT*B>qJwjRtmZl3+_5j1Ss{t_l>|Gc?Z_bCSsSsuLd_Jm&H%5G_x_#Q<<~L znlogr)_sJTH4(D6y-QA|d~C~5y!2I%3lFrixpnyK*$lh4fvZ?uOOe+Is@2e7VVqLK zgOp!rO1$<)KjVI<%Tl_9t3%7To^WuUs4w5 zt73=i<@c|(;wPp4tE2m8O(jr_=$&yshMzB&vS$Kd1s0(3yLwZfxOYC2a(2%=b|g9- zEQ7VdqL|bN%#k;In(i!a|8@s)J-cZ7&}F^k)8t?Jv8rdt*In(K-9j*PtSeekkHYLA zY6x8RO508=z8il}9@8GVqbFeV>?5_mGxA}!{O2vVLLYPz#eH|Nvt@LZ5us%ltUYe?f{Jhj_?{JRc{32Z$xP)&t4EX; z)^1Hxt!C8qCvRrOdeCCig1h=J_V#go^K{*etcP8c0=A9|lZZj@&jE|f+q!M(D=|vH z4Lq;r8%dXG2yCGjx<=^2mwK&ZEqTZAPx)!wec+Op<-o9FMBnt@@91*BG}SN{QnObM zGtEw1{o^X7PoFW3r+Gc=a(HgDY_FUuFaoE4nc0=4d)4-;gK0vV)mv9!pFF#6s+G4+RqE^|n zGn~1marX!CK_Nl7U}!DEEY!}JsJwvgm8)@6xA=O-y@eMdf-iNiygJvDA+_b4v#lce z#mvR-LcbQGU3z8R{PI<~EDRGhqU&7Y_l_GE^{m0xzK)B+`Asu>-pDe1IrW)ptJoyl zVS1pszFPAC9!WAEz<$Wd8GXtAuEjWE&KV?f*=ZW=-HZUc|7yhO_G4TVNC_r7`xxWs zwU>&R*GmmoOqd^a%l9dLI4n4G5(DvP)~dC2R0dldV7ImWOt&yJvN-ub<+pHEOo82L zeVW-F-$UaPM+w_wSVE4vH0ODX`>y0_1$h;;l@i(#de26&#v>j=25Ur#6E|zy%5NS{ zAlfqJP{|ZYh0H|m2;2=918~+OV7Chc03EcxHi*Pm~TxFZY+woTD#)nqpV$8L8T=0(Xk{n2$982&5{fv>y zBxj$q8jb@bfVmY2zV%y1pEnxHQuFgnBll^6utSkZc6bT8J%s-)2Dkps0RMV zg6}mH9Yx#TyXixl-XYhy-pyf*xx{xd1%MTpYH;^1CT>fBg#LDgILW^Yn15S^v`+(p zNsv8(1T)*c6JSJIFJJ6`Ir*5pIrsN_@dVVwEYl2}ipI6g6L(#0BwV4U$R|YwgyLEG z(24?E4Xogjp)9YZc!%!~bLv-5za$ux4}2YGdl##Wbq*QN&f$`@4~(O^aXi)TqoIqw zJxVo6DTHFQY#rD4J^V10GG{*AAntfAP z^m?D}*1e%)>qR{uiN3}IUY~O91`(VP0x12nIEn+BgX@!WKt#bxPmYFh9~}jSZnC8$ z&5?_}34XlU8-yNT(eGO&LKlMp-sN5yb>!3@={#)9|u^gv+zTtgrE>6cqaiAPTQ~2Ts=Aixx#2$yc{bfm z$;63FWN%8r+e{X=iSn8f_&BQ0?;3R;*>k0Cb^Jl6VUQkJq4Npxy@z%UG%OjqPutzD zm^12KdMM|MF2Wu#r8<5sG^@+}eBhu}u48ll3rcL@kM=9&;YGz9)QqX5&)IZ(S)HJR zDO#9)YSTtC_P(TPWG(l0<8wOe{BcRKDC($N7hy$R=GU4GU#XDI*o;WcO>1sVmfSDL z#d)9IxBbdw60|5ma^wAD!Si*Vmnx6bsGoFt6P$}&e|~j{e4p%~JO5+!L$G|F08n!>$E#LdhDf@Z->ZN+7)H$)y#2-4tH|Ii8f_|0g z4pf%Ja3@N?aM-7jM(uL7zkK0rs^tM%al;c|S&j(?v2;J zXVfst4>~T&=&{kQ0(nhT>+6_q6sU zZW>b`jHXI1or+xMbYfW8W#6A^1 z=jOHQb0xWk}L+~HOjHu1mo*sN&sK@pyMILiT zzGB^Co>uzZyiojEO}Tlc%0#9jY0qhbt~H;qo-dPr``q%$&MCYlY!Sr@@qPavd+!<6 z)V8$^3koO*SU{Q-yNJ?z#{yeGK#?XT2vP*8p_fnvrP}CKP>|khDAJ{O0wf{y7D#|V z0;KRQ&b#+{p0l5Q_FMja*LD20BrD0xnsbaf#<=fsPirS)ZeV94A^)=d1vc9svmS9A zl0PPFUZ?J@bG7U%w)qTQifgH zFT*E$qj`kl2=JW-7Vp&J_rIcw3(2yyN~uyxm~|2Xl~dbhA;C2C%N?C+g;{ffu1#mh zrg)5e$=!6JR5GYH7d!O6(8|S{#?&;t`@Npk>VZ20=P4|1|(TUV7 zj)Oqnn_U~!m?AKHB7CwLND~M537*VMJZls;W{7EuKpz#pNui`C-cU_p)2ekrn?s^~ zSNr{YJ&*$TmADI7pIX0zd~?ZhLJ_B%&B({R-WCZa;eJ7AyJo8Vy&qx4blbtIAs;`j zzVqxK{+0*nqzCyj0sH**`1AnPsTV6O&7Ir$=F*pUVAw4;<59((hR^OwmETL&W`w6; zUK4dsG7<<$Yko~Td%Z=JIz}LL#P>cSNHhqAHFLd72koC>FSsDy5R&)h* zKF$7_kkR_|(O<@czas(kUjwOyeURP-434mM{(|q+_L5U49)no;7At@)Xt+d;`y;pX z5fJV!XLTCTW~Tz_ukKMRxL<{iXEs<|TaGYq&d$9}^JVRsQO=-rQGDI}I$rRdMoYn= zh{;Tfq!pdTO?qy68v_j%W$xyiYs?409ufRr>+)z>N;wC}pZ{rAC{}Y_FUYgM zxIyt3#a}0y?4UnLyS&xLhZ;XboSa^CfJA?{i^Y9$y#g@&qBl-<$&?s~(sFz?FXVs84G zV0cD=OEp&`<*j>^=&VzJc{XFd9)I0J%o#Q=rf=|x=&<7r!Y}m_gG7uMLh{bq#6r_g z9jacN4DWmy=Oh1qYTw5CGTj<;LqJ@60s;_hkQYUH;G{2SO*K%>%a)@6+OsSPRkGu9 
zIGa|%Jn00|Et|)jVKW$)d@%_eC`Jr)Fv7aU`@h)yj3mDw0GBjNz%&Dvy`{L2%ntp} z&H?`4ueu%b9KCq4FEhNW_ECU2S7EAU?8;OM*VVk?haay#>RZd55I=?k6r&eR*$|kD z>Vh(z=zi^=!)Hb7^p~qKoIHv`Z?U{%MarwHY%s9}9{*N}{E>a_L9Z6F!emWu-D)dt zOkm*?4E^KHW*QXlvG;swSLSOTjMB7eN94PS#kFlMc;W5<>D3)li}L z!EI#er25szhEC1Ipo}zM@-;UUmk3|%%r_i|KtsBfIyO~JV#=dp>cR@Z0J@1p6ROLZ zGdxM;p=#&eeu=a>IN}SNA+RQrz{&g ztBF@sTN;`|E5Z(dWlevnd(aBNZGIjVJsqT9>aA0T$C&!mcC694(9{?puE`0zM)#j? z2-vd|DX4Heh{z0({tOx#^|%ek#otC9m5L`8QGcShGphEp(%Exe@?fa{yjEN7zrt7Q zuX8lB+=?4~Y+WAJbq33;P(NeAbe0BpLkvt4J1(Mexwrk0op~jLYh4tln|Z z0yADgAUH+ZlNE)CZwXM%L5;i#0l(DshP<5^M)}>KO}j;rH#(?yw|)cih)4O6?`4|G z%{|IgHa1I~9MY3SMiIoH$_dfleYIWvc~x4jE!a66mbIS?8r3Xf6R+))G9^h=ginax)?g< ztoePSy9+vtH&||1n`tIeV{AOcWz8)NI8asaC z#>}2DqE_JY2`;DiN%gkI2c7ThyY!Hb(25%Gxuh=SMc(;)Qb@|$KO8Q^uwEY7v68Y@ z-7a&ZCXRtC$Cb>aT_dsCw!$3%q!0*K@Dm%a{aWNS;A_XEe3IQ_7gi6x9V!BDmJqvo za2NEE{3WW?eb=i&?L<)DsR(OH8Qs6;x%?i6z+;v7=?2u}@Xi9oKQpc|$+f!=dM`Mz zm41x+MbcJe>};%oai0&`tb5YjD`-R|zfC=RbV$S^tmndx8WrodKP68se*|{zp3K4; z`w0p=c;pqDO<}tP4H28;+ghFT0X3QEXXe!Vn=+Leh5K)FXpnu6>R-0e;zkN#B;yL3 z?-43B%vrjzKn3##kj=8le|da>A%3@>%gulq-TqOTd5MOiqy+ZRA^R6>UUXZT`09YJ z=r9{E@78B{W6;=2?6P+n=r77Y6V7Y0a6-)W*Gc@Dv+f8l-hPnnJa_IHn=gZO=^j1Z zf+vge4^sOlDPjGyg;|*rR`df;?;BavJj_mYZp#GHlUrT$;|l;WkEX>voHvEynJv=0*jmdz~%ni?x~~{96B1efLNs^RHJ)aMu^8zz({{0vFxOER3ag zUm0`bXdIMAK0B_iyrEL4Z4FE;;3nx0(egi5|#r~;w*T?0$ylrq~otJev;j!={ z?~EiC$c1a)eJ|TjBwaR`^Y>aVG`N<2f~~kZeR`=EGG1_?Wp-YfFP5enb1O=|Sia)H z-$en{@jsQ0NS2Px0&)UG5jAd}yY(+a7))wJ&nwo-1YToj&$S$1mv9MtwZ}xqsT#X_ zvIIbsqeCO9zdP046L7wcP(qmIS?HTnh`fH6F=@AuJr+fGi|>F@L}4}acjXGqTr!GF zR^3PSTyiO+XmN3Et(5a%AFHfX(Y+~i!_DMd^^P>Q2X9s_Z^@q*Q-(yO<%&80#O12; z(ZhKz(N%IotkQNPvVeN81U+;k3)CQ009!nZH%J4-AGkH2l{o6_xD8$7^&su^mMo(J zz_e&P=xxKt-_4NVs9=N7_qI9j@2|yi`)Sa;4=2PbGM^QlQ@sm0C^I`Ep-cF5gyl`B zSS9c%q18PiHsBQU?e_;=-jO*8_2b@$QviRgqJnyV9pvoT4uHs>HA5E#{lYwz?h2Wq zK_C1PX91+&s(^7&3N6jWx?8`WRj)#bU%w4C#_K8^*M?&{LJ3f}0Sy^z_VEW@c>^iK zH@aNvbthcvEpP|g3rB1cA32X24)D;Y{<^*eVLW#27fkb?7qjfXsKIywWUHkH@Sro& zP8>h7Ocr1liH<>3t8rFDC02EOB9$&^xq6Epnr#TUl*@B22Prmc>bx{J~?%j>M% zA(WYNkNyqS-|BDeMh|ytr(SrY&by284IMPfGZ7%~`aD^o%thDC^t%YSv{ zS)aG7ufj#uEO zU-e%O5K`Uk-F#9nSdX8XLPMl{k>NFtQ)8*%<-FxxUAuZnn_4t8r{|&UmwPFDW4!*I z!#2ro>h&3Al^bGO1xuImG;!MU2X1{GYA}J%vTW^Ob(K>w)94X8%iAD`DVx}qaOYwjUKW#B_>;50S-ZKVSdqS!S7iC==wW@#nOu~6 z!bYG@(Pts>jMuGSH>bZFJAua~q*9KHKJedE262in%!{$kDP$_2_>_lJy8?O$q&px< zfRG=nNyvzMJ-KLyi|^mb`>9=2Q*BhkL#UgZuHG!R=Z-3faSoe=Ff)qr_N8D8S$+gq zr7*QX&Wk*al3L(N*_Z$ky`{v%ed{-KhU2OlUxwV{bwh_hS=x)pV&hqNaVw@{ zW7cR0?DkJ%Dxm*R!Ld>(|A)(im6*+E$(u8Jcjy+hA?q9!0RCI-I9)BPxGvFf)oVmW z*n-F1_J|!Q)(O|pMwf{9IIo@zIU?bN0H$%mlaS7$zKFPj;BB8>jE$hZOLScx&UCrA zB%${3&O1%2P=GY%y1;K+df22F@|s>(EjzM;bMME~U%e?6Y-V1N4L$>_>Aj_n1dw}I z2EOrouYUX47rP5LiTReh(*-5Y*#hUjcZ_fLCxHAHs=Ies1&}T|4nMM@I#!&RQau!^ zhnoHIJxr9eH(bd5!~ItE+&6@ji|hk;_>yrgY}hBea3NChM72~c$;6Fc~;$~mv+bvdL>(=mQ8c0f%< zm9J2tbRO11v=-_)Q?zf6-fccBFRrTM}J(;e`23lQrnYX5yAiz zfW|*TT6P?cLK1$wDX?q;KphF~o_eLY+-5b|?b=_t?fZddynO|}UmNo`ILgk9H0p=h zd|D^BRobLW-Gd`+1)B=vo-TVIEi%)P8M#FS6W!hTAi{k4*J|vaniX1leFQde_vT{s zC(N)r=xgR?5?dXo{0(#>KJQPrLL|&`u%%V8u~v zU5s3-*YrQEo1GyKz#S#OYhX9x#~bZ)&69vM#1+Pgx|S;8UD;S;LR`>xgc50gKpGyF zG?j&QKHd|XaHy$d=4@ZDfFI{YopJTeqwXddg9FkfFWakofo*d;_0uo2HlW~?Ynfqe zDlR`xyz)esy?D*|P*GRQxcrstWg8z8_p%I2e@ik`jhk4kc9P@RJ9SjjDlQ<~+JLSA z5`c9)pEMsr_1bN9hPc7NnIh*q<1`@OJba0`k)k@Dcq+)@rjB3Ay(F5yjr$IC_@3(2 z<@+A9Hq()gA~)1u+295B%D-_)RA#)Mst!3Aq6Oqc9|ZI~`@t9Q{CbKcnmwBgAjo|* zq5P}bDn7YszAj$kT#1e02b%;GShM6K&@VISh9w7bfkyBAx@v-{-BSgZ^ec0hh`(`u z`Jqvls$21P9Z)9>J~8YydULdSoHZdOsbabIDXVm`66n=Pjvg4NlX{k*C~UR>lV7-x 
z+xWcD|58$V6;Yuj25{g+xuI^JEjxkW&XW}{>(~l8FiE#I6~49+;-0~X8KlTKfpQ(kmAbx z9!L;sB6aRT@Bw1c6w)YFYnNXaSIPG0OZBiGUwzlEI8N#xzt%$+Vq)sj{!mi)X41GwT$Q=T`sJy*BFfgz9;IL*N46NTOphZTLX ztS;oSsvarV-RdeWS;cv?%_!Z9as%QVMPeKvya|$$d&nV9i<#iTB?A+*^+kUsod*nQ zLZA4%O^(4@e^Wtwx2NY2NPr*g%M5R0b}g~{_Vp*B;jW#Z>&hngrh9dooaZ_Of%;5b z`Xw+<%zJJ+=9Rb7NJd_Dhrn!@LHU@91cy`JA@;ikdj{vAPMJBKZA~JjoMr3UO90@J ztx02j?j*fke;t4|KsB5s<4i+*OVFbxC_9 z7Vw+Yz(JtjBW3M#7cEBzsgjc%1Sv9J(aio?xwc|g z#fVD(m@IO*SJ%6d_Rw6m_a=-=U$y;A}9@7PeCjv#02*h z5Gc=t$t!X?GKkf7faN*26b2;h29Zf z+%%ioG_8;#v@8Qu2y-@VMRH1!6cOC%+TGk5VJOFO%+>+sDtt9(av7&usP4Am?64IY zx#b)Xb*qB=q628N#B(Z8(cu`{N}7ZmW{qA!}+w@aq4<)z%yZDGSGO}kvEr$ z5zFok$DeuP(;BDl2$%jmXqep0Ij&U&=icxLV}uk|GiISdx_H;yIl|!;8~FefH_^D8`0^y8X-WU;u26&G9A@bD&6&n; zoWw(-b0d$pm;TA*txC6RrHyA;%Gizy3j(MNaRC(M2a9`06;e__NU9ZEMRZ53`5u<=8;KwIXB;V9nb@C$0Gb)%8XJ}N6QtD~UDf^P5%C98Y#=B~ z1V7o>K>R8>Wp%6Ka?ueC6krdK=x1&d>zM5{t*1+x4<@Ykju{2uQ-cR<5=SLXFDxVh z_WIL@bgGBjSs*J)biZwb$EG%ZQpn=uF1}nv5vmp&ztA}F6hMnbl_oV?mU+GW8_-V;$r+FIM(A@4`M+X}#aZ{+wP(D!c zDt@Y-W%(cpaZI+wWw;kP5+aMnb*>1(3=1^DZ&w81R=oz(mbOVy zjDA7A_R#Dtd9xg(tdw8Hq8zSQsCgPUFyMhgbZtDlDe5dcYIR+X9LC6n*fR&J8adDN zo(U$FxJA8zQM}am8dywYE7r}{%R7j7Yq+GQamY^qK)@>Sgq2G?5$;kC(DL!5{j`H# z%qRpEYo>Y=TgAM3IiH+)pT>8llsu3rVfF_M4ipnx*%C{$D)4Oq9Q7?)*V)P!spAt8# zN3$2h>roIep`gPdE~kN~pnWw1KHqJN7Wkn1x0RiJuONODUikC({*V6S4>0O$s@lRpEtb8{ z17$}{Jd;hAV$l&nLtg#wEXqz?9sjG?fU14X>(QG80=ixau^T6#_nV7i zUUZY+(hDh-x318{tVfZ_S06>xD(yY@+#he818~j%QL#Dp$fex$`z5(;CF>KO_4WZT zRvuPQ5bB_q@(QA&6k5*oo-7)QcqOIrp1ayXkfTXS+e1Ty=ihwPha;L+R7yd$PjjPR zfSLG_6OY}yf8+H3awvcN>eGBWlQgFtg*J>6_x{=h{w4tZ$8w+#JkhdAV}9-Fqkls% z`t#!X*AL7rWjKd0o2(H=hyOo~pXCGq!ZG|*&5HQzXZ~M5^e?ZJU$XJlHkLZQoM8Uf zasB z39k2d{<)6-uO7o&)8lsQrng2)|8J9b;xsUMCnPWE{SO!MfAlozp}^XC>(5XD`M(A13tHf10t3A+!b0QvS-HWgdO7(wvcT#~ho4`V1wWKuz0rMEGx z`H$RRgh`=VDCi;4H|r%l;TgsFxWJP>3!n-Q*`%+^?1zm zVhk$(-hzVe-4k}%5f>vMnfnLiu&*#ssC{b(BS7Tv+uN(Lt^s-vJyGK_bF~NVvjD=Z zxl-&2UGGno=t_M-yk3D8wA!(E&ts#mqbHFGhvpd*zli$qE*jua3B%zpw?B%ND!@2I zsewe7orjeQ{iz{f8A8}4>uHaUr=YlIxWDW)ftB&;xqv|PcqQrDY$6gx6bMj1WX)lf!Ffu zs?%>!g=LFyY=5fMjK@^rNj}+gw_j1iDji?q3v0s_Fp-Iog7PIFint5@q!REU0!U=? 
zY()hBE)xF`drC_hW1Fo+PyJS9wgz={THmsHT`vT)*|@L*f*@zk7#?jZn;WFmH3Hp& z@RsEAKj*Y{XU-wgQ_i*hX_Ed(NREH2Ih1uD+kDiQKW=}_J?Q6w8VJV`3T-1;_&-&4 zbw+dbA1jr;7MO~v9DbxypphUbbweXge7s=O5@;E_q@do31;~9)700Ful;@Jk_@ely zVFZBSVe{ot{rv;i9Q7r2ew&GZG~KX)AeOJ5=FMYXml)^3Z^Y5q)DM6Ha7FxP4Yhgo zgJvRZQ0vWD>+i_se5|p=QXirKRw&VHP$)_?7<>agj998dqXJ5#fCFTO(HjumVF#ST zhN9V+kl>vC$E)4ZY$BhB%JM9+6+%s--__&Y@ZFbAmlr-pAGm5R7aMDP5_Z$0KrR82}LVLGfpYXq-iHTPLH6~BbMaO7>iqiX=s*ZJ+kDHdO`6--ID;C^@NF=2L0Tixz`Ks=*bt>gEaOq z%B%e6d2f#GdgjMt0_;E?mc4gGS{MlG-Y`zCThFqe*H${R$e+Tw0w~E+eV-eC!NjKL z^q*|U1z54M8f_>3?9Be#G)|_|%y^+qM{`kpxTzHo%~?5Y!{@rg9_t)rRgvqdEEFo> zo%hE3yqu>Y3{PqRio!`+W+{NX@R|aMu}P5>Qo`Q(03db5Z7E#Br%$?}CM3+<;`G zFT(mui)7MBri%b>52zaRS80g@Xf>$)I=vr8$+_0sbvo)Dg*eT3$OZ6k*T)s4@@CzN zj((`|h+%>Hy-}T=dwT<5suI9J5~)MQNbZWHe|>(bquai4e5C62L5tW1>`J{QLZ*ogk%%NHpBX18d=N zNAJ#RE)zD7_L7j%;I>C|van0U0VUk3#fxsErew_*o3{L`jhXTA`^d*adr=$-p!jPZ z%euM5K8ayBg*UV@8l?1cnz9UoXw#s39zppKWn^T^ftDC{GvO2cydK}7CrU_yS$@CN zd$Zp&*iu`J(>Z`So6Kj%;iM3LmdhSOT3az^;l|!e7B6>qwDdrT$o=vMs6NTYnAN3D zE}2EDV=flW=#K|P?-VZCI*8QNYh%!qxJ4~>sI~$$=)jlk1AYJ(=&y2kp?a3!FNFkvf>axG&<%^U`HX04AaDYDB@*0OwVhf&K=Vz@t$`>OZE!Z^VZjTxj zDO$L05+l8F4i~6^y<8fw=ZwuXR@r)>CU2n{>nlI+br)1zSXZwn=BwbqX`( zwLMJjB^^|?yj@8$c5IrFm-Y%xI@|@H$H|kwBx5fB=rq`lz@x(*{9C)~1~fzq0W}7h zn;KS16B1VIyGM22CO(f9j(OLYaoK;Wm@B&iMU`7C?S_9cy8r&)QtmUig{$@8DXxWO z)FfIaZmVS>!&j=zXx{Ueiv#r-4RY2gC&QEGvTvOl75Gh`D)bHjNr)#@)!w{XI}p2% zh^*)f=e;L9dQ0ikb8BN7Yx^WX4=z08t*>6#?j^}mIoWg1PO{c|fmblrfM&sReRWxg z9k~y)F%HwLq#|ys9}7|~9euneO#u2w-SW@1lY7Xw$d!=wWmPKDt{3Vj#;M9yZtH9Kto^WwZu1C%g17D!D ziSGH20F1>aTEYR4_TLbx@{Lh;yJz_`^St%1tumQQxd*G#`gk$`?YnU7ON>COuLOa@ zIQ<{e)ieWK(w40R>*n}twf;cBN85EY7tWn(S=SU!z1PObK86x9K*|pIKkhcS)^Ukm_qN6J*tjGZk7;>`14dWG3hhN_yOOBJhwn{~; zH})Sp;_WvN-K$mh|J0isw0kNdM~XxDPl4mV7V_WfDz|h4dI3fqHkwr}MUl(NG}y)q zr(6WyIRa9PB2-_ZO{vOyG=1UsHceF#+t=vomG`TZD6f*M>St({`Dgt@`wvMS!8Ls= zIF$HF-dcFSc92JpYx?7)gPl~q(#EMJ#(v`xdHbe=T~jfeW$|py=ANrF4UgMQ*q)BZ z*@l2n~)VD%C$@x}c z3TzxV65VUe2wYYv`Ufx9b~3kv%D-oLkwjX&l8P%t$#-;~b3}(TW6*t7y!#{N{n3Je z9zs{6f|FFws*o{9p1hUVQE??z#&$&T<0>-k9@b&G37_h{7Vx>BwO^1?6*~+Cm8Po< zp5k7s?Mz7v)`fenbDpY(75K)$moplU@_VC7dRUqPrUhATf4svUr0)UvIYGa@iA+?%v7o z=w|f=gQ%);@2B6HY4(xu^s}EuI}Jpqf8Qi%4GqHhS6|A__#b7@I@Gp28UlmW60|l` zRiu0CG)Wl~x;IP-{gK_c1b~8jb~(@Aj{>dePsnI+$WaiPIF`x7=bc}9c&gf|2xp;_ zC;gAL{aNz)7gLjXA$fA`@bhrtmY8C-kXkc{D$N}GQoZ20u2o4 z_ooR&@9*2q$vMggXxgNCmd7xUKUjGok+l=FIgPR76UC+_AV=lDoHtu(00elRixvm-^TflpLunM;!yth3K5j&2gsK~X(0eL$t; zkWbw9gWGl_yp6m>Jf`;lOL{YM4$!jUwABp$iv;42RN}zcy!nB)dOAVt3= z^m8XOb8w7C!pxMFg=zVs9NT2^-J+EYU+cc5+s;NR+sSIRx4i~zEFeH=4_;Wo$ zGAA9zNh{CYqFp*4P0&b~I=k0qD!fg^DKb%?x~5v`#1bguu^*DN4s=0;91_+XwMxM6 zeR%~KyoKkNW87*q0MYCujHz5wI$>9;M~n2k=!YcKQ~#znjX~JS!K&H@!z-4b0NzbGxUWs2DdRE#}CX%S>yU3nbh2{GdLK{sLP;?B%;HXu0 zIUQ3&LtHuz=#M+LAQe-CaZ5%-SZ@VLY9-BAb@!XhOfEXvQZ>aF(Bkt>;0HPB4N?*I zPNzFMkDTdBT$8LlK=>p;XU7~BFk&|FS(owV4M%8w=8E^`8_KPGdl zcTb)u3#6_Ij{<(z5OEnLzdGqO{!``A3Kh+~Dle~7BQBk*I0_5lby3Koj7vK!!E#g6 zVk7VdG-uQj+>qHTpAE1|iBhP3b>hKNg5+sy-?8)IoN0t(-Ne=0m?C$xC_~#QDZtSu zBn`0My6jmw<S8No1I#pvvjj0*7lQUs z(A|d>^*&kdxs*V&aw$!zGn&(Op!r9h+!22Y`N`%q?TU^9)f`6gB;tabBad@HtodtS z{e0UnrVUFiAi9&&?^2NLK)9fz9li32E)~6klDv*Hgr{T4r~c3aXe0WU3d{bt6yVr= zK3xKgX5@L)-@#jNO_hz}Sx*&=Y*wcPpP1hQbm!L|VTyvkOTC#q$HU_0xjAx0%^X^u zBA}!5h$Fu(XcN+^A()Sr>z9&zr-&?X?yp7nMhZ2~%Z0#RQNtz5gZZ;h!pWMgDjx*& zUnL#9oHL*q4L=E&&tu3AAN*Z4A7Eir2Ix_GA^091CO^5mtq60$|z)J4#S*2W>Xx#(a*EvZi4!$O!)BpT}*twLRCcQ8)z%!Ip77g5#n_?dmW! 
zGu|j|YYGW1lg&u0Gpvctd$&s}$+b>1f3P0!S3j*hFL!5Z2x%)DfA$Egk))@dpLP zbyN2ifQ&J*m&;1x`MWwDxERu1kkc})H|`$S-JALg!CTH&)Y?^m?rBTQbtOaCEA^Yj z!?+`$*b!XVp~mZN*tc+hG}RZshFLqct{-;TaCaMDwX;;Qd9;ty)-7L*aV(3xWFrLH zi_l|fC;N{sFe_>boy0B3hS@pzG)xV5YEa5iwZl>Dx-WoATeOk^mMi5 zQ5j^B+X?Y?iyJHfTYsoBHfl@}cQj9it8xGgb3UK3cet&1MWJOwuV5a)FB!DGJ77?h z;DzJ1x)?4cmoyzn8iqKBuuHHT?jV_4N}aoSpb4JZ8m9@mp0zLIcys(U7|ov_Q=a@d zWmkwUttCAliZ7&VX$wi4H*a+6&u!!CiIbKwXRYfjmhv-+7kwL_J~bwGJ4iA$93Z%PioY^dp<>xIpFSpkvhd6hUw!*~H~ zUuz*GGlWM(7E%^8MchaOD;Lf#kK0esUSql}JH0CO^x$9xbb(Pz`puv^YrpPAwIus# zPYMm#|<*|-E}FD1WFRB0bRVlU+RwaAvO+;aSe%4V2C^(_PacWy);r{=zSGW%Ka7d zq6Z7h4wli@vYrhsCcb;DGGVN-E((SOIJU*BqzNBC49dz{cyL1=`Lz0L@5G&@UMp^= z7c&v8Q!SemX@j8vvZ@SB!9=NS9y3WJ&YEKK?%`=fk<*jG4Cp zhsqLXv~7NO)|va+c> zudYZ}u7+YvY)Jeiro=-8=$r+wJ(l+V(W;ft``j{s*#4e&*%=zJCYyx=@8|Jx=Isv1 zLBll5CaQpvXqivfMh(B4!WhJFOGQQu8-DAUt&D_6b4i|R^yJr-U7Z=>4j{+5Q$}4Q z-g0UmESQk(Ve%?MF1>&dCyU74DYCT1)tP_I?@qtE3$RN%L;L@IrmXpE3SFQV~_N1I|_WAH6p6EFJ?$o1mI#iF3o?A*)obh=1 z5{9fvxADj?0RGLa^YC<)*vHs%#Yk5^@gg=dx9Y)=pHAIx%3#=S!hY>~0IksGpZj&L zb4h}mcw!OYJQo$VF zaYXMN<6oj_DOj-7{`hmbURr`v^_x55B$mwR9`IDkRPG4J%K;dL2UOmLEGF;gEZsU2!7G;DbY62~*7fXS3^RrXLU#E_1VABfCuUMjTbij~3O#s<-XxLmxH|y#R+Aw)7B)e9Z$cw^jez%^YNo^4 zN%17i81n|m3X_B_78{uO6Uf_ak4MJ*(qx!(iiofX-1gTT=|Nn|vIqB|Q%cOW|C*M- z6G8Jd0ILg|_2rM5`sZ}~e(@uW8oo^ZbYZ<%#ME=aVXo^VSU=cQC+22dToGZ?q3c@` z`|bS<*o?{0O$~X-KJ&HvXM66Y&fi(GFRFX-Oui%z)4Lcc%6wC{qC0KhG95L@N;;50S@>c57o2d*8UzVKO1DDwoO{I%! z?^hJs&bOmJk+Mi8eOhLkIifSf4kY@(ly|q|gK){Pgq3eb@^zI}qQ4|st z{e7wxYBY_C7bXYB@&dxxh)tIco2G?G2P+w1`)S4hWo^GtU1B(ZhI*fvdh+Kv{lhSy z^9D^<*@werL1~O$MuEq@}_$Vz8!rA{~oQKOC$<@P3%esDs*OY#_K};FM(`#IYf{2Uy*E6^$k;JmxHY)=p z`~04iWoO^3@tw99840uv(ZeQ`H9q6wMk=iDYH#QGcWs63)kQxL)4|Ql?(;@-fC7aE z0P1jZQ{1;%yU9EP?R{g7OVf4IA9{>P7!g zJK}tb1>&)ql=Wp#6v3ID*js!3gU4=S#;0p+VZQY{9`Y(H#?V_WL=kgxv=oZ5E8R^C zFY~Mx4GJLo+kz+00MB(gYz4Hhohbd<;{&la^$8_}v8eiW68DRljK~|N*zM38n*~t> zN{>MC2Mz`NVnuf&4;IBf#G~|?-~e{ZGteF6ykmLO%-FaJaaU$NoydEJ&BWEpRl6&A zGGV`$ox7yr3c?w;N%KN+WAqK@A@H#|-mTeuv#%H&eD>=7caQ@wd~R92so& zQrfE}206Q7D6tuOBK?Eze#A6FnAjlnda4Kh?Od@hkNWg>_`i6gW$jEniF^6>zbx)Q z*6{Hzqdd+s=|Xg9<*qT~)lwp4T{qnOCPs-nD^B};rGylNOa`CkBXaPBGlKuE%iW5|<-Cz!Hc>*)6R@ooJ zFIX{5d+$>wf_;%K1Y^l(0f{Kk);`>I8D0S-YZmZbw=0S9O5n-X7AJSN|J>NNo-J2`qG9Sb45eulET9%jS?Bx=#{$ zK4faa|4WWTHA22We!X#T25QyP*2wFBr`*Fo9RWyZ&Niu#!316)5L5bLIC3IWMFQ+od-aObh`baz973`&iscft6ZraanP-0v0 zXiTUq0eqo5dg4rYHBzZP{OWu&TM?Y+bJF)DTQ*)dBSU^OV+AMdcO;uNpVuJOKw0_SxcB64m&oB5uJJZ#fzhAFLB z!|bhqnadi@ln{6kON7u>*j@gEoYfy!s!2ncR|H2!)p;3#D+AlN1Q z+T2%u86l?4WRvc3Kj;DOB*=d3$5V3vM{$2f47Iz zy?oT&#miylzr9_$*2%Sq?b(UES8Z5!P$-;`XHa8CvCiAEN=g7~rv1S}a_;*E9|IRI z#>m`FL9s7VaFd|c;HbJv94}omNQ!&kB9fIcRg19Hj3QyRI%u6b`6juFH`(nb?jAND zd7#|GuLnw3Up)CB;IfQpbGObV7&YCcit`3t%|{q`D^(HUZn>1192*&3IE2~Ijo1({ z(aF_HYkkz=2;8?pLu?j=ml5 zRG~NgbI)xFy;~cf;*@HwvBzUji~U51ys)N1R-)WJka$~bYKD0Y-&RNm5GXK?pkaE9)ZR88@Bu_xmS3+Zr zP0;4+v|4pa2*Hh^gRK5rgXH|m&x?&%t%C3o{7Q?N=>v*7u`FnE<~v(DSBxwvv#_x> z$#zPSAgtkI+DTrMo_|0sWil$VL4pCO9he(iGsrW=PnZJ3?g~Blmqlg8_TwY*;Fpn= zQlUl6K6rXpG~G)d!;m(m^!`$S{+rfz)aL~QTS?;kZBa@6_4R9Nu}{_o8y7Ehv%TBa zDBIQ8XvKbaeE#S!UF36-pk4GDiI5nJT8ZJ765-g)I#jILzQFhi3%1l*-AG*gbn(y8atwM?C1`NE}x_0x%_T_S}>0jI4L%jgn^_e%dB|3b8| zVT8VpZ)1uI%`olZLz)D2zWv$;`9;Gt!Gtbsv@Y)f*O!Q6x6`A#VtJ43Z2_UzcPc-Y zBq$@P&9$-&zGipeiJ%SQ3u#<~#)XJnS7z0AI;NY8`YXJojt@Gw2z)B3pL3q1u&Bkw zb%^*&Zbb@v?|eZ#ICFHigT^nR23>OT=FNhR!9r}^N^S;K(0MfRatB~X%a&p7d;33r1QnGu5Rp_-loXLpMMN4&>5?vC^w=gU zpi%fra>+08aUEVFZjkcL@ zzinrWOA^^xmz}W%wlZ+)W79!G7Sy=FGMI+Nq?P(8@9vAYG7a(((6;O$smt|tc@OLS zP5ZTUIQkJcWDVE@dB3>B>O% 
zSzCbJ-@vA6{QD{5A8JsNXQORh>Wbf8Jv6~qLlYvi2d`It9vkgv1%`sBBKcidM&J8x zLj5xDEO#%YJUsKWt^DrJL!XnjI1eN~#ahWt{(AKfJ@Tglrlx%KXo*h5D?z27a+bf! z%|ipwv^?@w z+WK1>;s<;DrrQqa6Umf}l@CI>w_6y0x@W!|0EJeaQpvdVQ#J%Jk8d0s8|GavQuixq z`~eiMFfM_-uF$4eL78tU@X~aT${_o$s&J7Zz^U6Wixl^pA@0rA|o)-h#Gmxqz54IQgH!_yy8OXgtpSn=@-2tegN3%qh=Ft zbKYlBUxojy!(R?z+HsO1(fOW9iv)hl&IEW)XPf|M{a{~ufsU0;UoPIKNMq#rHJl(e zaD6{;kW#?5ui0pGuQZX6WWV=|RgRTDkVJ>JXhf{Z?m~c4BX0Rf7b{q>7`9?8OJlpa zFGDNOu4NDJ(|NQ%LyL2@dWFh#i26$H8+bLPvjFP&?n;2N;Dm|F=dwA$iKS%ImK0?f zM9BoXIgFmJdVirJ7tLJpB2HitqRiEt2U-p{{OPnF9L=ahFVbQiD#(9LyK0>zd^Na~ z6&`9brd-vYtsC z!J$_1I;1W4FYEdpd-Daldy$U8EAnO1^ix&R^x=-y)tzw`J;H@+`1kH>Bh$-P#wW0A z1x=U5ij~;aV}E`ul-}KU^xD^jMf2fA9K7i(L^}KiE&el91$pGAfSh|EWlm?mztik<>{Px{2#U&SxWhc`}e7 z2Hvl%7)cM5NBL%-Ko_j|uh;&R82>^If4Uy$dIkNkP%o!*HP zj_~un_HX`fU_aa2v@t+5a-t8Gn11>4U(Dj4Y5x7T{}X=_YXh%>*@e*jq{RNpC|cM zwWm62w>qNQP%S?-rc9tjZ={rM$faMcI7eblMAw0_@Y*UlF=(T{ypQr&>n3BNJmC;3 zF5(5jb$A@}=W{%GVLu7<1~)KC4sN}%G28t0{VXj3aGWkPHXhtAc5zpS|2T<^qNkCg znY%j-;=hp5X%|U8-VZ&wCT0*xk%G*`)w+*5(qH8Lnr2JUdHuxO2OWHQM5SG8HvR5^YCb7P;)7E&@>zpuMgJH>x7Bf@}2>NLt9emcrQUHd7Pm z)>n5;pm{z5XRY6&YvcfxIdgc4l4zZOuaHb3Sxx3qh&rz2D~JE}FE!()j&F+c#Q-^$ zrPGoJ*3v`nD^c%(vhSd5<8gPYy~id?K)m(6e5ZUyHA!m16U|4^G7FiGIDuLycJ(3T zQc%JV-+b3|3Q&#;H9neuKbC)e5C~l89xWcSpQSNYf9UHx+eF8$slImf;-a>r)VGc2 z8RMmtF+aR?3VCHpNQ58>`kea9w~Cws%)cuXj5`(=gmUI;kjQG5^VR$Nqc6D4QgOZ&o&~vyUVhgoT5cw>9$m z*;QlD5N!e%pv#PnhP?+691gd$0^;h5;30z*VCgvP!Bc^kjH8#CW0LulVuWu{Fp9I$ zgLRQ5WmL5)hjtJkH$)_G9tqmLp9_9Db$Jq?V!FW|3j6+jzaV!V5D_wdr&o>~;p90v z>R2UPy52Zit4Q#YH~=cE%U4nAWzhi@mps-> zQVY3iAMXceaar~h!^^WEFV_yRDUQ>TgOjb_Zd5{&s_2Ye!~ z)kWY^092AcYnC$X(&i896y{@e>4qL_oo(zghn*B#69Ln%M<+wr=x$5$c&v}sVOj9J z>7$l}sye`uI|*28$;Z^uUii-jlN0SD-@La4T4D;GxwO~UED25c2#N1BYfOJUlKgmJ z^PICm$=DNF>oUwW+0Gbmo}jYV?18?s5=hAP#XczzfE;P~O!s!BbzxJ_bLnep?Hw%g zEKk)^f`fHl?R%S+#(~xdzb38E9nbYu6xzQbXr9mwV?(C)jxSx@{AgKdWFN5k{nIzU^&Tiv&n^U zEV4juZ_q0>K)-x~KIz-`U0=ajoU=?0uOe^JV|N3nCA5)-bc7f*>jv-^7-c{1k|8#O zClmChfIIBm^^!aPp_4zUK&|N5Yr^p&+tA^qADPu3R{&4L&E=BGaOts0D7LD3sRshz z1&fXKeFpo!+kGUG^9iC|K{ZWC^1eCd*~T>8Pz|TGnT=y8f6*8meS&Dp`a*0*$C2ey zKDagXF;qkFg$-~QY3PDB0)K4`C?9W{*KnRwjXft9+Cxuheb%$>(b!+bTfI7R)49iK z`jf#s7igRaNc+4ekYV?{)-T#9(V@wk@4D!5b~+UsX&6_spo91H_G9c8DzSyajXfC;zYf9STO#j||zqpCouo%KRl-apif4O8oT={1+J63&DUynxtmKEAi*c@1N2v{K<}eK5DC- z{`CUIub8g12@gpZ&Ulh77eaubl}Iv;A=8`P-pS(;SIoMRA>cxJ5O_oJ;SqShzo_cs<`NhfiWYYgRMo?7nvqsVez^542n(cRo8dPXh`G3#iA za_%Aal#ovdhK6?mjtD}WyLlx6_Ar$zw+d3hUt2@iC7-15k&12Fxwz!&wm7GAtS6^_ zw$l#gb$+NClfvLP>Qu!;;Yo<>Mq}h60&bf9h66}5iCH}ZeW8!p7zot}JK-j=s-6pZ z;JfxxDSIY&4Q@zEbs#USSC@DUv=`|ZFiPS!iZ*r^kymF!76Z}GYfwQC{13|=HEuu; zG2%`cX8o!Ef~X8>vL2^P_BjY!wk*ZBQe8UElFF&q^ll?Q2YK#1T!%OPtj;EW>f@2= zY!As3lhjN}`mzx9M;{)3T;02Umh=^Pj%21}*fy~6`biq-jxV$6AcRW3wcH)Sz6u1* z-roSc5nFj#%cnqYTC)%lo-)A=$P0ZCr_~ zya$U3{3@pw6P(R_XDp(OfCAh=EH>`gA1I`p(w%2Sg(Sc&9v)d#{g|f|3xsb^5g0aX z&UCXV&njR6)JgNVIjdH1PKaow1K@Si14IfjnPwbZa?__iw5-1$%r^K%j#N^9fd|10 z0JD78x}lM-_Nc8mYIFV+YKQCvJc1l`7-1zU*NHUsl3GGrx3l8jNraY!$m`d^0QBjyLe2y zfEGKZWLl5?hdkv>sBzYfW+^rH5j-5^QQ;rwBk=uhI=S4)Vb8Il{ zpe48aW>3A<+G$ka`KqvVXIvkk!h#>?Ga1213fgk3l7OPj&I z9Y{<@Q_&@9u;oe=Z`b#Bv3E55hODEKeD|Pth49hqCFgq6&b-d#V=N1JYnW^D;|is& z30OO(tz8v}%;QS|tbhx5V}L|RJ9#C+erjq}b6d=RcfgR{=E?-!^Xd!XMRfd|qUl;~ zgjTMnX3q~VjT$BK_)}gRrupUZA11katZf1AZ}jp0*QX<&qm6D|3E~Wn7I4TLZkz_- zJjzsoU&fq|ZR-|T`r%>O227;0I=Wt;&Em%$qM7cd+KIAbtM|4t%~lv)Yb490r}P@u zGb#LJ0&p~v7<-co8!a^(4R7Fe(PcVuP>hAoBXCE5Z^tUcw5y+YwlN@7Dj2da4yb&x zdoyh!kbM9*rz_RUS%5UEpta#JB4ybnu-!0r=#Q+Ms+2sJp1^rQ(C40d0?feRbhR*I zCd9ZqUs-7Qrkz>e5})k8dK6#%bMzP_)qWRKDvC2DL$ZX3ufE#|U9huS4Es@kPcs6X 
zd6|Pg4BWrs^J+oTYa<|n@wM#{W2J~=ucLIm)9u0SG08Fp!Qk3@dEMcR%&Y`}fo_E| z8P;+v7IE5`U^(Fk+V%3q_gpU#8$&ZF0G=7h$&+Vq1mc6^_)761V#u~<8G&LCDVTYY zngy0$1Fl_D+2yAQ^$oZAxCM7Myhb_++G|=>?NshwikLSl*AKOFHs=3|6Q}%X{g# z+z&d+zs$5pw@D({RXbHQGf}Iw16CagXZs9$IOaFsT@=WdsSBb9g~V#;iGZl3@JgkH zy}P=LRF1u(tNFd|rin$`)X9_wmlGiP&%@QSkr;802^^KApqL>=l4DhuqbU${q@ujo zM?H#kmo9dC_}H>DCM@whW-(Khu&K|(4F)RwRK{H9=6P3m}@Kh^b2!pY5?YE`5CYE%$0HC$CKVF2Sise z!nS^g$>U#>W>x!86M99?3L9l)+rlfw z?I&FVmG7cQ9f??tw{BkDbjB7K*+}|m{oR9&Srt_kI?@)F)qd`*QJ?J99Vwi+U>k3S z5?&y8{(QXu?}%g#A)6lIOd;1HYF((Q)%}n-5EE}n`>LHYwCT;{V5GKzq(Kb1nbgY8 z+1i3GH09vL(AASJ$p7G=w^!QE{>3l&$6jh0`hGyb9=SEyKK_w;{&~e$mgnY`5>a#@ z#JH`qVm~934HG9|=MO-?RJU+sJ{KDgG<}}z>LY#LFpUB*+%H4Aru4ToQdDlptYP4B zlZ%c*#`-{KJG{PO?iDGV&yzDZne@w0t%?9Y<$ct)4RftLE@+)+)FX{3nQYq^f}fvt zwlTpsB~wvI1ziw<%d}b$04*)7sMjY^AmZR4x#2#IX!9^6wPHErBpBK?`pGOTpJK|~vz!q*|liqSMA|T7+L`$=7 zuutiStmWu24)pKHScaJ*MhycDv_M&~J)<3Y!OR5z;#EYxdr$jo&fAIwrT`8%zw#$k z+fN4s!FLyjPW*VV|Fj)sS#Lhsc$2$`mwQo4f@X>I476klqi)h|s+e=l9{8TX{0Xmul&|xi*@3TKf@HrQ6BZe<3fg zbHt8Kz^r@gqw3B{pcn)tLM7&ry@@kagLwKKLKr2kwKXYy^ry^UW09jBD_QuKX^AbL zbTf8I0u;IDvD6~p-flF{0)2Y}IRN-s-ySh#* zFoW}0=UF&WibuU22NAAJKU2(6%JF?fLpp%KIa)7y&QnhN$2;*p88angd6T`5KL~`s z;P0#wreH$JxU@Gv{8*P*#Y{ouFQo|d*<&G`9gRd)eBruLT|}zVNg_1(wkm$>M}pEae&^oL z^5OKaF%9Q~YtS+Q0kf(i@S%v^Z_c1*<1kTey-K61Cpsma)6KJ86hQQuk=c8$WlpQq zmk5bW@SgeV7pPb@Zb^XLEx+YOQLIY3ZeDXRHZs1EykfN|j6F~U9l;{}KZaVzzipre z%<5d(Gr1F3z(n5LMEEZ(#62XahSwkuwwwCb*l{TRmz-(5t>@srNhKasO$k0KOHxz48=A><;X2Qc?*{J7 z$6TR4*~VSr+EZ0Zs7^Ahy(E{E5R?wM&hGlqsz8#!e2lQ2_kMwkVr(>D;rvlKHo%NW$Dzbj?LF1D8I;0qf5ftS&wjR1Z%_{g@M z>yJzS`;~ViC;ISqSC`A(R?gFJkD+LD3_jkU4yQSGoWSdfT)e1^kJ>u`%~uOR4y_j3sq;fKvFK_viYr$lyYJKHndvX!^4~E4+^rL{ zr892Z_mC9@`3Eb)Qps9-oZBYwoJ+9J*hh?`6+5*cnS|{bb={_1LN(aG#Ax~K4A{GC zj;y$ejE%VrA=gaUeLB;b8T5lld}n*IuNgNslZ9w|gHI5w3NlQjomrYu6>J=Q@e0A- z1n;&-y*_VYpVikd7s4FhXXxE>m1qLW9OU1^)#h)2(;2jW0di1c#d(VoBo=1TW#&!F zO7|<$s=FL2?>(Qxjtx&PH)_zG9T%q~9I(!oY3f^70KE~qtrhz>I>AMh&JC!w;_+K! 
z9y-xC0nGcxI9jGI^yB>i+GwXJ@*N{?0AWzR>reFR+1ND((Z4r+e!R?F=j8S&170n5 zLdH2A9pWZN5S;P=So_r@?aPFU7=1CKZ1w^C%%-1dbxvgF ze21A+8SqZ*IWFbi#Dun(HjL#hXUrbE_^6&?li!k<_!PN7%q}`Q6WwU?&<)h{X_Dxj zyzA=FmuvdStv@oH7Qg6*-$bZ9XcV~WVQ}F0R9I(|Kf5r!S0J&Dv?}>Q225zLZVvKz z{bC5`IB$QU%_C4pze0bgr-^P8jT9Dkf$tX4vH8e*`D)$8GP1k2xZPZZp3|@&{7C&u ztOW43@J^|rVYO6pfx$h2c-OkC|5k7g|M50enj$Xb#_@c3)xgD;%&fJh$WkIV(WT~e z!vS9Y7XTC1`?hb>U=gh=TCd}5v+2F7V9%cug$HzLI*&(wkqu9grTIQD;a;EM)p$XQ zu49JL0su-o^9{lltQ+Fgmu706hcCAfVM`yqO_#=EJRdegxE_3jz&Xd<`4XO=k{@y@ zi8JRLi}9HsM7s1Y5%*QKIJxvIkq11YK;sFRs9de`ZGUXRvIOuK*0e_;zRK-CybBnh$-oM++%3b1Kz`tn1EexxVn% zVmdltXqBBSrtGHkNi)j$)WzvQZ{J}sEqfYnH@e-KmV6BFot!}|MjWgd`}Uzo@tGDq(2$W zucq_I-m#XB3;f^4_E+9V3!pfwYEz}j{{H1Zd9n193cFuz-_O^C_+1Vn;>%Cfm+IHI z{9+_er`!ksI`aS90%?{4Ae&RkBuxMFRseX&oIh6fCzJo>#M7Pt&(TUA6a8Nv`eSU* zMh*Y>U00hX;&OaWx&CrQznu8dl8WQMSnFS}^SfFg0a0h~+|sXa`PE2HIC@b2&$}+| zM-Bw#Hg0tP?^^*7;Kg4~{#O(KhX8-Z?tfpce+ckrr}rNM{1v^^{6m00qosc&;LphD zcX9cT1pEmt{r?~V(+sC2{!ZZf`tjU!9NXK#bx*f=4d{Y9y3-!k`9W3KuUFIn9>`P4 zK7!YOH{WbqQcT;p1y3^ILAZROw;uI1cZ~hhflN=J`<(y6N>*%1s<-7hc|T|(IPQ=_ zu?Gfh1Om7{!swU2rbua%#-P~u+S!Oj>RIT<*QZZ_!gOq}RZ;8og2SHA6<9OKF(ihL z2r<{PDl6#2A;(efI*5nc4K?k7V4_1td?r2V`nujgU8Vs}){9pu_Ljg-DHdN( zAbQssfbtTpUKeQ&>mKjK0VOCpHKq7(w1_>QDZ2NQK;h2;Fp^o@(BktaDXCE0Rq_3M z$+^SV82o-WG$n~SW8(Ne(0f8tasV}cREh}20bI%3Y&R?C7vf(?xR3JznMxn~*3yOb zr*@&p7gkB&8^fhLAGT{<+IS2vcK|+Za^78tL?0aCS&DkCcuL%0L(x%SUq#1*2Q~CL z8c6%A9^o*J)Y7J4aOl2#+otkfm^T48*);tnAX%wZ%)nEH7y*_2#+C7mH^4^&@N)xc zo2P(a2vx;;;rNGd&$hXa`sx_LILF-O;GgCJzz79|8R2m$K{n#B3htY61+XT*wSNx# zC5emp3Qc(Pj*YD4Y;>I`G(Q_PpW5IyM4qxrbibSad=v(_G7_Mjer4}aM~)egQ8AyH zSpk4AM28wvPDMtY!Se!gM)Bz2lV{oews}gq(w#eK>HOy7xrL%fWIYUBFm9$hOUbog z<%*T}wF>Xg-=G)ux#tPg9*Lkz^cZLfMPS^mwb891Ye2@GTfVPSlK9opXxj4; zHr?-_O8|OD)__i2 z8K+;KqhseTY|b`+@}u2)RtKc@a@^3jX_!|W-v|7VjMaD)_7h%RQIN7?Eg$m8Y22;0 zWRj8`SxuOHY;14||CXQkdUSsb3}a!CKjR~AN-@G?=Z$Y5ydZ`FR`W(fEF=mQ`a(Gq zn*u1!;shbydzi(%>E14~tJIh8MFS38UIU*m!_%K;L(<;PGU4k&brHp*?rmIB@b;&w z$FB`4M_dbD)au8dqOmS=YkSg0sq=zQDwqh^R@$7 zvB921*ViB}^hM4%*bin5vC*z(jV*P-9;C$bTV3;A+Xnk+2|G5e6;q`qxhjRduOyff zp42<1vd<5Wd+H1Cyu-OtDc_vt4JP`o-MVIHi4m5hFovmjI zbiBX5Sw9O)RGX>x%d4|$s8#E~c8E8qC6R!t{kXfdi)4So%m3hUX{^3Ph~ErHGv-LU z)Ikvk>QtK$rxtv|d+~YJ#AXvCF_^#p@<=8-3+(X{!Ev%1h#M7@14-o@-xf&`51!4N ztNF+;4<0+=0-&BI6>Z&<=3z=nj6mR7?E-)S^cqm+{5`2wWG?>ZlkVtC3!I>ljQN!5 z)n3)+^1TiZnSz7bcBO>dHz@Ya9_#_(^_zkv959<+vp{a`kqr4N=UD}ou_VIN`{JHN zXP=}>?5!fr#RLI6?R8Pi4WEPM8X_X~lA}&a;k!W#14D@%6k*iqb4BtDW(F?yc0Q)o z=*+mlfhUuo9Xh&S{5YxaU&j(#+Oyvns>dbKy^wV2 zi^&Bt%;@xsR1!rSqMB+isCgMK=1c=wQ#@)`ND23oIv(ZM^%E<)wt45Zp8{oyp3XtZ zSmK!q9Jh<=&Qgy@IxJevoZ7Z3TA^tSCJ1m=r_BmXXRJ826i<48R<$1aOqe(BteM%=n4=WszjCxr*2cG6Pt$@-Z0a&n95_5Q^Q@2^w*f$hY-H2WG!*`YW z_E`m}1Tq>ZGq&ySnsy}>0isdTjqZmY>nvEl#^8@yOk^g8#YyJBZcUIecN|0FW8OAR zZNeO@&BKX7T$!#hpL5ZAEUvCR)=-}__RQ#}l_dZo*`DEEg^44AD+|_QUiof2Q#mcW z>v*Ct(FNeE>tfSGb#=0f$+6<4r%T9FJ*x}`F~w{(BI9%{mqj**uGGe+GV9DZ+l<>e zE}db9d{E8D+(FQVld1JUHgO>p>*V9$iR5z10=rc7QJzAhBN zEjTJXieMeI=oov>K`pWMxD;C*P$2RWC~~v#ZQ6fKplVubu&Xn|((LB8vZ412_9Unz zx59*uu|wqCo!1cb2sfDZnvs*(nwMHWVthE&e>Oyvt;N{6LwP#5WTK;!zUfUTIO_?( z)cxJ9lEeFgVdozQZSG+(Sua7%P4q0?&8kG)S?NuW?6)Y>_n!8;?L8pnWJ!rJ=wemN zSp8<|4Se|*0IU^OfuiclsUS+k!itTp+${yp7j((XiDT58pj{$vyw0&S#awvlDdcWK zs3)SKYyQGsQZf-PU@$R#kn36HjcI7%TV}*hYEb(l11PBtkxP-{b79&+kk{8X8Lxgm zRu!ONt0VF~V@b0mMuJCzE>eGg{of7_pzxEb6xM7Bx1@3GJ`^vTc*?#hSdj<8a{~;7 zWEL)tRm&Rv=CY{2exd{PW^>rA#b0ZGcdxtIifaS!c2x)xVyisPMNv{=lR6n@;h)*{?)J2ZUW|deN&Qc}>J} zpcX1 z$i--da~ix?{P$`!oC!RiXjy<-b2nZ3mS|XVnpH6%PwfiDyUu8%RSlpH(>(O6iRL=9 z=ncXKn=vz5E$lw+_gWe;x3l2yb355`4IQtd3RJ`{>8FbxZ){xK6WxDV{OX=Tq&Ror 
zoq3QV_kgiby;h9*>pMAM7SG3}0wp7EHrhCVQ zr4XS)!#(Q4R1)G03IOgYE^tc4h;@Q}8v0Wd75H+62jn#jB%~yGs!K-7O*TBU=$>j(=;$UAjpAf(-xt9j>9Y5=G=KyofX<>vjp~sFl$A<+z9@Dx~9yI7w zqMt*_Qi-{{*F}(sf>_prT$fxSE=e(==}1moehumhmBZNy2(D~ne2Q8IQLP$Ky5(WN z(p)#vo+BQOaE?R@t2T6ixuoV_d<_5CW*20E$?_!B;Q>@$VTj2h{f**F z$7|@K9ZNANd)WnIy2+MK?1Tq$Jlv#PWzbb+{~W0-P(spRYu~3kS{KAEG%1`FroZfM zln!L0v~WxX4rWGc;j98-=N{3EkA_9R^2MTeO0Q;>w<_o^D-RknyAR=N9*aqJ3!ks2 z4egcgC1Oy`)STi3M@(>R8?KXra_n~yzavm9%`;s+{3@HKhW1$R8eFPWEJ+&nVH|NM z>`7np2_In6lDq1_YysR zx9J>Q-w;3mqU$f=8RwRYo*}$Re!fQ}O6*mysg7*9>mVEW+hwc(J2W-B{a<@d8S!CD#7US#kb2HPy}l`&+qI_ zsSb`Unw5KdtEb0ESqOkpgnj=#ayq_-u8dM|=qgM}`66$a_>QwY78LHI{-`~f4D3cZ zYF{MlE3sMd4WCrvM^s??0La5=$!=6UJBW@&{MB+lV6X!z{;Z?P$*;~!xKJ}TtcWOY z8X0G~uRWRmtivvgmfUz$T3LsYuNFtX-|;yF;*-{*%XI;3C7aIvvOdBCllst);a_ud zeG2URfHFh!TbwziceSoix6l!+M6BK8;#+#9XJ$^~I9U^p zuOV`1n{+gyb}X3`4KxVyN4ofap$#*mafF(HS7j(VgW)kyMiEgjRf72D&bGYZ@i^uv zy28Zu(Agl)fZrj!pBe&paQ=W^VV5;{%PSk#m0dR=wjSU9kfJ8|MvfBf4$lR!>8A@P za0WcALMhgs4;+snff`V4jkLYyuOSLLPgPEx|C$WAL(X>!&zAy_4+#e-&4#KJF5x`; zk$JvSF#u96DnxGI#W8m)U`{(Jbh+))?C}KO(NoOIX~!9bXlF zEmtDb;I)!hP~N%JmJFYbN&GOJx;}CVNLk3b%L8gZO=h*dQ_tTO0O%d_NS<)s%wcm? z$@rqL|HduXv!cV*S3U3oiN-K^}s=AuL+rDOy&2$M&ArBwF(aE&Y@p$HlP5<)i=jWco7Hr!!>fj`<$f^ zKanHI;ZCo|Kv+B`3k6+$7{JD!S_(eFIJ`s(a8ORFTn=qt8G*=FW29*1Gvis7a0 zxzoo;@qw8I*W1i;cWl_1BC-UN5qcmmlqp~=<}c_WtH9T9)OaB!Jmj^HghnEOOg6xP zSr~Q>ysVL_y}4ZtFsuAtpco&TOKPtTp6_FjT~YeNV0#T`;cG@B20fxDy8&;du70k?L4v)ZTjH(bZ_Lic_O6?LW!ogwwgyjO9(5z zX|O4&+^aI`bXF8!^mVh9AsiEgIP=wZg&VYORO#exuRBSHx)8Y11>|mZyq@JXIT!t{ zKa&B~oZT}C0LsGq@pALbtgXf^l^c8(d~GbCN{>M!P-opyzfo=*F~X4#;olDUyj=En z3ci$)u^;zWm1MY3jd8TrcH4;&q4s?2x7~a%LK#GzTe;gw)>6l5v%6It(Boe9!v+JM zGLEjCykC@X@U>;VxsaMST`cB(?FwWqxFiBCwqD80Rkaf?kKgt+W`qE z3GAq?2Cv1vjYO$ZXc7f@?WP>d#eH#tOX>8&Xap;|R2_S|De?4#*ILj;m0`bClB%uR(DO9sCSK9x5$@RjhwnXRiywhgoks|@@s zM+&bBRz4N89pu5o=ojEK(1RO`9ot-p1?Ja4W>0u#!_@9=2;jbDE0vIZvjvosKAE6A zPpH|1Lna4l@*i`{4tu;WyrHLdy`nihn@0AA{BVBsRSWr7`AG0Rj=OhJMUPuh{0E(A z`o;D#kM+bdkf9QP@**LwtjnttlmzO;a^VrnSRc4bTK9)xg4>6u0Oc5$X@9cvME0); z#=N$yE@7t{3&P&=ahgU5s2M)b+=WU`wa!80Q1)B>Z}WOGAVLL>yCam^toY+Al^}(J zYm}d-L8>B6OU%46`yG;U#!aq$k!4a%Ov65Id=Bf_gZfv94-}4}uVS$YNY002Nc=~= zX(+-IEruDKytUBVHQ%;88B^nf+%5O%kLy0G)MBell^+P@zf)dl!`9dJMnW?AP$~8k z^%h4aVd<~itlG=|rgf8a!Wm)kdID{laNBIcd2Ykyye#rKH$50$0(PnK%Noi=KAbJg zDhXOcWXfdHXelSp+E4RpZTnq5_MZ%Jyh#zHpRH&BE;g^hFowwn_MW9P}4{%f$SFS z>n(beDXhg>f>TJ(u|dR}WmaoYj4Ol230)#pii9CISLdZWYI6+ogu)63gsfiVC+sg4 z-nJbXXvs1vRXgHU4K*xl+86w-gN86ypxhxub`l5u7pWtAr#CyT&vYj->fPZ6m~Ux{w%O?G0^1 zk3abO_h)L~Zu46Yk(AbX?l|uIy81f5Mf=iiyY9-U${eLt#i$&Y0Wx*@H>{U4(erHt z^2ZS`C375j4a*t7GCzykpU|&Nqqo6i6sgo8gDKXi!jnp#q))yNVUoJRAmqTeqBwRI za^Djz9Tz#AWbp1twy_enu}bul8`{k{*E6=Xz_`MraA-G}1cN(DS%Gc(1l{auh;gM( zbNWK!A%fG+GD7w_eNwNszjqvG48N+qj;g}ULdUq&(+^bY@wjWDk2L+BMap7US58zibbEFJ&qBL6GNM4rNADDFCTb{4bMcYLi7vn8;WKe*&@}nhJ$79^r z4L%`p$KtHm&zdb~EEy>snX$ufADA1JdOp;O)7Qq5=xF$%xpR9-h}L>|&wP*VDXafRg}f|h ze0id;cMX}$)r_niDA}go!%u2=tdw@U7d6li3!BGPRK)_Z83HTwO)C)$wwjr9StXqB zOL6pQ0teXpxmwh1YW}pQZ7YmYb!Ko8sN!mt?$q zwb1EDdM2)(TU=u1FZb}n#=%5wGP>j?d$Z9WEC&WZoP7{}Rq@+|>?d!fjwI-Z5dOw@ zp;DV7y7!(>)YaA|K)-7F7+RqPM~jhfrE^9%&xRrCaA=d}m(HC}LI=~=`o@2!y)Wsk z|LVy0r-Y-CBetylNEg-n>n6*r)h8MApO1nbd#K=sHqWLcEp{5H=(|SFXAZRNQ^qTo ztXJ)~wPlL}JsJ5MsGws9y}UieGSuv11(pI`N!)?__Mw9#@m#$l-Q^IWv!#&rF@$B6 zrTD<=myQwFkaK%UVmc~TK{nb`2RTxxn0K8^x0+Y(U*vL(#so&S#JMrF7n4yly?Xdt znuI^v3YW^luZt|g^}aGz7!;Z)ex_gNAfSj@mw+iCOUPPlzFesk+N~HXmOX2gcpL(~ zZ6B0344ZmmE$BgMUqZSIFiph_t@NoK>cVgljr(3zE14-^PZQ|9ntb$y%mF;B=%p`{ zUFjMGPp6r@u_}<*V!6jsk79KUJ~%-Ftioed$=0Tw6}BH7Cx)9uPAZPY?kCdT(Ol<= 
z>9YTPXGX)S{T3FxxI_O{sb5I%47&=K4B4)Se>&D)o@wO21 zY}JCd@$&X$CK7B}rP5pf3V&I5CCO)COaOZqpi`ABwDB6cT9A;~M) zvbGhcR}fyi0hG7fm1o;djr?;7Cuk41Gnz@0Cb7GQDI&C%8R91+FED=-_*&2_rL@$w z%s&ki?hKQdp|0d(y!mcDAk^Q=yg7j{dx>RfCo}@Za=l85^x_+p!z$gMlpxJ`E|(y_ zhVLuKOQ7c_t=j1)GZ2d@it()n5<&+~<>f0!`Oc`uu{j6r2bA-}q^RlopUP3HGnpNm zl{89CvFVfES~Rl1+7T!CHfdb+s1m}qgBs8YSR%AUGlFmDLJrctsHhc;>+p+7*&HLW zlP)s6Wzjyxmvp?HCx*Y@$YC897b?LHCX68qha;KvwdH$0J+zWr#|>a_S&R&#u5~Wz zXf!IcV3Co4R`;Mn#VN+UqGQ>$swdZm;p`9+VqdOD8XI4@${H_7ShY`~FBxA<;~d~7 zOk5e7SYG|OHSx}rsPjUyURn1*ln6dS){Rcc4_~m6b^`((LL`;mL(+S1EFoaJ3{0Wbe`nq^Iiu`)-A(PY1^rp*=&Y zn0BCvqBd_qp-f^>^;RuP!UCx=I_7^(VLAt7ioGxT7D5A0c#zk z-Q38S=58e-Y2T)B^7ZMU*mY^rt&rAIC@N>L+4=iB@UT{s>gYWMg`AdvFfZ-gYy7MC>n?*WEnkI~*{SDEguG_2}C+^=>XD*v!Fn!|}1yi4_lk&Go+B^VZ&Z z+!8M0u&&^`hK~4N&EnD3cM@nTpBC}6--x%W1>R>y);0ug0jV+f=-yoxA7WB=Cy=8Ix$op40WDMAmxzt>#_GO z)km-mlz5C8-Xu^;knVG>4PgRyDNin5h3vbkH(%s+o@t&u<#J_lN%Fn)B<;Qf2RO?q zLfC7|X(chd=9GJvg>p^H3^T|5U6b~m#CM%^+#eG}@)dbjs(V|czeIg(QUPRjQwPR+ zqY7c-3HI?=K&``zBD6b<3Pa|@TjpTXwa3iEITxAS(buKSvHa)ezZ9JL-qf6mI<-Xb z@0-E*dHd`wXgnKrs?EOt1gk7<@!+tq`$MgM2_jjSYF%krcJaZ?`lRt*`?B`_yoLB5 zH`g8;`?B^s`{s_5etw8;)F{)v>;gm;Q~|4u;9C8=He-L$;H3}U$69@YW{lZ5=l0@Z zbNbNde(|&346qu^HyftA|K=lyzWJA*zI&J72v;Xu|37_QTIr~Is=|%_49z!oI&k@F zA20L#8<0bdifN=-PXI%DE^U7H=PUl#74R+Pebai%kwwbo$(Yo`PMYsG^Ecl|m*3os zVk8Uj{l)ALE82(k?oguM-9(qHA@`R#ANt7O{4@&$`ab5(D?DNRIS)AWxx+uD#fc6P zW@8@}roZ{fulMaGH_6eGrYO!Y&i`eehd=!1YHdCOGJ`yQHbvb2`%e61fUE%cbBtL( z=V;Ua+w#|&6AjwUhQ+=gj>!LIH++XK1Bq1jWoH=4Nd8}zpXUGf&40ic+LieCN6(Bc z*KL%l%yl{ZlS-oAsqsXq`=b*fomdph+X!~TtqLBk{Q5QF7mpBKM8<92q^bavEnou| zb3sGVQ}3_109Ic36*)oq_-nzF&@5?IwIfMg(w|9I(Ab=Dm)&myHy%y#$iKb*(1hVD z`43xcmPkC_>)}tN|7Lw@^pAz8|M*s20>4LCmTt+HRdlc;JdSk* zXwa$xn4|?ZE4|wS@?qj-AcJJ(B#wxb_M2@LCE3wOPo5E8Nqa#~!7O+`MJe${Rk_F8 z==j?kZCkZ%@Mb(|z})Ea*}0?{*#ngMF%sWX8$QQ5XdwuqM5t30e5;wR-nvjkZf^0U zqB@lwDmktcWZU~eK1;kY?UVUFQdQGDkLu9ueAAAexnC6S_BS&Fs3uQNnshX^E5QL; zTbmO{s-I_@*n1uvvV(AD(}5dB0NPI`xa;=vY z3Kj!0bjp?GXmF0bsfvrP3Q1fuhU>Q*w_3CDX!a=36VF4-C-jhYQ<*XT{ofPx-Ro~| zStwW)J|292MWbxw%+*s4Q@aUeucF(dC!j@cIg@2*pJK&I3gt zvQatEnA9Mytp;8>uWZ(Jq+#QM<2*u&PyBZImG7B!J)duP3i747a01oeZN9N@E|8#Z zvEl^Of!}=e{LNL+eGnJmb^OMtG4TJf_nuKruIsw6f`E#Mii&^~8%PtSO2-Lyih^_l zD4oztC?O!?w4s0?EtDX=cL)$dKvYCZqy`8fD4mdm76J)@^Rmxcd(Y`yYmWV$f8QAE zpOF_vlBeDG)gGA?15JRBNmI5>*j-=U5<3j*x~(b$Lxd`+CfK}(#Vijr$(hz+=#9>W zL^?gg*RSOxWh=>>jckLOz&9_?~GY@QN5+jN)<>8 zz?A8TmsrKeocO_(zpW~Z@g#V<(*znVrxE?mU1;x#iATM+PhOD?A$s7OO+J1M^@47H zUGB2_BvZrMU@WK$XM2W^DkxH8+1{5ML~!C_<4P#a+P-k0AyS=zh}W{ zuqJ5JCc`rcLNiTd%MM))Y4k`;{@0p z4m}B-nf1A67BPQQUR2sl{QP{^nDvg~yFR8G+~C7LKhW%$V=qD!*M!BZoxGO*+3lMSO+UR=acc43*p;afJeS0ZHTVtwI1gIbi?% zDj708ZIZvvWY8x@(0oO-y|P zrE?021>=)PShiDhq%CMkp-Tgc#@lQS5z`MM-EZNpmM6yjXF@)7F9xXiaknvV#gLjgY8#Jp-!5@#{ix_+SMLFwY=c1_HgSb8I8 zKS})tb`V67wRxZwghU>fiNz8Y2Lc|qywJVz&72rITjQE*w~gvjFAW)d>GgX>%uU=* z)uY0CiPIyb1+Dc8|M`KTMy{KkU6w@X!|Yw+M2BP{?|cw3l`~m?_wsBew$+*%Hus~< zK2a8A+z1KzJPGnUx3gUz6R-B`cGr6Vs4~-!kbc?4|9Di24RO1)2JMX-o=jYmCw_UhqWNFf<51>KsuPr^yZO!QAKstBF zb?lp*fpCmEcf&_&rUPn&qfa*v>|*6>i|AeIyKiBTB0sr$#c6Pp`-i|wG_yfL()Mb2 ztAk{8QA~o(Xi0Mj4L&*EyeedPuM%p_11AqvBN7rAFApRJltZhDIw{Mv(}D4YBxO-v zQF{)E*HNC0p)AWf1hg`t5WHt|CLroOtg*&vgo)d_{$p>Y(!_TN#+-y?YxR_XKpq8k@nA0Z-I`LNJ>5YA~2A z>?mdk$y4Lm`zbrh>4O<}cTs6;r=SD*QllYze1Pw50J^*-O;W8SFNO*3DVasOEz88T zP=(M)@-honW`Q=eaTCyTZ6(}{@dDVImFk2Gp<`cpFFg9#(p+Lqj2A{VyRvN&CS5`0 z>6{j#>X6^YL$j-^tt4!Q`L3khD9&onq~7lSr1f)hZ`{?&*>Ff5u@F9wc0`p`xqlX< z&&*h#8m|`DJudm+yMs8cF>sf1n$%oMA0IDHb3`N8EDrKZiKznQ%4qFlQC)v(K)OzU zo^D`o;D^9oFKPixk3D5&WWS{O%-)zmI1BTrU%85oWr>g*GwIf=b>$sj&rolZ;e%Zv 
z%%HOYb%KiltNgNjtYw9fJNv?U#hv8nl`H28C~x2&U6ZAgl7y5jlB@%Ye_nblb=*J+IfaVg{a zOvNgBJ}#@Vlqr@+Joezz?SM)7m_fm=MZ!$)55`lc8$)~Ub-#bO(hPMVZtIF?*K*$X ztdbA+BGl==woa+ywurzBY7|~8_OcimshtbfShnymF^W5XTL%`J9+NJMqLb2dt)^GR zCs(Tyz`5o`de^Y-jW-G-a6h6z#)a^m2wg)v6sJWq0abh);EK2W7;g^T5^;0akr}$yoXjGyReSa)+UDH( z?Avn$ew_ePTpAM|E4x;*-dt>vl3eR@uNE}qH8yp0tSqHVsb%Pr;He!qywiHtW=*88 zs7wGNW$wtrvIbC9hE zvqm>w6&Q!2SPYCmwa>HqEp<+rqF_e@+@U#!!G4H6SXkJS@ z_Mp2r*U8$qW_lPORc|VRzn?!_!Ai*&O==v{d;9Y`ck1E}Yn*8jci!MErHeeHlfCK4 zRSQmBcQRxNT`TH)rVO zfn*dBp|V1aL5phrdNS7PTykCAvK?gq|gw;MjFF_Wg3uGL92g;*`hwiRxI zabK6H!mPt7g|189e(VUNj2B~E8SZ_G^FX(2vg5&|LR5kqmRH3)tCbyYAq=gW2ENu@ zxbyz&e0F=zwR*ufCCoOmHlDaAcxX=<^m81Xeqd9+nox`RS5Wji@pk^o&%c9EL zz!YzGV%_LBS6BugS*kiwoNK~N>{+)C8=n~<`>a_Fvv4JZiy%-(BTZ;e%`i!cuZgoM z|8DA+8gNAuC)%yhSW@Gw*1%*+D{D*eYS1%PBami5-^JdJlaS!8UaKs=1B_;cTjB;O zXUKR(h1O|R%2&^_Wgr&vA|R4Fvn#kbc&A)Wlt;*3d2nJ)1T>0oOE%)Ml5;R{U+Qry zXAXydKUgOs|81YHSB?B8*sh}7*L-|Tyc3iDLp_G(!6jU(vmVHK*r(A*L|b3#y);<{ zgTo|s*p>2NtXg}vpBC7Mc&V!|-9NyMk|L|U;F2Rj=Q>|yr#*qev`_(V(oi}@A%fg? zBbpA?r+mQFirc%Kgvl;k4x6%@72;aM^C+q_ga~R^iT;`K)><1IeEqgmbb0eB3D>?H z<<9knQGpTX?i6PO&Vru~7gf?9Vs={?(L+J~!TEFhQbmEQ7)Vs(0y1tu=QBAB}N5xJA0O-syi0wbtOJw zdpYE7x4I+H`Wt6kU8vrUjM4Iuufr8H3!1zL5|G{}ODVR|@lpJ8uM9a~yUq`yvr~fX zoTqSf!m~=G< z+r;}EU*!7*=J?agmlxOsmtffuY1?D<@0lAL;{l@C`P$@j7Jt*~*V?bj6!g=thu{JBz` zmX#tqG7+v+lx$>+QEUlXrvo*>f~i!He=k!va;~<-rh43-P;~;_<%>@==+1X4(+XG+ z&5sd&S;ljLBDqkb?XT%#R$T*LY6K0Haq}jY?+nGx3L` zMOIm!48pf2Apcs?8*T9pMK*wiF`(txjgaX1^1b6cC9jmnuSGRRVoy*18bl}El zDh>3V+D#cxO$m%)8AjE}!=ERWywZT!p$MiDLh}mt{B)%1KEK0i(P4Z~64Bw5BoBNb zypbh+?dd+l#1Pb5qhDcP&jbJ&LEeG0{#;`GsW52?|4oOzbn@}?;lu&Iu2?O!Uy(C5 z!*2oE7^e{|FB!0dc0$4f2$ur+yS%FONmWh~VL6`XdUbjLa(NhZ7Pb)VHsfnMTbvH) zz7`!SAY8*Cyg9Aef=#8@%Hg+1QNsdhNdqDW;Y>@{X zWF^}a4$MywK$^Lm8!SE97bd)a@>2&UtugSzy8y2sex{%+itcd0qhHJxS``KDizwsH*+3z)N7q%PO&UPP zdHP2?6hl9Laz#a*25hv2%%X=VNO#JS53^vSuBAboCw_)8^vsIav^G^!OKHivx)D6m zG;JOeK;f{(KK^p(2ln~;>NLeBd}h&-?rjJg`Bs~^J$z27JHXd$qhKSX90gjLX)p=H zS9a#tYK}FN=qm{3;b8;g9{A}GxDB)8-TS?g02@S#8*p+8X&a*&HPc)mj0_mpR{;M= zQ}yYi7F=scP6=u+@*XW*=7!D0(k0rZ&Oz z+xIKC$4xrnxrpbFFr$2MjazA&da)2C=0EAfv)2pR!81dv4gA(7Y*2VK!UzezZY*jz7+Ob2R>4 z^yve^r(|5pFX7CzzK zy;d4AT|Lv4REaortXoXj#j;96!-e%odRU`K-s@4&gnO;jiBO1iux5W#*+b-`_yeA{ zC@qb8wDIrqPN^%7qL*9AcN5DK!(fe_iMD6O)!zqSKAQ|u2Op{A{9Mlbante(dK zW-4tF7B6s9p$-h0`@iaE{gPevv;Wg+X&Ykxyf@#M-R1rqgkF$wQ(kf(sk++PEEQa$ z;}X~_!vL4A1|)@p=n*Iut`tAr;hd^wAQ;5_WM7-JMBsP}O0Y6gB|WzOOhl8ZSIaG~ z{e(zbH|Cs=7C$n^;yJVqpDxEia|3ZyzpIs~lZB%;)$8=$^A`VuGgYIo*_ zcGWFw_!BiQmm!y@i#jb8M)@TlUjFuofLs-v&}v8fe*6GZqqTUo6{K;;K8pY=wFvAv0{h@+^7i17v9Ro? z?thC--Vs~8qh~iq)~Lub%X9#06pqmS0t$K#(>y`hR|kA{A|iW6Tr!a;mpvZ4-D`{! zRYb$T#>I)2Z3dZFA&Oj7z?`O1D9EtEgI~78#vfNVyE*PyzM?M@B;XfpVkR>lUk{DW zPZ?2a2?2nJs@88cWf5q|%xqQe-4u^SJAX~X?#AZ%d|Q1KXWvPgSh>FX5PiUh4W`@l z9wJ)(k50NHL$+Aet6L}Bx$JZvC_DizvO!U_NVpd>D`HLBRdmAzF-s*KA1R9a?g?my zRrw5+xs^uLYKG90f{9^|o&kbYi>H}CgO+Nn}XzvJ+%50pKVNHexx8aJe_u6vqCJl`kB1RrX5`7Z0i|VONb}rtVgGxChKz_IICQ0`0 zhUT$F)vN552@h4Nu@ti-P-b2C6ke5kiop9k+1Y37nVcTx(Z`IE$?@lZ!MapTfX*eg zuygp=P5KW9viIG6#yR`3@+Io(J}YtvW|Q}F7Ir;#(4xrUKrdp){dR1Ns(*I{xf-BQ zW}j!g-Q@~PF=WNJ@ItjBV?a48ps)pC)aaP5ZO;fG*W8S6EKTX|Oo6fjDGa}ja;YeD z>cHtwZhu=b1!$oFQRwhDtF-U7DtsDct>lnJYb5!yJ$M%S`h+sxWvPL3x4Hy;nPFu8 zQ2_~7-q?3Z9m)ltCtSj8F;Yt&qQ_+#*>T_pr58=|V6yNF<2{O{5uZ0#r)yOLv~)Ey z6qaarY|O>iFWfc%e6b!z&(NNp{9)g9OW4Ns()H3sX6a~vl3Cl0Fe!rC?_Aa37YJEy zo>1Hh1@9HKuP*hA#g?=Gf&^l1P8oJMrgjb|P{A#twK)c0^PCSMB|u=x)!4I_%s^E#{qos`tE1mjV>! 
zTzF_GZ&Op#`<>`bprYPGWKZ%p_3*B604gL;|M&BuLS-L4@N#tv5s>o1y-$LJ)!O1D zEkG4V(Qu9Ky*Ebj&EuFQ{CJ)n+y9qOVAO>g>A(N*!2%pP6bC_$dIkmg^aTgC!>mI^ zkr|GL4a)!0V*U4mqrOT>0YRIn!1eVl??`#R8|XV(l$N=oHF1j|&bd^8n|CuoRbd7y zd||_d!|iJR#asS=+ZgJvkKq-iIp}3`nRClCI3N&9StUx1W==>z9b_6Vm-S zR*m@x;)Q;9{gm$xX3@i5c%GEFCJ`MAeCxM-Tf?!EV9Z!OLU55 z?@JAq3Lh%5jrw#@>>m`EH>mcLP8c}vvSskEK|h4kaH)or!_cX|9KC2NZTDBG%Is#{ zyp%Tio?B^RwcWkvz>WDe4fvgtiQ2MQ0+gNi?yDx<*Dtqy14wB!RjWu99oi_$5({A4 z6BlM&vHeOD!S!`OAI^2eNZF_T61Hrzscv8e%SuUZ>Fx_D`iPgbXmRY zifc^4kO0x=qon?#HzMUO0a^HpvK&y<|Kj0M?$?oT<2*iv_cb&!6Kq}(F|NKfPs~LZ zCiVQ&w+I{sQ5<7^n8}uPBmj6nWC0IzC}4`YHq{)Nej`$Fn%B$4+phRQr)3 z;4myH**E{Jy`@Es6C)d!WlZXHFV>|oATQ=$6aP|BfsFyHu%mZNjuwDf!*O zolH0KZ+>{@Wm5|Q!8_PGv_P@K;xR(WY+#1{cin9#!50my^|J7^o-UW6SbmW7=mMth< zT{)0X5(3F+XxzBf&Sc77Wd~o=OXOoy0n%uQQ%9YX7#;_)bZD3R3w3e*Nc(RsDLsiI) zAc(ugT$G%6H|n&=-rzm;#i@Fe+PrvcZS%Lqb8WJ{;xH zQ|P7R)z1FPQ^9jQu$jc;C+KuxqU#<|r?kGs)FFeDkZbWpnEj~9G3VJKl(xjGfCsYj zv2A|MY&`J+cq`Lm(+SB%8bwJ+(T5v6&U!>5LVx9Zj#Kkvf7WtbvCBBos;SBoHNWdH z!WHc=n!s+DC%M+3~vzS9SN|;4n zqBC+$A1w7VLHRiqcX;LO50M*HIbvppT((J1Q8ZLcT-TWK0!owlFZ^nT-jBXQ7`s0m zkzcYU$kmFp%o@`styPj&FiyUBjbR>aLQTFRC8k3LimOUaZJN0HihBZW>UYU^nPnyqo+P=N4&yg|^Am+w4iD^qg6;yQZ!GV)+IZ83@MxU2)4LfI z$yR(-rCo>J@z}%2EE0tX)ML%Z@(6K*mrVU`XfC*g`?Dm^I##VWl`hC3U2jtoWAWsD zur`dS!kKllRZF$+i|NrgW>tS%t5{F+22=H{>?{&-xFwg2XagmedJQGCJx8A8tCxO% z%cU90=~&*tuY7CRcIiI0TDq}gRAZH4HDPH=+6C=A_=i6|MwYH^uP@KG^h<+(_#g^UXtEHY_R1ae7e6W3ai(E)6|}uwX>>d(8PeW9=rZ zU?zBmsAAtru~i3_6x+-NIO^-UMuz@M^xJszF?aI3eNQr-C7qx+Xz+eLVL-HB9p?Ke zrOhd)RTP4eIcG7%JUzvf3v=+>AWF~n=hXPDj~gmVUobbH_w0xy`is8v=7D@p>~~+A zrZHSSY9c4l)7@T#ji^rI!%<95nI$gIG0_+daeKcTMoc0C`@n@Dtlx#<6TXK&gE%9}q1sbb0Bz`)%wmjRSY;Q(H8ho568cky6!nlt zmqxmmXT4m!C@S^mGC6_rQYzZ)99qjU!((uyhP5#u9xIishb>*@3$aSzWZ3V7yu~W2 zZ###YGSH*;yU4Pgq6GHpak8P0i+`JgpY9^wCuQ`|}#I?hFSao!bl<$~0G+5o>HWw%hf((_$y~^EosF_ohWNOsUL|pW8o| z^4-2?-6`IG88rYbY+9s$jEr%r5K77N+g$XYJo7?p;3Ps$TJ_g+UURTAo+2*HLuxbG z8W#L=NZmoBnoD1YrZFAmJ4-6&o9!H>FwE7nkO*Pl4Swtk%VR)v(3|54MmTQP;FV>b@0>rO zMVdt^yoRz-XD$V6k(eRpk6BK=-)8P?;VO2t)ep02E#;=5w_+gi(bzOD{l}Ms`7H^f)sYOzKp9F|H>Ae0 zyd!JGheSI)B11_KH=mTtk9AJ)_F(HVN|v}6l3HP_I#pxo$FHQxB?{YD(O=w^(pTM1 zgt;Od#gUczEM5F`*6h;DLaw=OPtYBeL`qrRR~d^EyoB^^rVSwcZCC#T%NHV<%-?nl zI?lx~2Is?8%>$_Gp zSw^^Cx~_TpUB?|w-&r5BNnzju1gf#_^X{zag&uPfOqyZ#4VR9BRzYJ{O?|LLIFJA+ z`cWEB--zZ3IY;)9aeSBHzJA8C+E3bF-s7UCabrC8typp@Cv-t*qQ2712U;nQYuZ!M z?cL5Iw(-i(2|R2vswaUM z{g*b+YAK&}B4YA`(c1##e(Wz<8;eVTf5c+qLLu8Pf7trtf%nRziqY2qo;?lM6kF#} z8qX+t!LUEqP*k*fJ%i!(hU=odH1AMp0hZ$-RKJ%~>Z6sVq+)T{$n>s4CE`d<=?eyX zMPi!A;;Oioym?^&=5-B4|13Juw*o#t@F7GZuysN6f+k2#j1alx5(GNrH&dGMQ1Kel zGMkO}NY~I%z*Npd{%QRQt(^stnKZbm>pg8i|l6&m`mL-khk*08DGa*sO1Tsl`-S+ zG4s{F`Cg8%3J&X94`^%j^7Z*cl*r+Ouj!uEtj-_sj9A0;Mc&U>6wXW-MDB9-KL^Q` zAKP`&E_T8EasT^?n)&d>45+_bjS~A>hPCRxs7YS7&r*Ai3+jq* zYndcV4c%gYj-lK=1VrcSSUeP`pRqF&CaaQK>LazzC6nvUD6U^}!=juVwUh6mq4<;? 
zD;kl=>mM&n3Fk&81lahVs8fuc*mVNqH7%?ouhL+Zl;mY!)k4NVg1!iQ%8#|*v2)Vi zI8x=^$e!sQ{?=H;ZTOK8WaF{IXsP3{qvO6VKZr+7ZCfSR!Gp)q34K@E-Dr{}&Ad#S zDclu?i`5IN2rWd`_^y|u9=Nq>pU$&LC9;?fetr;>)FHEGcoZGtsMax#PWHR}!0bV5 z20e(ETGw5n@omN(dR4^Kn_t!~%l+u{Toh>`8zpBq2{p$$^6x1bW_&reIaB@+#}$iC zN8MXmZj?rLPS3GvUcU~_qPI82gLxM7*6LKw6 zydoBNSNxTCa_yLuy!Ntx;g=80BWFf^DG@TkmYe4rGKqVweAXrQ8=~B+8w@T;Kef&> z%#0#JFI2RRx1+Jg#YIK^FS;|hQ-Wle;c`1M_)$Uf*;1u~ybh6B&nH#oIT$Z1;@K9P z&y+LtAo1hjG_wp9-+YyCvCga%uMgp_UTr-<@GiUrFBPjRpPid$6cr!sVP^7DZFk$u zE6_?^!e9CYM3!RD5L#Yk&!vT@`zvPpXUgmG;m$E zYFf?QSrlywaW1V6k*_RiI+4e?Np^~bZ}d2umbtf`0~~tDSm54jJ&bFTBa8Ab`Ef`nU85~-Xzf>jBe6a^^q{-n8Q6d>^?quv8hXIR-j{3Kj%;mC|HvLNkSg~rxM(w0R9FT&dtR8gf>qUDY1n{e~k zwDfc@w%xfH&vYOYAk`gzLDSwlH`b`yB{DH_f!tn4TZD{BOE#+-zxayI5gcdX7+;)2 zztu&j#e%>EN7~HpPb$9HNfivynZbf;W|9*%bWwI)oDTTYeCzWezJ<^IC?0b$?mYb4A=y@FjYy|oM{M1l55wAlBqg5!QWy@oVt=^PjiTjvB8aV+Horobmy?0f979M%_r z0(dO?wH@lQzxm+H+m4)0wb^vmP@}4TBN4D{OkaDu)0U4H zHk#c7=eMqXvj&xz*=m?QsyE+aT)kB0`$O17i6-8|aMR;$GAmPKuNm9S59;1c5jJ*2 zt#x>z6BaF#5%a6jrbEV1HyRAH6SCVuM>}$2n0vyb1mrVdi+SLLY*)K)Qf;$^1GgKU zG;91Vui@)uxxo3%mKZHX>7A7oZ!R(0oGP*Fdnb~aWMAkP68|HT;bJNJW0IAWHHxmX zf@@m4i`T+XHp%UM-t;We8@0=|+U8-Qr9BTwnnuKK7P~^8TZ-2dqp$G6J(8rjQ@RAv z;%Ui6<+#nTZ)#D^HSH}?3&Wn=tCb?TxCW&ey@s@%o23I2R+(!^xV++VH1;S~76s8v z7IxyKte@b&R4IH!s9siU&r+4j<>-Zx@?ohkCQj*C*Y3nHwJSx*qArdXVkv}+u0ES& ztq`s0^2||x2u{`9+`N>V##hPiL!4yM27Ymvf8DF=dsWsqN)}y)${zo{pHIh-nep=T z3qFq3@((X{uJZVWF?Mvcc0#4$KId&-4o)l@cewjXI}Nff$=&9Za&>&!ZRY>!eKJ{w zKIxtF$ax1BdFFOSoCN#IBx8o0FU`(h5^*W>RO})?aC{5rU# zn`$xZ zV1ww?p5eE$pd(-ra#e%kQTLo^mXf6F5}Yc?^o!MfyCg8nUF1i^HCUS(!J%`FoM!RL z5_NytfSN>_##shA&`P3elumZqCwpD-+PthfxvArnqU@Ah$8CNhv*V6qj>B>SBVy#C zx1&Z|p)q_S(N&CK8He=ky*e?8Bs7%ikJIJ>Ncf1S+vWX!oKoh#tnrc#$E1 z1V}vdUIUw7ClV$)wfiqf@0pF@w0}t2s9`IYhPx>oSc1w?%3XC@8T0os&zy5?mFvcr zs3?a+PbY4U#ySd#MKJ_xkQGz|oJwCX^@p$ywoXFJ@nGy51 z{mW3Gu4Hu~^80u7hwnRDX<15a`}au5ja9y0b@QyId&A>*cagm%L}C3}k0O#mxg`C@ z)QwVuQ`qZGO_p*u!)`9(-g27=fhRF#7gVu}IR;8yztm{~G;)*<1l(kM~EN z(}GrIhm3fl+jbUZk=(+AUwr-A>TK7u^n6~LW3vCLrEckISg46pzb&@yfK@_&6I2B3 zK&V|XyJU`mY{1&k;UTUMXToV`9h)p*Ui0Zm1-$98oChJsKB+!S+XJY9>OCT`AjLh~jMD?5@tnh0a)nB_W1(7|S`*Aj!PynD4D{~E|aNgRbg#koC~ zMs3!bV+l|lZTG6WlbP}FR+~6WcQt1_MGq3RjaP~&bqT>0%)W{Iy7C~zUYY2qd0}ap z55k@q4IYDD@{`hrA>?^2|IT|ql+$memnjz&wGCs5`+GRgxCnCF-emrTLQi>+sthvE zV}}2|GRV$dE(Zq2%}I-{Yi}&G=&-&Q9(-gt1#6=U(dS%QS6qtn9;wA6Onfd@Bzg9$ z!18x4f%Bh#rw6^Ac8To~*!5tKh8B>In$<+#$-0x54X37z9jiVGEi*7UT9Ve+=E~x8 z=68|ox_0!1VJrDi_jF-5MUJ#R^2=xW>euoH(aaW+(X7)^V_PczJt#z7)s@Mw!)fr7 zbHwl!fjafor}2(uB=r45m$bqv{dC*cxM2 z5rglRHvMDxN>(S2M{U}PJ$Gr=*yf1S3z!S&>pL5`a8KkPb~x}glJgGo#C@Zly%YRw z`J05tHVOJgNN`NA=iVH0y#<1G80qONiYriSdHH>V!BRf8xNWv5($Exzb7>6nyq*)) z=4$8HY4HUG8D+8;ZRAvQ8GabS7koil@~l`gAuVk(K-%k3^5l3`*QrduOYTkBO?p>u z53ENLgCoK>Mr^@**8f|&nEMd3Wi&N*XISmnt6Y=T@brED#$l|Le_tFRiAvWxUyyEs z%*86mh5Uk02`uCfUGq<|a2g#r`Hw2umh?HV9v}{~qC02%A(Pv?mj9Ci;s~Sj_~MUU zU1!G)ptJs7Ni@TZE*IkJ2q0Ml;W7u&m%oMB(Y*Nnf z3*VRSIFdinvZLKMAPJX*K^panIIq@^RbSYvS*6YSew-JNng@nq4@q#vo#E~dpg6}kOxVx!)b4($~wKXltkf%<>Zc-+B!PEJZ z?++MrUwR5&Nfe#xw+QF@p&7IEO}cQ1;Q~x+j|^o*e8cn1$Jw;WH2a0MB5IkyauZaO zHA;Wjze!+6by#EW#8T>a*)~wsgtr_80t=~7j6s>Gd4?_ZtgXjyDqLyTnNNdFVu^;! 
zbbV9(P~K<Q&pW+NJ2=>O{<=N=9!txzR+Q@kr@Y+Nx$u5G^e& z&*7<(LycD&2y>r2M?~5cNNzAxTZGp}SVPhgrqy<}u3Dd}y4o5Ed+mp&y)CCW| zeDx}EH7l0rK5QTnN{n}^0d$))+yIt1=S6f)LuWh=Gpna=7Dahdz@rXM53;8wIVn+O zg~rO$0y|4jotpH^mEvCN!jl$m(4*E8h*L0qKUde}v#)Xd1mS6an@`DkBc(G6qRD=n zlKM5rF|)hM^P+nSBk7Z94swT)eR}zwTH|ZZW!}D&b+-|=6#!*u+q71Ozg-5IkCg-s1=TQQOOu0(9Dazt@$t;T6BMc&&%#>u*=*aTTA2lWMXb~ z^b3_BY=o7`BP7Fj6DOLuzNv7beCeKw&6J(wZ=4|0Ht7V45VPyaSJcwALz27n&yjs3c#gtDgVn?A6D6`4C5d>d#~(n%k8GN-pjzIT7jdr%zw;Sa{2( z;0TW1^=A=dz47$Mn9`CXO!pAV9>Nv=6cK&PIkp7PD6bAoGKXeBWl8_NUX%9fHp5Ta zw{&#c%HKCQEEzLT4HyfchkuAxM{OcT1=$GI2AeVD=u?33ZF9|s6W>q2SSk^q1Ot^; zGtcEEn9@0EDS$DfqbDqS2gx`(1_kNv#caqlTH>lM40J>}p3B4>KYaMMbBC5}$VM%5>iJ9C9Jqs>fyz`Gm zGCgFkzm>~)GwU(U<2nUMo2B(z*L?bM-`6>@;Wt(Mf}hSEYTiVn<$L+1W(ZGN;UR2t z(!#etYS*fle-c$b!Djw72p@pgCk1$#JX#Di>m^i^(u)P~^)>BDDD z3Ux)WN}c4lHK4MQrb5n*fBl$?ap5PZ!_HkdLXcV{8bhAj-^Mt#%3cINJ+)WJM{?nj0&2 zE2f)Mk6k<7$vJV&C>btRID}F)|CF7W9`|w57XQFDcq(n2LMM2|oi$p`*M|nifC9@v z@7l3XgB|Xw@h1&hW;5NzT2|JthD-!qN|F+#xNj0VkuD$z-YmjN_^L*?uVk}a%MF~6 zovucp!W|UaS~tg^r!Jmfaq3*#C~DQ9_ozlKrIL=?m*cY)#T?f6xsI%f4@?x=6}pdO z9XUaEW7f*}6{_r^eod=cK54EkJU@^mrHycGp%`~0Mw|K%xAyP*vA)7u(&;dnez>0) z5$%A6X;9r@wG%u6Qdqg%EbcPlK!}}iCTn=cjuUy34}z*OGt$!=dI<{(d|f+Sg*`{) zzE)NW&1js(XY9jnZ1hTolEz8chl*O#@mRR>s9CBHeen1HSQsD+j_~sp3FA-l{dQWO zlF);Me2Pp$!e9E*G*hW+*Sp^b&e7&$_}2&hy=NF*rX;1~IghCN%WJMqnL2VmdSioC zO1AlRk2|9+5;QDr-`w1IONv>@`!_1QVAoS7vQYcvCR>_xG&*5)Ecv6K>KN=D%z3Hh zv8MfYL8mB&rJq6I&Pz1BlpHOG<$ORhkrzidZUVNS5lD}_%ZG^bBXsD&sda8Z|Dy@P zEUIgr&YS`g%1L+h`{=RkXX6wgpd?vnDO=*&v3PuNWALnV{I$-eigwq0y1#KxGv%sP=hzqsyanYNL$5wx>t=4 zRlAV;^eS*wW=s7@I(k7-1-Z-B8*_%lW%Q^I>+G8y)DpI}wq?rusFH*f^nJ}cj&gn^ z=+0~o39tIVduxkqN&Ni93U^E(3&PaeZ4U|*djdPD>2QX(%*A;L)4mtJyGMlG4XSpy z`D)mSo7b1LT39qf_}G}6y!77X!d3eT#>BBzz>e#M|CT!F_~=5q4!ze`hHizlo3z97 z!=$ia7%P)C%nUO~er)eBzpqWv;gvrhdaf_K`H9x3r`WoZ5KHw&y_7f>P|Qa(@RTFU z&yQbhn99uT%#~kdUuRz*Q%87P*3;bcn?wsq!>7#j~ zl^f5>$5qh4g#Gl4-WRzFv$-oE6nK$ARL5_Q6C2 z|Hnv^PYaCSi`+k94Y?v@XLR^V9flpBSvv%g{oZI@3$^uyj%~d%pW3DmE!UO1YLT z^=}$?2+v{AQ(|RRpO`ab%zgy@BO~|UOK5-JZU4#rrfGtx^-%!EGStI_XLx!jJS#_&^&R#DI6Pw$tt=6J^$*u60*C{3RgkYu8ujV0!d?Xk+hdSxCzHK?g~sq|>w*BEEmC6m zS4`s17v6INm|z7pOt4IhleF7a;^-``n0uR196vK7kfoh^(V01T;pEs!?KmYvMY+$H zt9NJXLQh?V$y~J6&F2FoY?0z{w(>C^p@P5kPdw~@FB4cO@acY!RFUEBoUhmK-i`6Hs869|d(Lr0zp~u? 
zM$4Kj#yCeclD7ZINQx=vhTUk(cVfNb?@lvb2S1G?QJ$c8coo=f#;ebOD+uBGYJJ`7 zu1SFPZ;kfvMzC#PDc`4d(#2G){zg>or`7ZGL;vG?q5RbDvagr_HQVv?1KT>sfuE^z zUgz9rf3Gn2FIIm*nO6(g2>Qhf)&FUn+ur;Rnx|(#KH&abxAecH1OD1c{`ncoNdTPR z!NT(9r!2$2j`*j!*sBAYB|rD3JN+EO{Vy-TdpR(WmcBVfCFr6Nd^Jz?q4%tuYhKW0cfhK6InQO&D7gX6wu%rD8U-(L8aKLmbf`xLkU`mG~1 z6J@U(7#J)wXYWx3RDCCuEw?sR{W`I{hgk2_of}A7h_m@vY0mlm$1gtYumATtvczl; zL)=^_x$T`NnNN<)@BpcRj1S-L{&Kk5@>m4E(TtCeTdvNsv!_22J^H&oF6~D&(jyOp zW;dFJ@+AQK=ZT40$=luzeI|Col{XKYb^wo*tYE16(^K5?Dat8E43||e$B;F&v_$nt zZS6n1_rER3>);YyspsOzU@qv}cK>W*Sb{LXbz9lkU3%zkBlqC-mR_M>XZM!i0j?u9 zs!0jqQ4`z)4VdYHfZcyz_@`IO?1%Rp9$56R>iy#bT6lxZGp?O^<^4M^qa4fH$yjmy z`kC;AaKS7a8$QG9s`Z;=+B=8pDUiYE#5SzH@i|u@xczc^CmvV{U_g?$t4g!K8QBScPz8^zi?+paTc4+WXSYPCFBWxb zFgix}j%PvZLNv-PG%SoSE6XOfp^d>2dX)2|v-5erv8vXEb8_!K1OUuYgNahji}toN zk>4L6OCewON-LBd+uBFE#rdQp?3p|~;8;}3%pq~DcyohMl>J~;y+Krd`k@gMO-&yU zmS1N6j~@dIc@HFi(!&yKUc10xOf+KABLqck?QCJR<38`2;dZ^Zj;wWPmI zEaemRl`f0XZQmzwk|r+C7qO_ohl4HNvR8k1 zS@oIP?#xA|=S;9so2jlQL}umFjQ{nyu50O zLEjB?XXeBz-8^>ogZCY6JI!jWOv80O-&P5uQaqfHBIsYh7=I00K!~P%cPVD=CZMG2 zhB=wqAfQ3>14o>od2^Eeg+|SIROHmjhU)Dfz2>f43AI*L_j2^FuReCLS5IRIFNI^7 zQc7{F3YkQdR4=1RZH0Hx2pRTX`t;?iS7LHWFRo`zXFOey^H6(Zd;4{E4};tOL9Z)H2Bp8N z5g&f7tR1+(l~9W>-uGg(NN;uB%S!+(q~H&&ircyy?Hjv27EoqlrSyC4VJc9(v1gZ| zXgb_z`WdM7bAFny394vC6{qZ6gtXcra;HaM&x#48Xg%lzBcoM&aO-87Ol&Y{yS5fv zAI8;F#1n_hb?IIV{D$gBz>J2Le!qlw8v7vN-?yK`S7cUuN5kx*j|@N&x7Olpy#zE( zj{Q;)`)fPWUfK1X1%JcVHm>|`PF)Abq^;?F&1R>(VNXhhLxXpezO%vP)P>pe{fszB z>)L=0fSdwusV1qkke)uX8Z)cPQ}fr|XplWj0h=*q{sU)tPR##?z3ZJtp!BFu&%8;d zW@0P6iq1AaXg`-!tC5gH*W4a{qSmu{4RtYu_L_mM*O?>7UIQv(E1d+!;kqMLGs4qj zb30L7$;6;*L6DNim}mW0PJ~X&KY$Fb^jR9Z+alg^NH(_Xo?5cs(*FLZFA)f&$!afo zpEv9g8+JSEu+aE;Y~8%<7do5a3W70a^UAQ!OHE#wTDwOq0coh^DdyQA7uaTCi^#rn zO5a+_e#o*^ujEETx&8_91EyBHJyDls>IT2*H-BDpt;)LJfoh$|kLqdQMTRQxP*bIs3;(X5} z&^63}Gz>7*(D9!5>}T)&JX_uGar{60zr4r6hXH2hx~}tF@ms&O&b7)ok<5MgL_H+r zjp&o!Pa6{@@{zR<@g8}aZY$RX6R83WofoF zy^1` zt4@g5$M7<3#QE-7IVKoBq+qdVVK2Jg2toU6+$FkCc8PhiM&B7)7-Iqo*rR4HJmusx z?wP1}($P~UV{v(l8_A!1WR*bvZ)Gq4a0$MDUzc=!8(ZdHQf`%;yPMHzlU#c0sVmN; ziGR}isJCdVyw9kxIg$8*l^w;^UoYNI0C+3vTnv&}hDWp>WFna*sE0WlZia~kMt{7} z5WNj11i(KFziRnaZ!pD<^D8c#URSbsr^bd|c5A=)ozz%G?<35r-3hJ5U29x~g!vJN zw$Wp5LOWV3RR#1~OUCQr9`UHJOKgdOn%wd91MLskcZM7|7<|h3>9~Qx4@ybyE(-OGuHU z2?(|&ifF(zfgPHfbV%7(fPf`^7N&ctF`iYqy#hpdt-yOh|*|0o~suse^Q*KyTsmO*#QoiN6y%&s5HBnKlv2CAE z)%I9+<@5AP^x(0Zb$L2vYM;YZY6HRKkD9hBvxMcR*vL27xbh$JqWh={l2A~Hu*Z_L zH^jC7rUh$?=00W%GYaOPFsn*%-z$o~&%p$yjRiZbqe8}-`%gb6_V0``pe(LlJ2jrT zAG0+gvL=`h@}TV=W=Ri0sY#!ix8B3jLM6H&Ub#4KA=NXDM_7V$A1p?#FxCzEVF{#$| zOg(8ui~_Ac4hdXcy@B34uh`e>xSuTbz{QbcwZBpq!n>hiMU$o1obZ#mFO8pO5ZB=i zpdr`e_{>%6Xn8;xAfrXIVYp<9X86mwY@PtnYudN~qe=a>=bya*GPBIu`c0UPduq?8 zj?94;-R2hPJgIVw%1Oez>O`2x0}ri`*U9JHPC1a<6gxWW`aN8KP?`ui5T*xCRW65c zOVFY_hzw2@yc75swe4#z)LNOkU2U=*O0@|7zRDmT9O=;$V!;>LuHyaVPHT<6dpYtI z0a_nALojgK#xXMo7r0p6>AE(~Isn@nlC|nR?6bM%6ul1`X9n*knH-cpt-BM!rE!`u zBJsggbeEknRK%gbqS#J9mGqpq|LHV!)`;DNH9m4kieR>I4cVEMU!5HS>)R1I-VAVZ z`mR;gO0>3n(wl>~T$ek^+4R%j46aZOt3fuawTiBkCGA2M~mFV5(U z>kc*}AV%pglF=r;l7DA}^ybdl@$41D{o{~&S>2~eP`{?LoP~>}*~$9r8nvr@3tcMd zIpH^iAM%Kk9UT|8^A~h`?^$GOl3}_=`inhC`BGMuIH7Bk#o)|&%#wBP%bJ<4;thRS zZtOvpeiE&)@8sHBg2=cF+mTloC1HC4Y2ERl&b7m&j?Fo)%-l6QCyVbps<4iu|Ejg! 
z5+TM73yIoF)4i%$WEu2{S_p0IbLx7408}!oSpvF!l_9GGRGZA=elVa64!-m1vA!1c zmWe8`CqU-D&c7Gyt5-5-Wt#t2cxF4cWWJ*sM`bdcv^C+iYm1qD(AjM~@GzjwA^|Cr z=8x-^BLoMpZ0Aomk=9jP`!V8aXz z)e!ynQ_uey^54B|PG7n^x0?9`A9}Gg7Ja=fdoe80*B9c^SGX4=|ZU408)D#iu2@WG9g8u zbEB!=)fn$u%dn27VEcn6-SYAAs0W02a?#;@k<2un;6ZgFc4_!ZHG z8a9~AS;kqtCMr^$&mGh4ZC5(W7`K|HT2>kbsME@CLza?M4Ac!m@=q~BJ zJnhv+%3kTadUl%XBw@G2bNjpfzR3Z(BN$Rp6X(7;ran=fU(3Iu%lpl1u~VUaP%5cy zyHF(Dj270*)knYUhNd%$tHGueEVwFmFrT!lW*sj?ZtAk!Zgd-!>%*k{&yETIaT~sy ztOkX*kyfo@InV8(H_WXvQ$Qm%K6HC>!CXa*(?tFWpP6sYS)0u*(;$=jn)^^7q{?8> z$odL#JfB3F#cI!FxzN+65eX^J(ds&`VKK5UoViwt`&ly&<1DUTs;@+jm0Q3kVaHzO zNQuHgMCT4egsF~5C()$}v68O~YAwO1T3`r?L2SI+PLAUQ2RsC-S8E&teJP3v?BRaC zeFEkVX86V2rc1j1{%h89W9M5ngK{tCY; z@}K8=beso@A`2Nmg>Cf+qr77E0pUDhZ==tbb4+B~n(1{K?*8E-GNib zv$JG6J~B{Or-JVZW|<=U)~WYFt$pVUUBs%-W&FFu)E`VmzIl)O>OTU9?q9Bj-Tsw% z{fYcL{C(=wtP5$(RwJ=xb9wvc)fp=5rdUYE!%Zbeid~S`#)?hiau7!|>|5T-0p(+F zbGzSCpRBmzmzg1S>FI)wl@q<+R9eQR2uQk>i7aJgt%*mYTeg z_n<}HYI&!E7_4i+Ye2a~+$Bf7!nI*-Jjwo(01Ow~)!^dhpOwnPqoG$)lmwF}NA!0s ze;v{c2?+k{uc+J1BgW8X#ubN=k{KsZBO6~VYVpuj`zC3)^^kFDNMNpc=BaM64z2gf zgMIH7tbC}~aL8v{qsPC(*zfO!4sa(N+2#RwwmICpF!ya(Q-Nc9$3r@+>(ym`J1{zB zEwz1877+XhfzEtn$9r<)hhKUbZ+VN%kPB#^t#I-!^~)HFD8tW&G!PBVLsq|EW!&6^sVqN|_;awer!$8%hyW%vhu)0As2Jw-||_I~{prTXTc*tHzBj z-Trg4J2qQQC6+k;{lD95;nw?KsgYR3-BW0NCwr7cQWr#L^KKAj8_~t$_JvW+>%=?H zM7qY5bJbk5yLa^RwQjqi3s4(i)0j|ZixTfUw>3IS;g+bzRT9aH4Layj zbMo8`=4chaNcQU0Y`pJrBJyIJmE9++pNjsE3CG`?8$SU#|k zK4Y?6nq)kmw3Ysubg>{qel=D(OFo}~)zMoUZdg{!x#Csv9N6H5R(d+dAQFp5Me@U& ztw}V?Y{aVj(7u~VV@ym;^L_2-Bk#5!HtJO_1PCPs@spxgJw`UlOGK8(JOF>n>3TE* zP%Eu#l!t->!afVb?|*Q&BVU!zAW=K6mL*rIE*1=4?v~#3 zdPQ5xUUe?y|LEm$CLG|v(-EIpV$Dl%D8m_}ea+ax4qf_}yC2$yz-!r{uG+>lZ`IU74{M!Qs(ZU;;GDB9# zYGpR@_f;_#bdjz$=>#oZ5`B35l@%?17r`INX}vpzSLrOq#l&B=fdh6?z@s~lY?(fB zGlM*?KG|c>tj1V<+nZ-a&ITlzb}VHj&RX1oye{cSo*4xoY-`=adS#Y#)}5r*Z#z_V(zeYjog@tB;W3L=|uiGbju)|1er5CFNn|Nnu zj2p*2Ea?N-)TGVNj~IGsJLI2&^`GeBQUtMYYR{$5q|3#(b7uqY)W zX3>{^i7!%%J2!4>M=q|STJEQIUU_Y;{3-sXVMK~9t3!>Uh|vLq*w?MJX0+E|Ae76v z$t!>1-T%DIKQ5@ahYtzT2-JI8J`zF<%hl9wgV@>*3tGN^q?jhoZ84ghIQI6!7b^=8 zVay9vh{!}2l67AxnA?=8*(WStC{NS{qkP%B4=A6UtRGiMmVJHQmC6o}eF*5Z^6t%} z#MoFqSL|g7KGK`J(yO&c7u{^@g?=k$dwiYo&{N)|fjjyRGfF>hb@{k2Nd}M-HPUJR zF;VDz_qY4Gd4<#vR`M^?op6n_p8i}l|{Fs>Gn|qNs zqKEa{VUpM>UkoBTHW;zuSgh*iZGb^87(UGx>8KNd=u8D&_(DZPV=`Z$TXVNTnUT28 zY|=W@-LA2z>Hfay@)b8V2}V+AvSaZ80+i-(267pkfROY{b-Vj%_wL$bkPz;=>UoBP zKWn$RyJdKSuoV#W9;de31QWfa*7b-n^)L`ZwNz2#65YCgd4uu)|Kd zgN5p?x`HO-SH`9V=QZdMHQqC(pz2g&bH9mc&C%`wsO=OFx?^W*VQ|1_=E{RplUFK8+WJB9k<~Z)G`VJVQi$xG;yzTOUYRwFUD9Sw=|Y zPnjW=Hc74rNkY4J3w>2FF>~RcKZB+(W}OTQwJd}g)$FO*R61>sra55bKUW>c=S00C1_eGl12N$dXN zjl9Cs!P}s1uLU?4Tv1-*osFrYiTEz9q)W~2=e$4u;E+H}RyHDBG5zi008lvTHua>u ze=BzQF9)LtA>5Arrbjhw)da~O)#GJb7HacsF;}uHi??G3BU4Xyr)(act8vZmGT877 zE44)y8`XbMzELw zV=!+*>2YFi>S~iTa+8eg;c-0u&Io$1A#d{}U}pBD=0XgKJkkjHMPfD>bXYOsckEiY zXB3~Oe&dF1XiWi^M+b>Ky{AHncQ%Ym!+U1FtKr~*jCxxMd)=IA@b0*q5O!OU_xrzA z*8ImlJdiZl_?+R0^Felf*e3eKE_5ius3Nr!e;2oGd?61zQO4ZbhiurbE-QJ9e5uRc z8uaa)Fa7!WcdPOounD@yhVhu%zk13Pz5a5NRxecJYtys%6|41g3lw7sL5qk&A`G`t zowzjtvIHGyYgc@FS;ZO2HlqDiDJ{0$zTi0$or*W+D-E&)6JWUbV4f1ntf*k}q{(Qm zT>^W{&ehE(T_ma-&gh+XphJKfGJ>SFA(9F#-<+MZD{ zt5$_=nn0nUb>4&%@*H1Y88$7Z%r|&9Z$KcX0pj+aAku!}#)a}%?#r%;7eJxoR&3RA z$k^8Ey``y7xgb=Zzi>`qz|?NUCUHEcZo6_(r>3noz~scz==?Z6DIm9-~R@>W>sx>;H`a_gdFqbsPkfo0@v`f z3`>em!CBil>%VI}TZ0pdQ63gKDgs;W0a_$3Lb}m!MZaG%2E8BjcO)*?`7-eH_aA*o z5izw!!8`9$Fnw9=BmRXu__@iVQa2=}q_{_Xz=7kRvcCgy7=Bh#@W8B_IbQ4iu{=|J z0Wo~!7UNS7dz1P_^w`u4sL|_QzPfC3?0!oJ);y_25`Dq7wG|uI5jg`2qpRl$A1{Xi zXdVWr`Y&|6U)6Ih@iDkpJM6I*daZF`iFd&?*8b$@C~Cj<oZjvK-Z 
z7BGPxf2>>#-Lr60)P=F6qREvw(~)2HqMcPRXw0<9^wZ-0`o)+CO`N?$8R3PPjB?b2 zq7H%_9k975pKV?}aME0;MkbiqQ^3prO6?qNiD%P#bb4nF<6&vSvn8sk8T4IKv{~9@ z)C|(}Vdy0np`;#}jdyi|tDvY%Zva)!utll=MjuhOpgr{1Vcv7V_zv27-eWH4oMJRU`ifv2_d5RU zv|s=H2epedZ!bXAZTq&9S1*6{Js8Z@M%G^bikLX)lU>XIXn*QRMcnycUgw{$KrvJD z_wxNZbIQE4GJ9~~JI>;`{qF7pMfl5%RR>K4jVmrf19xE89~s~sY7V#^|9C;i^;N{&d_Hlo^vV`Pz)5e4i2W;p zkc=U^`zs_g=LK(7|F&T5ktm*=Wh&lk>?6=@viUww5lTY*om7Mqsv_BM+~)K<9H7-a zir@Itmwi>~Few^=d>ad9RNv^yyzG zJ}Dy3i#23djYB@Pg{m${gm5h|CB$~?>i?dw=x4rugLeK1e&fecPl^Ipw=4vdD+&!P zrLx_Q4;;#vkA6Evm-{tD>9*4iJ@zJS@T8R|@gYaKnAi1RLb!AICQ@f|Z|Y__ls)al z@!D)WFnb3SZ61Eb^P9`kH=VCS?w4fBm@F^_Xo1Rt4jeD~K|Lat#{Sr;M^H`=xVr zLbRH=uB;pB0?*g(^lTpA>0!24@A@r;`sgk`q`BtpR(j79Mh#uRF>Z&DmVB$vq{rQ2 z`snIwHObzAL^}t3F-Vydn`rI3TFo+ub4C@bZa+gB<+thu zCK5-a-tI-j&|WQVMJ@_s)N?QVU;S7fipgR1oO^tR2& z+=OGvNQP@xbAO-w+&$Gk&!nUz{+exN5igA>%? z1-7VAsY_ zRQQ+0Y#(I5j~29u+kn6{%H4P3Z7Ocms?W(+xfVYrspPnao)b3MyVQlypRaTj2yPZ| zEdMRQn7X}wQZTJOqV|B{o36wQ`VL z58_3R5s=l4Cw^ZqRc^0{*F6* z6;c&t&s6&q#4&~>4OdQA*2yEDq0_*yB`aH=KY9woaEzobh)UIc)3YohHkhiSBkf+7 z|7|7hBR>2Gj0{jcIZ)1+du#RAij~|(I6ckjhunSeP{vs`{V1uf;_4WS_EwFn5Dxu% z$fdfPWWgN*`Alkdf&0SKgYP6au53Jb>Fi8#i8J}(_vD*|cRyb8ef9kQHHt{jULWaq z9}MrmXluG;sz`nH&W&3R6hS?mOWU*Yd_$G6L1Q@Kdk!*+0vPx!a*p!XeMusdUuA%+< z-uQ71YHTcWt$n}@mDZ}SRo^MsM{$WqlHscN_!i?69^NmE8VTA7MhR94PPF$$@&Ei2 z9e(aqcchaKOU?b;6^SSG#f%iVkid6bciTyXv2e&{9&}^#wHvm_23oTVT9>Z`Oz0zR@vb*MYh$0b& zTE``e?0-*N%STpJSD>rMbZa0(Je5XIaN=%z7KrF`Jf&&m?0WDO_|c}t?wqu(?RA9A51k3TWt&9{({_V9P&=o@}7IWDpOWNoAxLFd&c6vgX>$uvK9 zQsIGP=G{=#`BW%GkG!QubY=qkjcTh(+o$k~LgqHq!bHIP2 zzntT((4TH!=#~1N{7emAT70;Otgv&{_xurubP0E^QaTuHaf&(2wVSMWT2O1PDAS*R z-B9$~rTvGvi6((d?RqGN{Z9JM5mEEup{^X49lF&V+D$Gc3VN8vA(QIe`|2Wi{V-I2 zdr;H)mXt7~UVD_ldlKL_r^8?wfOrHKrmfugU)DUdq3Y(;{EkyIqUEO(kuv$wnN z0E`!G^!s15=}gzAwu#~sPOd-0|E&`L!QA}IzkF2~fkDU;_FED8-=5?5tM?zmb?!{K z@E(S}{Ff{IPSF1F_y2cO|FHC>?YT2QPnPQH0=%kSHt*+H;~7j+02#wxkdTVhOGre zjYJ5oO9-9Fnlr&6I83*S(6*~HIp~OF=J^VynIXm9Q0oOIw8Ea|2VmeE3WUein32My zsl_pq{||P*Epo7!CeK@?Fx2-o&ihYS){vjVZQf&f{zupTO#=Vq#heh&H;%BGKy;gz{#v}@Rjo0m>p#ob^!~SIKTZn4czGdn1R)|1%YM~p`Cc7_A z9<-~o>r2I^hmfa7|LL=mQ}INLHAols<*}iXwbtmwE5!)bA{rdgN-NO9NRw7py}O2Te%{c68^^zS^c6=<5*3Or27klQ zig_s_3SBuS7jR?mW@40o($1Gl1}0)j)_7!g#nSwnWn4#8+CeH%@9VgNeoiio zmmpN~O|5ELVvtR!+0nTslj7HV8N^R1=)E$!H}2gYUm%{&DRdfz94I!57-kqA^UAMV zD+u$ASVZKq2)ip`9j|`*a8yP)XRhFXZu$a^t;HU5;4@L3a;~EVLlf&^n?So)0iz$> zb{j(Qu9Y!#EO}i$Wzv#8Q9WXyTh+nWz_!(^rA=^Xed?Yey1WsIb4a`kj><>*pWb3^ z;kiAjvXs4GPgdGDEgIN3oM8M7T{!fp;v$BIdHU{?Eccjxilf18*^IPA855HiHuz5# z!Hvjwj_+Q(BIIS~!vGKXu&pF#G0U8HEB-dDt>W*VAi7y?>!;V~Mzw+*8BK9+mvtEn z_1horjucj_8z!}N%5Ec8wRvj*e9OEM34I;D<-@2xOI+i;%N1;f&}Ui6mVNtaZHPJH zojS@a3HD~L9@o7~Hr%*!{~N1s012JzK_&yrxquu`=aQD1wuBZcob#Q7u<1~io&8At zn=Lf-lW#1;@1T9d-2HW`NiTDP`qQ0+HCY}D44s$c0PH$X_k0U(%PYET^KfC6Bn;L+UR6c;!`RjQZM%st=89M) z3`_`lxbtQD#T&kIaJpTsr)A>8=_IAkRz}*KzHMxL9;aKRxN;CsG(jJL=^F0mb_C6jG6kqeY^`7kF@S$?xkCgbGjk!Q|l+2iZ zeruVPdX0b`>LzBlS{tgy@ls}`|8N1={&#F!pBpX^I{WRLC3y`Cxvz_|aUKV)R#FVA z#l0_FdyAfJ^4;nmUP$uXlyW;g@j_~ZIQ~8`vDYNBcY?y^9Yi*&YOUutX4vPh3K!13 zzhj2Vr`l8)&>P?_mGKrlZg>gzjS@T=%&;h`I=3&I#(dB#PS^iw@h?jyRMlu|&9Jvj zqXJZx@LO@Jf`u>J;G>G7p~_M-LM5zDkf-T`)RmO4O_)k?cZRwYrI2I?OC`Jhb!_=_ zGKA_~MwgDZ@W7YJm+PJJS}%pc`|nXJVHKgC-;-u8(78!YzPUl(y%+mYq^YkGr|`-` zWIQG}%x)Z?M|e~lNKE)$pUUR}+Wbc!{9J3>EdS>Gj2%0@2%*^X#1`qi`(r|2bAw|&>ph-4=MT-m*$26g(mO131f*V%pfnq^+5QMbS~vfc_>kcJZRerqcN z%CoO!9o)jZ;z3!pah#g*gFO#q!H@1vUJi5-5zw>06KBPN`)RQr6hTP#<^rQ#*+J!Q z5Ae^+1+ASrT8C=UsXyj-XgP??i0I)}br;`HQxh#B$3>3T8`n^7lV$a&RNs)PLPS}d zWG%% z!J3nmfM)JW;c{*V11tDA&NcU4eZ~)wZ$niB0dGf9++Up>4>}2WD4EPq330)mRN6T` 
z*RB}li}Q_>-t)148Ldru;(I$l=yvpFYdqj2yf7PYMMDEl$rN5*J!N6089R<9qFj2! z`)$Z4{nifA{}NrK=!ZGrHM_h8mYBXwTcGTOO|)a=P236B%J$1 zEcdlElI}#Mlwx8o5Y|O@iC!3dk0hPBgFzkb_`U1BYobO^qU|6Yn>1uk|6&2UELKW! zXz#l>Z<;;>{o4_h{*b8LHbdH~F5}0fA5D{tLG}#M{sppU{(G)4gQ0yTl2v~~>A7wFt&K_NXVJp)ue@fqJsasg+F>uxi$(g{u?6nDcTWK$*YR?@beT0`!E9 znIrivn5@hrtgJ$r!D~bI_S4v_$u~1;QKR0N;qHSwt(HODY9h>e+?##0qL$4(uW@%RoEl^@L}&&7Ep5 zJ=IiXO=ujRvB~~HSERi0wflsLHcfCjC(kZ8)U~l*$Ea@bMwx0F?U`=61Kp%N>9}|8 zdNf$xDvdjFF34%BzPmirD4}&DpO^nsPO|WNE;QySJAjPv8+sC;38PYqxK`3z)6AF| z!I!nEZvWUEC=H@a9NwteD;`L|dQIEy^LI!MP&(Q?x4aJl4YAWE(njkzXIYDgnnAUP zgAdPLAGVATU`0+Igx!lZm!~NvQW4BL& z&PD(9U}QCDiE`zVdwet{>K#e73C@(^T6}IZom^uT{Q1e@&0I#*8LFh6*K}LRM}GbgKu=0mBNqx(U@}n5ZO)=%D5)%1flk=a zZ6rI&Sm5Jy8P8?0gY2jmq@_GkyFCXCRyHIT{m1^pEk8PQ%N&@cYE=W`bf6S)ocs5` zqyI}iivqIy?K(x&873R4EVf9-{Je&GE?T7c~c@?TS8xE{z_vT%%xmIBgC{f3n7oORfihdy7Rzr-ytH=OrDrP`@L|G zClubFCDjge!=lS*>8E6Ji7GEOu&oxL35U~4A$li$+pQsVXY++oeiunpH-8(``^rdx z*z;H~6)iFtZRojEb($M;&^sM#ZO@*FCnwzb9iM;G?CIh|8!1sNGe$*-8aO!{YZtF^ zp{;DU0&(Gz;z1#J0vsDnl^e&gX`6!$FJ-)lmuG# zBGA6+yjQjmdLBfyO7B+NxNcvbk3>Ma9uJfH!Q^UB3_Jr9KEq-uF~2!vbdCuzl!A3xZwIMy_GXB4~RRJi|V(m_?ALbzOwv9VE~tD=+&UECB%NZC_w7!c%k}!7rxgR`|kv6Vo=rHu4^cqqLCO&ec!#o(3B?qwy>nM!fQW z&}r45Gl9u8!rTMf8(-xC<5+XY_b>OqAreqTl;4D|fxWG^$g* zusV*n5Lv&T@vv3gyt^!;V<5!X_a)<~3_} z7JG~~R~j~@A`1&|;XpJ^oL~M1A)PwGmQP$D%g%k2yF!n?e{HUQyY|8oGwZzvcEh`E z?Uc_|BnAOgVfzwU)+aE(+(!N;Y}yfkfw^IU&z-lmR1FgY&Wfoky=_*#o_FS}aMn8xT?0!bKO5Au<_pYG7NG4-XImVWYuHcyHebH0-r zEf6Cm&D9!q_EYz~(w!mR-Mw!>bX~|C9@}U@fdRv@K9nsa(1OXhIV6Z4h-Up&A+0RL zbh_-k8cv%%5^05NCe%D&3*ry0AAIEIISh{;9ToHr824DzS5ST^8~{bp`unQA8n)EO0 zWmqvn&GjXBMznaQ@krH{Zn2l?V~Hi1na+@4uU0OHbrlp1pyzH-* zdEynX9WKP)Vk;f?XfBPmve@IT=x3@Sxqd9UOWsn-vZ4nu&vTnlFkjlnd*>2-PDbkq z6Oy;r%uLutPSoio?F>olnl)7-mTR()a}ycLLr=J>;V z>q}Wl<04DY-8J1BpQYxklQf%L5($rMlhoS&Ts=d<#~w#}MaSVcu(ew~@n2&Ce#^44 zQTukD%)3tTdLC@Y)K(o!xveg1?hdbN+a0VJz4l;X6Lhk$*t1ahaE>-JrIUa)c>1JL zXyVSmqbw2_msj!+thVj~7!%k5{4KBL+>b<|Tan(H5Sd1iZxr%_40)dJGn9`o-QAz) zLaSbpIr!WKO0TZ@j%qp|sJ!jWkNQ%pH>|hKYv&=av}HS> zC$_H2znp@i!fU!nnawq9BOUBgSHJgMr`9*JXZ(q=hvq4BP?5Xmi@BX*vu@4v*OQfB zOV+oM`PF%D1Zkf41yNps0QCpk zph~g8q7l++fK;uY69AeFX3+cwqxkq6y7VUvunTB_&Is`;DdJ};mB2myBmv(3BmwMz z<$@F$%v%B`X^w!YRegdk)MYVEBRS*TbV@nJHK4<%z;1Q!;r~T!bi;-sbQxqm+hmmV zI^&Rb?Nk$XZPnXeg>5BGY+gxRI@mRoS0|btjz#Otcdu5uW!u@DZZcOzv)b8?z)W{qaG1juc)&p`ZT&2?z#{s- zR?Awn7O+>Zhg~OmmvuAdEYgKkBPohD34*#UB<;*2!ixmT&VWd?X7X%_TxjJh*Lz3( zdSW2XxDlLfb_PNuJXuGw&6@7n#%=^WqpT>hk+j^{2GN#38OG)@epWw-!TPR?)+WCs z=AKTb5h;!})})_2i!-hgA7^v<3&SA`_|M^i=5C}f9m6$66qxb-=T15a?uCL&BPX;x z@3J-b47D+m={IZ(r$o_N-l@?oH}=&ATJY1ff7pXC?FQ+0V34@PBKY;Wm6fh=+>4TR zjq0l7@xKpZye{COMo-v}+Ocj5#ws;(DYC|0-1+MQ1^Rcc4J6qKFIHQPjQ>!xHDyPb zu~hW2EKM`!%y}ee9&R5Db&kvZ9+)hMi`p%VgjIYH@dDI4_WQA)wY^BiciQKTJJ-by zQb?G#DBJGb3jfxwM7(joR1qa!0L=<9>SHdTB{0FNASYUJC<4g2hZOm z+mLy(L2&LHlB@%1Xf=M@BL=S*By_hV@l5<2mwqIwX8V2Vb^r3g6MWfYW}2da)P99C zf{?w`S6SnLH1 zlN6JBI~`HVC%=o$};ZQ?y`xl#%fo5dGP zTr5c^AJVdZh_ErM2c?$q4>}~&dGlzuo)?)53HSAfWr$S_EXaIie8|QnlW&F=x06{)zBxHN9bM3o;9i8?Fn+_}jmpeM)?K+RV*6(F= zCm&@pF(q6fuhF2vGf#gK$NOt@E=Y(k(cfQu9E8_bcl9aO4nbn#5s z6nf^7eP~=2<-W5mP9w)f&fTK|2)-c|jm%j%+%Fxy?OQ`zExvjxeW=pdmZfQwqEVGt zF8$G&iD9XDC0~z~ckpB#Rc*X9qtzNUjpx8Ma&0>QMhKwGIYhm#+jKQ8pF?VWUOo2# zh(>VNGUcNX@ZxdcdJ*@%Rp6N7kmk>Z01Am#YvugFNJl??b&2f0nD@|lrp>df0PffK z@T=IgcCruhGxEzexhZIWbidC1(=3+#V8mCX=hT;w7gMC>H=FAl3}Zi&d_Ux(r)5Py z1yw&(t2H_LHFe}L=gvOq(hOv-x)q|};0Gx$qt)fVR#=g-%5UgoDU$9ps@&>AG*`m< zjVrZQtkAyvv_){l0 z$S26Ed<;>oUuZ-`&|Vu#wkTj?je+dH4s0wqbhK4H3!5}#8HGb0a-TaWZF$5&U*t3p zhMmm|Rw9XsM{_7(d)`+*UuB@&xJep0oeJ27qB5FrPS?`2MHv<+Jytr+zE-RriUz-u 
zZQO%&A+p*((>l)*?gPr(`cLNVVkC==buVI`4Fp4nYeh2?|DCYAb`X+TKr13 z*`Y5%-d z#k1fP`q9<;()2@W&+D5dZoD ztB{M(^QtPclJ`dd^%g5EZHZl`zSbtfYi^y@MrlIZoY#akD zdN0BL<*TPRGVQPH?KGxRqx0H!1hMG=~-irN*kFN=xkEtjl)e!LXelE;*kcAg(zp6)0|GsGa2s(PY_4|jv!<_iV{pNeK6hpBS!|EZ3ujL=?0 zKq#^IaI7+=9vIkKwg#4!vm$ zTmV}_tVc5Q#YENyA#DlYh{-a}5{4C!!V2fP5OJ3JCZIyeoWNZ(M$e^32FFWcF?Ekt zKa13N2L)RI(CX5`MQyWa7%)CV?43XmR53?6C(bf`zgeAuHmtDtn&k4;Qy@qLxw5O5 z6v1l0R?{56QGOQK)J^LY^nV~O{Rw2qkjrA}pPZikZd|kPX`-WLjwRO&6 zscx+ySquQ-I#X@vT734wf+!j`6}x)-3D)OrE*yT@i~2vKNq}FSY|y(LZX}&Xi(nPWFHYBD zbSd~x3gfhetnro3ZW)(aGjHia=aQVz9sLh!joP|P@Qv3g(}(VB0a=IA4Z8vHACCn3 z&wtfj$aigL?EYlAr>-$2m&DljJugFfpKf27I^DNRF2qx~PfXBaPV3aYT3?RKZ$TTR z@QQ-gGR_44HQBS48OEzrjg#odT0*J%Y~s$+UtpZU4fE2B;n|AzYpr+ydiZ)*~VuVuXR zxy39|OD~fj{pEVitB9$(LFKnepD$ul^Ff%BO^yM7!tL*2v{c#?(XZ;PxM|l@jug`l zg@Y;Nsu|--Q5jKhAg)EPPi8|7GE0SjJU4;xr)&|t195q11YZa1;L>;H2*5t5Yu-sd zi^;Y2!FHrvZBp()o^h(le~!Qo)-vf*g31e|_<# zn@vdTd@5+344NG|`t+;Ou*!*a*51d17Z`s*qsY<#hG~xXY3Ww*Rc@Z+v@=U$PNR61 z#&0z*!CZ3Tzj)6p?o@HDGq>?a| zz_AxX|41f1XJGS!gPi1rVUc>}sy#s}5Te`G8C6Y)BBwrX`-}6*sJ$T=rh~|8A|kiW z5(YtHg-S@p!%vNc^*@#=J_oa338&oOG2UoCzfq$xL!P;)uj62n&96tz*}-Ikn>~Wx zA>|*6rIvqscu|Ymj!a*P7MJ%8-s1#3AZ+sS(>u15qaQ9@?aUIHUbNT?=_Cs?NU&X+ zuo8*OpLv_K+1f7JSl>TR-F~V6Kj3$zTAGg5rSVu#n$5|>BGHFsEceY;aJ3tzX8Xx~ z*mpUJNh+KeN)g!+2sPSnPrf}5HWZT(&fjd|1D zwFG>{JLLP~$G#iA{=j659wi+zuDQUDqZ6GKlHkQbEByP_DG589j_yaZc~IQ)Yd75T zQ0a7kGoQx}%&=Z(v03t((1>sX7)M(Fru_GH?!-ik$GK3qNiB_!V+*9~hwogxCrfn% zOT(_XOZwH?0nYb*33oqG*lBdWV92IX%5YT)gY}uOk{&u}&>>;2kinem*j5I`NJYJQ zG*awkDTkF`8|ry||-OA}YdUEFDs5MQI{FB6%mtN`G@^Lma z(Jg7Mey5*e1$DhmgFLg$ZJu$+uZjfo!?UQY#1=wFZBy7ijJB|>H?(*Jad=OA&M zwe(<{15OJ8fTOUaI=GIe0$43vzPK+fEjy%KK5b%0?JzxHw-b>vL{4h3RKwLl56e+n z+so7Y;kjaaQS1Nk^<8mIFHyGwDk=h^0!kM|MWiUb21IGnL3$Ap=^(um5d{PRL7LK~ z3!z8}y+{?L_mw9rgqW|CvLr&c=)=#;#L3gJhvz?VlVuY?7#)!SI$ zxct*A2KV$q5g2~Z9vP5e=wB0g(7&H?=v8_-Ub={r_a*6hI_Gz2aGd4r>hG^Jjy)9q zy|O0fptIP~6lS68rxfQ=ac`enb@iMuzfY}}W05?VxxUGA0e8fJ!3Q0dzaOaRCS)4i zM)2`<_85*hMr$(TxC zaNVeL$<2E{o~zeioIXsOzkJ~E+TX{82nkcFB9;>6iTo?vl%2?}a!Z8f2Gp3eW*;$d ze{;$1L=MdZqI_lYY=P&?Y~NJahfh56aQ7bHyzPm7dJ4O^i&ukd0`&-fhZuU@QzN9G zletuP8DXy&f;t^)Jnj^!h75gNbpCn~w+lVAJfg-OOU!Ie!8e1HA$1$_bu#&zqsdq1 zvqxO<`y+a0?vUe+{jJFZBO{S+?7t2WUeJ=%o8|m%YR8JV3HS$%yE7r9eL$gKN-w+G z&4#{J3Yq%~dCy3`iYh|byA$d#^q1CGXu~V6E2<+cZ~N>A=(Hf=K9HfUZ24+%isfO_ z?_&w&%+CN+R}bNYmUxRn!aw=k9ps##BGyeIJhyuF$Hnhq=Osmull2!_aLcvjj$1Zz;dOXScOn6-B zIRAI%%aogdKI^tWcR5{)lh}k^=Q%I&k08(O!D|XB7Hj9#nwG!=Q@i7+f%Xpdu&KcI z7rY4c%U2Q`FWNbs9?q@#y!E3WAnn^bcZMO)9PFCwe6s!(@t1ZnCBsxezn7j`y#^*l zZ_AkYln0CMyp0RK^p|UyJUV>Y44RE_*x6W^#mxqPf_{PJcxcGv_mdrP*(4dMZ0h)_ zW&?pj=B>BZav||{e%puY2TTwmb8pRqc*DxG)u)u66_7{$Q}p5) zJEx5ExcuXelg!WoI_KkBQl#$ULTW>R&0RvM2U+FI&Pn8BYMZT|W}uZ_=CaDeAz zP@ALg!BSCaebOg?5O)Z@65IAO4^SNc24MrgumN)dKA~m0o`Y}o^Vj{_FzF8!(jPxs z-c6v!j~X7y9A0M;<=o_Myxg-mcb@<)_4%Z9nN98E|PS_zIrALciMoIKn+`K~9%CUWrD%)AOIHEfh>`?HVhfqQ+Ki z(uI|WE@Jh?U=W`a+WGO2HRM`g*BGVOy4$gh?3rEu?C9+qJXmQKh$5{KUx*>SLDb6i z-7pq_k%2<4(&sS%_GKyOREMkdpOFw5)QB7~(U*|&wVequh@`6xbSH5d+p!tH~!Lwzd>f6 z)S#6+cz8@KK$6&m{pD4|PtITcjo}m6P*Zp@*U+NTJ#Ih#vx!HB2yd-9YnioopXY+< ze>Kgj0)@1Qi{~-`ak__f!tiZEg~(Un@0=aH+AmTG#qD+|h@9rCr+I8QoNNsogfp}M z+j5ay6=fw|vt;tZ(#%w8*+3Qb(4`nblfChlNBJ*0Ko;-g_M(-)$Iz{nB}^Iez#r^n zrSK3n1l8(dPw&>(!F!$>1+IU~;B(zvL}ZCv#QV%jo^K6Ai3 zMs;@S`@*BQOnlPmToFE0P5^4XtRRSHAw5Vw&=tsWMO|d>7LEG{Y$X!G*5~L{cUGB5 z);>+tSyI~D>3O+7L+KX_+#*@I?h?SeI4K+S8wiM0zxCpj-K5=7^C~8g6a(%<2cCJ_ z3UKB_D)a{V_z{j^sNd?iy zwnM$yi7guQEp#U zzL|S$03gC86y>v%K@tb(SKZf>&asT3MGjdSVs%CXC~9f>ls07~BNDZ7P&OgXTf_xT zelU7(tYp=rCO+5ip|gN{Bt?S9_GH+BsyZFjtF6)th%0ipKUuy2U8KU_2 
z30Va0nDfF?i1-<{rnly$@=}hwsPLZbs&C?9*=XexpWdY}^2Uk*n=X-MhAZ1BaRo8t zd+2RMvPtY0x1mnB90$_Hj_lcHR#ks>>ERt>jVC{QPG;Y|_iJy7$}G6W9l66#nI`{H z?t|}jnjM819WEcP^rd+M*M_Lbd2G}9ki)0QJH`GB|7l53POl3Y_no!*NSo5H#GM8M!u;N;1T>jlY@b`28V*$wK7>sy}Vu@&>7;xno>Q3%}(s8 zRqhIQ-6ClCN{Q9?50*%MiVfxgXW7j9;R{JLV0}X9Q0lBbTo;%sf(e}tVrMtkK`S0N zQeF8M(N#x~WcegmZ@SW&Ju9!YVO;&53UFU#X{gahc9eRyW$36>>(TqlYMlr1Nab(c zdcI&(9VQ&1uU1YFT|Z78yr(zuQyEAEd%rT9gVk|`ukvE0v=u5-2TI3RR5d#i&^yGG8S*SsT8LGIEU<&rd&7_UQA1h3dTOkDRT{vkl-;lbtfn9=#W-Ed?)6(m4&s6u~P zuhFOI`lz$dBjMjQ|BB6Kvo7G;Jy{K*+l~GmTM27Y0*$Kh4bzzj)XE9( zSJ+k>vfg!S%V@p+iT&fskO#Hpi+`x?!d%Fpk~pQ)hW1|{tx`Y7Ozm@$s{?niudV*& z+U&XFLTTUs~^ljL4ig@#R@ z<=n_|Ag}3X`E(}!8WDSyK7bergNLZ8zxKmjFFx&%Jj9gT15}vT+%L|=-Tw;Iy5LS8 z&)zj%QIN{Dcxbe`3+xckVl-1|R4in04k90J)~bbTE$57A9v(8VTddXa;9qzo&MX=U z5EfAj4ZB-6+L^#3PoYkn)X$f=-=o#H1GpVl@&yb;wdw=~*Kg`K!{!} z-zk|5i+24brOntIG}I$=^WBEa`VDckM-%dbq!dReaM1uTHu08|ZG(RJ1H(>2yEYIE z6CcK?7{mK^{}nkT1cI#HOqY2^n((%X=>Lk6#^z~*o^!;Nx%Biy6j61z-AFjCJ`9TZ z57P`2E6I#kL5OsTfJAn*)1aYywP}J<$S1>eoVTH_uXxK4gATBs7+Y={u(1rzO}tP? zEt9Bzrqah&$xCG>GaUWAYPtwaSCZi4P4h zfa+MAO+CXbX1ZnZ22tY`k`w-^$?(6Eh7WB?Y!2VE{IXXXN@+3iQ#X{#3oajY9{d@J z?u+Pon*2?+eus!^g~gLE5xcIGih~=3NAJ#Sg5I~cy(UT|*CJ&>b$y>WaCz}miYH6 zJ9xI0_M9H(HPVrylXoLs7kdpDFt7E26s+M1`HEeNg`0-MvlcUhv{mCG;-PzV)B=8Q z9G5@HpZ?p^A9XA!$kSntUI&@}AbzaWkzUlhUjt)Z!wVfl#LFAjMn7Lt6?Fx#d_?16 zP2K5-G|#iN7q*{E|H8EU!4EtugVz-{65m01x;>Ya%~4aaYkG%EgbRhkS$QxbYu7os zMx?x|E9S7%2j^MNUJVV7oWD3u3KOc#>{xlBCcZ9KcW|t>*~K(r_BhmeB6*FxCw8>D zjeIb06ZJ!CXmp{4Fqsg{=AFEmcK?*O)Gi}Z~m zgf;6H8c_d>bFIndDtMg}!d`xTwFwIc%-1U{M8%Yan>6Vr5T#_+H=W&yrIt%B0F<@Y zvK6)3(oG`QU(ds0YIxEulzO=0H4u`LcbB?KO`&=po~W`dqO(OWslC<-M1mv_((ZM_Zpy5+w@GIdo4$n0?acts{7XYmU09eK9=Xa0Bs}Do&~x=%KzyGqvz$A zb2x+x~D|E4m6?ewj`H_@F!=xfJ$FOBwz6z z>>1a~3Swvai#L+~MWS6G8JI-{Op;ywe9nU6j4E-9PtBi*uv-$j2VLnzzcNcd*L1G<5K24^&+pkJ|_w zeHT;IDak!U=?e~hX1MRVVDy5xWJ1PC@-ooGrYS8f4HPXoUw%@3f;Fh>AxlY9vY+D5 z^703IBD{0|`Z0lu&|L&Xh(@oIQ&Zt(?hE1U3jM2S&pk(_5xb5k7O8dV^iC8fR*B@ ztjSwcTsqHC|Iu$is*exCBkks3``FAbm78m3>c*Isa2xr&H3v5ZHJ;zdcCBpYZfHQx z3FU{wx1apP+b>+7HfyR$9qfcxAFL^*xqi-P7X6my)cNL{aJPkV$}s8-IF@hWs5$!oBjMth~SP?r61VAElM9u~Hu;O?J^@B@4~@|N>l2uo5d)Hz59cF#NAp&QY}C^? zKR;oT3_DUQ!W0)By+|b+y}r#bONw=&dF<~ zSTl72%`aaDj-&7R?CJpJVNl8^))zJgv$9xlm6S7HZR(PX16OqA=mI$fhwv?n=8sRY zNd~Q)>)hpj=}TVwJqekH!49W1;alwf;HBH>w8=k#1sh1cO!9<7Typr>lsS#{s^;2B z?e#8ap*XPZh?fu5?#S4M2h$L1Fb+3jw5DkyuCZh#%E>wSKarcQtTs)%ElP-o zQ~iExkD)9VMF(B%kT;u6g+R@P5YQ7fPY;jR(7LbRdCIF>5LE#V3hVqIlcqvcR&NE? 
zU-LTNF+bW{>Lwb^;!q!PmuTo|^*3d_8?xla{}friiHz7}h2orv%>5RyavN!J?*FUR zx#{h^yLGbs`uiZd zUN()#&HzNf^XqIAXe2YGZj!dOJVCi;m-f0!cCI%+8sLw{xeD%WwkdbOrdIpdQ^+@3 zZ0*v9k`ZG&Y2Oc`7g4{6OThN_K$=KU!ThH6-`XVG5zdRBPZ(qrpH_Iy&*oz z2!bzPTWd*sd&dy4k>fRAE{~o7rsnWP;KMcrj?~XY6X!w_3wq-{Or?sX;|jRZ7JiOT z-)tno9DRC8;w|1Dtol`MGqY<=srwwEfA_R$As@xdAKrN2v@p`^Q-cCm6;Vhik_<59j#LV3$$eZNqRFKiDj zdI6nzG3{}+lOhDd?R<-{bjS@evqlF!p)zs=+lx#8tS?`sSU^Y;{KzHN{-A>8Q(wv$44``01Q zt-Pgk!?6@Msghd)_I}@`qrkR%{a4tK8t|i>9QAD@qI)(a_wo@1 z7NR60$>Afb*ux!GYJnx2CBKM}RPVRiP=j=}K5DXEol_0EFCmA-^_?NMF!wnmVH()Qurb8!)7qJgADXwHxWxn}r>O8#vX_{<@Id7&yq z${B5N$<~rZG>`n#xzGuGkOed+v1=*5>t}fKS|!%WNm?$Fgc+=G?B&r(us^sr*jQeU z_juW4Wu^wU&E`6bHv!{t0J=b`*vnG1#_%Qe1n{k)YBR} z=S6Cn@LJ~xzx@Zu=}3bq^Xh7Ua;7^ew@GGZi&z)2=RKv+b>(;F$p@9wQZ4p<)f=x)8nRE|B+na#tfo?MBqs94bzh7LKai3wyeRU3DzhBt$@|awlGG2zn~|j+G|gB zQ&Z0eHn&tJF0V?ZI#_%q&OGV4{VU8n_sX&KYnPk!JdGq`{_ol2Z)i4`2tSBq0BG5w zylO*4B&uyz%SSG#*_>yd1_qugaZ%E+IHGwvq%1vx4opESw{8a=5bcIiLWQ!qH6~}VMVax+8 z=?uB4(!-C0Z+fV)ju9cm@jU5~XN%e==(VA(A$3TF4-vP&Y`!j$^RGQ=ljYsFCox!EJ}KL9N=f1@ z<99o5Wugwt=o0D++=k+w$30d#%0`0JPU?{c>z(v*F z>%>|9#U>>M=~cYZ&5v#m;X{KvRutDwS!{2c#0c^_-;34)gD2+i2OJMi%!0-FH4zO5e{heLdr_FISLa*ITH6e!TuPlO-0}&oBk- z)|Jgl<@!yl)Kc<-xXjUHQ>xs0zc=4O14ZB$mj116LB_oM!8G^y<5h!`Pa;C=6JIue z5JhthZ9DG0?o6WcZrfo$%F##helKQP1oAn9+CE(G_qk$rPb7y`%4Zl4Z9Gf~2pAqJ zLB?9`A#v@#x&&(J6Q&t~ll75jWygN`Fp=wc73Lsa`AKJtg~t=4O<=8*7lS>CqM;;^ zIGo~JPew*Zxz>cR$gv%EDVq;G3ZwVvg|FTjigqfE`1ouIsG>v=jfFrRQ7Xf(R{Fv; z+~ba^V>(a66)WG8t1rY(LCQw_)`&BS!xufGvk5-luJexs>!<$Ld&6V;+hXpL%or+1 zZJc-G`6CkcnLB3Z1JN0WG|Omb2iGj6rk!A5(^YzIQy~|zJ(5bjORISwQ-Hqi6b&It zk$6(Ws+b2ZJg~Gnsbenzw&!lxJI!U%XOrsAKlL+HyTZ!lRQzt zr-yJO8%ukIWAGp&Htq8J3?qSJxs|qS<#-osWoMp&SHb{qF8i=R|NI&4-wP;KS1s$< zvg6O_z(^l0EQwI)nrJ!xu^Csz@qzM}!7p?P%!;hGcVmV+D#ktyu7Ki&m&_|uy?XD$ zYK|-Z>S~manSTcak#mtS}zt9N;MALZy`p7xS%L=T?V2vBK^4P ziL!CBp@^3O=EI+3s@QUsK(VGsdQF82oSo^{l~t$u-EeLR@c2^LMi? 
z?>^Y)%76L0!eWB48IflZe>RhdS_1S2%BWVl^NM_G&(Sh}wuMWHeMaB!`(O4}?JSpf zw`JQ~mQSX-3nItoX~7TbagT zX`oTvXr4Sj`5gfmhZoMA!x0rkekD-uAAaSWk4gUY<@t*TDr;#c+z=$bny|;WcQQN{J2qR~HKxKR1J;oWlJc%g(D(`E^`Iw0P!1=-le zx`t18npp?F9&>NDfL%y^&am`a|MFd6!P}WMT)E7wQ>$Sin?4Qf{jnLWj$mu`I0&W@ z)zIAGTTvY$pX^@;04Ge4>3uk0)B0>zrqb?p{aNUs0Hhf?oDLfLtQ6d!i6cq*W$~pK zgAFs8<{bZ(H;|FC@&Ck5R0dM>ilQ z-&i(1oEO|7cJIQkNGAv9$LOX8h5>ugVuZ6i|DaKsTO0r4qw@q45b;Tn7cdSwY_+J_ zp1rp>wVD;i#k}MeiPq4<3fQ>lAN#J)WV)tRHo}WNo38W7-~f$ z;NldfQB$DO)k&=Eeo30@3lg8Y9T@i3Q5uSZ!8dq7A`3d&K^|mDJ9^@=?wG=X?mu-4 znEjQRg^nl(?b>+4Y!J15Ca!Mq&(nYPkKTSWadFm?5hE+bGG2Y4;H#WJSR+f$tLnIh z*5TX>8Kv~Z{&|qvkY)9zVst8d*WST`J8Cv--WUoRD6C}?$^32%9HQcRW+yjLnG31I z58xG*llwfn=}FZ$u#eTft!dlLT?@pCAcy&ne44iJHSrjS%~@lINYdLou8+2CoDzO$#1$A`iv-!j!a0hjcYVckUr`cAM9Q`|>u z-qsQ2HPdSC&{+kXxv&QB+K0P|l-C)ponswdrsUwFBV82xe0=;ZOllpLLZH<`^1EU` zcBT4-6E}{CgFqw8dR-@`72E6-#KUaguk_*hljLiM7|uVL+1DXtn*ymr+z~#9uN+!n z=&PIHg7l~O^r#E5au8Z+R_78h!%HT--d3YPwVD%m*62t}26eN&?~mJXCglM(&j%4G z%Cs0slH-Q5Mi+?j=eO-WHHW$7#C8^0WGB4q9IQEK zU2sbEJOEBETegyvmzKLlA;I%;Wz>#dCo0yW+q#})G_?ElSx}B8!Q8eRMuQbd1J9 zWtN_3Jajr2PZBNkVv}#U+2_&eXQz-&g^)hkuk~?MggF>IaSA3l>U){JO#x)rScHq@ zPkoCP24{6b_l}Xmx4mx6g9w*SNPlRpJ!b*4F66!b(1pdDlVNli6Anb!B{!i1SYyJNv#lauIkMnVYZ@sl{2e!zPyQ z=;Ua7Y8g$89`8ZTfx{7RevGkP!MJisjZohA(FaRQ$-iT# z+rw(uJ$^pvBA89Rbhn>$Sc6v}hvL~;{AaU`$R;Nr(XROuvw!?=D>o^D%EV~&RF@@g zRP3*|45|XO>MMAkhG(c7-9+gZpwQL(zW=T4g*RLE0ATd8D^OUQzg=jH&qT2RMg{7p z3+GsJ?wlS53ayZpAtAvWFj*sGUIN!>xiAf@IvpcWk(VdP^=kV)sZOOc(?Xn?gokiV58^0COL!Wt-xOY#Ns18+|bJtlpTd)(&OTNIlsYY ztXTiMSmOHFst%RE!Atx%$z-F=uqdMwn|d9at-=W9XvEMrz0=lWfBm8&PV;~AYR>1* zlVj+NK4d5e1W0;6lKRPvSl#%mf9ozR;(|=bLy_H|XB8LsVT4-`Fj$WFRM$Ym8roJ<;d8q_Tk5kbR)euYhvlHIG-Bin(LtjQ zzBY`TXjTaxm74fFR&np?%&_aX&lp)?)ZsKCPCNn%4|`7LDu-E+K=MFG&C=(?Tw?b= zpkFb9jE6!n`&l)UdJO9UQ|5@$5QqQU8m84ZtVsja+))5Cjz7Rg2E-Zqubo6B!^<(@ z=mz=e3}PjMRa7`F?D+WBh_KPrmfwkA%l^m7DK3+9%s6zZf zb8&+$-N_ERGaHB>phTJZA3Q%=N-*<)95++j@7Flmj>&-(z}KGke$b;CPl=feegEz^ z0#nA(YoFoxaREi|d->E{+24Uh#F{gAv6qN)cj$EL(ln}BCj>-+6%9z(^mK3j6^lbe z!X3Y-DOsGk(eAjZl38+d(Mf+s)IurR)MmzOK}E7YA6O`g1#u{tn|96TZm=~z&{Bz| zwwl@WzXLOwZKHzfH^lx}KM7H|{{FplBi?(`fZL#th>Z^EuDw47i3sqY=^wE>8#SYy zA?U*0>RFXZ6f33@&ca`f0KokE8fAoww8uwb3Q@c!i_?$pnnNf{+HihGV5HZtyesN* z|67Lr$-{fi3+sNQ%&gsL_|!!-{U-3tpGhTxk~Z@+MyWJeUa^~ddSZF!pxR9L z#aFgV0iNo_LOhb8YfK<#z|c)4R_Qpx<(ab0b$8>3G}R4nJYussi?L`{Y4~cqCm~R* z0QZal@+hN=c}!*v)h;^E_Lt4fBc5x5@epzvtF_1Z1CV*k8M#oHd$iz@ zPQg9m?%f$R`m+fLO$jrC%r*U&`DDAXy| zFo&iAq%=qld^&67!xwI+xH>=s+s!WT&aFQ;?Oy0)D4<(gm^2z4ObrlC^8F+LzYw(e zv|~1rIP}r)yRic%nK4>E*~Dj*Q2u!jv`Tsg2~Q;thBziJ%o@XDl0ApT zi1LbFU}HGw88Q4-Io$ct(d%iobAhYgG7%iZD^MRQRxtbuYZLyWpc4wI?D{Dg`&>E} z)&bb*|3%lvz#v4%d@W{>kzomkU;1eCD1A#j5rI~4<~lT0bN1I8qL?%3`W(5 zFC66T|E8Ng(vZQ)Ql5mHfRfF2ECGIB!;Z88>^XuJ09Y0s9_~P&RN3zp9I;` z+w#j+e-IZKQM1;Y_Q_qg1NOGeY8`MWfGQ_>_qdc~sw&#a6;(lYcUWU3cESo^^|tWx zipSl%@BMt?q%(*!%=lpuxhUIX_ZW8m2Wd8IHPzV!383>q1W%%VuW?sgFXNaOjnAB~ z2mpIS4wXve@aYXLUz<=E+0^ROf(2jtXTj}E$GuL(8i*F5iO>0W#C1lKB$QnT}K*mgx2o6A3wNf|didjNrm`0yQ zZme~B#@uN2fJHh;HhZd&Wj6UUAZ+}h(TN6bh#WPJ+Qkw_jFqG0Urt7bR@ssPJ;GD@ zYkK~%_$*0-R%U9+jlRIm>`4&J^{oM8i1g|C&-$)7yLYvzF(&UQ()$(Z_#bT}$;v=b zgF%GZjg2+R_v1GWMbNPd^>@-+a)in>GN-F2ZdlktY%cdRFUiv%3{QTQHy5~vEUD4# zjbX1JthqIEvPir~PYa2q9hFur+tMwU;g_*gZtNgFueFl=taqpUnkAsGrbcr_9(Gu4 zR!_3NMNRIUZdCRf2FMdN{W-AWwPb2EjTv|yU0_zjmaJs`E%Q>RF_3xZYLEp3D^=N4 z*3o~l=;QZbqS4Su7G@kZStIgS#MJ&hAmne&OG@8K6szgR@6Te;(VsC% z=k@>z%1y}6N0ErR&8|xn+p)Y9cKHDT&ymqR8-xSszUxdMA&TJoRV<~E0CVynOtRuR zW{{_w$QYxD>Yvr{8FFiFiIo>iviK0QPVrb%T@ATe+^{u{=H$5iH(KT5t_H6e+cGS&CEr9cgEkN|Z6Q7^x({EH9K^__E0G 
zN|7sIP&uUW=W!kXqE^e_%*D~h;mqefV|jPyE7lJ)7SAE49kpZhQ-7~}bCD3{+q$>f zDIFcvUwA$K_x2RT+<$(H;2xFRW;j6yhtBs{+arfPEwiTHZi4hRDqOKqpX6eTJk>ZPj@CbJHp`& znP#dH?@Pq(d9+P~N7v#|ro5gAmagp%Wp3e@tyaYTlGMK9^LLZ1WuLkH{T4IB9_;{L z&;^9RYe-r;mURj5+7pRspKicsp(GJStUn0`z*o%r^q#^XuR<<$dA$sa`(TXh0tM!Z zSvX)9gWAvr5bcidNLRr#FDH`>?+;-=2bdT3cswj>sJzu8)X6zk2b3wGzG1$|C$~Py z5zQCcbFg<%2>(P%l)-2BWNPw{FWxA*1hc4rlpVwqpJBiy$_?7E;0Od~rvI~~)aOsg z*S+3BilI6g8|CJ(JX9)3;>YI`X8YV4cg9ZjUfrl*#kYIbLNR9eg_*unBWU4?1R;`D zbtxKCjGrfr;-?@dxwov0P|x__R6is^6)<}39&tPe42H;Q`F&77BdIU@Io^)!WTk7_ z^_wsE8WzgR+^}=d9;;{mHau*~n|maL%haVa?E{OGpeyd+bZ;ymAJi+5^7kj$;Xu*n znJ?Q4pdfuT0xH}m^I1RVON)L-e(tdSpuY`IonE0AOU=67#he-EJQXd?59W17 zj44Tfr}N|X?`c2rJ-*40{i^=%v2y6a4?B74hd%KOckPbU*O;JrR@b=e*GIH%w&RYB z_Bmmgx^(Xz$PW$!qf#`WY+rJpt2{kN>wGqxx`->nj{;`u4J4Q0h7x?ow9!P`s9Gv2 zEx|VAXyQ5RR@n;QeK->Sf3pC3gM_C3%Ww)5;OwMTdY^nd6oxAnP+`({xd3Ww{E3z9 zh%5o;xmthCsCXtdo`<9V_0;4(QJ~fMbC2yuLh;~WnJEx_HN=|Q9FG~?^eS%nVA$p3 z4fm1$(UJztgM>!~1;zFNXhI4{s*g^qO>(n?^Ex*FvMQU}6U6{pJS)&P8?}n1U=`e> ztmrV?dnONa$^(|<$vxFg+ZEV>;L1Q!AlU5cw%s>^_W#UlMnJxitXMV-dXBqpKXr6X z@44lA6@OUGI3!(}Y-%%HMs8QnwQf?FT?lh#=Ik9&v-v&UUF?6ao39^acN$u1Mrkqw zoiTZV|6mK&{O%pOXD_pAnFms(_HLEw>g(jZR5p^j2W8S73-z2n85-OgOHPFv7r<5M zX}_cqo45ydY(vy`599Wwb38Z)d^pz}D-z@HQ7b87YPLXr3`j+W+!xr8B$Pq78kLtS zY#mPY0uIK=8K(#cBu1hJinR);IoWMo*+v+o9Z0Rq`m1sawe`bHU|u>&$PS0KOv7j0 za}#x`=NOiR;iLJ}e4WE;E`?$rn{&SV!|ZY-C;wRhK(>7ON+cl_3gbaKNyXHlID1bB zac{d|%SWH;E{nh#c7T{#!#9l>I%;NZ@u%jpvV8Vvf5FrvmKZx(UWieO(NI8obq7@7j$kc*$yw>VTl3Pp*-wRXkCa% z%GgM~sq=?mkv{hL4@Fd^lSEv)rjIh?a8d-|_tQ({J8 zVCJp#qp*X1-;Sy%Kfx4&s3mhQK9eT==Oin6uBR5U)&DecPy^q3G?I zvyXs4pS!UB=FfY2YiM~OpUifueXKrnhnWbEt^N7`>G@5tMm6)s5xq~nb$*J_TQ)4M zXbW-p`=U@%V^f3QS`bLCyNAL1hAj|V3zK4e0>u@2O2+Qq)WFH)w!I;%?@LT-YfJl; zsuwMdSS-_o=KW_-Us)u|E$k&|8xrvJO8DP9Yu= z(=vlOh_MF|JRo1PL9g*bIK|DM7zIIHwm}7#+Oh!qF$xkhK5|;S_q!V=)AO2#0*8M2WbF)1AHFuQ^rz zo4R6eff_GB&^b~7ps1|Y|3vplpplSU)41IMRRHOt$PLp*pBenL8Kz(cinAhw>tGWC zFiviSKIHMpmI8EfYFRrNifKH-#1`EpEJ99wYauvtX!{Tp>wZ*5KtRXwzX*HbTzzR+ z&(kMI9F3cBAD;(0Foh^3)PzezynH=~&b8yc5ej5W5lRt$NmHx3#4gzEMVy1ZUYQm+ z-OVtScL55jn~~PkRh_wC=mK9Zy|%eo(d%U*my7EP=lH@>qO3;Qc!SO zT*T)xzyW2GF{bsK{nduq{`XF~W_N>qrir5&2N*T)Q9PBH*P3o%S~F` zO)lUOX{chA`Akt`esJ+CCm`S5TYd=a5?z;2-5M^F&V#@MDXZqLiu2X}jXN<=lFA6;R0OCl*?LgQIXsAWXWj-A+A$7%;KYInF}1xhbxdP!v%%Q=B73Is>lZi2ree6}&xnDM zBgl(}LGxu$s{c=V1-TUOKy*Bh_4pZpU?d9TXgSG-S~d++>|~|#+#h$eiNyUALe$2e zDYk8l!xa!(_}s9$?l=V~R^0=FhB+z!6lB?*=|hdJKf-qepr>XUcRNnww%`O4$mC2z zHlHf`!vS_3wUryNcekbyjXVWZUE>j6r$+&bs-20#uR47(Zt&jI3)qz4QdJKis1q!% z-erlh((A23?$<)~oLo-`yUEgA@)MD!zV11xfWe zE)&MEn+mYMKI>g&u-}8GuA8xl)&!#!@fRGzpWq8Bo)^#_D->h zwodVIaLJQO)$(Z_bBXY|Z@2rQPpMViOd{xB2(>7h%%@nl`5xP?SDO!pV`iwxF5jvg zeL6U;{r^pI3y*F*2}=u%WcUo_C}4a6Th1fSBy-|+tt9{^<4jnR!^*5!4)u-gZ~)4P zipSt(%G6haqBG|pL+qtk+V!tM#^hVV_8PCuTozzDa7s)%v=2w;95SGDO~0Zwt^>&A z-~^KxWu3hJbxPQ%_buV;evTBeN;|3_9HG5B&vjN&QBjhV?(cpbdmZKY-ZB+Bg>)riIvoC(?gJ~1A`x}T&f~1ddofgA3sXS&Y=Zf)}sY1 zq#+1d{REcB2mz~vASue@=kkjcGYp}#h6q=aeT{6C**Q4A&(u4@0!jPx!R>(Xf>~RO zAnkBlCSS%b&608TCf!Knl@_&{NRDy4X{z2ewuSTPPzxkeg?fGA{tmx$#hUooGgj95 z;f#Q^1vsRfY|h45Tw&2&#^jp$ZGmK;yHB*s#*nwE=ca}Yywk#E_bD1<$+Z;vobRaB zKa+7Rqsu*azxiPPh3K$RU&X-bNY`$yZRhyM+CENJtRv^s5fVkEs0L~ zb?v22p>$7~Dwq8urdEBH$Q*yP`YGMNj>+_JR>tVKa9Q{lKV57uj$42;SU&LhuonFL z?PA8I3b>tviZf#GMc4UPBZ(J%W$=|A^=KLID(S)cZGYBPG}}qH*(rMV%}Fg`RVqKV zsCi}n5h)Wr=Zic!m91u|U~U=53yzJue-Cy2vnlf>&WI;p5#Qe- zw9u@6k~lvuqA;L~49WJOb~gAt_Xb+XDux=K9HSLYm_NG{qr z!e{5N?LF(DY1eh>Ci`d{r9C+=#HU=uknI<}C*_kdS4r?5;tVVxD#oHxGQYY(h%FXfY=SiKq` zMHe#VJ@>;4H5%bng+GVko|^jQWcqFck^P_B_3u`0vgCTB-cX94SV1E(8yf-^_ziu& 
z{!R-lxAHAUF8l1Ww;pv!2l;5U|7i6&Hx?wCQLK^bRj9WxJ{4VG-w#D3nKeDVG{qG2 zxu9t>%i2w3^McVm= zfp1VjuA#rpey}q%JcCu+aBTiX+0}iXi6k*^7w5}?OCG<*m&F2xS)I9McCH4Ec|Fck zz}RR;tsPhx_EQQvbKg?U9+WtHcSY%&F69K<7LyH|`KZ3xka%pMNk`den`{_6 z=55Ut*Fj#E>{IYi&3eyn=~(DxotfX0JwEH}lQ*67MeaozNBQUb3n@Qd!d)etGp;dk zH<6&A;xe;E81i(HBN~s=PjU%3tC}=Swb=#&`>b)Yex-gp2^~N{nEK@nSKtII;3o(ksJ(^rZd#4B&7!F^tWAdeK)9Z-sO4##S`Hi8UYKD?1)7BP3 zL(`oZQfGCAmh_6>*?uy+PaibFN7AC`Lf{sK-gs#t1ap63>@!AfT%pxQwV!U2XG5h$ z5WG>VgnuqZLZjDQK&jz$-upD2dGv%S*Hl26oWXDptmgs0BJC91bokNoFyo7>xAph0 zy=iv;I&nrVVv!qPv~UahayEt_#Ih$ZL_sDcEbp#P6@{DgFnz_f-gkf+8-l|j`x#cX zQ2}j3cj;#&+|C)3=rz&4`Hvq}lIPH6=JjZ!sRLp;*G-ZRNG{%SaA({+T@BVoq>M0^ zl8lk94KZ&w}z2h1VO<9gskSl&p4zNEZVpVKL1Q!k4=QP{|vvSrz)6M^@#ewXZkWYM8}=@ zw|=k2ydxJB@PBSf9L0gV!6Pq*+R7>{e7V%LiG@B0=G$e*bVAclm3%a5(xY@$s*0D8 zo26vz;L_tfd9h*%E)=i7DQi&BUK+*E5Ch(?)2GVn zcQGP`8teJjyxo0aawDDHG!4T8Rbscr^aP&eYKJvFBZZaBW|WVB7qpMu*Q?wytqOpD_70 zWSw$;XGyH6juM@Bb}2zItqRRQB+oSDfBqiRLPQ>WIMqnf9os@1Yw~_j$H$I!iexti=$npGw_Kz=@*}6-FR9)3?6?EWJ_cCj_dl?q&MBu z2J2BK&|$n8;YgjbdzT(XkMB+!!*3U#3~Y@-7vQj7zJ>$68mD6c@{D^!dWLXGr0ZXlN`e2Mc@O(z)D}2uBDBhdV~w^|eAfEMTyd2O6dmmInoU=%`g9rMUDj#!eWgE=W*V@Li+pdG~&vcYk}WM(a5C_Rz>+1)_~QDML4hgUC1>oU+VQ+KdLN* zz50rv-f^{S?*CCdl)T0?VHu?;rdhGOvnLz2|K0Rhs3KcAPkL0_wXp~RxpH~?QerCg zkaw>7@cZ)%zTii0z>g?{&<~34I8{;B%Gm4Ya);pR=g>E;>(fbe@6gohZ*IhLF_)yW z47(-E?69w|HV{4`S80E3=00GeZ2T2Z*+9&eQ*G^tgfR+od_JSfjRR0-n}CNVkC{d_ zHJO*@J@G{HK7{*8i{_;-K5%6x)>dTf^+->86pAHmeft!0Bb;IAl5(M7o-j=w2cQ^J zF-XDj+t%icB=57yZpW(Y=O|3}9uaa0q)p#Yf1p+y(nn$5`gp0lx#*xziFQu&n1lX? z<%p)?-0(4XXcUISkr<;?OKh4X&>i=Y{-x*WlMvy{)mn8rF1KwcC8e+c<_hGW(aw+# zw{Po*YzMMoLd5%i9KNWigz>RZu!r7|H`nEPB76QZwo`|Aw9`V<{D@qGc<nt z&HP6>c571dnwB_B5h7M8E4M!ZC^|71K+HJVu{Y?Z*-cgyO%|>E5cRz@%XYe((R#j2 zIxner@L%TJ11*Q;+drF_T?P8=k zoxZSAy6p+s1iz#N)Y~Qi(j+qb%eTTXo;D{-$3gZb^O}S7BKx|fFU_x%Bwu*;-SMXd z?Y{(Wo0fh}-o95F$Ji2vzRz5};L5>aYjB|D#KwO$Qh+m3Y#2ld(Ml2*SvnZJxwn20&nIvOd2DRnOXs9ZgDdQc{77`A)s?TAN&-yOV>)qZvn zW9{oBWT9%&06`titA%9ZK&UjEGpXE9t>}w(J$j`(7OrIl$08|BK9WuBoZqv69PhUh zEz zu072jB5bK+kEVOo=YMB;^L29GjAjyiLCe)cup_6OajuD@9(+kjh^Is4yne4>$t|9^ zcelC2q#^fLuHH4Yh~r92G@6}i=QMa@hHrI$cMaSehhW~@rTTNj0{&kgFU6YxgKQI?W=1D%@$4wpD~XEM>(W^=OD5t7rH%HeNc1Y>D2Iy#p{U4 zaLDt$$Jj|6tc|{{{<>rA%Cb9cZNRto9QwWzo%$=*K^$*W>XB(ex0?kor=MQ+QG}qf z(ZkXxw(9vZ2u|0CrA{R3IY`gH%x9@YX$uH&x%6@jA?6P+JjH{18BbU zYrTZbL?_`aL!uw<(^Ku+tHuG+Wj3|hFFSL@7kGCZ&iX-);-Minug1f1;=Iq+u64o~3)D&s-z$44rQ7zjJfMbNH>!JImKnkS6<7Umz{8+X294ACs6PsGy(IeW;&Oa5Lh$P^u# zEloc5>*X=PR{7}=%>EpwfI8vpcdmsns3eOjMj@LXX_T?;-!0Zvt`W>uawF=MAUjrQ z1pkk6(I)sBCNvBmA>E}sVPO-7t1#<7zt8rmO zj+8eX{^VAXW3bY-Ojb{8mKwwh6yohk+P@S=y-1mGGHuNvNLD| zEbO7l%PZrx?penPURVKVv3#S=?w+1?EXm%{RI!zE5+4g^n0qUkkY{~l-D@`u{?w(! 
zD2-(&Hp;iQK@#g>_37N8FW%47naI_GaQXdT}|6^UP{ z_RYOkC0zk7)P{!0Pe*~cn>##|2k)Yx*DhmXca)wsocU*ySRP|TT@oQ{k|b^+E)RDf z$7C7x$;9u>X=hLX>;mDFhvu((t)*8qH)_3q(jB zBDhj?@~YCJVCX_$W&J7$>cQt9oQ78BOwz)LC<&U`NI$;5#4(mL zg#_Ih;P&~~QmJ9Kfae>U4YAi72qBZ0;&;jGx7nyXn;sXHc$+RSCT`sURdHWk2r22i z!xZ{A1~5)49c#-`1~2*SyQ5!T7pQY7=ob*3sdAm`JVmKN2STrjMCZZwhciB`hbV-S z;Eb@V#bkbZfKKT@oT`C1M*#KO&&}jqWpg7T?T-%9gf`)ItaAkYi2wL#Nygfx)pAC1 z_C=v17hMVO1Xt4hu=23}(9eeX`iFvr+Flax8|<-vVGWd5_XWYAkz1E>cFbz2{YlxK zw9HbWxC>E5#*`$AZU&T9`yD_cM)XpQTR@7izY?FBpODmKg~qTI^0TUTO20dYT5k{Mg|F}Iz%s|lN*6nz~j+=;*onE!p0$WSP zQll8}?1Q-CG>P{l!~7b+kUUI*C50uXkUq zKm2v)>gC&5Y~ypm5N|7W5dTCDu{ZQK|PRf1Z?lXh*Z;9)XjGZvXThK zN0(y+7KP1e6`F+3Vx-jq{~e38{*pYt|0a1_|3}FK)X!q&qtYNJl0=0T@!1!>MJDPP zc8bC2pZ4HDG&^S>`ZQNi?)(-#wMT(C1hUP-k&Cd02(|lXJeB0nU2XzVR=Dp0(C0ux zO=^H;=Oeuki-Qb0gFB8wWZ;p_4K9t9g6TWMpX4;1h@1Hu`ip73$P9S=@h(A*U-zQZ zxZ`u;GSvGfjv)X`RvM!KaGZz0kzxUSaq5VW9?ItdJQJ#eWHj$hm zV-<;QZIUmqtm*l|wvPP*9J_i#Q_0pI05RXiCrw%Gt_)0Dpbnpjm~J&qELunAu!);LeMbp>q z2sjJ4k7!SHAYf05S259HZb!=-It6Zu2L&FdEUS~d%|i*XIp5Woo7ghU0bIC|Gyk?j z#oM!tD<_&bkHqC81u6|#IQjP(TPEKdKQV_k*7wMRE)=ecg^(Xtnp0Z%X@`oIXagUE z;s{2ht!R^))2;d^?F;rWYj=XAT%190;Lq4Gqlfqeb$zRA1e>8h9H^p%t+u~je1UUF z$@?s%Qz!5!@!sQEYz&C+-@kO=Q`?i#FjzbMs5Kg)ZGb)6JBA$N&j$*j^gYrN=MJrc zY3yq5LO_|NLrE#!ZmGxebazG5%lqzSyA`Y8!OBqfvg&BOK_|s?4eyE%siO4?@96-es_Pj0cJa)U6U7^ zlE2~zc%O#Wo_5WpCb>zhTZS3i+$@)x6=k^Me^WsJ{EPVMfdqwH#8766730siL<@l& z$R=*P@ZE4sMsr%L>AH3zAHiNQi>5e+&bfRzy@61ApX-o+YQWi&41~$#_Y9>xh~N zZ|6*W3cLTU?DPg;a0w=f&02CN^n>{G2gcxu7)}DtxUvsrB;{cU}qKr9>8q`q|<%<@lT#aLLPC zdS6&ObB19qTdUQc1hRLyQoE;|Dz%C+jg9s#M>*wX4a<65{-KA}ss$;6H2RCvlQXJ2 zb7lGNC}56W<=u}~uO+@5)8EbcoZ&S@nK58{3(mrA;w(=5D%&~PLzB6re=7P7ncF&% z{B4P3944j!rR#DP}1WJIZ&Uf^T&FP{kzJ*Dcbz0 z(VQ)R56hKIVu{k_Juf^zpda}5UUeP>D(LTmo(h|oankunzftpXE?kOaoNh1tIE2Y< zr&ndNdq6m_ynf~*-Di}BeUGdj{y4jrwq3n&lOxRbr> z{Ah*Yuage|+6BiwOMEmy4XU7d*b`T+3-Zv{mwq(7AB|pajqHXo2(ff3EX}sKX!PvIST>%a=hcU z!BkzpU4CX*AHT8gwbi|GE7>h)q;3v+wiN$GBZbt*d>9c-SCO~jqP zF&gmf7dva2s6O2a`+72%m%v7B%s2)7&}wxrPcvF8gB~0He z5hNs}RqLCO$*5NXKII;R^2br5Hj}*eoH^WD0z6^ZJF;tVvY6+jpG3x6@ zE4+WBe(Ot)Y(%Hs$}{{GpQrk_sFkH<`;3RES09>MBNG-=pFSo25^qTVqxXTeS(6FL zt~@-Rp6)HhZrb-3p;OBm?mw%mpV5HQD56<#V2h7>$m+jzN&YNhfHu$!A+yh2jeP^o znJ}YSn^#9c4rCKU$#fvbEZk#1xw7n7k@rrg6Y~>%5mtepN{h$eJNhZPwD^A zzUYP6SGzT?W~$OJEVB@YropA-A$(@y#3TTjgqdN1v*_1=vxVBd&6&W{Qx&m&bakh2 z!vS*}L3MrMb+%VZVna6Y32p`ZNekV}-?ns{O=ha9c0Rk~r#JB~^+1yayi_+jCTum& zE$|=2Ob%VI-#G(m-L*4w*ZSZi{pp($j~(KNj(U3bzHQ}8LF2p{AHk9_)88(i(mLCa z6=NzcYo5bdM!)WqMF+Sv0hcfBwT;xTFx2ZFcap@HR0qBN#o6}6x5Tgm?w@UMgAJl| zzHdNfF#nqHU?q_te~*4`>P02>pe>#UK6xs3g@eEoxL6@+lt%tiwy*mVijI< zh|{VK3}_45fSzA`mh67&`uZSgLjCL;VtH48u5x#5?5y3{v!&7F8(|Q3_(|+X?s>g? z4Ij%RZMMK@jHP!Z>)_;r?rG_IjoEhA)lc34mzX1lx#C#xR5*Dz?RHc!{kh?Y=@+u* zhMh4gM9IBsA>&0jCp@e-`nh*F@uY*N0!uMD!=#p4t;{u(_)ZqmH?7{>phBJm$v{?m zlzT+r#fpXFL0zW_OYnqU9eAJT1CJ?92urwKu9k);d!e4DrLshMm}Nwbn5~`OuRpN1%t8yxr2L) zF0P#^GfKSKz&G-&D9h&Yt1GP2kgUlVe2vppUFIe;hrviR8NR!gXMlac`#h+zz*+1Zf8p`hN;w7+ep7G-0_U9uC-aK|aXxY? 
zfL1k_>wVm6i~R|ASmKNv+<-?Pc8_U6YU%?h+s~eyN?%wYcCiOE;kg1Pt)-e#Bm;VW9LS?2RE`y}DV{ zeiYAMdk&oB_zaO#2|-#!cQL?QD8S_TXD}{n3OMSN!DJshQ=)9JbQj zH)$;*&G(ttb84o8Z+i7xE*U3}p-XJCer@(?LeoXE#ZG9BWnIJI0e(njkv6EO~ z*bDp|=M{zX^8+XIcm0Q3)5iv>boH%En~OFt1}0W0Nnieq1~18U-=VI)jPl6Y(GZB6 zbH2imcHZvInNISixNb)>-OcCejRnofhFNbD`eh|N#9g}*M}e2I&EO7Pm=EMZ>ACsF zxMZ9HY1@@D*?4rf(;>&2xlx_{Cs{oTQg05p>63bla5-r#P$c=W8z$`Tz^!eHxpF_p*w{97H$#1BrH zXBC)~Sf=2t@0Nu5!urG81w?N?Bw2O3xPMoC8QF4VEA;^0Mc#4m)^>BBAxY2=%JTH+ z^uJB6$GWT@lT0rlvby?SJEl`}6higHpOCuaB4xCyFqjMv5YuJ;8uhQa;Z7)ql~btG z5C9F%)TE`K;wERnM+#WtOTT@__?w51cXs}`|7;~hgFF^Go6eXe{j}qC7M^?y&bOK0 zo3a1(kbrCY0>7fu6B#UB0bWa;z*?RMSyeakBX>{w`LHk2lLp$Z9)+Lg^zJ0EUl4Ml zU+8p6te6L-8m0)MyI_)#dgjizUTbl_ltyx#M|UA+M`go0&)_>7%L7xa$!{+H*iC{G zB=}rG{>1rP^Ya8``M@0r#kDUvF;#ZOm(gP5(9yQ7#WZ`UfJ%jWo^s6xgBsB7a3i5N z88y?9W?xi!3hY<%6gtGLphv2mNZI_L9spVqo7VQnrFX+`YfQt zMT#iND9wqf{wOQTDN4-Vb>VsZPEt@og!MB@}h+jxUOqJtj zk{hSO*b!p1s_B>sx^!X7+=8F>Rm+rw!#z1j{ zVV%+eDW{s^%Ad|Wu$jZ^Bx@*4x8P#=K)>)PiUBPWcYxNrs_ucA+2^Pj_n7^AAIc>pD;l>JqCL!-z?+do*%C_muE!DG) zhCVL<;VOGqw`7B_{cJmqkaRP*WzMj+?)7Yu$jCQ91gVwei$RrC_Zs?h4f7${4Rm#L zR9O*hc3v@F9klmd(A2AA_BsFNZkWp2Df^$dgYHGfDam`+9}S&39b7az6Pb7eMTtx( z1EXf5mVVRO<~U?(<#+EO*3auu-ml7N8+GQkaQ?$f-`t#J8M`Ml0*`aUu)7RssXbl? zQ&;Mi81&1~)p0bT-J?n7{Ao2XdT}o&PlFr;0UzB*E zzYa+diZGxO7tZrOgwOE?#rH1+v-e=M7l!VtN%2x=w5!@B2GZM45=0Y`2zA>0FjIKL3qyDhz1uq9&8$h%l0FMPw?HG_a&y!WhebacIcBmU z{0&n7WFWgTw~fH$LstE~Au>0p-?^@=#r=TgtE0qS&m_tEzwQ46Bqm+XD{jd?t;%u7 zzFy@Oc5$&-la0k0fk2G)++y(<^mm`#Sig=pvfJR=^drSgw4%gD3du%1=I^t4euN;% zO!a5JTFfnRrx=i@+20A<+3g6EgSKF2x`+!&`se2C$JUyjh4-2G>S>Z*-?`7oFh2yz zN+LE$5@;>FMD2GZ5<0Q%Pg*FvT>LK{JZp@1IpujC#OasR+#y{6QB6ecMo;FjaaBj>Z%j3b?PdZGe}g!*)fft?@r}eN_`r zFXcb@2@)A7jyNzG!6niA$@ohoKA!c$MT<{)YP$+H;}@dm?~(&J$+1{UOeI z>TuK3>YxA^r1MA&E&QPE8#a}^lQ5R>Cw-H_@`YQB3*xZQgc}W2S&cYN5K*K$S$E;% z1K^jx(&QPgilyN5)(tfxK>Np|d3iMLSFAzXn^xxXRL zEq?|MyRxZe6h4h$3_t)q>MO^OQ?_K^p*vhWn^%jSIF7cLzPoTkFBD5JS%N#D%YWP^qeU7^SwpxfF*2&y)#@{AYOeaimj6f;Vn%`@o{dLQBJGY!D&DKYC7PU;n zZZ3+JffXPi5>B?g`NnsfTv(~^H&B0&6n#F53p4J4{1(A_*uo+#_4r-+sr><{>{h$C zNKO9oJgi)-Qa?EiTgQ==4cX;ii05Hbhs%h;i3&T3GyF@di8nw^VV&Z_J!10?8<1Hr z4T{V$5GZJT163gPR@b{61wb{Pzp`fF+YNlaIX^xEB?I7&2QlBxY!!3706bdfNT zH!BHG_<1mRr2W@6p&yq-Ll2sc-H+MZGf)GoRd85E5|NHFblh;LS>+$(P=0kE5wp0!4jiOn)fw2pC>z%NodgM;kGt zTgbODZ4_j$Ww$*z!*&BrL-m=%DW#NYL}c!aZq3qu3$6LS4O&68eV~uC&#eZp;8GE* zNqqG)eF?m|M>swJYGiWwtSR@^YKb}2wdxdK0zV>(eYU5IVQ=4$kK`I?HE>sWfzb8> zSCF{XTL`p*?*{~1X|dvGY60=cbIoLENaV;3{zhVICws@7qFyFzGBZH%HsBLd)JJ0F zw~R@8Da%=?T7v$`gIE3i;MXf8rlN|+*H-CdBq>KOzEJoRDz%AQuCn86Us+_+;SrKA zK1!2aqZm8}f=h(e>`wc)Yi}ghgTotpI$SSQ=Mo-Nr13E%2sQ)fNgob*b=?8nl?8Kn z{IU#0ts65uC(K777DaD6D7BxF=ti%U*}X%?$^3+)ngOSTg}NbX{FJ#j*NJ;zNO|q3W z=QVhqVDPNhciRc*{ZN_(KHwp%$k9Z4{I*>qgyrrLY*xx%r&R9X{L;L2W9k;os_Cf) zLrcaSg8@0Y33$5m+*j2WzLA@6ycgomN{XWNmJ{%PI5|qEF_@JUob8kGCx>M)0uq@S zzM0 z=>ta;-9q*cQX2n#+!0m79F7lP@WLah>5=1;+iCWr68QFmb4zCORBc6B4dy@Y#cx>H z-vnHzF1$;z44Wxo(v=X{80pIJi?Cciw1l6Vjvf{D_PKk@K>ayGaj)T~eB)Ry^0zm# zlIKF{&htuDK4{^soi=FQb{Kz%Hz;%QuUm}X zS8I3#Zg{RHY z@(*<+yA?};Wq~Z7M;Fys{=mOclvr=-;>&d-i<{7KSPt5K@gFp9&=C#7#Eo8Oj?|p) zIE)xZP5A8WI2^k!`1L%Ua^7qeD_1vMBh~~xrPUu=hdU-dg9z%xMG(oreFJi*SMub8 z`rx9SZ+#N6@Qst)$*>!l;8`+Pk&1IOeP06>S56^+S|})f0$mswgP>l;?+8HDlIMl_ z2E77Rb_nVLk;nzqtqf;h#%Ow5vQG;kBs@CVOkT6{A;4eq_Mv$fOPQZNK5HSMiLc9M z^U+Nw@Agkk0%+}Xcl}cbbVVZhp^u$xiOg7E_eQw4f~G#MVZMu=ADumas3|)OMuT8? 
zyJ@ev8qEzWw?1E_!J#2Di|G}?S;m$+Wsd#Ikt9Adx!a&iPDE7V>r%ZSc{iAqQ1cG}aZLlR^7SrZV@`7s;<78WN1@!V z#ud>Al<8&7Q|^A1xxCi5b1WE4rR)}Ry;V4H6`F{!Y!3Mc$CKZXNh&P~A47_jhwTeD z36>w!W+ys@dMD~g9F4@P>UGck95z>MYj9)=3qpIo7<bc&=;S)lW!IJ|ch`@dTdXi4YSq6BK_@~}K|>M(a`UWJrSSqY&28qYjjNL- z^25z9En}S= zNJgVcy*H8F(@hk z+Qqa%|Z7-SnF`R2QYvZKL@0|WvuXWkaB~_fv}K+2-yEG z15kerM^)S=VopM?(_t=zN|YHj@`8(30==)tGGGW-_j> zK~Ct7+Kp@A>0Va_nPHF(9^aM-?v_NmaeJZ{#QORt<~U=!kMYtz-zNXkD*9f>=##tE zN!LacTmU)=U4}={duh8%e@$HJSM!NAl2(sYmuIHY*b&6ltJXCKM2tCn)@=74bkBF9 zz`jTI%yX+r6#w$k>2W?fv>`9a&~^GntGc%x_6_qc<9y(HaX8mQT$L`jFZy{y_ywR! z9x?+T-TDs;0%-AAo>&1GH8`Z`MwJ6ivNe6t-$_mO9Den=|MJm^fBWbWMR|H8J!GuD zy$XMRKhYaVmT#oggjs-6m$e?d8rp&kC;HiO(ivp8_W{KY{bG7*Qe29eQEFgvPT_39 ziBQTLR)O^vH=O_bE*RU_q|!X_vCZ*V8|x|5Mn{IK)!N5rS7*&Z^T|!}guq}K8wz5T z1>;8!9u+stJ5(Wy-G;WAIdo5k4#6|~^$~S}z!G&Rv2hP5kMQ@2UHb-%6ksXYR>vM3 zV0p!~tq}afn|rdr*TD1^1y8=`x*{vi7wl=GVCi*wH;~s4Y1!UGN$!pAkf&JPL1#~w z)gixEUk^;3IjzIR&eTV zQvcyGJ$DUD8%m^ZQFCz#x!M69eP)2|U)U`hZ*+;YJLcd^@&=plPe1`|0%{P8NvfcS z2f|Uc+_R~X3H3ZGQ!0p)*)zqx76p1FY;kCrd$Lu!zq9iOP0Z`$SurCMeKfN#)9Kst z*AUeI_CtfGa36{Jd6wpZvkE+~RTC^OF7EGik|0W{HYOvG>ldrlEm4A@zZ+EfB=zss zva=%9%t~kZ)!K5z?h;Q;VKv8|s!4@1@icr$Zv)>(()3&8IuiWBqC`G=mzlz7!DH3# z%H>tho5kJRxgf&XwWZ>01{`XHNyDuOfd*)K3{hI>ExOGRONI(D$NWe=hko69vuH^V z;US{=Kus!Ieap@FsY5NH^>2;B-y;xRA$(7k$`;(>t2E){4Y5-JL5=7eq2VX@2|`gg z%owO9oWEeVaiQdC_rC!K_*6hTigd{mq($JkMKr}t0S-B9D$d_)YjDdI{GIRo1#X(jBx~+@o66bqb5Hq+AbH?ojPi63FE{93h-k$u> z8gd3mWUhDcIyXjfPF{LwsLvjFm#O>&Wv57tPc`j7QTWr`qy~G)b(U!4dX=g%c#YAe zclmDVYt}-c;7g#cC8S&0@Xrlc4@<|)#G0vIm)K(r|8^bqBpPQ23hy~2z(H)!xxBp8 zilM%nzY{(bjUbmGoRH$PnmYf?ekB8$gtZBcEqXHf4mmQfeEMoxBbibUgdJI0#h6H( z`b!B|XtgG*sUk%dfEQ*=`g<4A$eM_7E7u5tZ_;J**j)dI>TchI-$y!_q?`t1;d}K0 zfx&VB$TWxot;u|O*!>*Y=@;kT05e?^Vwt1@CmaB6WB6@R(dwFkk=mQ{0mDN4<{t=< z2B3EV{!mPY|Br8ySOq?~0}d{5a-I50N21(sc?gan{oRZwsY%b~rXME`F_c+kvp`d( z^CJauNidUwlPSQZAzEyA*HJ&9y0csqLjs)~+6f#6htY8-?dFbzZFG^B>jUWd0KK*% z-qA6p#wq6^*8!WYn z-OrSX0Ps?iM2SQLX5D=I(Hp(!KzY3Rp>TctpZNob`Qb3%wB!ry7hbFv%m-{@-qBRw z8I-~UjMxcI$U#toa`sLju0=Qe2uGZoI~tN+5~=u2JEYl|*DnnIPgpX(?)sO)1IFkC z8(!Nvib=W8lLn^|S=I8_&8*+!+$@Rx0g)^m=`t-D%a!jSp}La|`auiug<~b|bO3{! z&3AeLN16bIBEwzN+dm~Yr9fg_r9yQq1EHk$;UQNNIOl58*5L<@Dr&W~DX>=_*L$)A zJeO%FMn>YXOZ{(LXbhPfMtgb++DHr}U+T@Gk?VRdK1tvE+n`~V!g1mHQn{N5#Oi(n zplev^<8iVH@W!BOeKDJM;&pgS1%uIN?Kz7TOhk`~34z{L4};gS4V>vxj+GAJfy&Ylz0{J1B~4Nfg*=LP$cpJ%uHWszy~zx;zxuA z(lh11ps3-GNcrt^{clht_WuV(&WHPunezbLHmL(Wh3mO5CGu|S$V|u+X0(8lqyk6E zGzC(o#J#uuh^f?v7}W)!I`hMqS+l`~pxUz<4PhFp2J;#HQsen#9uokM9H8TMYDNX? 
zf6o-ANrNvvKAt(r_90{BcdNj@qu+28n5>?fhz=p0_}Z&VCOY%;3BM@HqLl&HG$3J} zeUA1)6hLIh)>oWkaPWs;++>o7E>d_DS0#B6GWP4SP@LxOd-d9m7_$2+uA$;qt7IgH4BtoLs2h0lSS_M@pD+9exz>jweP641zg zmTbgbq;HC)#aRQm&;V~K3#nk>L(B%WC2+{`fiW|%Vt|uXJpe_!@Z#SA=TANq78?Tz zmB7@S0WiKcP014Jh;vqQwBw|1s3=H103C1v0O+VmqyrCwPkG>b`tM`F9$NHiQ&JNG znq5!Y`dIB6w{P!jK=vwHj}J-_B-3IcCC$eGnd@$*os7TL-u&G9_UG~+j>i%ioIDf*+{;*AZPBWw*w}Cmj;9Xi>#vx1IO>Z*kz;c%*kB9)hlH)}GBGMa#9U>*WHOast8@ z;81galQ*qkGq}GNmq_#vXuOGl5KfRoUq|UZN}C^$lDKv{Pp%mtN8(bjq(G@?r5&{K z2kpbsWC=2(!0+MM!i83Eb6^*Plu=`3yjBy&0KkVwS;|91i=SjA=@NA)Ajdc#QsvH& z{pbkYHSs$aB{mo5n4Y_qGEEA61#Fjs#;SW$u)X1mD@Jt<7dE!y&&Y4X?_pxO^WHe9 zx0aW!E>4-5neC5!fYlZ6KRw1&dN~|FeOFuew?%uPK{dE>6z)6rdkLz;c=sGPe2hib zW7zW$*)#U)$NHEUD#hPYqwqBe4N4JM^2r)_PNhpA&I{y4c}H|`ge?6l%0rYFVu+LV z6k-mUq}^^UKpBCssrpyg09_9lU~@dOZCmi}e(o<&h(yuD@mmYn$_YQSP>94qjx{v~ zzTrLaeSFL-G3yRM%sSjrMGl1KZ!odKT%FFmQy^_CG|yF2aVkRTb`f+4Cb5H{d4qVq zouH`gizmwrVlk!u#_G6xMW|K0p*1d={M>FJ`UGGNwM(>x@S zz+v2vSpM7(;#3dsivA6kwjAeoRG15{ovu)3#H6dyO$y^fm5v-|yZM{@Du(vu)i7)o zi?td0fhESa&JtPqPY$TC6kiWxTg_}cgj@&1HTCY>V8*j~@7u;v$Z9P+1?tsoQ)bFq zGvs*f5?)tQ*GE7!f3>FF5bJ-SCew2DhTtSW`IxW2OGRRrlP(OCIX63eclGQtO*>n13zV~Jm%IpJD$qYeqI(^oH=GWaUhvEKk!Bl z%>g3k#m%gwuD)o6bFGA%bk8E-d?erV{?M-umX*A_#KAaQ0GRZj;cU z7XBaVX(b*K!ORvzt0yPpu(h!uKZ5;lxs&h;NU6_ieWh7)Er-e0NOL^R`T109iRvV0(=Q9-PrwXz+?c zAnRy=w-+4*I9pzX%yF&HZ=fHra zAD)h))lmq2DuHt^)fXt=iI~daY>BA!x8m_VG=gIOmFNsGoIb%8bxRD1^8_IA*3$o? z-%kN_RLa#xM5a&#}4e8vW?Xed-J(_e1wKK zPBlz-{_6#>D#g>m^m2Rc=Sb}yfJ_aqxbZ-{hA8Njpb!bS0oz!7ISjG3rNNV;H(%b- z$ATykdf?y`16GKEx~LMKzGy9OdSoN~BwT3NJ{NtcObNQvKtk$Ncf${PADP9~J5xRH>)|!`F_)C^YjAn;QvJwCORUzmkG zptCPQ>aj=ltG;Y3uF)%*koQmdf%Envieh|Y6Wyxs5^@3dCaW;n^KBMj7>G61MFI96 zc&=Z@{Je=4 zZK_y#+>*35kJ)g!RxXu9!&6oJQEHC9cIfqnI%d#45X}7?r=N@#`f&~hDBpObAphjR zHoy}fG1>_Y(?&fiBnE_9SuUAsFdx|JM8919JpiI3e>g5>!udCja=F zwv9LWc@vy5@j%W2(?dFaD%j>MCKyls>McdQ^a{`xr~n?9KXrtjoLfK}=2G3U1?^tW zbm2u;73CNU*zj>HBh-U{?{n1|^mHrLxK07s(uZ@q-{@Ca6fu>&0Vpu;U*qTUo*t#D z+-Ba{f=S1zJDk{`Zmm55zvp>$A>tmIh0H$?Xs+hgrNzq)z~DAGNB!w~tUcJBU}FGX z-I%PEksQuPyelyPowEnX^_x7^VgCfodbM!}C^MwWT*hoYlt(I~JOTvq)(jFf7;ic* zYT07fo;u4@R;-2+4_(+dTxz%DC3=g0%%l#NNC?_ZcHR2JIDTM|XAs8o@ED1=?)20nG2 zvclQhLvh9j-66h5?8&4jjY_12ldnP79!|z=>|aISw}9`GT7&vB2vLhm@Jb{y2%On{$1uNr%cl@S0c@T$iidvC!w;#d~; zhTgbqafV%Q0;5*Ht2`i?;r`ym^&*$I8|2A{b!4A4n=kZX?!SOHqljc^Y%UC^`&B@wd&+1{d z;^X=0zuNS@{sdgX(6G!uFf0sweDL2u6IsM<5lzP5N;2Gos=Gw6ssoa@>eVUcTAIXArw7e_+X;p-SuD>crS4~ zoR%zQJKLErL2m)k?y0!9ss;sbk6lN{w0ar}dOiG|s+21CN<3@Hw@CI=16UWXg57qB zm2#=@&k5PPQVYf?8IWTx6vipfKEJ313v8vG9j9Fa7dDv6?mFG)t)iwr`@jx-Aux&# zssfKcy+VMD2l^yIttSXArDo}Nx@L#Ykf_(#FAi@@Jp<3(-n42P)YX9@1g&-ZdCQlo z=`T^F_8jmN!iF3`=AX`E&y%HsS#I{p8cGnpdGgWF_T8T@yuwL`Ao=4;dw(R=z$CoddkA3Mf)NCL;yDir;` zkK)cZOjWeD`jl<6{4PoiOnR_ka_VX% zi!iZ1Ooy3L`7_vwHN`Fr-IhIvstXr~%SEL=Y<)CMDXInXW#ylIX; zAeZ`6_itgOA4B!$T6jcGaaxSU+UC;e#Qzm+&(-iS74lO>s)q3P>sQB@{=?^hJd*1f z@~2#E`9%>E@}~R=@TR5)RVLnmtvsWkB6vq;`gGa{VqfnJb%Z%N-L$kd=89da)!SQLJqAC-BfTA@K`)n{BAzI3Am~{uP!?q$ z@dXcpNNqt|z(&M)P3a2OBGHK(F?|E?e$x*Se}Y0jZBg@3?a;$RT-E&F+$BIFb8h2C z^ZDt%`he)64(C!r1G*u0w<-Qn_+3KD26x4ybGVKKcr9*&+Ut;mtKd>N1{B8$*SBg~_~-L4Mlfv(QEujfsiGDE zuwx?m$1&z@1;)k6jh|Ag#|~u5e+Rig#<7P3s_5#~YeP7{gGucD)L{3FQ(ih^!f^i2 zsT$=uc}eO?Rr~!1?orWI43nM7bY#OHwyR9-4&yPnBh$S`^apJ>{^N`d41Bh%0Y^=Z$n@4@#>0BvG_km}*i$@=FospmbB{3}96xJ$M0bSdea(}+4ULG(q z_P=d%{8u3pLbN;F+@&uz&oovr=1PHGOC(rNQ7Zdkwt^dYv6F~gtJafkm~xJ0PWrcR z(eO+6xyQN0ZOiRu?=rPO>`>p)Vbw{L&d|EX@yHG2R^7i}XJ!+k?<|^ztN9%St`a;$ zj#AU78WfdK4;O2e?>kj{U+Okw7k0l~{}PsFZJ44{nDgxH_2}v=J=GpXZ7e(f&4PNQ z#H*&svVZ#s|Kr->hD3a~a2bycUIe!J8>D^jmJx0=*tRgYMU-SLFEytA#MYhWA=H~v 
z)QH^HSFqcZTWCT#_tm7si&bOK6jD4F*Sm|rBlVx8u%sVFNNJZE9b2IQ$e+8?VC`d7OMXvpk6CF*?H9YP`{pPnbf#Kw(8RiL&b;Z#SSY=KX8REL1dZXN{3iFZNW6!ZhZrn z4<;U;vOAl|6X&E-C_gd_DiGHdV^_)`wxt2z}614uzO$bC2^MkT|(oV3)2TZE*$VcwuKaa{WRDV zDiPto9%|ManVeMVQ41xH(Oa%@$(K^7cbcQw*)tWR-vBKzonv;pB;)e?h+^iE9DM0C z^;#_)DYw>e>8{Ts4|^=ADpi5W=vw@K$N6f;!SG0@Fi&iNG+!EYziE4;t+)(NsV%Py zXpbF(AGCWEeAz3l_V?A65g`A1<5g1WSH8&A;(`tmADv|0!YHVIWh>0jC#z^#tqku{; zXC--hTtWYyCgJp;+ft&{R?+OIg8nU{c!Ohqb{O+M$-R5mGLBOzDhy~V$S}|@cf|@E z1c>&@zX7{FrwPhqwwEMoCkD|3na6%T4(JZheo<={nsRuOyn-}F?lO7tFkVN`Y&d0r zlf_B^Lmj18XvBCnZ)+du%M7>L_cA)DJ@zjW*w2EV;_b&rAyU5d#lM`w0W)Z30^0bU zpAgO^Hqn~-u5|dZPtw{^We-v&KT?nVX8f5`$EQ);L5N5cAJ;1{G_g5|T;mc_Oua<6+@~vk z>zRZ;&2(Q%&4L2Jj30-)KqSl5i=pgwu2-RrL7rQ@n8}0g5Ss+Ue3OK*VLfA;rZ3bY z7E6?`zy4hfyI&3<05clYN}HtKsxsRUaqZp&=C;s*531H35cHi_-NG~Hr1^O84SRol z2jy`943$X=WRlTsff1u~r@#mcR4JOxx3IhirFZ**0>cV1_Fs4N2Z|f{Jbky^O_Vpt zP;UZ244VL7&wOB>Y@NoWNUlM&bCI-jaB;wBi>ODD4-D%V33SP|j;L{(ZNm`guK9XN z(p~2VIJ>`BA4sze+=6WmI*w1|CcN+Y4-VqLT}l)?{9>`UxxWBVbE_bryQ>*zemZil zBe|(dE;>>pRZ1~IAT>jtyk%%gaH#X_z3>~5%Zc|F0PsI2^Vb9YKg|*k)92-9P1eW$ z{QS$Ty(bBnDtD>jUncl72T8XC2*ZAy{+;FhF+0Foy;z*}DE}=1KcKwV1Q0UDUjA=e z^8Y!7*`r7M+ZOCOf0r^S40xo4_m;@7C&qvBzxCva{Dj`_-|Kun7={sOt=3nw{gcbT zKJx#8=m2gcv7P33?7$y3@iy?(m6?VT%kLP-Kh~wg2M`K>IP@Df{%>PE)&L%$N{JHt z?WyJ(0ECuzj{Lrqr#w&hlVk<|{!~i53$<+?FxyRj{N%PZvp)C4*wOt{u#*s4+iD0@by1Q z{9pOxzt2wo$|wJWS~2`Tl~1PIG;#7=TI-BiUnhJ~?k)in4D8dG7$f$G9u&*LSC=Gh zwKfxx3Uhr0GeMXt^B*U#TxUn+V=*Z~MPg}1+W5$hFRLi5z+H^LAtlXcD0?Kgr1JTkIPHc3YD}jSw z>=Y<=$xNircqO$6S6IrW)DiLMRnsJ;YVdm}1og@MuUGYL1A`o{=*aDSIwu^knUyF` z_caZzne)Ni>e%umhyLm4Cvuu$+FmP{D7)|Xr4~nui{yQXnQm?)y;h|^-NOfMQoZ_L zw0TCVugF;kwNDqbzg?=?Q301q4iDHFWNx~FbUU|+^cSXMcmrFZUE=Khq8sb;n&=M< zq2`i1qr52P^z_)3~nXRd{?xkmD2!gl~t(Eaz#luq;syM zyMo$LrE%vy)=9=p|t%0KkyjDC9{)v?@bJu z2UbjYhpSgNUlEGr5P5UY6vk=yZmgW;*VxKnfBvut_%XC6>w9ot8}~qtX7##ADR@xx z!gL${*71;V+GYY-4;#3%?}^-67`X0bm;>&1Q6DOsKSXKeH4lK9XgnQX<vhq z-F|p^r}XQn$O`{8bhz2c_P)bA=5+?y6Z&+vi>#Fp8`@S%B%kzS&R>*EpsO`*2;*LVENU!Mhs;pHa> z*N5u+>2n)S)4r^#KX~S&}5yA8&AD@eHFuH++FkPeo{Gz@~+PgtM(eO8ewdv z$x0-FGb2@b>uS6}TWml;d1|5IW>T3#x&5F(Ut7sqfC{$lt6(z@)lfcW>j{gqsd~$Y z{-Jl3Dn-T!(|PCov^0eT5~drs7s+pgjVtd_Mj?Ipfc)wUFHuP`BB~?2dOb(z9Byu2 zlyWtXCWv;dw;NkQvwOL%4&T)=f9RhX5nfeEl-#5S&smgl5 z>I10xzEu0yHu9^k3rU(@a@M5O@v$T&%vqwec;Hn@?htDdVn|vn+}F04yqmAQz5es0 zByN1DLW=;>UhvhI@Rnb-fZMjbjpABSqV70SA1F-joq=LwmITp-9%^`)7-+7ub43f= zHL*M%i=z6Lp<4>Wuz?%Bx=4)`Zv)XMZ^RwYSh7bz`-dXLby!#6B}rk__adZNVe;(i z8_}+p=X;wIm#NaI@B6FKt^3sL6sl5arRSK9O2^%7&=53{a>g-XEWuxYP5kzXsuZl* zW}5l?zT6E~)#`U_H`*vLRj=sf;g5z}RE{`v>Df8VFJjF>4je+JFBbWY?}nF$5o}1& zVYssATZ8M|8WPn7)z(Xcu-+}*payz$A_Lc+lTXCHB$jh(F;uaoB08cdU&1T&tXpR) zXWLfw;zVb6Hk9Bq6%}yUN^p1!~I39(l+zsgXX%@~W>%Y`(>Tayb^y5a{Nc2h)-4RTS4E_Vl zb{ovd&V<|EdmWkSZt>k?Z>JO!vSQgVYug5EN1{Z10iE zXPS>N?D1Y7Zi!_NF}NwBd^42*L#^d(JM_J$AoqfvR9ih-S*THg#bM~1*?X-vrHwfp20>ZmYV;7Qragqmnsl7;#p8WK5YnF)^yEw+9FOf zNt)J=G@5R@DlIswH%XpJ?A)=|F?iSEg{XSdq3@_|U|LLCAj{5|!Yxg_T-9)2vyXdp z$_Bh;dptN@#}0v*l)K`9ex)9=;_Qp@ygRAMRKUvTgR>x|=i5|PU3FQ0jPEVI=^oly zhgA6cO6_}$Q3LaaF`j;}Z;mNpJL+2&P`Ua8yM z5vIK=9GB+pz|MmVWnA_4qmD|)7MFO)_SNABmW{DpXOR0|a-^2gowul$j7(rg99ERa zoBq2wdf`yarqh*$Kq29g@1D_7QPeseitFS6qVf$Z3|l!bfBnqP*(FGOFsX6Vd-C1# zu-|3{9I-0RyjQKg8uan5evL~nR8BTU$Djwh+EBH?Ad2ta0WGkl&Ovt?eqi-cAj%3B zO7SW&`qm(`Ls5oqz&HsMLbHRK-}mK=Wwwp`UM@IQ=>D;6DzgeH+HSlkkopMQ^>)PT zH7yS4>*l4vn&jB7nl241la`R;j{2Lf>VfcIeMt1F;;B9jTg{h9WyI>L=YDkm;jcR# zGHJl?L8xQ_y*4VRgg^BgImHJI7NMX$i~NLx|O|KT957)&ZkCY|KzDqf0Wl~-r6Dp1PdoHu8P5 zQcoGzXkPl$?UX^Mq17y<4t8ppZ7UjH`Z&<@wZ~|c-J$hh|H}5l$MA6sUvc1FfzI!p 
zZPTX?w~CUpV*G|lm_xo1thAs!3!-4^bJai|ztk+yf{Em$Ip@@N>aEaW zZBvW9zV&MF=4OYX65!|l%~#usioHW4qRU9jIv%>@khJ3q9HH{Xm(2LBQ?Y@kBB(1N z=c?DdPX%H492kinW7&+~oZ_tsU7<=Xub0irAX_x5P}M8mGyM&S2e}I$KkOCDWT}c% zDRrVgCae4Js3~*i1*G@JKJt9L)#%x1F4Gb2i@jB6i80{XP$3+U5q%d9zkch(|U7v_8E zhB@{pCa-$`9ZX6*tz%VvL8ic$UL8j6>|}FRLAS=d=)qfK^g7&^daW`bThI0Ve&&C^ zyD80=lG#VzlNF{UJvbF8x3Tyh!b^+p<(bXPaw_ZdBoA;+TXii5MbVY{yX@9#8}#tZ zECiZt(`cb&DM*>6t+&;TvjlpB84?#ZXHC>Kg^i7eNabE|{W;;>Pc0+mQ^aq}UV2wV zy!fTqu!jU$o0V%49`VlI@)n`Mni+?VC^V_;8n5A&{}+4j8P(*rt_>@qs0fH4s6cRm zbPy2fB^D5+3J8c)sUl5!2@t`hh;*e&2}+Y1DbgV-LVy5KLN5t|ln@|7AdrM4U)DMM z?7hz3aewQK@&0(ZnP zIyY*pMIcN#he|RAa43nxL0*}>tcI1R@vXkoGX`I}^o{!uR}*A%OH=LcVKrcve2+IQ zU0;=2qht+E!$l?)_tgTNX9%B>{>y^#)*55XDFBBIl@AE)-; zgLw*7%B$Ok$PUi;Nc6Z?o-pMsLGS;_ei+l%N-P>Cj%Gc`w++^eQk4n{-^ectSi*up zKNlejTnK9}+sxO%bq}550v7tTYo+G0H3%%3EAg}&Rq=uJG8RWljnJ?rlCHPi?exQ) zDLA}|dPpT%zOYzP(EgHYMPx|GV6v;OD(bl9Ms~Gi$0H=)=7g6^;Iu)MX@{Gx?@bMG zqhpcrkiQer>kvO$`_k@gX+{iesFS{7{}7pfWXXEoF?>_%J}ziuyuESw$yqGL(5;VT zyJ+)5ZKflVlM&1hyC*^^~WL{Sce!h>3R17k`HyEA90JYJh zzeV|#7dr}-GfwO!;uTcGbLhpE!zfq~(s0_8%fAFyZWCf~LO@6TOFpf%6tr=066#(I zvTltA3~umX57zREm>#5Vf3)c=rmS7fGC5XGSFt7ONFd)Jmfp4XY3!=k+qgLO7^}eL}(zV;TiWfT0 zx54Rz)Qf4X6fU|X9Sv6^MoZd{_En^*a66B0mmAd9Va-fY|o zxqn{eu$RS8(v<8>=CvaGx>V=r;H7gY>!>Pd_~_$URR+-Rlpe{ic(qI{OGZ3B0tDOx zb)#e7a75=0pmv+DT~)h$7_ppFq<1krTf9?>eWWyXF1?=7tWjr><*8I;uDk@tN~-Ok z7F}KYO06fK`^HT4mJznixbpTg!m};7Y8BIBpD-c8-ca@)D`eBG{q!t-6Y|afL44d{ z^EtJ{aKduLl-0ds+OYnJ<024RS#ryQ!8C5RTCbRD99Z@ekhp?&3+UCdHv0+TuRHlj z`YG&W>dZq2RZroE4z`|PlyaI?Ffyw~NQ8e3EWv)K^yE_E+SQlbHjw=m%W{`{Qd)-_ z7*YmbVzH z`3iB512o)DQ$ci1aeiN8AFRVxQ@*BE^>zWvdw%@5l~Y^LiJmTVi0XyxlkZ=@O%VTMEe$(!1;R&aGs(X*;U;c9=A_UH$&@TJc^<@olnHTS#I) za!)aI-?JP0L?u`7jLE_6!}Yhr2OU#x#(YQCQ=*TTm*J%IDI_-CEdpOB;Dh~+m(B;v z2=wWpEHLIPXoB75wpvCIm&yHw9%+si+CyIuB4hdofYv`QoKZd8(Nzxok!N9l{9&1+ zUM~hT{E!HB10O;K=UV|x>BCeB$8$K%0i^Y+GIg;15ynK(I*IMk#a7Ur4Se9a(14JF zk~s91SiU~iA$MZ|`=%b~Mu#YZ;A`$#8N9kkXmPW2URyUyUYUMmHhUfU>@fJr@F4)naxIh`H9hm>EUA`SALlTPi50CmSD=}> zU1&xIKj&>gVxYs&9QW=pJ^ipOrEs>eVD`lwOM zvWlT_(_pWHB1Rcx!R_H`REEf*NY;LRTi#zY@c#`CG*o`Pc+6izft{8(SNm{cT0x&X z=rrCJc`4O2G`?^|=39;%1lu91<2|wUoE9c;K=!?h`k~O+j zHfnXp5S0sY7+k%FmWVYFRRbERKX0~^$b?t$Pr4<{we_yr*(OW5#dNB$TS010sS1xA z$-KcgvA`+hYrzWfom3Iz)ynXBZnV94erJ9nZf*3{;V+yiHhWb=c|9`zE?3P&Znq2O z*;mQOjx0pEPNYiz~`ZUZI*VLu_CcLDMKvgxpMwP5!4q*{2>EXHrgVj zT$abFb~>a_U00_oRZqB|{XId+(lNJ#Q^j>$mzx)$%A=)>cwlEMa-idAY|`9#Z@NQz z@33RKwQCMi6cyN_yR_Q;rbH4*w)hZTTeE*|{Dvzo<5Y{_W3BMp_&At z5MMqc!IDYuneG;W4wEHK{|)>1X*Jo8+o1NcmRY z2pP&>jWs@!x0CF0dwNEX1o4%<@W$xHJ2hBk^m5vz+ge*5LU3Z5B#_bqSCMJ{R5xL} zT#u_!Mwz5$vm>)dpUc#Z@WEwa4gs@7tx6x(O1&%hi!NDph_P4rdZ{Lb*jnFfqP%=K zdt>?a8u`Z606mpywT)75JJFi~))^onjk1?%S}ZLHpYri%0an=F7upUjtCQzrh6a09Rl62i{0`7oxv*Gmtl6b;>&QIbo+#^$w+}F6O>j6Y&E& z_+-vp!l_T)Ni`TAytXRx2OwCqEy@*=oXLyf;J#_=r&UA~t(S%CJ)>3n))dG$W*9cp z;Hq&-1UScGd~cntD!D+Gs43-T-JxPSRyT=na!xGbNW-c}KScDb9f_AriXB>b-J;f3 zaIrH4MALoz`7+VQayhJ4r+?v`>udQb-eB=9Qc}dCIZ~-H3#Pgn6}RJ|I`_pxr2l&I zSvp7*sum9xCzE7^TfYT2IQ#ER7t%rvci79ng&O90ymdMAX(LqS466B-h|cZ@vAUx1 zOa;mQmxp&?GB4IQBMga}`#O_2juPs;QhQ}ISJK`_pKs@LfxGh-{s{Xn>Vy)7_FO_Q z(bBgwu-SKJ?{v-7WgT{3D{2m5A3uI#;*wIxF@qFgp2Gbf{vZvLwhPE^G@{bc$e%;cCMzE%34*@!l^;6|ZOe+)M~qw5YCEWUpEaVVN0oiy zVSD>!*MxO6Ig8pU(fMvQeM@<=%1sI5R~h0OmmET0IE%wI6t`A{tzM~J>X%j6O_ z>g89EN~huK!qyU+J}q+j&F$g(H&J! 
z##VC!N|yqzgnsDRmeu5|ypq%A?4isflRfh94!zyOmMCw_3B44>qfM!{ii z<#p9}AB@!=9Zsf-K8LMY3fAwpR;7D(ws(FD=)gI_^|df1ZDvzgZ!h(`jT2=nj%!+0 zwg_vxB7b=gZlu5~DxuTQ+Xxa(9ac=SAE}O-J&SPf#LQDlB-6M-Rp!z2x^_x_6g%bj z+P-z++fn3U?nxdQz5$#8ls3^?%c;E7=xoPu0zw2QV{cjC3onq}ce45-X{D_kAKb+DJsTtk$J>O=Fm6A83`yg%7f$?)*lY`ecbz>#LE$+4vsItZqYn|w;5s^kbrROX<}p^cf5&F*hnlq@KKsczlxZ=#gDdw{@(fg!%OJ%i`p2|ry*Y5< z4Z)nXzR63fy{t7##@br6FUuq5ve_G5ha3& zc0=!Np&NHm;ui%kLS+X`?t3&w8k_V4yv6u+dx)-re{73+?weX0RMyL``O!Z=N{wKh z>0A~|U`Tt{%B4qUqtyl~`q_L`YYKLEb(Oj7c<2UJ%eI4N26G)_YE znQ8dXc3z;SZ9imM0{FloVri2{)18DpqKdJk+HH7)dZuCfv-Qq!9-V%SaM~;t<4_mY z-jz57z+mBkb~U|BB9!JFNX9Q-_R=K8mD`Q zZ|J+3;;`b;KLb85RCg)H6onMr%s0~+#I-p28GdeIiYmUxJj~l!f#$@h+M5YWhyr9hcHH7O5NefEsGDEIpn3 z^@AA?B>>~rHpwTZ!L1eN8t!>NZ!gh1;8iu9>UKHh6rHBXew#K#vaOSyEkXuwBp;aA zjD0R)j@JjTA(3Zu(59V!j(}!jMxj*$d_v#R9!=lc zj!9h2grA2)f)h4Q&|+sIXCY%Ze{C5WnON>nu%Rdj?N~&vE6cDABk7lF3)dDCT2c2k zeV@mdB}d}gS3T+@hXgjVKR&fX|7M$vFKD)T(>Oxol*Z(pRxQU=u|NsG8j zT+A(ft9|^Rjr|v%f6A=T=g{cxU8}(xf>#G&Z2@YBw&ZMjzKcBY5((~K5hf9Pc!At@ zg7D_GXX{wJPG`-j8NnFW#S<;um&s5^bSiBz0cXM5t2ejdL=5!L6jQtOT0cH7iyfJ}cVgS=0qex=4tBBy>q&=nIr_A+a`!!c26 zbC+2<9CyBbkecF)`N>^KTae8rHE>lM<7p6%@cM{)#1@BXHGq-?v4c3~eiv4ih$JoN zqWHlYGs#EIwRURRdvHVPKCttYMBRAVPVO1?oPdu7qsZCix{p+a=Z}A!DY6E8`)JT# zkYl=E`A4EqmB~=bmsYKxu%ooZ+UL0M4O~!T8S8PRgH8_ad|d>5hg-}yanGIag0SlA zFU#2RNggiR_j;VPw_kYA=Q~!BTTjS!vV&bZ-+!XlDBp=|@y9pwA1xYBqrB>McvTIw z)_)_I7a5TgmXh|Tl4O^-fFXP5+KG)Z{h@{>zb(jqQC4R_78piEAYz7>M{I4OK0t2_ zKmY7mTV2_OFM|NZ%=@QG_U}A_uJ@i1&aFX^&qB(PBggiTp?Pz_L5ENtt+njYq1jE> z#05Nh4;1;meKjM{c5p{?tg$be*5m!t#5Zq4G4Ce9f$(E>nbPB5K%FtmEDOQ-DuDxr1z3yJ<^x-w$Al>I+AX`ug1wzlS&5T}3>3EBci(2&p+T&Ie!b zbG|S7m}X}4Y_FG9=u_1~MqgNR=)e|Hjm87`n&p-!%(<0P_-Rkd!1bfHGpyhas~tZM z*Kq?T`WWgEzRb?QXj7Ts!`&M;o&5OY-O%r&)2Jve=#iXGb@rRd-V`cQUt1i4Rrej+ zL*t*pB`)>AO`T4{eXMEf##6_@q4d7vjargGcY_FVJvp18XTB7ehZf`cnuz|?WH1S) z@@uA{NE+8yqj-G5u5uX5H3k`^=DSJdKDfi6l8)A-9N3;hnv6tkbS;;JvGkQp8vC1x zDCoql7L5?qqq9z{e*JNJ|F7bAl~dVa7k*qgH`}qW!hP2gdikE?h_*KS3)xGle7Jc= z>3GGYsf2mi#BN54qk@P_L@n*0{@^Y8{=Qy$<;cKM@%?RFR%w>U!D79T)uX90EYdC@ zSHy`Fk)3Ge2-UN$MdS#lkmmj$LlB0l~|a z7IvoGvXo)?0CYy}wxGx6e$93Y(&4i7oWhi#>QUas-$98d6xN4c3oL+#J194Vl!)<- z-5+ibr3!uOg$~y3_XOXzn)4F128Wm^|-i}*mAK6?(v=K_E}HIj??X6sB(OE zCQ@iwvOheF6WlYe`DTXh3?l86ixq{f>a|>+YP5jBguX0dG#3Liv-H7U-5;#{d<-;| zo-GEd+`Fhb=J)Amf&aoN#nkrYJmrIAcc@-_%H#Fsv4@8HCgq_VH#&|2z{bS%r{mfs zYgPp+{pIN3GgCGUKDxOW$md=Qb{1E#%%;NgL=+i&2Z^c!HbjcvjDvHGNxF ze+e~RZ`8$o#-6oP|H%CEKIL_L|1!Gaa`j!UI%;<(Pj%BpFS-mp(C3AU-j($PlRU_n zx1iAzo5x=1n)L*_9Q%^|r7U2nT)B?dp zWIX`}K+jIuJ!n(-y+HYDOA;}^eyUkQvKEn%_%W!B&UFHU0S*QWz6V3@W~Gn>3R)lOE^XX^#}n=H+`0=O zhb7^IhKrZR8uO|98Av5dMBBp?62s`dDomdChYOUIMGTd@*IwYV170nyZ$ER502p_e z;Z+#J$3{?KKcl?1%G7k<@5ecQxBvw218iF&&bOtNdkkQ%ete2Ia2lFL3WaGagp#iHUh$!CtF*z|V~$w;LUZ3vbD~ ztGcry;=oub#98Bng~Xk6Ik!c7quNi!_aP&@(x<>e>A8Vfs)a6?& zGO$_6orQVA*I2pVodphN=AV7CH!~6)A|62t8~&`@$s-=oUS7cu!MMvwkf~O_3whq} zI5Da{Ui*p(iZ4FS?d1#&uLX}A+0Gv+YgYWtKOPK4Z&qA=9~H3$`L4`ES1ov}kcPR{&Ao8zfWE70u+(2X zGo5{l1$Am?I~{C48}OtaT^l)|((pDZZ7^75+2;;)1ywTchoq>$a-I$FGoi{ok%Qky#7q=ZOT~)FVUzlydhi}ic+JCwvGwa%+_LYjz=+o2deiLj z_clD${A^(({!!Q}46*F!P5LKcG6B-kVUI1;1`R)xdm?qIj(>4;7jIZr&|Vmp3)`sZ zOi?RWMTtK`n2R@=Ebyr!PojAuf08V&^879TfvQ;UNmaMme;3N=A`9a#2Wnow8mk zs6}=KXTXLZeO0jgoP;fiGeUG6T}0W4|4044ko-wg>6XQsn4p1}4(wuzK=`4%@K6V~ zzbd90T&cbPZX6}BzPef^(zyI7Gt7_%y*oTu>@A>Drs^+3!R`_Vgo)H4qb{{5@J z_(8+qN@0XPN7>NdNBuB zF#agZ|DcRSsFA%A`M^3wKQG`s%Lp4==uUZl3Ue`N+%9Zu)7)`Ah*LZWe3d{V&uk>|nUKItOY9pN))24Y7JziYy{z0><=5Bnby%_gyEbH7KUv_&4UFHj4p->&Wv|<$MzEi1Mn3;Eev^5G%RU#=TY=6!T!~CyTU|nyxz7ka zWj?DOzKK!7a@&R5+TxQf#V@?CWL!Wc 
zm5EPETt-VXZ=U#(mtB`7BfH3V+?m#H&nU_KSB+tO^L3J??IA64>d=z;ihuDQX@%55%O(qR#^p7)m`m&dTMV`U z2x+{s6Z7d!$WPP}=9isMo$?%N&7YbdW}a-sOv)+b2aYiYr6ZY3Uh90gQ^0hJJ{P{4 z1dgqbWU8DKF=jfw12KF@jh6Fv8dxj1`b*KVh%ZvivuG#Tn2_u?-#C>-H`iZuSRBxa z!M<)QD7doo=FOz)wTNGIVfK+zm^czRzW()DBj7?3^D6!r`-=+pf#axC?{r3vyrQ;6 z{+^f0bcU=`_X>rUYNp?C_-Nler~fDO{F5hbvvX=)ssWW1NuWk2mZqdDL4ghrnHym1 z;nq?Ls|x`3pEs4t0SXhY)iQr0eVbjO(lWA4>~@rOk}=3=1z&;I;QG^J|KSPTTdyn} zz}D1^nS4eDO->PWfzZ>N%tf{?`!$EKtfX)|N|Iy!OhWbp#v7h(ry8R`=O%PgkFdoJ zJJr_aL&bcUuG>dhzLlZa;kLFmb5lRl*Qn{-rJ;O_Z)cLN`5PP#dOrNLA8uhM&2;94 zgRB}pw00SGqOV{q@Ya6ybfakivNlxab*nbJ5oQ>sytC^k0AjQKg?ijLy1g@ai$Go~ zMaw?Y?hkUi3(+k5K(ZZxAfF@5M4A5Us|t%!IKv&XR`4HO-E8$FWHz`V#`@=W-Eu<6 zbwGzN!u%2SYnrpOvl%{oOkKy7LMrPKHZ~M-O)7`GVOIFotvU%0J~woGizj2Q8CQ17 zIq%{dt^^a7yE7Itm}-!cVj%rHXp4}&{Q3qDknQ27*b*EA!q$#*C zLb0z?4pC8mzpM0ew%Z-ZD>oCz*0B4pbKT5;vFThG$lBilLEM*(rA^j*X6~=bZDiH=a+t89>5MRKX<8V zKKiwQrqo0~I{;~-pYi;S`9BPu#0Mcw_Jt8f@8L+|`XNt4`dDOQrh7`rFBn{>kYl`o zU5eI%fj!;(n``Fph>a^S5}_hqjH%NfxDk_c35<4+S?x;kOJa971cRezW$hH?2lF*O zR2;68nYz;$8@ZE=hv*BIWGexVln;q2qOw00ksQBii2^sre6){5Q1{9yhh&R=No&6rN1z;4A*y8=UiO=Tb^ zgW^qV4}{ww|FkUs2)19VuCy`i)3`9=Bj2m_Lz4@(MFaV$Tp0Nm^%U;VD~i^3148_P zef77BR|4ciMwOo7@P`1eP6{ zOLao*KL^upXfjnQA$8*TD;1woccyoK{PE&R$)y(wEerOK`b$l|S*6XTak#5^ba_7f zapymo`F|SxM_JzNYC>=Q2Tc9XQ{!$-M_^=7D72i$(8Vb8iVzW|3-WIlNA{?_n{%t%? zvJ5KT1v~KWYF8SLEcPZ08{V!H_^nD9%ijjHFjrX zzFPk0b{q~Eg#yn%`;Ak3n%YmVxQ&2mqGkHr$oFO!9lF+k8 z8ixg&w7M;>V^g0HO--Xe-*OrAfhPgU*cR-T3=}ZhaG0=M(y==wN>^Wb9EBbzkYb5tvY)vQ0+X^=efUTC{K=xv&95jUwi+P6ocQ0VL%M( zGY-#1pU_fU-wruomnnENS4(DS8XXX^x*nXv+K;I_K;L_U`23Vm_J~)Id8WQD_lU?9 zT@d|q!O`km3w%RnI4w^2rB&74OcC||3~-0I&vd7Dt_M1sh6hKAP`h#+a)2H6bzQu& zRRaY<=Lq%dukGB{$sFO7x0O`eCgZCqLa|JZO3CpG>(+p9*R9>Ta?=PSzB073Gu^Us zyqGI$JF!6=aYi_20jWt(nH04ZR(m_-9rXw1otz4$GK(Bena6b>g{j}9)}};b)t@r z+d)+P(8KQxx^^nZPc-?pHZMo$p;a-9LIFD5ne>z9Q!%=9GV;L#N$|@RMxA4NqlXQqtkU}d;R+eM+O)uw&tXKS zv!n1~&j4FiE%3U4N135K9rs+D|Ds)L!J{ov)FxL8=G}=)+;^#?ev$H2>}SIpYxR+& z;q9#~EmfiSbRE$#_zzlECwaZ$!MD%be)zCJ@@P1JabWab!W~vKAXO^GI7RFGRZ*fR zaZm?@&jIQv$D;B5v0oVlm2>#8-8e#%9~kE~JQa*XO)6xmi{;qWtYyBfc*kPrR@ zk@-X+EDs`tAp5_++9H47qzY3-kRuj?Yb!;ZhGqF%Ya`?1)>RcF9Bm|qpW1CF$W2DW z7?t_Ep^Oys;UmiYz>b@Iw3WvG`zJCK8lg=mdtA!*{B!t3Ck>CWoCK`5BmG06?M?sW zP5)DTF(xOU8)4Hfi1p{8(LwW~fT{Pkfy+a;Oe1EB%m+4akbG32l*g3WN^^RiFing4 zQ(}1u@8YYu)QoU*ays9E6{gEM?#!{xXln1;nD%D@cQKfoS&sgeZUEJ#Y6PL4D|#+! zI1obm((bDWi(H#c>EET3n}$sI9fC;DS>-ACCew(;a-<<5^P&_)$Fy;)mcH8^Icw;! 
zml%WxB^3~;AN6%!`8H!S8>U~lVI9%L*7gl(bpGHskHC?|3(|PXoP-Sg;&^pc1M>;m zh@3AX^Kt}%!QcpboQ|^eO>Iu$t3N#4dFKBJ7W#QUxG^7WmoRp6ja;;YvD_q}R56sh_u zSmBr-dXK|Kk_xwk>1*Rsv2(CMqUkU{Y^$34>Z;lD3mK@1N93kJt=e1O5Jdn!5~;rh zAI*F+QWT)2LlmX&ki3m9R<8*hBrmFbdG=Y-VS!7MdA);%@tnL(YN#22_MWZu=ndgY zU8e{~%uKt1mjC6Yj|p||>*W6Shpy^S>lfkUv|*H{A8etm(7L{C!*A@lf|;A%6gf( zlHWFZ-FeP>t1xJhX#Wiob{s~!j&3@-2A;MxF-@q4e;&&`%9#S`m8E}y>qrX)%|&BF zbNpG$_2bX)Nv3r#Z>B`Kw!#(lOL2b9>QAjF974J=!xgdu`!8hcjXO@Mvgn2o%XF~c zu)a3pdr1PS9&nweQ-!oqC|DJ_3RtJNS+`4axP#S5x+)H4Z;g>rOOI3_Vc@+tqX(#A zP^NgOkcZVdx=k|9==?2%fwGMzJlaO&rNGv#h)tgZXv0C!*&K>QT&nbY?y4|}(#GHW zpaHx=>MNjZ)GxnV@I{>J)Qw-mrho>zP`|OQL*B~MUky@j%z;>&=zc1G^nMEn@FB+C zT5B(6cWAi}yh{OEFJ&#n7k)qKh8^x61eixLMdLw<3{)U>uD|zyL%N5~y1ys(<@dhj z?B7%RHhMn}qfi!iZoxG0v9e)NYUlTFr2rMROL@OUf4R@b@UOJk?N2GuDqPViRr0%S z$l8o31iL&K8^{>Fe^B_Si5%w9f*e)}BEZR#a)`kO6$O0WHPCmb>)cMHW zZ8p~d^bp_*keN5DL5BD6pxQe-w_iaZS-b1G9@O(4Kb|71M&#vnPwr#OItpLLNQm!3 z_i)PCp0#fkW_*koGry;&Kt+Sr)2z@T`q66>(X$+?$|vpsSD#-;m6vI04nlzP6{U-a z%+jpR6&p{~l3e%=VsB#LCt=!4K)SXwk>Su0I9~b}QgUDH^l9dcRCmRdUB2I!<7F)G zbi(*-sI9}XJ1G)RHoObTF>GtO!VzXF!h$iY+$Vr(q{&ypQP(0q_oW!w+uM_9X$p(N z=)L}Jq)q8gHa+Qg85c24DY6n8#{g@+ug2^^_B(#8@R{}J4Hz9zQd8wRQ;53~7PFCz z|9)|GE}CK`kuORJ^^syLhc^ZyDNzn$vDL!VcMD#XWxXamIc-I{Kf^js*?b8Vzo~tx zLw_vWFOotDf3_7igQWoX%llpKh+d&&7p1%lUDXmGp&*f4?|c+n99=8piF)GOQ>V^) zi||CG1L=!(4N>|bxl<8PO=|id{$_omP1r`ZlwTX~nQ?x{d2zNLcz}1YMz$!T=+u&4 zYD9~PI*ytFZfk7syTlxVW_%=b2*^MMiAxQ?Vrrro*{;&`M!~m($fVu`@9*3L_7^w9kmK#E z-Pl(Ul0kvlex|LTAtOHF!WmV&!kekvD?BMs)+#?)x3m^T_BGm9_>iRV!;t`WFXM0* z(@;~A4+_C1+zfVu9A;lC(++V=^N@sLqLPg_qpQrP>2EhyN4*ld*Zj*6GKdfCi#}ds zRh0to9NJ9Vq5OFcHRElf79hs4M#ML0FA=cN{?Pb?{Gi}>N4tgx_iHP*B1X95N<(1_gV-!s=XvTj|8fb9R?UECa##>>&xXA4 zG^=>LeM?X3QY@GI+0lekSHFSQYHc?gH*jy_1aF5@hiW2GI1`_2#i%+sK(LBfJu+w( zC&Sj7rG3NF`_qeHjX+)Mv z#Z=u6gAJ|BYb*0ue);JRr|U0Rgw^9m?;!8?pS|;kcboFVbl{wKc6XO!mTUbzpBO2m z#qay4gokO6isMRPc9G%aDOIFN+EwaUMpIc}^fqF9#zRL@?&9GV;dD2~+Y&1s&37Y} z{M|@gTwZmFKPY%>%+^qwm1S#FdwND$U7a4gqLqKwwRAVb%E{P!BZ6g2$;B%O){XGB zZ&)ylQnbI%fR({Iq4!L#nNPp>s%fje1Z$;7Q;It~JEx>c6cL-7vq`A-IxW@N2b^I} zPGQMmuXE>rGPR1o`8ZaMu&*%`%ZJqOru^2KUi=r^SGM{qo^Za~u&}YY_q>k`!r(Tr zj&v-?r0f1nB;qN*eI_h~k4P)>LHa5vm8R*{<4RwUo*4fFXE9LX!`k{(Yozsi4K)XBYM&X=cwDyOTv1-ii zatnuv*6PDjt2o=Y$DiL3IoT$dO8d%&R)q>owjuY&)QgC@K}wPnVhovqsx25QshJEHC!IYgr&WpKhl^aW`|?d_RIj~0@e z67S+Un7VNd%efrWD7z9bMR4?}^0+lRi@!5q$%cf|bD918lzn)UtJ5(&P#Ty-fLhyY zI@=ELgZCC70<|^7!MA$sTb7Dt0K(2f#s}P3N=3kRl_n5&5j;4K#I75MYOcOF$QNlb zs%qYXj^7#MXLqEMLs5VDo~9xP{)~d+b2*F5WAw?Jw+bUdPRrg^0BWxs+^w$@T42rmpeBZV8iD&SvOmq6q-~G6pQLTHR$B)T9Ig-mI-{w$oGsWfdA(w!+#{7FnSl{q5))YJ4>3v!NF0I>t~arLwTx%%7TSh zxj}N?e?_5QIc&F=S-vJS(uzy=L4Bt^r-|zfo+ar&-OkV?u&+t~+3Pak;tzva`<9<2 z&}hwWq)!tjN2H4~YU54`h2a+%ZtD9Qkt_Ubl!eyoMmrJE?erDH{j2V{vFdP$iYwKe zxm%29Q50rmzxsDtiQh~kf6L$L5cUXMma(|N8vjLJm^^V7Ys&9%ubeQy42-%H`*O>| z0LhU27S-8z$8_?QI%fBcsnyaq8Mbnp2`{L&wAT>nyZlz5J|D4n+X33XFe_{Vq)kFm zt+GzqORJ)_{cJRXQbvXRz*22Ou}p11(`ke>9=boR%y79hdq9}*)g6ciqXaH|nOb{&q;YzOmfpz8~W%|XGL$Ci>ctrrhs zF$nSgg|bFjn>w|Qxfav2wdApk(Bf*%OTW|it(VL8;=N_LzuKi||I}f)#nXO-wHd#N z^>PwxWFDbTrp5rj*Tl8V^F58b)DA$;*1rRQ-gCgzApQWbQ-cBtk$>2F;v=TUf_qtpIw|L47ktZWnC7hy-f>;IA7oYO+5N--d=B9X_?OE zH&Pv#xokXH7}RHAeK;DrIN9q$(>N9W#k3<}RSG%8GF^Gg_JszUa0B>-)5cA}DMAl} z)dTbAZjBcuYh9iVUtWC{vkV9Bbz;jjhr${BV#!{79i|gh296|DZY^#BzYX1dZGOQ- z$`3T}BAl^dK^Pf9dk>Mn!-3}^n3Pt`9tOV*?2g`Sih060p=uwA&FHS+hUx>NZx-B& zUg$9Gcnv% zTk7p6Gk!Iif^E4W1sDa4uNL*N>+4c{p?zMa*vO8+iuHG)TXF7Z`y7Afl@<5CcCel2 zV)KjcLZAvtPAKW+Z)`k-w6?as7!7trtV>Cc*jjtbz;tlf6yXd-?M7kHZqab1NmO)2 z=mwjI&93kx3@C%^l{_Zht;dYHpvqY`FAQ22klQ1NRG0d38QIIxf25W0SweP1E+LrF 
zcrt;(N3W?kgDxnC?{xvPX&a~I4chQ35x$-81WntkvYdV}BJgq!A|aw1hj1j%Xgz8V z6NZLg%h89VnhZs+W=H_P7n>biGsFKuC(+ZX)VcT9xhk4$J1;J!n*!GlhrfJ1Wzd9< z0If^onVxGLWb4mM@r2=`NviQ2fj_#f#Wmr|~*f553MwXL^>?wIB9Xbz2x zzR-_BKSIe#oeTL$qm&+%hCGOe=hqC>Rq3A(Uz;9R4V|%9^_j}^?O0FV=PqXFA20=Z zq$uY1MlS%GyHC$=wURQY>lE#?*ueOXWq_w1@#v^kI)=7kik0z*mH_T}nF7W1p5=x$ zFiN#B=}DG;Q=yoS;g{VBJw8*dI|9QTs_2(H5{&)-%+e?#y584`(#&Sg4DrMqW1p$p z8fxDd7GozoUO1`&7z$Pl`J6DtGaw{!Oy7rE9*sFhSgy2&8i&q&kOmJwa6OkYM~EMK z1JB!JL}`zlQM`d+ebg?-4`2G6&_WULGs$h*F6&O!)5a66(C(y-=7?%igwHe8k;vR4qip30ct}?g9Qg&I# z&Y&-^uH<4(2hQq;Z13e5^(}AIU%0T4ar8N4twrP=b)~4s^r1PFgL?#EdOAs%`N_R{ z`$ACTp%oz0SkKf#{)U!}su@V7d?)BKQ z7yDU%-2YsE{|Mkx3FB~*YIyG(-s&@gf&>2_d*2z>#MZXE6$KlhA|fC~Q4ncrC_<y}@%_G2wlDj=u5*5$ zALhbjX3fl6PrvVH4Q!-z;uEQfgsqqHzc_rLLMmsn9d$MFhhR(Kn&)$gMB*;w_W8QH z=W}%?r@ce~lBdx!SmwYfmec($FZJLkv#9?x+Xv2PX8iB2rvlRCfs*)-pxv?4i^{+f z@Z8Qv+&dUB{KxS^jewNI4c_Q=FpBoitCz_Dpu*7U%;mEO&cc6qbW>rV=_#JttNnj_ zt1Rb$&aTj~Ld}1Cnt!+zdpIE3|F~fP;a27J0JxNirBM7}S^N*P^Zys}pRGvf|6j;| z5Y*oq4rt^?&H5Z@eiA?c9l8=q3Sjt}Di_+O6~^NN(hQ;x?^W7PV=fzVKnaZH55|pRck923~zWV&^AQN}P z;P;QNIRK-?-K;j`*?75!mo0u$G_XGvn?4k4d+xf>=@{+&I|o)!LOHLWHu(~9HiE&V znOXy&-AUABPwEAZye30|X9z)%)D5`ZHG7Wr6#+6W@z6Ys+?+0Kp3cSweTU5MXQ7#%O?gH*0 zj-F+^-yplgXY=m*ywb=y{IBW(swYp^fyT~WDDts?{TBV$1=8}dYfDNO66$Xp7M^JtQmL2%CpE?fT#F?-dl?C%nygR~o zrkK~OFmR>DSrBS6A#@EG@P%ICK^D3{`fxRQzM9mNA;aq%x}4MlM6YDEk=`Xj554~^ zhhChYm;_D)S}c)paDtR+aKupvji-pb7-nL6y)HZa$;`aM&l>4C;`0VrK^D#jn1g(> zRo-tIk8~ShDr^(b1SqVjc;F5ohVB6_nE`8!yjzbamWOL257gn3v)3TH61CQMVhLkU zmTQ<>y^6@VE%<}OpS{z8OZIXbC>+FivkC%MbCz4X>Sr+>-}3W=nP<@snX0d=TdPcY zYazRQ(CpmEJ}P+jBQ`(mW+InY;qSQm(=+oL?}yoiBYjrDa3H@`$+QD*GF1H}U@jyv z`8*JSjXb$l?m%|Wq)y5z04Ty`P(VXM;@vVoB@NVxgGB#W_l?lBB+`3eE-gOD0-JB0 zPX?!1KU1>s>dA-_vH>4t7}G?7C3aMW8Zi~Rja5h)r-Gq6>DovwE{?}*Z1^sYC;oK} zQ3q26yS+n?>A6ZWnTAX{*gT4=8!j=cg@1BNQ)+U{g__ee0b`+g;rs#Wahm@lU=-c< zZ!NlpB=(MFE72G<;@WrF;K#Hin$F`R(5kS^w}gLC5rpzhC9b_|3wI-Y=Fy`T-6iL} zO1|hUoeF)b6>(EMD)`Gqg%9(;svq|nZy(GKA`>2ut=Scc`6@ zD#c}@7N-YTc(xPzWsk(fbH^#z0|Pg!+~aYTaJ7kN`<3l11`Q*d!V5;(=e$a)Cy*I+ zhktxZR?4V*mOp=xwrTw~P95OhWuW6EH1tVnec5Yol2Truy{Y?2S z6Df-s6CB7+J2i@Z7#dD#bPL^;KTTC<&c|_jTjf@kpeNTYt?aAnyygB4~ zkmTiBH1GO*<^GsDlRV*+FeJ{Do0teu-|{)gmxh{%>ZP-ZN%TEEYc6p-uI%l)XTiJs z2aS5nJ4e}Hr>bExIt|mI)3f}3*|z?xUZ+y^L?=n?;H8m1_C3`h+C+hB2A;9NN^{HG zIq`LQ;i}Qf1lpKm&eV~8*e3CaMeaKD^mc1C$q9Hd*qdvb<1*@J@1GohaB|!b)pYNn zx~b^>nvV(u?#uBpXVncKY)aWqVeoWCl25M!2)ckx3dFh2ZfzzltaNRq%6g^w_mcUa z|8@A-pC9f4#}|8E6LC9g1iw?A@P`L9F&_VLE?5HVVnGNV1*>*1R7E-LBFK}RX`N1SwzlH6f< zkQuP9sulX5ulrBGo__sT=(4rdP2qo^+k@u&&+nAi4<9KEd@p%a$zfZ8-}~~PzyJ4T zLk>@>Q5a;k17wML9ietI|L{ORfWx%4)G`B}vc3LurT<~%KNlhA4v;Gu_}S>|{xJDo z{O7O#HK_aead%zHhV0x${2=aeW!}L@eGE86ED*eKHb31;h2#Hmc8(nJyl2HJdidYh z_4%b@JH?gi9`zd;@ z+`miON;*0A$3Xr}w zKc7LTn^ltNggZM<1UXs%^&G=eLzU8omCM49%a!l3#+5>bmz!2@i5Om!xT<+BYH>Z* zbhNCgB+j!UV#D^8#1o0u)f% zM0|i+a%t47jP%3grX35!

P2mtN-_dfc!t-rg^t0UH8I)=J$iO$#D)-oIl)(3{s+ zGIuQFNg|9oOz-lT^TmBgso9=xknt&&%(p~C9yASA_t$62XE`9{O?(=ii}i6xc4>ux*H zMp6j*F#Fx;pVZ(O|Ht{G3*9tT#U1BJOcd{y56dS?5Mc^GPnlT@^@jQ))DPRLW08ST z^2f9DYX9e60E2Yf`qaqUNqf*YJE>Hc3Fw2eJnBSGy26&zfK|)4S10VMF;;)CtEGzhbrP(e z;7zsasMJ*w9J_maibKLgP(W4t~bdN$Z0 z9od~|Nqfuk_ooci5(3IGx!@}uZhCei$b`xB1_(*nOFz{j{(@^}&p!}cVS;P2iGCw| z_I6l*8YixeeIxvbfB3AMsY5U&>Xht0tDYbb9s_1zqJFym{$<^-NY^$Jtkk3430*kj z3%S?UMBf*(lz41!9K5|6{ZS^%KKk1&mtPNdJl6Ey2MO-i@zxb|sWOs5bj`-h=l6;CyF9eI&>&7zwpmq)~C-Wo_ zcf&FBhGj-S2$8vw!8F!U%mAt9x6XB%<2@`P1!j2(k`6=t!H+YYlTavpwCgPbhLx~2 z=9bq;2hFpTKm5bH90F2S*7g`gD6qh*+1?H>j+zgMDKDWG6&nZ2p>g6p7uBLs@TA$> zAll8SU=p5qxi5}wXnH-mjPC~-^F;+s94Y9>C>l^9xZa-yqFQXFDY8X zF8%ZUh$AVI#WP^~*i|#_pRK=I*^S9DE2pU=?t9vgWx@?`^(_Z28N97t*7cTb%!0NevGz|5m@7YnA0~k7kPJciN;yJ0(U!E+Nw$U zQbQ}{wxbbrFXWm50(ehe+(x8ty#z8pX@cOrD2=~sZaVgu;P8hGm}mkTBYE2b5FTZ~ zmj2%ekGl?1PFl~r_E%XGT0O1O5y8DN*w^I@&3=s=-74b{jpGY3{l%7F+JfX-$t`I# zG0MzUap9WVA76dAJ|I*Xba(I ztTgaSt8U{*Yk{9{8Cm5B)D)t(9Neg9d7YT^&=r;$DJ_G!>Vz&yj1afK?AEB*4>@a2 z6uGO~Sxx4wUci7gg znkRLh9YClvKaa-7xi!*Sgt%!%5|mhJwE1k>WFCzG&FxBYLlT~q9O@3v2rw*dT-VQV z2%`k;4A-K_ZyTk*`t%A7+I*XX+Wwm3B(9C@6dIR^hFL5MQ2I6%Hn~m(>_`!8=%Ea# zolUd(XeOe1Rq{ZrxMi%v9Z1`rX7PwzG&m{50dy0wA3dh zyu!fqVB9APEH`loFp1uZ7}va|yXR%5Moj~hHlNS6pV@}o4tBqTO94DMM4A@j+Deu7 za?kdf?So}R=R$svisr^REWWV1`L#*8OzQE%a_WDFE#OgKiS2zge~(o?n$;VSe(f3& z4AzLE5l?K(y62K%=;FaJOTJ69LfTuL@V1>SBDAZAP`brg%9^wh5b!*Bx|MUF!bL65 zAVEBp?Zz?c7|t?!XDWom3*8gVz+9WLj<0vEzaOCJS5|*9?=FM&RMIVj+@9Tj2u#9x zX#T+vX<_uIL8 zSg$v=lm;~m-}Kxv=ec5uZNNUhxVYhh?m;V#X{egRGnk(oLVNvt5H$rdnh`gFoT&GGJd?lR1^R{vHS`kDNUg6fWvS zxSKm+s*yB)jzpq&p#GLPwY?q=_>Y!KMo6Y$KN$8(x^)$orN{f7fe)TOg zYkJ7LG;Cbsin+8k#nvRYKX~z0Qs7>gwhtF+^X+l@4Y$gsREaX?z%D4lfX!jmt^lMl z&aX+|v5;XsVy~PF8+_hQ;|u!6Scrvg5%#m$D11sGlcj-}mJgn@CD5*Q_O&E3l5IJp z8d+EQJ`gXpW``ZtYU^!1y3CQ!BO5S;slDB_hqlt73~0@io#MPC(5B_hk##6OS=cotI+QI}eeu>m@b-!g9F;y~g zRQk!rp#JRe55)^vEkvEcaAqO`XPt)*M(3klqKFpT*T7HPV<63>KezoJt<^Awtf$UrH$I0nG$DfJLMI=1C>{@e?#Y=C@aq!6mz1sz+`TJzXG^yXrj#=60?PRN3SSvUR1A%&nT=#AP}EKuK*ehMT06fz9Yv(J=_5O+csCI? 
z%tuzl!#vfT8t;N+2fGu&TuO1dgG<`_K7-t%(T>SHkfb`{6?)jcoCH2+S%gwRXSy#fcSEEP|p zdU}79KyUt|eC@86`!&B<1KDm2c_|Lr=6NHATw~LXXXF;O0~I7F zOkH}t67_B&a81e+vR#+P(mp;YS`ijIkuax`xUB7jQR?~3rZLzTVbMyN)Hibx7BlC1 zi?_$J+p)8!)@zZB&AMtys8|Cw)%; zqDKWbwX?4v8Q#X2wFT};mq6I~?F{E%z4sy#ze75awOuZ^yw>dpM1Q5ZYPVl5J2bkl z4JrVuEGK*Hg_kK+P83e89Eo8KpfGbUE-Sg|?*-_ZYs&^!oKR z#sWOjMd$(2m5{vN5y)l>R2eFE^U5Z}#Wu(b8N(Qp@GWyvmg4H7fvbB-sndv>!CdVR zvTf(~na$fv8xoE3hPmC_&1sa2WZ99v7h7MM0~(`vHrMLe&K_L@gA}c=^DYKqPpyrU z8a8%M4N0>3lq3(|6KhJ}OV_%fFr>gs1y8<36e)3JI&%TZT>4Z&EQZ-5) z7JQJdi#W^50kF&Hn=hN#g29Z*K){CNjKnKhDh68km&FG(%HhWSnek&Jw!Dg0AwLyq zRxx{`VmH|jr3dY*40x&anYaZG1{vR0Sq|VCBwLhypw`t*kZ*)T)0HvNUs6tZiwziK zQSddv0^zZ|mQ29A(mk%voMKCP_!b3^6Cd`vTyVBDP8_ow;(X0~S^p~!myOPneqz`k zx%#0aD8o#vmc#l;M_cYUB^{cn3(S{GjB9JY|GavI!=z!3!@;NLk*~fG=X*vvySQP0 zlEMHjn-wF3A=VjGqygIohmKP-k#=bNtDoP~W=GnN@O+VZ!e=Q%(3 z)zZ@Hx8=CO=_KVhMk{ZytJvy8@z@{QPwJ%YlO85H9eyrl1s%}U46+*t-=sBd7xIj( zFN-)oF9(HE8*a|suiNgFmOkybo5Q)T#5Tm~=!98z^lx-?llipvKIUr9ltENSNG#k8 zq>OJpX4y+KGL>u+9lHNoK8-QO$*Pu-$mqmnBa4!M@7}WW0i(Xr@JZSu=gIK09oTvF zjfZD&(Rrfm?bUc|SJVaO@fCLiGpGggdYN8hY87Z~q#Ee?^^(pUOq#|S9FLN8-%i+c zNo`pOjg<3E^Y9SDSSS0z!#fH~w70C6Uq+W^r*k8Dmjv-zhxwTG(SQANis{hX-B-(q zU|e+?oLygT4so3h0m?=53~ZuPy9l!tj}XL^a@~4G#Jwovfy=K5vN#&2!=hbY?a5QC zP03P)<5@p+j_wutT*H+rep-JwCe2(tY&6rozCOgp`WDMNoQsY`_?#`gvB+>vC)Yv2 z6{xK`anNfrdeEB&POLZXNtUia za|5CM#AI1h>}%%bx>+O-*M+hSCMjrg`jo8KTsh~k&*%h6t1`nH5>&qRAfu^eB)gR& zw+t&WWF_q^1ZNhK3AODeQbiN@mv>6ok9dFD3MlUQ!q}wv&75$7TpmX@&raH*$UW|R z62%v^;Q&px;sKOx`Q|_(*&pTf0!n{TG6CKl%j4q+E?14J9Q5|5?&Y8bgKyv_Nax3F zo&uzpBr;Jt#*EFjk z*t`8=?92qXXljw^(>kO$aY};Opev3Y&`F*1hoSJ?R$+7M$R}t+STG_D1>aACq`6d1 zT;wmRNf3AOnRSuQewp^O(hzCFSv;AZKpf;I?&A&A#LSn)7?w?JKI_!lWOOWK!!?;s z1@$1$ENc}fJ+AUDh*fCkBmz25U@;m;37Z!VUNJ@(YR1Yj24u_VEulopkiriOw9?c- zxXA%Z*7LD^%FogS@!^!o{T|@xB>&wMVr_yrv!Qjfu=%o*j;0cJfYNdg&gsKl?d^be zje3=O$b#=kKMMXkDB&4wn3^AI{rv)PV{m>1%q&TL}cr~3KaQ@N*m{UE%-tiD0j z-UZkTXe}~!A#P=Z#S4*kE321T1&E%UR5P4PNz<{>onXZhgh7S&K3p>+ct8Fl&qjEW zAV+OghJnyvG<0gCts^i{p2!oIZ>E{(+fY=Wve%lVTz1?Pu~BWEovL!FNo{?Xxkeud zvgkXDsX$Xc9GimcUH7JMQ1fuk4c1;xX0)d=6@a_8dTU)KbRh1@NaLdOFqn$*4X0k?w!LgoPmtXo%M+kjN52d&F3V)EDLJGYM%fM++xy_SCTq*{#G5#eC%B8EkZMhZEN z^nMVEqnXz;?70F43(A3OL?|B13f-6dqsm<@q^wGjl(Xw)zH8nPjM_KUKSc-B{CVd7 zU31MuIeIe-pakNUZl;a}eapjzZx|Ie{iPdI^8w9U5BFc68|iQ+Cx{~_;(-=oWuN_O zX~OrxZqckbadX;Mzz%>25H{^qFZPIMia~l)RWuWq{i+9qwj($s1fOY_({l^MFRlCqw{0rT72ltcbKUa5@KdjMmAQh|og0Iqe1Ghs`3hsoZ`+zSx zF&|Fb+NoDex)Afk>WzSum5F7*F2a|1g~PJnkcCGsZgHW)>)8X~f{pPqGQA09yRCMf z1KfAH-WTFt+XZ--W}@I|OK5e^WHv4@w(cIt-QGco^x+yXVd$E#D6HX#vxw;};CW*d zWh!?yl^X4D9bbazx-B$1@P6$N7f(gOJ8WMbH$)i1+EFR55c}9*#?fX^49lML?Ic&= zS8UdZaySq&J70xN@*LO4+dx7*y(I@t^nySh}R!QSUB7mNi=?$)Q-k=4{n?Pl8f zXX9vWzL!Q~0O;Fx;_+Y#95R2)Yf!IJtuGlg6+)yiofI=K9D9SxhWn;$v~dPIEJW(@ zq^{>sXHj7o#3#P!-t)*Wg`D*phj@{UV7O+O?Mn}5;2HKGCyE&B{01RY*6y}BG}r9A zXSFHDod;?>1r%AhCK9c&H*;Ufa(shH3Q3e>WnN;Q#Zl$OtF<=E6#}=$_?%uh$_kF5 z;0i>%b4a_(^}sD6``%0^&0oincG-gmuwB$y%gnx5J6DZTz%?PUK~eTbsSFFI>*-@0 z3N_LFq9YJW zg`jI#L-68d6XhL@>wBw%eMZfJu$r-*4=YWT9HP4ek%RmWxJI~L4)aKZN{ipruK4Vg(1(>kd1!pBw2tZ-sALgXTEFI7 zN6D35kcsnk^@TvjsQ&`lERb9C%zjYS6)Y0H6bG=*nNHmrIlf_05v#09x*7not3P>z zr>Dp5QL@eb3BGj7#G$2)rjp7Wl)}AdU02oQne0+B3Je-J%2sexs|OS&xXBHQw33)zl)Z4Wl2gU8?@| zg#6cQy9wV?Ooy0249?9Hf5JBC!}6p1uxI_V+3Fq%VwS>NnBe+Fh!jxndwYH+_~VrV zroyv57k)n9_{zdKaS@rYRQJVu?8MHqVJ>oc7Z(%-qhknyo@++(W6uiNwrhiFn{{D_ zF>O-~wkcZa`;BR;Wzf(U7|hZBKjEeV0JYK0ZoRvNnzI%*=d!#E#&wm_@Ze4&ZqyH;(+>IsOFzX{&-}kICpM{jnpSj+YO) zeCP$Ynfp8&EIBT`p%Tda0=v^E4t$3&X0G$#3hsczAMJdtAa8i9QB&;xE@%_IOAB|os%5PE>qxe 
zhcl1$lKt89=GfdTx8%bsime8ZoY-KNcFCLpKz}HclazAL2tffV^;Wffp{J^tELSo= zY&?4JjxHracM|0DGyHRzDlYw5>}NmFOkmMb@5Ze&kTWV%Qr7ohHP;!~kQ(=hLF{4L zHo2?B%Ge%E+D!(q=fbJoq%Gd4OK$IPvYWPDsf*XK+f`=!@$=_V1-e9Id4YBa(7v7` zhhek#T^g>eXAX*-lin%2_9+aMCisa{$~8tdPqAEu((6vDhp}6dzUCm1D(wNK?7~m3 z6?9w$9n1%7S-M43&10UBmTzegqeKhjLB zK`dOamzV+K-YI`EDcM4<~PKsS~p+#l| zfu$C>-PCnAU^Y0i`2MaO)81$Dt*^h7)2>uCrW_&r)0s;i9Zk#a(B0&F(Eg^Fee3dA zh}l~B6WCxH@v>BNsZY^D?)np-l$*OT3oaOsrGcz?dL#8|ZqGE>v-{q_v?cQ>2|R6^ zM41ob-0b`-&{A5V=IW@NC$3{Q5>amQtQM>nZnPg^2=?5_H#aOzJ-$_bxq9V+D6YIi zgHZM9;?0^t?zt30{vZJN+iIDWSIIz}?zvvPB`c&Gs4JM1h>aZzoR1)x3z`#G_>tOI zb&-kC%hDqzNJ#*-P zeA>YEb)0>a@c_+wY0sJh(<~YseSZ%l2q*Msz3jVN%Kyk7*%djrgEx0|L{xXKzgN$; zDLp>I`>asXsk|m+e`9=iMyS$eBPA?2WDw=!06$I-#P>=s21D|$`Y5>P9^Xh$N4E$s z9D1EcWA>K0D-vw!P-ukCHyYdhxfd?Ky*Nm`?WP8V?Ny=6RI35zTJIWG+eNow&=lQJ za@E|7$v_OOM-G`{I$^^q$=dpdKfRR)u+AL(o?J6emJ|mf1X_+-hazAyw*f{3_5}+5 zG-O4yT`$i4^%V?&i!iD)z}=k#o-l>EZ~+H*+_?C6q4_Jc{kVPZBkj%4Eg`?lqBaH} z`fik{i=AK>T`YZ2pJpSW3EKnSN&f=^LMwMRr1LvKU!U{cg}hjln;h=xEg3^hae|*T zgsx(>+Jv9ne`hziTv7{lmPlT-ZTWT|MgM_zZq8RURKnV16LzMN!lte{cGvA+xK%qS zFT5PMtKIB+XVo@mWTq)Kn6@?X*FzqQ+zrnCY!}$d+n36O=WID^&0=P?^-k_DJ|a2W zkh7TMx<1aB1mL~i$2~W7vX675-#E0y>W#n57ktai!q5K=rqQ zb-?c-BaRi-mkZG-)DJ0Py+!IgsYY^1nQjoo`yjx+PJl62c*2f!y{`DcWiL;Qbx_wS zM8}pvnlk1jqbGB(7|+S9D;m_Pu$^?^cI18T5#zX{o$w3$A?0juO3==_V14R?^rntd z(f7wC7k8J7#49CwKGK)_Yi+Q*M>j8hhNN>}A!+kCB^ni=lB89VU{6I!g$?ntzvNl9 z{qocc!EtyieR4QRr^zjq-zRw0_ItmIV$y(dGHKOHy9rlz>kxq}u9Ou#NuOp@jNJ@b zIaJqXlfyc+QEw@dwa^(48Vh$h+|zgCk6fZ#0AS2;@ty=kM=EIzPLj<@twg5Ez`|=Zpz3CNlH+ipYvp`l8dthO zStP`H+`Bi0o>o6S4jFc@CogEX6ZiW{VVD_gB)~WF3mnV=kJy0dL7Vc>v3L=qa!kBD zXC0(3O@eiD%H%L65Bu0fGtl(l_|@DU0((p zcU&J99v=4LrW`t35?Ppm<^emT^z+M4Ack<}|j*PiEpGrI|fh;P-)|f^D#H(n4|auNME7}Gy z9^A7xM8`OSL|#N?c?$zWRkn7x0+E;B8I8oBU_V$U^F~;T*Zvg>V7*iTsqwjTA^cAv z7(X@yuCFm)Z4yk`DVD%Z5P_H`*4|c|OX_2Ew$LcQNzY`7wn(yS1z|MF8?NA%nNAOg zI4?ZMV!B4lus&K>52E_mGyY-|6!{AKm~IzcW>g)+Y`7&@xSE0?i2dq#jSTuhBCbC0 zsd62g#{F1)%E5fTPoJDi8NMe9LWQ=tJqsuHyHsq5|_DUT)woHI{C-n=FFiyuCfnF>P`k( z?gQLCfI64H=;vlu9Z>QcJoMFj1?t*4vQJ;fp9oYuCmyHgp)4e7TvJv5CYZ)DN<7)a zz(+NCSjJ(=dtI!o@f$=A`ua3@e5!x^2zQfn90W<>!(Dttvs%w zx!hEPAt60nrRv_PzUEy;X}wpb1cLVk>ncvG^2!B0HcGY%q?3e@Ocs@WqT4-htmauBxJ$iOq;dB9Tefw~ zH@>~sf*W$^%9M%f8Z?;hWbrNTZT-fjcM{!ql^_WcA~+dYiRmzfx45Ur{s8Cg&izZq zPQW$qwgNKtG*3@s^C6-t=Oh3@CVtCnO`)X!1`jB*vM)CD5R!^TtbiLzm?p|D5IXd`Y+j06mcB0r zu=T3L6YsYaSrCX5^IdhbxhYGMZ!9XmyJ9OIT~Fl7$j{19Rag3od3yrk5*{ZU{92bE>e+QdveBr=5k*J` z&-Xl>bs3rS(_NEdgUl=SmiV9MzHc0XdCQ2ToOh!dk}I&^f8m68xB2aREo znX(Eirf++F*a$#0aUThxD?lGm3XdnX`e4aiBZ*^PO*Jjc5-hxeTTtX($;=?)?$LFW zNGmcz*QXYE1#_;p+7fwA0JYt~=sCA~hTCR;H}jJ37??F3qwxVo&w=3z%deLpefn5Ia%!ia zRH=Stz@ssQOFgqbeKmHQNkTG(`oYDNDq85C^)ft2(xAx#S9PDdOEGac5kO_W;9T~s z#LQd_Lqcj6CW}r9GENy$FF=>cdAMw{)xcGbbhpY^_|~h`q@}g?SQPCGQ#bB&98!lR z(5txQ-kHwa9N=Zgot_NZv+|==ws_yS5fNSttv;|c>2FW)e2c!dbA5@%A!a*yr!H(K z@1@?J-J4=fcKT2grIaFo{=^iq0V+X*U3w`!0MFJ^H@3ur?*>ZRb2QVNc8YQI9OEAt zU5-A$Tr)f@C<9c4V%an<=mM0_2&=2KYv$J%765w!D`Vb+xOAG{f6048qs@!D(+ z_|}7PyGShLR)=Gt&!XApyv+%y-NV}7KzKk{qm#0g8rxz?%MS#h*2lj&&39pL;PWG0 zgNfgD!GTV{SwXGpwnYTDTW!P!uNlt`=zgsyB|dT7BWqum?`(e}up!-#`1QrGtsT~c zde^2OjD`4P7~Wu6?b>$IS=nbS2bb_EK``>VGU7)5iierp+OIG^6Jm|L<{@2CAxV1++lw* zB@{)N!TL6qV)!a&cl)j~8^I9J>0tNsxkI0KWlsgWbx;I9S}O4+2U}R~ ztER}fY1i_P0Wv>X=hY!5cpEle&u1DoLLfcJDVUfuJpMou1+q%_=f7vUuZ@FwwQIgD zeMPl;JD6-fqM;Z9eKAyG(8cl&$XtpAu9B#FZOc-y2@@%IOUnSUfrUNA{_uCHxrauB zyBV(5`Yc)J4TM8B=3TOq0m6g@6B2(sk9OGLtx6c4f-q|8V6SirfrBYx==Z;-CFZsb0Cs7@4sC#K@Az=)ETM)(CC-xVzDpmaruuJP z6#xp(zW09ZQi!@52r13P{lD;2hkuU^R6c+H{JGN|AeSf}{A+->X2SzOxE*M 
z^T6({K#M@W75Cn&3blMhkEo3sIFoDImG5;gfDFfL8U+XD5TZR72+Skek^=>YI?^<; zJCY#U=~zx|lh}@JU5CAkzG+Iru1D8`QoU}IT}udT`7!q{@xA-|DoYai!v$&#=(;%; zKnV*x`uoa=d9Xn;q-8S7nZKTXo3Ld8Zrt@oc-MhE_}5?ca82}uZPm&{OqjKgGVeLd z_0$QF*a4HgJxq>%x?{`YsqDn7+XnDKaTVSlF7)Vchjt(_* zbA0ww+p`JuBG_`NCU;Km)1=};Cws=gBn(khh8QwD^tSBuXIiZ--U$3QJ&L;eUa@(` zMUHT?Ptl>Q1UrKzE_BGJ=HLgWr1=OKT`G&;SlXi|V3e=l`>~6M?5*N1uaOT+}@?^{bs^qQ|2-)>gu<8EgC8Rb$iQdoG zL&O1)NXFHFYll-FKW4AzdX2CaUjMc+#;He=^s0C;}T1JY{GQc~r~~@4aTtVNhisohjHGAo&t!yJ%&{yrbZKU8S}1MUkQN&|5lkm~kCf zUSv%4ySkWKtvblxO^WP1QFHf~R(~~PsN=_aug0A|%f=_1+*O5###6LjyJ=(fKi|7q z@N{9|icHHm&;}%GY~dT-^9JdJZt|`0obMc;<#RS~Nr1Pl$A|7Y*VrrcCYOI!-ru#$ z5V_KsKrmN@#4cG_3Ogj*?sRrZRCIU%Fqn-}qErvAd2DkoGR26@?Ax72D6lAb*VA9x z5i1#p;Np<5(E8mDpl$})ukvGsrs&URX0Qe~XXh2@XR<2W^rdTWT=d~x84%qGy){p> z1WQiuc^FvFL_Knv16m462Koz@RX6>Y1%|ekcGVh=5BOD=wPS$@yz;=_ak6XAz+Q}91aHbIsM zLv=7S{qEn>iS}fmpDb_9V=%9Sq8dTAQ2;cc1(168Ux4$7`kjzy-wuo+p`1>`HG!{( zl7KvTMKJa}5MRlu{af1GymTe=&|9ulaq5Xm2hR_Mg6%J~@7r1N7*I zM^3QX}Z9wz*UwEhc?s<_sN@o@TG~iai<>cTymVsM5Rpdmm zv>KC>HLg>(XoJs>FVR?Sn6IM~rRh-c4UPT0pDjS4qc%cFC-qppw}WTb&{_ci(0273 zm}r_aUNkEsM%=xSdYKk^5Yc$OOrr!L7ov%dyycnO=z?js6b-c zfZj*PE^{DEcx8?l8hh=oktj!A{BjB)l=SA2ohPgA>zn=l9zbMG!hXXCWcY@86fUD^ z|EwE;&o`f2N=O1up(UhGRAFppj~N07C+K0Lc);g9m-He203jt*gH4UP>obxPv-NxQ zE`WM6#7Uy_7T`1Ko_CkfB;`Esr8?%`O=0!odHb+xx8O;5VKI|gy}z$RFq=OqL*CPBYF*XoaFFAdAOp`Ec4Ex*%!(ig= z4vh#sYhui2_4j)JulCf1#(K9z3x)dd3j5r@caPpmd%o3J=bF0dMVFtsVY|Y!^OBG$ z!gCZ5NjYbY#Md~W8?L_r_+JgHpsZRb3LbfH6^I@65t7?uE&rAq{dc^|Pd@$|huFB^ zC4U&`@Dpbf>?>DKe*NKs2=%%}aek^nQ>lGL{#8wwkIW{}EusmO^F3)x|GPE$Uw^$3 z2O2xG@qQIDz$7c@vfr-RP?Node^wvtZ(5b`ggIr~86RC{*YU*-r%`j~{uQ?gvny)z zj*7Faup5se#daP(?Jd5t<63vOf8VkP-coEPg09hzD`yJ;>Yx3~kSX z3Tdotmc$fwyo#wE{-8LkTWl&Eu9Pk;#L5>pnP@InCxQ3|+5K|=%3>j+(OEr!Vv71+WKQ_qte*EOg9`uth`R2n&I^v$EVS#H+Nu^C!N*@nwef5&8 zq-|>I<62eE5wP((#=9P@1AZ#w7S~GBjc(OUXy+)ZZD$I67W^|KMW((=;;cZO*810| z?yqhvKX=|+u2Vs0$*nRmVwNEt6A0v7zR3h+cShH)^Gil!;-_2lS1f50R|3_iP|4wf zQSq})5{Vb395wfKpb_7{eB#NqY_88xv!w>vcg;wX2Z{b~_cC9S+$xQyj0ZPRrgTos>mbrOS>iq}s=Br>>ejX8c`;V(2%y`gcnL zlsfNKD<`4gbEt^qv0zb)hR3;+=-=H|?0uqcVPc?p#tNVz@gw|b#W$ZD=ca)^#!et? zwO!d3zUA|i>0dpvPC&)qck=`!Ar>GEs3(HKqsxO?qOlHt zarImFV1l~_2(BT)-Q7vB;1)c%yF&;bECdMd?(R@nfZ*<~g?r%z|C2lYPS3s5Z+Fkv zdG0sPb9m|;ir==i*Is+(&$0N4J8pOqhln|UJZonUsH$Wf&G|U077X)`rI&F95{;H{zEm%(<3Y{8<{oC0_IRGU*0ZApuGf#)JESG$8sb2=f>@H~OCkl*F0*}UbH!OP6<|1*s< ze_V-7z$)+<)dOZmsmy?ZD5#RjZHSidVkGfLrQTcjCle7r^T27>&hVLzp^4KD?w4TA zrmdJ1L{wv*ibiO^-;$$whV2g^LVuz`B_c9#?L?Pr!BXJ5pYy4ijl|2E=F zrlwt8xjYkJwsTfYx~uO!W(NoDk51%=9*;3TmpW^px6=Iq$mT+#P=L}>uHI^DGQZgw zT(k0s`OjJPKavZIYEjrYOZR2n$~R=O;UQPv2i5X*TsA1cxk(k(Xu=4ws^~iS12l4) z+*kO!4*~N*32}~&D)~-Q1DPU}*-Ax2Cj9?g83vTm0Zp1&K|33+KZmi?lJ+A2&rN8S zEAqUT->*4YJ@q#&{RIVZS`irx&$Fjdss(v3*C|fj2hdBUzL>3NY=SHo%U|!cDgIw) z`9ET-Xgu<-qz~nPGH3(k2J0h?fBr>LBp5cRJgQMk6>T_|l6YZWI;XZZ#PzCZ;DxnL z7mVct4de;-oX`SjoY)%Q*EF-}HC}tK*y=iUPYh%`zOvpA{+ZPu|&F$4N` z6|Ih`JhIn0y>FC;8g_-U>+G*_%zqE~degWq;fEK0)V|PP(l`<<{P{iCM=B26CwE}A zUCwE?*!-t|d#lZH7@hV9W1jLx!WCz(c3CIyn)1Wjqu>bHqYzhGcz_=Hmli;c!02az z4h<)@sImC+Etl3B@QC@Y!P|w)oyMVPI`zKj$!Aj*!faJTA zqAoc4S$9l1zh&W?81SBH$Xi1{6X!?FmK3DjNAfbj->jD%7h~JC25xP6lKY(vT$;D} z!Gde&@Tddf=>BpINk>0!thT(&q}6Q5-f0UDX8MZl?}n+4VgpHqa9c+GsDmVIgaNw?(HfTX!3YKJZR@`gw9!1W? zGyV8y_AXz2X!pmIAVt;nDa7ga4EtfydauD&KvFJI-@(s>kaykE5~$Tfn7fz*xszx9 zS(yQwVEmyu6ob5eNTKqf*;Bs0B1!4Is4>ODXT9nvT4W;b91Y{h{unna*0UJd>aZNS8pGLMVwxgg>^P? 
zKe`@Dz`=$#`#%ik1x#VBsErL{zMK*m_lvL0U!RR7y3UE^U`Oq}Riv~6iT=a`?C+n!{>nuDHH`nQ4 zS*)FF<-t+dHSL?>`~D9F%~!o# z>DS|V>I?{Ee_jf{m|Jvpy4ZeS{HgfsC(i{bl`6x4s%xg^a91IKZEyOv^ka5gJfpH; zvvxO;i0M7z#cF8gf@h1+RS+-}ujaHnOgPI9qX5vL7)%!~5^4Yf3mt%I{=qvBGWBHG zx2nd2?j#CV-GJW2+m~+Wl-f1k$rOF|BXjqoW;$d+CXZO%MC_@J>?1 z+ZR-=zz`ljkmiYz$OxT5!DG^qEK#dS3eZXRpDCRyOMpIL%0{R09l6`>t{FFA0pFIa zK!Bk?@qS>-Dd;(zQNKoBpKzX~m-uf~8pR5r!V2fn?9@Q1h4nJPX18x)0Ew~R9J@a{ z^K4%U1kP&?c*{K%W(I7{I?YPYx7o)T7)$v^_Xqn2Gb@*U2YO+8HKs#m2K*me+fN%p zpZ~7@Da`)jOuX<=vEn7-KIebF;68QeXSX4KcMM&=^=|QE-8|h1@zQci&Weh(Sr$=h zf5Ky08Uh5Sce>4@OgEE^iVY0u6RQ9Yjw zy$tBQZ-Xk_gX-gwkDIJn_y3YwJnHvW?f(#X@9GwY50d+Qx`Jd)4N<`z*u6ri$+XVFj9)v`+XchhDPM#JMB9_<6B+g}zUAg03 zg9H>bZregzmF{mUpGb+&ZGpk~<2BYl98}a@KMR;cWA+_8_2QLZc8Pw^`!^I!O`>3L z@>oC)s}3`a>B&yx3^|SvmzX%w6qe}@Ly5QPzH0sV1TGD&Oyu$!M3K%_da=*&O<6KC z9DAr}RY#O!8ZRY*!Hh=3={()3BGH=-VUi+mp6Arle1MyVd%Tx9gfCn1sTl{|Tyzgc zCYmGxpoz_UGya(#fE>(X_?i_+QozFhsn$8_{#5ITH2+eq{}&DdYX?RoH8Kg8b_B_r z<@C_2bwE*r+jbrBJ38>WMTZ1?Qnn&O? zak_UjiLj4WMa$#nD<+n9^Th_=&?~Q!4!z@}W`hWXc>-oN)^?si2;~=1IQS>1ed84| zD{tN#jYu`A^r5{y?p57p%x@Ir zaxf{uDSI>OP5#rs?^fjs(Lv`5nMBx0(rhHz-n|UqisoofU#6&>YByWruW0HYF*M7+ zh&QA1-Pp-4uoA7w`US;mb+t9|^+5UTxP$ zfKUgSS2tV{3xv<-{!0j`ZZJ^4(tvYDgWX&D#Nr-ft;N>6Ceyz>{~cl+xl@dJrK zVPO_ntW~f%5V1~R>M}zZCYwZe<;U#q;^Wa&jQMZaxWb+_Yg=6dYWp@w1C;q05EHr< zTlqNlC4cB=gfLuT`4$sT%GQ-uh=y)>!_ie<_jn*Sz4 z4sgL9ANEf_ZQ`0}`9me;+>_JII#2QBtZg*z{Wkb#1P&0Y+OAl#16J+Ul2@tOtN}pi zax+Ww*&`!)-pNrULTShCvg(he_>wW(8t2(ENtL% z+NKc>3Eq9jdKWu${f5Kt^p8g9+ynjy%TQ!$KX))?)t{kCuP$*qV_^>MVy+?sO`8Ff0Z7gs zyB7i$~~KVy}5rpl#n!)XZ5~5TCjnHUhzI_%Xq*XFE-TlbDw-p;{$mg z_xNOvWtbD>Bt(OF=I)YJ`;z4EcY1Uk=nI8OpY*3f2L+N4`w#%mw8?4)BkV{IZ+63< z;h;EuzX5ncqPz-X9NYdws>{wR>%fczV1DEMP-&$@Inb01C}aB5XA$`Ox-1@i0T**zzbGo|W|t-0CM zw6v{Prw@`PZl-Hbux@_nZVX2Q+sP%t1((A_tSO(yE%@zK)XJDg;coab-$?ahr0)u3 zcTH+n>$%?bIswXr;)_>G!l_$|Hw%by+rW&-ir=x8HG7rt;)C{sp;8sfOY@l$W=z+P zC3BEMYU4MAMi%{=VYAQ2PiJu#BWaj42B0?KDNpYco~1M%R}}F zoTWH^(ats4t{Ih@@41Xrws8&&WFA01=0IrqV5yG-;k1K|ASv~B-&-;SSUS!kl{)i< z4S_{%YJ}XzWo?|DjWwcXEBzLa7#%BuN0U`zLW!R`lz;&~gFi$xjMVeh{#|~x5n;wd z#n1d#r&UY*b(a`90&&R8lP6x8CZ(x`&$AwW~_DET6Q^VRQ+kRh{){|=ZI zO+orRcJhGR4{#$t?~s>n1A@+!<);8OEJqv3|IRhoLYnh)}(p%&nreg2d8y01Nb zr^6uUGhUf{gfxOdT zY*QmpHr+bb;g%(G%+#u*mb$v#ibzrQ5RlHOueG2^_5a=yU8Ix$15osiY&$!O+hEiI z2xPNrMxLs6mTP^Zg>y}feIVRfc9+dRO4vq~p!Ly0%%|WJ2q!)u=n_5$(#Va?3tu4T&kYuho!=|(G(pSTxg#I+c7JmD{N3|#SAzPLY% zDQroF9*oBtLEE>7lYVq)ZauF)_4|n&x*NY+vyOcdc5e_F_qKW#%If#J`?hx6fd|VZ zQXaZ5AvXruf0M7k?PF`;3)QPBojz1id5&Ic^zUc~?fdO_-2JfHhu)-I@=YAC`5y$# zjh}hVBwm7_{4$E=K5^muD zAQrdGm^RAEvof!9FAOQ#1~f1v;PVnl}4=VazP70h8qI{y+O9nGp*?Quo2 z-Y_#Bv74{hTFD70Q2bLW?q4obEInN7Rt^CQX+sma{qolnbZ*aS`&W85rwtg#J14FD zvYAH@awLcf3F&-UG`Q zC7>X*kIMj+EbxM~r?|sjkhbx4AKQD1-wRR9wJQq5nGCsWuWef6?lTd%hM#x6lQ|(m z+HEe$F~JCj7Ra}>QA7xzf)Jdq{Li`WwlDdFFZa!tmM*qOnIbw;cL4dI))58wfkoHD zogCDZ->H?DSA>;a`(vQ9Z;aaarDJCWQN)8=F)oP+ki&a4%;_m$d>{vGeXPUv86f<> zNs)xLKWPF}3i#};zjRu7LvSm932@8U4c7tj%_2_X6BEoe7Sj)j7s>kGfHr?i+uiJv z@Z1NknXyuY4FoGXAr@$2KA|`H-d+3vdm0wWTI<;5r2o40N#2^@UG`&U`{j$;l)dgV zu@V^itjUO1FU6!@DNBmwzdvoKR&Tkg;LN9MzghJ@4f_J07nt>x;+Q~6)Dq_Da^&PQ z*euRFZn+hn5A5m2kt|Oe$1_#)F!H|UZ6@n=m-*hAqsluo$#0^`KctFl_tvKJ6!T>S zP+Cp14!XX?v`%;jqvA+@8PY7jF$y~`sNHMa4EGzc!5UUf==%=+3d84Z@cw26`C^4P4+P@MSZ3Yg}kkGKR4_< zG90pgHEj9<8p|G*aWrgKVU(!fyVZtan=;WBJoNdMpoWCUz1p{J`tV#PNw+=lN14&| zj`h8?QxN81$3=Tezp@ukc&Am?nPLRrSG)WVud@AoTju>NVOI|w$@(x%$IJ#rg<2ap zPLz668=KsR=~yxL5h$Mw*`AU+{Jwe)x1NT2ChIr)R9H;;kDE)#3^gN<_CF$J^iqa= zYWn=99Bb=a^O(O>q+%Cz}Y$3r7Y1cDXQg{dz?z}FKSTX=1`09}T0 zk_$Tk{$=nt&G>#I_tF?tnUSi|n-u3BJkL_j>{nXgwddh5ME;m1m+<>{5wr$G0 
zTJv(34J5D9sDam^owas6otrsfKt&3Zso7$VrRRhEXMf#L}A57`;}p^HTHe=jGWm(uuh-|xx|wD z3HKsX4e_-$IlTexnWmY1;;tTTCtS{V(QTgHcWtlDN8plP;2Ofne?ew_3EcRvF9JSd zeKD`DQ9rnQjMLf?&_+`FL8B;t8(P0`hB=I&u$Yp&Bf=FLlOc=vOT_VQ6e$tTxolp9 zq*zemLqHZm4s)0hC0X!`ZW{r2UkQca`R+%hgM5bBNg2KSq&VV-TbuhkR07MsyjbZp zzH$F?2j}?ZP^8@2+kO$jzy@nPTJ9Qo3+(;iKq+p&+>_PTL+V}4@Psc{Co8Pgnw6kH zD&)BD_%}DfT3YAnSca*0SNl9cW@839HJlAtS6ls~6gpiHC8%!*{md=Jk0JtaSDbmU z6}si%?HF~q%<%;T$N#-YjyjdAE6CV+Bp}f*Ov!52-pCX4H|Az95hP{mt2XZT*B7qn zHXEKdf%xQvsN@u#iYYYJZSLv_39D_Dn~-Bj$J38bBV_wsIO38FOP3?U=V(L&+_2{{ zA_D$TKZeQYs@ahc)%i}7Kx*ri=$rE)BTJTJ4Kpg71sBroJ>p2U0hZ1aqt&8$3y86& zL)Pt_2H?IC)|i@I57!uWeO!2xYVq9NQp1buN0aeQ^0$$8*wsp<$Jo34~O#I5sTuM&!68pHsWeKz_wNo-(=*iKi9CBr; zUYA==fUz4{t<__5U;7BoY~AgdBrZy16L_@~rS>`$8*3M&cjo%F-%{`aS=CJ0gwj|& zJ;xXI{&ipgVGs2nQqwhlFL!C%m8c95PE7u#H&gfQJbYU1@rENr#B(TvIE-epX7b)$7vxP;p5q(WA`huW=661ulk z(y_+aCSMA^)(Ci%#xFVLj@giX5$m-S-OdOGB^8Q|N8HI*+F_e~U-H}Q8%b3wG?E)) zgLQ4uD_jlo6uqRqhITk|V8Cdx!e{fN8E8)^R7)?npLv$>pO}L9r5I(83YpkdJvaE) zN!OR!A1p6Zg6VaHMR>OOf));vEA(6GCVemod%Rn9I)j7jTUb|z9=>?7+^aKQ1z313 z=MhTZQFi)MKQpL2M)5J@s~CemKV7^~)M_*;Rd&;RwEetG>F_X0y2S`hDT&S$^3F&Z zJvpkikTUoZ#I9L9Ghah4K_2o%gi5RKnd=M+anq~ya~^W{ymC&Oiy?iY!($<8*uz5& zCGwx|x@bqbT&QSA80XrEBHgOfRW-n0xExviQn*QEr5yF4?4WE>Q@7DKw1ZFhg0tg~ zSMt|yx0+UhjFt=*H;TpVTPvyri8uhdc+_g3`ATP9!=+N6MAn!5mNOmS5yeTN` zng4N-TR-4M49}xDie1J1bHEqR(=|EOV+vC}@SgowLLQD9ey#9tBrampDM?k3PN}qg^S=clsKk0PG+hct3E!QEi>{y1Ej=vGog~L-= zHq>J5dZx~3#%Hn}M%)Drb3xg3rZhf|#3g3)8~O>|ccJU~+;}P}t(DSG*_kCM*ZjUK>V!3tx*v42j7yl>1{dHB&iV8yx8m zx&kVBrUoQhEe6`?RJLL5*8(IDq!Gwq&t4-*=zYDt_|;Xp_ZxJqG92hue2_dymO+vr zOR5jA+~_BKU;jC;O&^npTZfrd8Ei;g;uYRjV>bG#_;|WRW5PmUJ&|!CPL{qshC!M5 z1Y7@o$6D<`Az9XQDAL(rrsfV`fS;#n)ejW;BSO_wh6H?$b4 z<)5>gg_g9Hpy@a448^l5KB##-ROmMn8fCo5{#=Ci7~f3zFwM93kYI{Io;w4Y!6&m( zZF5;X4II`Ol7(nYP;fkQvxK9pJ;d&&BCo z&5qoknkZjt+{xc4_C8-8Z~`?MPoUA?pDeMHk${=CtX_>-{&8Ys6|0ElQDDtduQ%Ei zmF{KP?@dBlaFR;J!J`R*(~s!TPsQKrcXIeW%FsXSpGu5rLT=E5Wn8NIn|?AK)^?-X zNiHpYz8I|>?cYRv-3Lkx?&SNP4HBiPz_$=j6S$8EsI$M3<}>eHhJ~OTfRuOh$#ps$ zQz=#YIl9x}wwQOxZ!fS8@;fSbimU}z#EYVCi004975VwmeQTEP=-VOz<4U?Yol<9< z)?z-blnd3kXc6LrX;V+3DC= zxV769{6*p287-Qqhpxx>o@)JHBQ(fec_a^#?6-uFPqUm0qPXrs0f92(B;`sPH?J*H zxoxeI;mTkjzFLxj&CyTM!8ato%C&e`&I(TQPe1xrPuvfJ7~@k$;7&TzUB!h(zlgIG(w!Q3J4Lp?GvxRUaN7i~#$D9!rb5YrDytO4H& zU@8I0mIt@Qj61C6_a4o|4;K-AjzTYt9-FW;2ds;4Q`8wgK6;$zX{!EGNau^WcR9_% zShADo?Yf3pe2FAUVdcfCOht_^7*8NBNhajh%{yCV=Q>CyoQ@%Py^TBBL%!*fJusFt zx`BH=NEKJvrBhC`|Frxcl3R^$U*jxdAp&X%!*KNceVqrwwSM!LCjc-;_-Ek zml$-NwIeiPutfs50ilMk7n4W(-p`WX?8#zHW<|e~@UzwB#?RQplnp>>Bkqx-Uo2-2 zi!j8(%weVr9Ea~dCi4|e*qq!RxW7v)oa9@{L1RUFU!qB8pBCcv;&k@xj35Ew98FTC z>cfs4AsNh8cmr%qoYXJqAp|vN4g?!IaD`=b(49A3tTMT`0lR-pwUZxaxmX5Y1EO?% zclhR_raX}x;m%ey{tKuK|R6N4x+E+_1ugWzM zyvUGR3GKgw6i4Xeq0MDiv#Hqne%J;DYHa+MhkP6Lu4%7keHcJhS@>w{O?H1O6$IB3 zLbA4r82MD*v{e>O)dnz-x4ozJ_!+>cM&gjJsbGNi`_ZOPc5Jdl(C&3eP1|lw87|eN zH@CQR*rr-X)Fs8&Hir#=I3s*Cm|pwTlSJPaLG^gQ9J%}k;mG{xlqz`fZjSpD(;nX= z*?SWf#(Q-XVK<;Y7Bij6ZiyBNdp)Y<4P(Om+^axd3qgKpRLG~8TeJ)NCCd<@?t6^HcZcqW_zMI*$I zkZPMf;b8Cy#z5NRG`(@!DAP6-lUUa;*nD?eIS8EV-5eU`HvUCl?X10~2E8Yu=AJ5e zs$nA-4@J;o{+$V0=j$MEjgXdyzQanD`bCA`gSD+6%vU#uUe*>yXMdX^9&BUt@7tt( z6m8PeyfVEP{3x<^=)31h5!uUY4?0g#auKV>bmSPUwHiUltR_W%#`>xW6j-{+F z&~G06If4|apcNT&gMRGW&Ij#0l!&jIp3^(G=UWeXcP@LfxgDKUioSl4e%ul*b!yrk zx}mN|_u-$)umMNN*rG<|I&3a-NN}D38*snb9Jl?|*;PsIXC)=qT>rx;>-kB^;^SfE zPJX^|tXwtw%#IJf^AlB0W#c_cASQ+FTib&$DP4aQ`^`Tfh`<5yuTN-~a}tZjcGIC( z`;)Jc+pkIO)@sv*S#k|2QM6{7Kh}$}NOAgemAN;@SWNvSt$8Z#CDXsyaR!P>B%znr zMPyzDY1IE?kqP+DUbxlY&-A|Ybz(H<@ysrFYtMchA6_~p<9J-}P0nIn5+O@wrJNz5 
z3jo-j%gu=%VRU(ixU0ySh7JD0{$_+6kwh;{x13&H#pdO1I95TXfV;kw%;HZb?O~a` z3Y>WC^Fil>6QQ+Yi;ks_j%VK31c)3Wsa>*B{@>S{o%`oyIvM44T8$zcNxpKtQQ@JI zqOeV2W%2h)Q*Wv`BP+b!?{|&yU-KoWt(%SMFJOfWXpz-RK#x#H>Ch8Bo(y)V=B8So zP!HEDYfC;{{YGhr(U&oHo+d~-C*X91m|6C*ugNj@+MseE?hQu<+hW+ok75+#3InVi zy2FL~b`&CA#+D+gw$|?GFc!vlsHW{|p8N~q@;N8S`sU>aFhSKHt{NToKYUG8@ZU`c z$S7PoM*!!ZOp#QzQyrDMdSuMEw3t#w46$M@Fk*>d1Wfl;u&- zeo(JVliSGsNU`K?4tyFM3#bx+t*(7s*B2p&^s<(mjv3gNC}_lNt3PJ?p`nu<&ml~s z0ZBNhZw+5NyTrB(Rv0(`pnpfuV`#qi6eykAZ%hRIe{fRabAdW=2b(Boi)fMRO#;Hs zN0ZKTT#*usOr~EwOsQOF;i>{y=m%7;c$nA@pf;m2e|}dTDS<^VQ`fCsh2}jp0x|OW zp#mj;`r!7?$%-$^arX)=QGOJb# z`)RbtL8g?V79s%={?gl`;ft|I0ykL&AEAl0m}^JOZJ*eh6e{uSux~fqD;z{>P!rI9 zU%(`w0{Z&KBG)74i|Nn><4mdE4EGSIonHp0*d+Gg+Zi9leUsIOGq7%(@m=3n)7QKG zcKA(jmp;|=YEcA+Q5UJ`wvzsZ+I2Sdv3c!Ubw!c~^f(4P6CI#BuN5=b-xJ&l2VwC3 zbG_m*h6?s90OP62uIhB5>*vbAorWv3n(g=i1j4BWXQ_V{e#6G-6lhe|;(FNNhw_hd zO$_GT_-xU~@^4|H|18zMAbU_5zC}`qeSA;dakxpoF&IrIlKoXB$rdtp%b;07WI1Zk zWIv*@J%F~%>RdLfQTg3_gmBa9P+ zMDW7{CP@Jl5pya=B$NBcrt_s8lX6bb)i*4KPKg3lF~=g_CZ|^|0^=sPq@(We!;=MQCgA_BkgIYtV|PF1yx`o zlJ*K!R*@(l3cX=aX$nMIu`W8H(|QAPJ5E7qxu~#6%Y$EKd8*im!qfM%R%24SZEHIT z(3T^+ptu+E$Ow`#u8ZyNWeD+RW0E=Rv~Q4*>6RSEdgvq;@Ebx7+PRt#ueLjLS8t9wVrbsc_x6kq7^ z4X5v{&m9Dhq`&rfIXBY>1P$)jfdmskmqj-iG@m$sT_*>R_h3?_U8dWJJ;VLX?* zU|F(m-q*W|GWBkz2?6PiULIQ*G$bK~^SQ>3lK~Pkk^;i={Yr-NC=3FU4qB|ag3mg7 zx-pl72gAv%_K@F)kU4$ItKjUqy!UaxSvaQ&3>?2;xlc|n^ijchl&^uDd6+Ud!db@x zJlipylOgbEyC$X|@bl@Wz=L}Vn?rOhG8Q|8q*SX%W5EdnJ|Ix#96?yLqdo-?(%W_S z6b=Y8Jjav}27r{r(4(~TP`M!sooqLuxc8IQn$@8js6>mR^6}0`C{1^VeAolSe)be8 zM7Pmj3Mgx+7+E!JU!9UziKXI1Z{25mk$2{33jpf{(kP!++g#sf`*l-f7=WD8(Gpqo zx*ZKDl(0JDEp(3$m+0|^ezUa9LuU3eIm~1WGpTx#Bp4WUOPacrdW)G>m1gZ1Y2Dc{ ze-TipX|axJvMdqb!QPl!w0ie+SX^DmHQR!HdV}`#J)t56?Kc9=_{)TzJ%2J?|M%eN8$UWprSebGmV7?gWhy3(x2av%OZX;eM{2Vj^ zvBjJPTx7DdoD09r{zkJR6m-m4+=AhpX4IcVnR%N1)7O4YiR$&IE~Wkjq+_BtqZSD= zYED%x8F5W~Q++BC-0Ara*y7A#uSE!XBM*?ooVP;i^TU*OoS%Gxesf;d5~{n%He!W| z*TET$X^Ui>NO(<9um>W(pJsXUp?dIO&Ht2m0t>M_Znc5M{rzc+f=JOJwSK0sb$~o- z%n!?{JoeE`oG&+u9yrIV=2?QiCtpo-NEze4%6aHviPfFl-Q9=jQ5{93Cu1EE15ooc{D83fk~ASg;4S zvoh$26IG(AJv}wJZAI}TqB8bHW|}eDeLTO{@XVXtf(eGeYhxV(wvZmS#4t{CQ~&@4QWC;8K@E0^;iu)aXgoU7TeHjUKML8+jI<>?ycC8NFTFbOyz z0aUaTyo`-HK75DR3UnY9h-b1xTuZ0eDM6&1Hg#zth9t!^>k^C2p23gCL30nRiDTsc z9zFf@zS4{}y7gv(6CjGgz8q;SQ_5{uV=vzU>R%O#!$3ggSO3uM({5R+_+w>MZw<@O zfhFo-CNE`%ejX(>J${Hjiup-_dDlmAc*F;?e1|IilaREH0$w1)L@|&%%B%6_A#l?g z+i3IFbtA{>%D}$ku0^@vV>_l6)4DoAfT+QYjEAeHI0QB{nrQQGa%0ogH9C$7Kl2U2 zF-8ddJ4>mx4E@JW4)1#x%SSUKo{)oM)l80Lj;W4Cmw{fYEyIjT{f19tU{(UUNf0y1 zjpj`iuw=dCNhg1z@t%r<<&s^9B?!oWd3os9YGeJY%yhwZd_M9|a6t-70d|T zMzlI^6*%$+U{N5}ByFx&emb*hH=KoAq=B1@3wrwzse-bOkjFCQm9GK`N08Ib6+{cE zE9wH@a#WW<=@xf9X>97oeg)%U+wn0unR`>_ zP`P?`@L)Ryoi6fIIJi)jTQc9j%UKLZ#-aIcd^dV@d#wmM<@}w>~Ezj+`YDFe+$Z$LV6n^m3AKs}Yw?ITi6>vR$`=FlJ%f%Wk zT9%@|yp@DJwLQlKvtuGmhN2PEs+D920!{fc!UXb|do)x$h%xMok(n^!uYWOpcS8M9 zoAp_YFa(2K#v8OCgVkg#wuNu241@ojuiD#qB7nW{@#}tQRgufkqSVD9f6}uUFKUZwqZLJ`hV~46Uhe5kL&yf(yapwst;k|1h>DU^> zr}5u*$M;MsMbUzkSfO2)%}?q9dmiMSBqR}v512&E;Ivy9{MRL(wDWB#JTc04g#i`D zt6}#zAEv<4PphP4o|0`zAxplt*hSxJmk=&}Am$02kL|N3{9(_U#0HNuLTx4t63@qd zRlGXM={H`k%LB94YuGcV3yOm$Tv1P-OfKx1^oPd4f3P$mIISWwp|m^t{{@sLiiY!% z%HdVE(hWwC(rSN@7-N;+j$lV($U8)lC&q%4!sIgy&x=CU8Vl;6-vw6>Mw#t5ni@R) zqiN6?*{6Av7$T!qQF+HIhL~@mdu<*Q?%SA>dOgySg(JEh@#6XVnP@q^&TohElG*bK zV2*NCe)}EH+LBL14a(gd6{1Q^pNlS8&s7dR#xlE>n8Un@sorYscpAkjmRM(X`$a)M zwkH#vUd2S*rrf>6`b+%Gc&*LVJhM)Zzm6r0wCpb}fE?QHRNo@BRXPDwBjr;3D2uy6 z2&dth!+^&{9p7Zn+n^&wv`zNP>zXs9hYLAqk#1pwN|mtRg?zGlt77~9ia31!EvMyX zRWK7)no=fjclB>B{)7A(-^1fO9!lBa6uvlh_IvfqqZ04&x(`PQ0qXjSTkFS-W`X$Z 
zd_mTLOb!*}fpUR2uDYXL1CUyB{x~$qW~E|+Ve9J3c^#cg7Jp&9F(|vCO-eOwe{=ly zYVLkBN0cYhkDN-wG?~Lxt?S*A@VR8tM^b_>GJcIPj-_AskE`sCOVoxox+zwsSOlq- z>OD)XyGz?@<_x@pr7;dG-6U#Ah5Sk=RWNs`Xe!H83KcDv6Hmfd!yGjR_&WZBENI1aT<%9OwHU7I1625wtrDpqOM7#&9V*Ir19>gKdYO ztF<jcX_T+X0z`~lY8ii4!9bRTjDm7)%Yh z0eSOo8TIj@FLa*+`!swv=uEPLQGKwgcg;#T@VPH*jq z-5}W@3)$ei>&(vWI~NUjSDKj8D#s8DTFt1g>KQyisca(Vw&lUdSHz=S<)o_rqKTk` zUlC}CzFYM&)8!JoxOH@R6V78xeOE31bpnaqikrTHdNZtTZcST2MLD?$fXkq4C#y5P z_z=&U4RkmvICOmyUUn;K!Bc*zdr0{vkCkK4(x~MRBh8aIl;g*t%m)7Wc zgH*+2bQvF2S92SEn};l%qZQr01Nc{!9|GBCtGm_Kz8s;3Z4W~!r;b6R>jN{Q`#K}? zuG>A8SZS>ajgpTDt|l7^hK$TVmL;c-N>*6LLG zzHRb^DEsx;Q1DplgbW$Kle`yOKcwo#QN5Mc>v&aYte*SkHl>Y!1&cw6Ui7T<&PSZ4Z}hKTZwN8#p1um-_mJRKqJfkyvv1K`3c7{fxp41RFHz-$Gy z64(@jtB|e_7+44$P2MU-(1Znfz*5<~HG}w2{3I3j%5n4xH$ElWn8$CT5X514(1h=9 z#Cm(W45_E%Hg@0tR_|1(vs0s1@L8E|`DCfJ_Ns56x-FcyR~2cK8|nSVw7l zj>sxK(eTo9AHVy%IwgQ5#_w`B|5kDB?xM7kNlFMymA=(as0_`ND<-#H_%5<@B0RW8 z|6tBr4;3v!Zz{g|rRaP2E4W~~|(ugbpWv&A%4BN%6H;Rk)f+s~U$Z@)L1=*jP} zSidoxol4`q!)1DfW{5T1esK|TZ?xo~=J4_BkbMWLZs%ujM@+<=oHo0NYG1G&y=pP@ z3^XgD$hkv1Bia!Y3Nx(m{H@{k?l*FXxqjNSa;n+9Wb;OI@!j{6bde;~T-(HV*U`f; zIp^r9jRJ$ri47iB_ebz9Kpq-9s@rU4lE;hp60?u>DnBzOB`ZgwMhG&bovgI%;Yhs` zT1k5(;y|O)_U&wryRGJHsO6hsndjz*!}bHGK70d*fULRVm1budpMX4EJZAlYJtIjh4^I&0aVi#WJxJ4hHBlFKDk zOy^Syss6ne{D}`j)oY<;pV{w!tIQY~X2DZpD3W#?KK)3z)1QbFzu{w=ol*Hl?wY1u zbx$E#Nb>0tpIJ5WV!>Frtn5Q?4ZUpcFHQO35H`c|Uf@TH?_ysTZ+kF;{F@}o5XF>2E6As6 z5{!&ZJX9;ygvI599;Zdtt2;$+P+sGbJgyjI`biY&q!&y#sIu_iP0?vpAW9)LgxUbBQf-F_AqVHe28S=Xa)PINHf5ty1 zXfTE#eEm*vOti`W7&Le$$S`9eDfK}kJ7JT5t|An9mQriFm(9MNkaqLZePtOa1~J?Z zg+2;9dxS|67k#>fm2femLBV9y28A4O*(u` zS`pWS)j8f1mNUih4?!S&S2X==HZye`h{VErK)cT`ulI2@8lij3!E(~C1B%wqJ~a$q z5MfPAShxW>>|I(JzVq#?zAi8!)or^jVV^BG;?lm6u4MkQU z&((ozDk1ol=M58e8RGaT2B|RZ1AE=MSdX;o0zJeckGA1(svS1}J!$p2l!(Z*kw>Ar zFWjPjQQ448IhwL%twy>AFc>{b>UHHroyVE0WEV(4QiS}GAqN6l*AUqlcw(*tE{;`q{`%)}b3q6XkILBL za3%ONmhs_7K^Ycb8P?N8KxZCXR{s}mq49RFvh;9u!ZDZG$7ycZK$L3wy~ z%`%jjJrism`;$VmV$UXW4iIP~+Xa&En6AY+>a;n|JK zmuDxcGi|OPaAQqE()AZVq)9P%BRHnijeDKuqmbK@eoj4)6PE=)uSB%iO}?TBGA}8+?^-ES)BY>MOzESp#sAESeLJV41l5 z`b;E=5Q~zi;ds8O5@BCACr9$ujPGM9`L!YZ7I zZublcLzcqksGS0;3dfLp_`8m8xe2Tr+CmA)B`gUNcQ2hMXGqmrWQ;vuQl%-+$R!os z;0pWt>>-j;QR_7;=4e2%n3IK~&rpxoj$6<1(DB1UDT?82gd$|az0>(TU+;C1xDH$I zfAv=fDQ*J75nUG4&J83%Lu7JS@z%!2SwqnMRDSdDB@4^g<7f8`s)tFt-Bqt|Kf8~T(J8ZnM zGxXBI=>(tko+9!l13HtdTjI)_^kbo{;m&q z{myU4!?S!1vApkw{aU3x^eb@_?Sa&#&4kptKj|T&sZTEyojPt{$*>6FL|Mq`0;_m7 zH-ZEnk5yOUrc&0h+>NUrjC5A6(U?lPVZYjlxh~Cvj2-8w5hcwFM2$gIv@EtDde(C+ zos`usbdF_uEJN)}wgd7A%h8Lnu?ymX9$v&dv7{d@&leK|J(g~&fS-Y|yaDgc$2BKu{3&td z%8x4_iQZMztF$X?+;>pyCQ~4|4hrNpeN4x98HdtvptEK$kS!9a4~3al2fmoJ_*kRg z+TUrMary0yo}?fi{!60YKPY~FQ0CDsX;5c z3v{2;um~*Uq$+11uq9nl0LTx*)+3^V&=aoxAR1PhfQn7^@p<*&8 z3Uit;%|xqM1BY9W#f4~qP9N9BOV|}u0?k5!twnYtvGM7P(TuHSJ#fnC$-sy5gaFue z?zKGJo$|Dti}pgjmAQw=OjH^c8E2em2p=E_luFS&o%^HkrVmocV2L0jWjg>CztC^i zk31_Pwt-Ga*-p#K4#}K96;fuHCjw0pIl^*GtKoQFm9MXpy} zn0%#$@}g9=GMvngSc>cf6)oEO38Zf|QZ+x$qAI;E8|Yma#A63E5aY7Q4XF?wEei(K z$iITa{dPA${n49c3#ZEB4z`s`53%ha>drFfSLKBcJOt)T+J^MZ33whT7wZ)Lln*d0 z_7f*)u}~sCq*k4Uyn*pXUJMS#v520)Y`AM|kk9Tep|+rp2Le{a23qTIhA)q7v{{fn zRPdXF66TBBh!CgXu-YB;T1D#27@xECt$EcJPb?9dQb%b2>Kg*@f(yuILUHMlqo$j1 zeVaeG?c-d{9vemSxeU@(ompa-WBIVA3JuT-^E@d3sj1uru z4#rLD*dDqVr;rNT4D&72T8n8Z2b|4+n{sVqIB;zXMa)iQHKL4Be-I;Hsd9^cT~w0& zLLfZ==SWjK?<`O4xAXosSR|tP*Kb%W-C176nlfSEo~<@V$+G1Ug;$tB#%h}tU__??sc28I?cDyMflp1CO zX0PDK)b(?r9^S5;!^PVLi@mM7G2@AB*c159Jdv`BgH66GL~NBR$cj8&iZ_sXIGVwp zZs4`<`O(ZuaX&SI+tO9kZ?mS|l!3L`B0KUNNbvm=DY2w|%$Yu|wxG2K3p_2kdh*;{ 
zo3lios~eL~bMTaw$2@7S*|ln?iTALpW|i6wRoMdivUaI z%|Oww&sX_Tr1cwq^PC8;w$=q#q+7LMNwz zl4lo@@Ytruzt0NrvX>5d5VxV{K&K{JHc1bImHNg67Dw8_+R0GF`^Mu$a%lZV$dCS9 zAF7^v6m5Q`XMvc9s}vi5FUOLF90rA(l2}pirfs4sC!mV-oU8W4BiV61M?(h^LJ=#FcBJ4Ha zeAl4sO>C@tHwROB*rxhCRT;y)4n1K6*S)ytV{3KM81WiJX?L`_f^v(ReMhS@LXbjQ zQqjfTVl7&*!mtdwPS1;)OrM)k!9FOl?g<6BCc9%-l06G65qJ9AGp^EvuSu!UjYJ_R zEm|KHzD1HiEPUmScMC$w~R zDbtE2oG(|{ec)6CHS$5)+35@h&!i%s2F|IhF(bg0igk)G)`($=0`S>0!u_9xd?oPV zj;W`UR)=f((Ed&l8?WeZLCy9zEi7HHdU_Y)x@Oq#{ut@KNJZ4kUjDqnD)}i;1z-Bo z7#;E_{o)UXn!x18Aa>z{*`lZjAo}B;AWKD_PnTZ#&7H z)<9V+&tPb(`Cw9Dsg4$qGxBJtub20vY=o;3XKY&@NU&c7m}rLTVrPGYyh@rds6X`1B7xN>qO zGWLKY=(&A_xTJzr4 z06u~JPTM1SW?#2Q_7-Jvq>ZlioefQ$+L&A+I|W=Fg>MFaso`JvqL8F!BQQ0P!Qs7i zg6^72_X8Cc<5Q#O7~iNCM=m*RRNHNr0mX)Qz)_X;!T3+%u8UDZk^Hj#II@2a=9L`C z(>DM1j7fPSW$KL>g)i5ele@lWbgutr7TN&gM&Ls&338S>8pzMJClf*z&=r-uMQT_8 z<0?4Fuh2rh(yAX5XK-1qFPvJh#Zq_q8X!tC&!X;{3a;I_IwmWWEIbt*cgm5m5J*p3 z0*ohtxoDVC1B(fL{R7UX;(j90m$9Cn(%9!XrceT55ZL^_Zw%pIdx&z(9o2_K>mSJ~ zep3OCdR#Ys&(3^Fs@EIass+vsds}2KR-)hz2d-KAxaM@Zez1zXR>S)Z@H0XGqD+g0 z;k%8#GhJNl5mjpLTu6`Kv;dX@QowNm?vHaadpII34O+hm+}ui%Nj0-=gkURGipLx* z%lQB)$ggoZ9TT>+%l`fd<$H!7LM)7Lqc&fat5Bnc{J;A+)^4lFb)8J?*!dG(^glMi zj=3(f8%=w@w#)8Lu^>qH+m&Ne(r>YER%flUXILm)4?X+AJq98?yj9}lQ|`oK&VIOW zPSzthX2Q=3>TBCQGF*&?@kr15%U3nu7SdTXW?}`g@v(HK^6|gcDRC3$F?l-}69^01 z_N4^$(^5q2n0TpG>G=syDKCyhWdrd0$~oSw;&nelFw74>(VUv2gifCtiuLSX0XclP z@)^r!pV-2e`QHF?tL<&m5{SVEqy-=8%VtdRb?QuU0He-(7bhn;!DxkU1uP`mk8a_$ z^ce@#(jW>bX0uwe(iJB$vZvCCw5R7tz@9Vm#~Bq#z5b@=Q;rNEvtlW=Q|-+ zKmcmQQm21P?$Wm?XMm041YMi!6ahTh0mZ31s44f{G*3(@wKTyizm9t{*G$e=Z*8PM zOoWdHF?3)VAepF)^V$smupdsG2Q;p#YB(+JM_!uiqCmx-;m zF8D#raa-QCkS11P68?Z?9a03KZ`642{azQPOm*2BX?v$$!Us3kkC|cO1M;?R(Cl)4hsH+%0R-s?H|{AqCd@47GUBx7=-vnqTTDypJFBK zOmA$%u20)=JU+y4*MOy>?&$Fc(pdVAE+{~-{e*{(Y0HTjvEY_5|K5t*<%Kvxrii|~ z?PDMj>9F)=e^o`b4K#`?w9RSgy>|~?hv1Dq*f>5^+Y!VYBSuL*mRBF3@TUZUm0%oa zHSbQS+8ww!y4{1%vLDHHzW%K)(=+n2k{zVP{6xyuITZgq^(^j~>i~-q3 zTHHxeabk{~5DLUyX;5VP?H&W&Z>ef+y-uaOLbhshVdB`;Ua$rsENX&RLZPh3S!VX5 z2#?7-kcb5q?L5S7D964%Lg}qJWpu3Kuri1$bZF=dEG|5x3W*PkI_Ms?Hae$ zWIfUIJ9gw3g#nTkPW}FHr+~^u+p?i{Ny4dPXIr1keOdB?cJk^r(u@lr1F{o_Y<*=?*;e{%g^D?KqY@=;=CH zux+4c42$04ecqi+^2|JS^UbkK=7t+9z7NI)@QyZmo;X*~h5|XhuJh!T*_w#v=K*Oc z1C>=T>%n*=v}Sy=)D<@Nx~l=^8dvrG+y3MkJ;Lz$y+!kEPX~CXjtx9N@-fAD1!hH% z%zn0&g6Xj9bw<57toQgtW@{$Pnj0p~PoU0cPFo0wiSQE$_gbe7Kd^x_mihP2CF`acxDG#sFWs#oj9%oi%a3f&9WHi$kk44y#;W z)Jxwq4yN@y84AjCuW0S%$L?IZU6!@r>D|njD(5){k!-|vCub;&B16P68;#F;7WMVgL%Po6~^u9fUfdvw20&9 zRgXCq@6tnP=J}CitYhk701$ylgsC?@fs?XWPo_}U`!AMqlcIHu){(BCftaqF{0Ktl zFew{2-ABP!dlj>`#-3Ne#JK7yKCsE`c%ugznd!iJ$b?HRFhvrc^Uv?#r>cuB#duQ) zu^q=nvO0(1@*=hQ_)f!;W{OZ!7!%OEDm1k3Mz4Kx9|0)8D*L>JnjXxz1$ndGTos<= zU$`*IUyXEI0iMO?1(WK{_2fv;ep=>dsi*?5uuYBbU$)U~2d~ z?R8v5CG`e+wb7&1I2e+fCgVYa9O!p^r3r#j`gDSW}FRSVSUQQFc ziAue`9?apjh}HNOhp8CMJD$H!kfa=^GzEED+6lYr{PD`*^a{ts%)||)Z~CuvBx;)8 zhscD!^{&}!$x)8{;i3LBI`kj3C(>Hfj-VDKcct$tf#^k%StvDwF+fb~C2}l6Wu!7# zq={iZ>AEzcPU1$lsIYUh^(y4Ojw_hS=Hz#l_pb0S@FFXPr)TxAkP90USEKwE(N<|I zs6pm@Tv>}j$}&KalO3rYukazTb#HB%CENz%lmJLgx}ok)>>-{3{3msCetm!OjHJB) z%D-P-@DziRkjxJe^B-ncJBdA+3}%aLh_#*>OiANKrzt1MJFSY_sJ!Mt3S0I9VNX(! 
z(zte3T->Yk&tk#`tfYT*sc?n{W4qf)m*gWKQ`%QDwpAwJ3w4w|$zj(n#w&36r+zr4 zLV??Dw@ew1^Y#Qh_O)@&C@4f|Jgn`#?!8`8;CVW4k$DOyEEPlBup$=@O8FkQKr9zd zq-lkmz(w7pH0fO6sL#SCy?h(RqD3#a^KRRoUq(1SB+BVtMKXmRxKZ8O(6ZmD-(jfu z_Lq=r=r+-m)0Lsz9Wv&0PZDLpJ7Vr*h7`HbDPBJs6lFeOB|K7WMeuM2icV5b zJ|9@Qtn^cW_%F8io#n)DKy`&dDf&CJ5WpKDWJqULvwS4@+i(}Eq;ENz=9zy0(%gui z0&ew>3f$&yk5{7=s&uYk0Q#A14=lNZwfP2J_tg0@x4=6E0*C_<` zre$^M0Aa$ntD%RR8ajrG34I^FlsUCaO67460=_!DXYR#P49iE?u}|JH+nHIZ`ajWY zf7`dJf|88o*d8g19)D+v#y$+d)oEmrRzx*c0Vx8x1U0|!Sq1}qc{C&jXr?KVgS$Js zTQ`P5b0mjRHJJ>huWtj?)U_Ip!0_nW6x~SF+|GJWJa4g1@b&c6{ONC`xpgaeqNgR^b4#B41SpoR)6KwI zPfhs42CGU(GdNs~e}-0_cf}-1TTM&Grh1N_Fl!2;4p|0{ZY=xTR6#o7c#!cu@gE## zs{CV47=(4(rz?DmId7n{zz+oIb_uJ}@u2SSvYcHdB)_DFLk0kfrs;ftb?c&Ot@pZVU`j)V zbYDY&$W(q9Uag6gwim~5=3S+56l;p-_oSSFgN*Gp!Yf+4rh@@g; zp0;CEE@e&M?QxxQ+tqGg9HwuMmH~p(fSO_<-@(zzD!OXZYSNuc=KY0hgALUFApBad zhb%?aAJD^L$V4EH{?vh0BaTvFgf)qze%MR4#kxrrUt^A6TpsCNNdsiOs@P;NDE9{) zMZPr`P7bobwoeXe14nt5to%1=fWf)e%~nQid;>`;o;f4DP2mQB?2ETAb>td1#;af@ zbg$S-KL^AZPWW%w-!e!KPv5wGK_F2;ib*Wl`zQPKALOQ~T|8X{?V4|0W2E|Q|6FIy zOyIIsaB@26jA~qicc1wB_ZV?6*M-e)E4Vhv<#x8=hX*U0#El&inX$?Rz(H!M#J_N{ z&anZ-)bS>u>}M%Z{mF#4S#x^bSs^uLYa`XTUEFZYPc8THCijK7}M|H z_gBv>{>1+k6yBBEX+Z;SQ#A+)4xA9ZZNRqk<-OI#XTfm7=psDl#_HW1Tc%I$`KCqB z{BS;1e9yH4I~)fm1xXYv>}=q~h+_8?&C4`Ibk+Q7dA5SV)1x;vEEk`3^0djA$!b?B zbN*Bk|J!MrV4*7Yc~MxZBdphowj3%(CmjM+H==nD?eBK__qW7}AD`i9%<$ zfi*R`>fc*b*Ab=}a__+#5Fhcbz zBXwJmr^l_vgP(mA*&E5k>HyS&W_X55;Og)NK!Nemg`yudha%Yj5L;*?F$Vckxgeie zO!KXq!Alsrc)+&rwce6O`i$%#%P5XR=ez+k20%7V5Ai81k378*Co>SS2_jTqEEFU? ze3;3QeAG;ED}NdgwP!8#{hFYzZ1>1S1E_%$Rq5 zA09+g0+i{+EPFy1R|Ig!ASbHN)c zI{O8C6XfQm;YerfM;TmHaSo6YgN!TW;v=6as}-U$9A1>C*tdq1f26M>856fV1tKAO z$gAwdCAaE7=GTUO5!xb=XOV8%t_&uB%);;EqJNYj*#T58D<>2ucITuwRqbe-PKXsGviAT|h!w8B<)E4MDBE`!`i|K$$GvC{ zfp~cpx;W6BJ{}J$6MjO?kGwvIB7_}fXIAk!hpr&ZO?x8TJx~-OxltQU8*dAye82%o zq;13KZEaU(01E#k#N2ke_Rb6>$aUge;*V)Zi1h{fYcb;v6C`}+D%NZ!%L0X^%Gy#? 
zK)L~V?H=p7#tF&ng!$Vj?(%spKv`kg*Jgb=blCUJrQ_XhmX15Du%|4Gu27Zj?^XbI z(MpRs7_=E{+C6EvNfrt_W89c@k(Gl_va`PlkWPNvnSMHx?JkGKTkf9?@4xF~BZ&uI zi@))MX<{7g$kV0~{F z0!a4mvS8A`z4zV?=3>G8SExyBgl~@9+7MO(tGh0jUK<|uPW^cy%b!>4ACI?&`J z^_^{Sd_p!=#~6E7*s3e=|01NfugL@F;5OX>GbEpWWdLH0*E3Wm`12v?B6YT)inSHdOG1L_%ND8^q zj%SRPoHh3AQbl{Gf|1S0Ne!t=l_%$@?fZv9Vnrm)6Zo65~s04bKw6SIbm-PdPa?B;WMd+m4uEcsaMM4l%-{GP6A!$?-e#;U6rB1rlVrNuYAAgTkJi?bLt9 z=-V#PYrF_-RQ^{0AEQ0~>?%+@rv@-_+z?oHcu03ZW!Ma4eQ8$vc;`)k#v658?@Cxn z0=p|SGc% zCVt_$WQGS?g)oP|xD~s<_MiFWy&f?58b}!>%B?R_FGCHw8hM|}a)fy2J?E3_7hURw zHNF8zv|FNZazbxMRy!=&`v7fI8BB(GxI2d;ge5MG{ zJ)FNk`1RDyAhlPoqOwjXhV!s#57ef3;pSd%V@E&*Jp?bMpOr~|UkJNqrA?oK5bEn5 zeaM%t7!;S4Is&Q@4^?9(YRa+Cju=!NQ);UOJ7~X?gc+i4x3TG$Sk$}Zm1kSp+ICy# z2D~uAF(;dMu~m>E!5k(AQ9sU3&a2}3RfDL%6mERAcbS?udnKOWI+l^hn@3xHo$o_X za-VqPHVFqeq{x0bSXD#if&n|^0t~3hO>j$Jb$F*m+i7n$bvTI|@d$hth?*Qf=$Ql6EU!FjL`r3^W4-fFqfxz#_|<#s>;a46eZogkFvOXNgr#!OY;x+%13e5<_eUq0SK% zbu%{oQ=aU~WC;Ueh!)L^d=S>}8%7)EqXnS_1}crAfz<$Y>G}OR3+J(%Og2ByaQc@O zX;fzOs>Hzsdsc_#T7fn|Okm-VYeYygl;10w3f)CQcWH2Rr+>Dj)?$DqwXTPbUHvKI z_-zgtm?aH%gkfxu)ENss@O08)tU)Ra~2I9AHC&4Zh`1c z(jZ71{01rw5GE2c7H$*)Y+xjx0|L-&4JsFsT`E3g^JEb%n#o9)LhIo+D2H;ifZ>4kWP< zRPK-FPNM}KJsSO|C;xO9{cfqR z9lZcZ6n$-JdH;L~O6DWBe_Ubr+SrJ$V7%m56A@@T7KUHaB$q-=kfFufY5!~{cNLyq z?w|R?@cjHq@Z4^(i`GvX)rm;nDYp}ls8qZ(D9vNz0Q0z9fXZ0$iq&N6Zr$L|DZJkw z_qQuX+A-E8$hGY9VMTSO9KKQ2D>uL*BekAslCsSsUv_+(I@WXyu9|3 z`J7oEkNznx1fGC#II8_J|KCh0SKprbvTU6;KymzDV?v}AP}FYR#C8{{8o=&1l1{1% zx^5#abuUu&?ssotUfPxQbhVI>2d99lwPW6rvnNC>M-L7Ba=-qzqKp7dGNs%6Lcb1O zfoum)pY`tw)J$phYQ58<2|;C=s^#BLh@TR^cdM5_eB{}GhLzrnPHe&%Qt#|I+LqL7 zJ?bmgi+Q#2X#bH*CfCaDJ$=~5bt`Xqo8S4?>#}3{6;A7B@!HvC?^2hED!ny5R|ykA znC+ekay`yBq*&f}z7PZa=mPa_vH!YF(>)W$V;>!dq*{X${;ONve;QG!DhThn;~W2Z zZ?@M5Mqp;{h?%JP5c5tMbJgCdNfxIb0r+#tvxZy*?-jefDxTvHC;!xo zW4A+$;3Q|dq0%JmjOok2{ZmOZ?M(CM zMsQk!mHQ0y(k!zq4*h!I{7Va!7lL-`TxuO{L|mORNozOb$#jN z|3iEAf4Yz=W4?OCHPX63ID&YD5ID?kCyV)lji<3 zrT;wPzx-cSuG7GXbIJF+{Xbi;{qp-i{bLqaWJ><+!=3-#txBtr`4GS21=d5SUVe$@ zz%|;p#ivEwv9ec&)eP&-y)cE2WL$=m#0teWjr@gt<|FR?_rGi8xgZ{>4@zAX@_N50UI+HW&Gt?{7+-DJ3Xo)`&7Lee6jjsh7yM95y!+VO{{b| z%y^h_pYl8kKb~b@i#WdVf>&-oGp>E^h`8tDzcrqJx$wV!y{Erx5PH@-O#gb{2w?_u zLrg2=gw=t z{*jR4{r_y-_Vc!rJRP~5_({+0=zrPKh?u64BCEO4Qv16%mJVm5!pF)TImTMAJbZMZ zYVb41G-yqO`$A0Y|FZD}U3X6uS6Dba?ayBmJa9O|>(t$~hYuf~yX#jqHL;9h^hbHd zMx&A>FPQc+^2O-=*NFW6ijLd|3}f7$=;)tcboaAhCg&(Hjvm*VAM9m%I`l$tGR~aO zp*gXPbcp`HO_b5@L>c8Kzc~MA1Kp+#$Zt-H0GX(RrCG5XysJTlRRQQvg|TI~SQY;M ziT`|&-GvcZs43_;;wovS``1>qiLfT1nEBrJ1|C>ttB3C>Sb8uowRn0?hAp%DKRR$? 
z#J>%u(l`qcUYulgHUIkUf3nOk8QA(Vf>{)u3;(MN^f^sCY)>lCs6!1{ez|9L#sKQe=ND`$gV*ikw>~+{PWiUOZ_R3VB_Ads2O|ew3zHQ zU$^PyqSfeg`^xRtRy)3=axCpBSo;6t-S!-q4_coeE|SUcfyOXt7<#~Ec|gVZs+s$9 zcUon_zFr3#3jdGeczt&qeT&D8{(Bs!nxkGRK?8Q@iz}12xq6I|VP|7x6Mt3V0Zzqt zB1>*@!V1K{lXqb0A)^1UrtTlca}V1+c1HPf4XH8hKi`z$C}6*UUb<7}#$=b$BZwPNSsm4iZE6Umcl0*y ze9U}v(f4tRQO13rn*;b&fI8iQP@xTwK-lH|uU;Bno*a=GIldk6El{B|1IV>(m3Oo# zK<7_HsEQUB)kDec2o<^!OUY5itL~NI<;K~LH*?`G z`nirzho8wX)V==y$a>4LsQN8#SP=w70YQck7!U!G7U>cYDFKxlT0}~62w_Mm>5>vD zm6Go6Zieoz0S1N~24;w7d*0{V?>XnW__DdKfe(B3UjMazwdlp6KfJdWFIHYTLbFId z7~&rvGaW>@&q&*LA$I~D2&@)>shS3h@Kew$jL(k(QqT#8;Zhr@Id zxD44`Cu_7mEx?k~Gs9KztX_RtQ`*T`s<8?SPvui}Tp#$=br6)bp3C~KnIfs8gM+vE z>6k^+FMZ71&PJ0rVhz@tnffSlPYy=>(h$UYL}sT z&*5^(EboOMY9ym;qG2aC4HrnN(JE6{!|NFVcEyc;p3k%2W+XSy?;yw!F+6+3nd`G{ zvVTl6*!s2(HRR|VEztTO*HvzY z+=y}SM_k4DEB`E#_AE9kcu!I*R=wWh-GcnGcBA`dGWsw#r6HDZ+tk7Jl zUeR{xof%ojIq=B}4XlPc-(w?C?9w&mkqfWT-T!+s0AJn>r!PP(I_E7Wh0K?wZ4dG- z#wnB3b{Y+J4g4o6KXisFL($ZnL7uo1@ao5@X7y_Fex0G4{r-v3%}--)7!=mqCC=H` zwu)j~gVK8rMq*E{(tT{>YKzsOkt3ASc&X_-oIZt1PxiSD3XD$|-|YQ3qv}e@cvJ%G z*WMn@NROs?3jVtoq&eJ?4Y#`%P>6-Xq@TcmVY3ce-@%cnlx-M%*1cE*(pr zo1xzJOI_;REqS~cU*)yz7X&LY#qCVIo2T`fU+CrUOOU;Np=|LM_$t=Qp<>9#1!_!q zy@vW46MnF2MFs+`C&p@t!VWz|3Yn2GRD3*dy!4cPG|7;(4jADB93t$*0Xn=X;jva; zC7Gl#b?>x#e3T7ZxkXFB2V#Hk?0?qn4S~%tfmvuXH8Jg~URR=!=UcoLTc)mlJRY3F4iZ$ic_FPD^jrJNU!_A0b*k^xhQtvQnKiO~ia&c>3?{v#WvbqeCesB@7>-D|- zGIjI6--9n}FK#msI&Pe)lT1+Zt`1)};;;YWf7gTn6(-X&YYHb!snS&c(>f#t-6+T# zi}?b^D{=NVtE5x<*5LtN)3i5|MJvBY#UI-@*<4L5g?^nVhX3{6HaiK3 zJLZ|$P;`D#4VTLuGmiVj54hh!j8dhvn=ae~H8rN>Tm9Inp-d~Md*bUEX6Z(&@=?LW`> zzdAf;ZtENjceHp0*!#5^$$Q33 zR4%1ic}De7v4pr=#e|WB^kn>`2i`5XoBM?Y=xo7>VID_3PTgW=gGBC50LC4HzvI`tiz1^)iV4p(%XcS-C zrmnv+X7&3?fp=q$c}X?7x=;Grh7uC=Rj?SCm*?I}0Lfg84=SM7SC{Wr3LKTV-g~gh z{*b<&X9D)y#owXoQ)+5}HU18~H&=!R13^DQJiLNZz7ZB=kou)Ev^~eWn&>z{MqXl^ zBAZ3IcAX);;I+fSI*#}C8&>qGjsN(~k{dprR5y0jIo?XVI|=S?!2m~@GRmMf(_lt` zHE|XTjlqT(gNCNQY7q=~dt*0ryqX%Z@1MKOZ)j)+KO2z-(&AY+G@E&WWh2i!Q$&J# z(~lW;JhQ6W-Ta-LwQHZ@-3*vpVZHu(FZppX*2WV?)$LVSzt-*HlU(K(w zr#G2Bicw+S_;E>@V*|NiYaRt4~^{*0+c4iOfz>P=8 z4>vK3VAZQF{gOrZtpk%>2(X)=`*R#^z7zFF38mgjzdCkdJ0`V))TPp%{R=jtZU@_N zOwDe!?aEHbfj=QxFE%21{byZ}W`SEmnk%9wM)KXak5Tft6bfe}`mn3Svw|wY9mL4T zhkN&#D^Vr@-#dljm7vDLybu34D!v(pRV-OgJP1Xaqkp6mz}vINsU7A?yZY33OwUyAPBe^kyaHdOw(~@0VP6C6P+Ro%yHy zB*4M@e&jap=HH*qS^j>^)v;>@dfh_YN6CWwshbEpqlYXII%$?w1pM1OZD>|#)l>4!ITBaYmIsWHm zpo!_my$QQH8O^CCRi|7R#GcaTjwLSomnr_n3~E)Gx@Y;O93#@>6@G>KlbmAUc? zk;`-HSG_>8d&b&CzEfgV!{tB=HTo(JDz_2Zml&{a@_E-$+VCh6bS8D(rkHfNlPuPU zG4|TxU~?^(`&d$&-&J@t9=jJy6qx9Bnst}{nSnREdr0&dpbQh^ zip_5PUHhHS25JPh8Gn&F_v)Lpu@o*k2V9W-+D8j$uCwI_K4*F$ClVGV(-g~?TI{6{ z`|(`9@EyRLg?g3~U-bi`)BB^Y#-2@4!sg+wrz=TEN7m2u3XmfnG8ia8P#9Ol>r;8 z8{KX~qc{D@q=iPzI>O+65mQ?6@k_-Cgvr$tD{KQH&TV65d$C8gJ7Y$+d|bDs#tiec zYh}}5C{`F^L0*g%+cTBU<6P$d-F-RwejS;JZ=ZkDEYR3K_M&dtCv^#azlPk8XXfLdi zF{tt${H9ku$<8x}0|(s$!0O)I0XcE22Vv)B9zTcfyiP)6>Cde%Uuk5XY>mqN5VZRK zv4L+-p(8GmeI(q=pYX(SCjd&gc8JOMc5Yh+o(tPibV!v+^QkXR;~MdmZAGv8lwtJz zaPn_E$9!(my-#EDJqX!e{Q4maW`5~);#ifIz~8p+vDu^d+ui9(xFL017-JUDx!sLneCF zCUcTQN>tANfX|Zdym_Qy5z4y#w)7f|F?P*5>b|$`_oGAc`TpX81RV=_#E$!^7;kgI zrRuE9&M)eFNuKp+Pi<^2|6q-})OSw#30cv#i&Mc^(lu0Adv{+>x{sOP324d_o+D;8 z;=W{I9G#W!xoI%>J>9^0Lsjh}n{1Icb+&pOXUn*sjk_d3q_Pw@N(_aVN^o9b8lS@| z{1+LdGMnd}+bcJt#}qPBqX-)9MG4r}_}3nEZKo}W{T;I=XoRO|ZPniV;-Gvf`P1OAiCU6pX=jbv-yRB&&eiO? 
zdix(!2V~$Ha=L2hGi_1sKfDY~+ZEKm^^REtp6pqVUmW^=MZd0iJf}FNRsrW2847#r z=6Fel;r3vBfSRF!nd!)G(3YDc5vCqRUp*v>EHDi>5D(z<9+_xnw+ik+Boo{d$@G!hd!=eH7&P*c5#clU~ZSmCt^NP8}M3l73 z(oNog`B;bDo&2Z$u?%3!i^UK1knmeQya`jFS)=Ad7ClS*o!MM#@HViyp*;EHoXnt4 zJ%;gTdyYE&{m~&v)b(k@$o|>Ku5Y6|=O!Z!I_P<;fq-1UVluiQCkgLy@?7WES~kUt zf!>stC(&QRUL=6>(shKY!}mXjzV^1T|NZ{*fLM%tyA7&Bx=YwV`tOnVpTX?ev`C4W zf>18x)bb?~KZN@8&J~$8<4jqQp5N;Qx{~Mr26ucRlwV#jB#bk^gU55KT#wURR}Ao~ z;oCL+`T=Tb6=UQ2YUp;=n`rlBlt_ko7k1V8Hmy>``$8JQiefgqH@C;P?{(2@r02XM zb^AV#(Or8j?Jf_O{Y)}20QMk)sK>}LD;qg`PYoT?%5kd)&}WXjA1#*L4k$c#JGSvS zlQX`Ojwc8W@bfIX&@K<) zCA!4TIQ>Ur8oZe>Ak1bRX|L1w?cDzRd+rW9?AF>)(;?GU$y>^Gan`hCkMx}L(q(7_ViY`>(`Ute-&^QnVf>EBrZB;EJnLzD?;HiVzaku?l2!G&|K@jR)F zvD#4*>rw?Bk|U{3M6B~eMR0(KL9ZCryNaVF+)uuXhb#*3xm4dI94tj6g_Zh~4CMl7 zb&R07u}fX5`s+=E`d@>O23s*8o&@{&XvZOI4cVp^z7OFP0;?UfO0bj|Bgsv2gQEp^O&o@e&2~PwwhR7hnNDnSNGtoVD7P1 zGyU;^8k(>-7}q6FUEZRDPwju#W0;`L2s}#3Mt_{P?{ECx(^q^rIBn6_LbKH1vN~1zf^5AW>>W2X@rV>sUSXICz16*Fd{oMHz12vL?1k8@x`qo3w4+TRmq7$>;V_!~ zHsXmT!vkL@ZbO$&vpNp*yX`~#QAm>2$ya36YR^71Nug$$~_?OezUYwiAsbg4vmrgnV)Jlq#kezZ}+_2-7?$HptbrY z_O#@O&&*Isu(=fIq{%EXF=Z)SgC+;86_&~8Jx%&LO@kzDNGGg$@}fZ-x3_kX0dJ2c z21(A6w!Le=dCbFW5eYu|j&$$DIPD7;#6)>W@a-A>*?fFfPF<4ZD6Mx|J6^Q(lJ`|h z8=f?eHiDYIIjuBEaGk%Z=*p z^H`2}d8}NJTBI2}N+s8Y%wMswJ@%I==gwD)pXS`+vlHQc=|GuW`dcPVm>Ed$55@hU zkOS*3EiRDdA;bbM72OCQg^0zMWnUr8I=EbxRwn zA{oW$%C_KfCa5U$rAEQ-sC>~N=poIu>Y_{+yFkLS!HOt7=2dmOXcz2a>8Nwa{uyMP zaW7>c6^sE;;GQFUGA3>Ccu~ztEBF5b%O5TA#9dKw?K>KO&VQGzC;qr6pGt8D^d^D3 zK^mU=!^K4RA7BHx(2aeegg*ZO*~c!q=Hts+>Hi@$`F7u2hG;5(>HSjl`&N=RdzSAF zM@K3fCT36uKf7Q*wkIlyB}8}o=ZuDkXiRR_Y%)~$-eheO)Azc~Xg6?X6{*OOfVh3) zr97kqk~{o;Kww5iIazB$o39de)wc@cWqsXOG{%ixX2~&(*e_rwsM6+H(ZDUCALS~$nv_SKW>K`gSv&4j7#jTsQ&hd1gncM^{;3nt!hR97| zvz3lDXCJwbY!BLbD)zT?a8vjfN>JPFQzUaqQ)pse2K(#S(jBr#f8@iYDIhCBw|Grs z7KA1tBwxORNms|_?3#>1IaRV^<3#8t$yS&aE2-JHUPN7gT7>2sxE}`M zjR@}C!q}weWC9xPSu*5mS8HDG{oFNAf0JGfm7Yy*{Vauj#0iWh6BIGtgghlD>GYl@ zUg0gcD@RWnBu^<>$EG*lc_mMs7Nt5czx*r1#pp>+?=8%y`@d=#t8JHhBlIiXBM!6O!mRK6AX4?;>Th&@_y!OX zf0~cJAjxG%weo$wDO0Bz@KF{Gm-W^0(icT3jMIGf)g#h;Q+fY#h6wg5m!HrW{9-P{ zrR%9ARv`noqg?u1{$mXViiLh^S5XP*!PTFs})=D7i{R7WF) zF|WHCAR2Zlm2ma$j{P|mYNcNTWoZoWmq7@7i_$Tk3N5ZV4p+^%R9w1UJ$vNhNqf`y zNiAwNf5P*9wY!wN`g_*-;^X5KijPM0Mc3zr?6YAODxO*-coq_--z@?ez0zXM!VqjQ zp%eXApXS?{s)U^M$H_1p65=aYXT28v=rfy+zL81kEeSE!=y_5Kt)85Cnz z0D%Eo)|Mz~PI9CjcnSKYqbhCk(zexO!Qi z=f{Y#ZE`S%h2@>X@4rnb`{b0i@5OoSKoH}n`4`oM8?-^Sd8K8M2# zKZJXk#7&~{YWt-$lnDJh5KkyI;?at;>jwz)zk5r7GL*$HaVzs)(9_$(AK}NBBzJ3R z#p9Y+6*k~xje8*+?4eK0&O8+8&qSNBawo6zhl6~amNBGacQMQV6bV!bqqOI3uv#lp zIWBX2JhxV9Tzm-Lp)ftZm1&UlK4dOM+}>BS8GXBZXBU&uf*+#5to>h^&ypb{ATZYO->a> zS6kVUx2@VY@0nLWtco8=af0jckL?0j@r) zu&nJa;m!F1D^mtv_8}@Rtzvyl4k%YnO0;*}2VG!?sfXiQU&uBO>0CO|Rn486sQ_FfEy98-Tzhc)6x^Zbx>9$s9 zogXkxyoa^+*Lx#iQ_zHJ);4A=!9f()`NS8*s>fvAoz7}4+$~Ka?%$lKxnx_o%@X=` z;&j8fI9grGu!vy7;}jJyOH9lLn=V{x+K7*tnht_iHB#VcFkRuqLn492ubgc#e>S6d z?cTSJ6Pe8nd%E1hdu-t2fZmzk-~#<6!Wj|vT5V4}c8SHhlIoIdYorWtMLUrjs$83* z^&eodFYN&vAg4w1WNNkel_Dci+<*?v@T=gn2>ccqJTcU!E3tjjv>UwjjV?Ob(#^^pJ74FeQhj#-dMDW?xvKG3~X8$9Eb-Qz7g4{8|m&4&$8jUP3)yB(3m1;pimS=txNf`KKBo5!HL?A~5hcVeB~^?xZ{ngVtOn zzMe>Fhd4|(yXsgpg%o$cf4gbK3hDM@aAFwXgo z?0LCtWVgpfROea1alq0WK36_epl{-@B=6OE1XLp7W6xQV!!PETk;yMjeqd2qRUozXZ7$M)gy4tT6a{xb@|fd zRbA!&@MHu!5`O)fQBmk~2f!ZNS=?%M7a^3XENwCCP7=QMvY`Ip#PUwq*DYi=*Xwts zWxz^j8I(#L^0E6$)@*QL&&HP}k~V+E&17b{EZlsw^^PPLb^GN>e=(cL$7D-}5-Y{; zGrm8^YY4(*k^aYgh?fh@?}A@JLOXAEwSE!~z}Z;{1Wh=wU<2e`i5BNkO}AZ#OcV zg`=P9_3}GxO~3+>#fxg<$WJ857IL#$Qe(G2d2JC70W6DU`MJr28bqMu3Sb(OmV@Ww zP757O!5|n7vZcnmu`8D?V&4VC*1&I(cATunkJb_sQ1dFic*qV0P={$s=Wv1Zq(U8V 
z7ymc|sADC3!@1FERl0nb*rh11;29g@bPqRG%fqz%veqt0i0-xsXkBU}F{7UJRO+R2 zmEr<@`)_|vPIex@zfdjPv=g#Zn25x6v$QU4;oNsT5gb(f?N_;B2o*nlsb^A*z|Z*OL;;$1X&gkfYI0(8!}xaIu&a zW};>nHmNW`tfx~eybxSF4r1BTgCXzKh!;w}=K7_q)CfqPfWLUyyi zt1qJPKgchyk48c^dOZK9X#jp5je7wo^w2;oV&AZV)7ze4<_)>u zX(tCXoqbW#g)T>jM=fGmiT+z$DJ{Ek6rGh5?vR_Qk{Cl@$b3g7{1#WW{F*nUu`A?u z{z7jpI9}`R+5z#AR>FTmCQxIK=iPq{@b!fR{}z zgrYNC=e4vWUYpyy*=k@Wk-l~;yI#~<(}s^i4JY0tWLh*~vyC2<0i>bi>CelOXQZ6C zk}6W<*AU~-8hw3Z7YmScc=<8$RFG=QZOr($oBlyasm9Z(C5HSRNY953Bxa&{j&@E} ztauOXQ{?Jboa+cHTT}Vh$f1_?$n8Dru>xxg78Nv+S6uuqta|beNTgLp0zc#4>AoXd zzJupw8Q=NLoSOms^Hr6!rLq3;ac`}>pqZ$oWZgQd+1@Mtmd@%S5-Msr*ETT% z+N*1RO2wo4eC5zht+ZdY=z0cviRCnOXo|&mS{iqhDOPbxSCGs%axyc>d$=bd20Ff7 zT)gzY$gFAId4@Iq#B)6V*O_6P=geJW6dx*bM_AlAsXcL;Zc$%>(Gp@jK;8RV!p2mP zv;#R?hNSBNSESFF1=e4E5R?kU-CKTup15^URbhc)tQR|peGy6wNb^4g4E$TA{m+{* zO6`k}RzdA=G4v zzcOl-rQElhM44$;&m(p*25+XgkQrZWP-3xiHa0{gwc6bZSU(F}4-b6rn&Pm>claJg z3X|)>6qcn}1ki0t{7juP_85KlM+s560Jt704XOr`BBfIh_WfecI)Mt+l5tTg(r~%@1Auf^*!z;C2?lb z3~FyY0VvGM>vG@M8~|pgF}F! zf7$5pkcyg@`fs;s1oH?aXiRH7Dr*NuY4OxV%RwHN!i$xnjh~o8w+a@Wp`NvhsYfAh z<1E}qTt>U=AMys1Gbr=AvGeD&>bp|=t|+UsuMdR>4lC#r_e%TsFS^JS#r48pWqEalnhGm5a-IX=tI_fg;@e1HomQOaWIP^10JWc%i z!HeuzJgTa=(Q@;Rkxc_gc5l(i5vq}iu;zLpJmgt=7gzE;s@MpF{`#L{0tWV0kIgTmJM2;f14jsq6R{B7WZ~naTy+sQygK# z56jNl)jzIQ=+2JPW68e{rB=UT3uk;(Vgv5oo3GTTlb(O2|LRr4@lfFt%v3Q8wU(H% z{z@}$F-VTfUZJnwYk-%;6+6Btd`Y*Vo$s#Iu}ZQI-=~LL?mefHHs}7*N=>btAkj&G zG@qbFqWX#K&{75>=HJf`Lz(dP=NTmMb+yKkt-UbYplH(ROKtBKy)(*%4ipuq<8o6;jfHv@dA7VbSt|Ki#?dXla%zH3 zFF7*9(6z?`Y13Yd@XFQa&+e$}MfP(fEuF2^KF`X&UjEf#E#_J@pQ~-dRn#9qBXvxeq%db2Y)(qWJ%3$bk)Q~lnj}n_ zcyvA-cvbS-8({5E68+3m&h*Heq%bc1@uPq|cXThLYDkbg4BAq`%{Z0NE8a6HrBKRv zy(TC{V8uQb4XEiJ_((eC|`S)Y1oqbB}12 zTcxf@bHTKjJa~T^%)HCJHBCm1KmR`+TIA6$(j;oLj8f|>z882D-<ivqLaI&E9@kQPB^*?GZkx=&YS3c%Xrk! zlIUK}kei`crF|}_7UUS(h|TN&t1J-1m7cu#uH9^>nk0!Zgt@j7jsje28MZqe4SOMK zl+Hw^bG4nNKK9$bWX5Ot#mlhco{aTe_Rdx>cjpABp(1h~Yq&$OavqS9aOFQyPaPB$ z%c_!Sb?8%Ual1FUoDG&O;9D4HhO2BY9IJ1i!1J{%Buzevsq)r+*~cjundE&>rWL?T zGK@4+>Gmq9(>g)}ZCwt25u)O|l;?lw&Xc-a3Vxb>B}}*m?~i@8BTtjn{NA>^Gp4dF2 z)(270?1J?w7we^8fjeh)9s9_uapP?_pWbe<8A;bADm2=r+UfGEYA@BpSe)i_Km+x0 zyoMuEae=$yP9oc^Lo?TJU|;CruJ<0o+Z|Aer98EruaN&>t3{Ip7IU8FU`a>=sy1Yj zc@nllB0)Q7JKNBi>)5NIHiS%hL8jg?WcYkdY1q0mgZ94v=qH1!?xLT^$(ABttvx?Q z^vledPt_aL3T)uCq<4xl0z9(Im&&I}7Ir<`&NZ+p6g*3AU-#ecN0uB=qblCzxP#0< z6z+_7X=qOBDw@4-SQu_a2rYfLOq6xWSsP_mPU zzr8p6Jh2+V{myueHXDx4%q`w3&XMV{tB(u6b=Xw3>yAAn+_M;oGfBUOgsYFl^@=4 zHuhgmg%`@ivY{u)ChdIXYV>@J? 
z5VbPR13W9lV#ka)={qT!;O@|8AiS+J?!6G5X?vJ zwp<5ISmSTB%B;IjP>t}?{DB;DrIKEA>tb`)$Pl7)q1T5w{q6E$PwCS=z|xK-Wpt_2 zpcIUvpMmEqhJ^qyZtn(;0x4KZ@5W*5T&Hp=M+;=qY@&=jOj&yp zZd*SsCSP@Y^`CS7Dy5K&kHdK62zImB_UnR16FqCV-i_aNy^DYchu@YMTc9No(3AVy zfmcp3x|Wo50t}?_hMN!c_wd7R69(ZF{|QYZAAoe~3pv`RHnwUu)K=~mVRs^f14}TL z1H8{ha*p<{n_RIihJtp`NLB=g&gNbAJ1F_y5OY&ldT7T;Zd`x)S$l_5K=C6$e8ZgG z?0RmtlBc)p%B_)W)VBZAtFTSiEVZtw`HZDIJ%znqE8PJ~LZy81TeZgH(AU4U+aZqj zq=~Ai$%@KuY5%kP{65pqdpqCiF@%FtYC**}XBVQ8$Vcy^v|9D1R$K zo^D5P%8(3&SeG{j-5WIUg`qbjHW31Fd~@l^Im>j7w!tt{!*qx98Mr;Qk;75H=J^q; z_uISMfO>a90M^03P-oF`#Q0S{qRXr4;^CU=h!6my8y(*ssC=3#LcLUUrdD;;I9TqL z4uOv%(qhlVgLOJC&X0(>3g9r?P$#!ug~el&mqgDT1D= zkB)5`xT2TL&p!u6{PYR8se5n5 zNcwv$@m1v1mD>ej*dBRSyUfp$Ilz?Ry_^- z7L&sLG2{VnVHClhe(%H_BcL_%(cj!sn<91cxxZ540T}h>I zo_;uu#Zky-eZ|X3C!Gj=7w7P$Fm<8>W}Cb>O$HS+ia^ZrdiKMjiNn(AoeAMh_w38D zvMt&Zsg0lv3s%kLI_W8-oT%fbA_W2Ht&VrKn37w?O&Q3`WY=gnn(}aeg7?)Vvs1v% z@zwbt{&d_n!gxU!dDUdHwi)ek;M;!Dfg=0-xV+^S)Ejq6hKp7ktJPnM=4%$0+1G!9 zc*uI1d_08bi(anyoW+&j$*Fxs{*Hl2THkexj$+e-z1E{Sy1 z*g91_Zz)+2I%j%g@Qr)x-t|qFLHf2E^%LXS$G#pI6QiB4(<|=XJaDm2)$~X|Q8yzQsRPGKi7G6yd+@hz2Ic^qV*T zU4-5Gbgx><57);-#P`2+&beKEt{+oLbo4vj6+Itd41+QmO(kfod`WYQye?%-zj)67 zLo_MzfT9Iu=`KmK01u86L)3P3#}m7xE)9=fE%&ddk&b&vbi)q zKR@UHM?L=6OE&rF##FSl;~@(S;#w-A(3ldQII*MiL*HEMy-8jWtRDnqST_^QQaX zcxo9Sn)c`(IWanXLo%sRHGoB}2eq**4XLB4j%6A0_v}qV6Rq=MH*5mt#t^R(4-KO-Lne_bNP=bnnYSK0?6%<8W9;%5xbB zaxl%+pXB>_3ob#;1h*0moPSA-*Z3B-86$<2Q*l4~?wHMB0K+7fK(wQ=dU57O-RV}& z+WJ6}##gfKea+gTMl#M{7ta(f!me&7xS(vGCRf|ItfB#N2+E@*OJXrAu>erTo0CC_ zjXKoRq*S{7mopm7I3|@m+nhGHvr>NoI!^9Nv9%O*XZ( zXBH`_^^4i$eG#w8VCRwIISFh@Wcf~adX{$v$Pe1ebY^vfL{>5}04raY}hV zvW&^op9TanE&b9;#G$6qCRrZ5lIjZFAzm(u)(A=Q*#?q+nU##AxXs~}Ge%`VnyIdWwUM$@#4!ee={&p_+Tn44T& zJCnz{+m5SPr$<>D>cdU(22Omazj9Q{LZfztbpf-R3}K|hg#43I3kXrKhJmCUmAT2m z)S9gzRlSPDx=%Lkcy~;x9)wlCib&q+XkP`_{4`C8jUj_evV-JT=#rkJsZz^O3n5{t zo*!8N2Fs~Pa^CoBR9YtW-f~@cBk3g~n+}~4_(%q&;Vt?#{~G8#a>!QF1c9;0iS4~lJ3CcEWcm|(fbMWd8X2@$@N&cM`8?y=?kvTvg-NcdHr{;-opUs?74 zN0l3zS6o%lCHezZ$nkDB6Kc(|r530{_J2M{7y#;VFUxZs&hdE^hoss{Edgc}WGmM1 zoDL7^;rdrO`cRC`d0gNUWOsBy!h)v1k0F;JG`Z+LerSMHNPY}s6efAg=L#127{J(m z$Q5ewX&{lM)@`9Zl;f}g=G}S;#v}lc=_EYu&zw_U8iSZLL+&S~$gGcy48^cpn|eCN zTDntIVSVoChXp>AE%P>maO*6SSnZ)&FYuC*U-W)s zHA7Ao&i26$S3A)prbJE$tJ9v}CV~fK+5&2Gr=bJ(Y7PezsPj$4(xH+ z2{3U?r(juOSs%4{VaMh_ ze+Nq!Ov_omr#Bi+hKDDYSXN2ayNrGif;^SK&RwdJ|MoH6=R5+G);{~R029fGO{su2 zB%6<{mJW$ECEp83RCh%_AuqlzJ9T9s#52F3l_?+0)W9Vu^;OSpi# zm=3B0Khk2ER&?YGy#b6)J$P7+xl!tm4_qE{Ey2cajw#cAcH*DN9&wje zhuOft{R8Q|@XGupL74Y&b@;J(SCX@L^+X<<_pdfa!u^^{^X8eYCIRam8jIKTc}<(X z($)GUTXKywTs>Y-i4oJAg9RdBz;lz5Zpyve$Yo6H5rBMAq%rSzDGf3 z`ltXW_w%ch4QA&5nru(Kqy^D`azR#7$Q4bs>59`|U7R0V{o*d-NbyeKIo z#fZnm9)8h^lDdUX7yq=BlOFf5IG>Ip#bCiNr9K)@Z5N+b{Tc2rK}{KuF?AmNFMiqx z{gP&}_QW$UmQHPXy*Vhrm4!Q=H(JN{9`V1+9jCv90pv$l(v+ z_eFx<>*TPXy84Yo_FV-te$v(L0wKC@qD=GEJ>^L-38d<$<0FUkykN=D}ZOwuPe``gwAn^+kAi?`Hz@vqgi8w2U zjmxwV4cI;wf-O4HW(PPjWIN(|q5M{R+^>L)<4y`u6iVHP#eLEL5G5&H~7+Rs8TPYD*rU!1w38iNr-`lfGxzVs~ z$*slLEJ1>W&D?A~oVS;0$$*RIG^e%YGpqB9jd{Ln>eX+~zeeaBuQ?7gL~*1I55G z58T@Pt?y5&ERdqR&XG&``MBtEFt>3t*CRBdqfL@0nzFJC9=InQj~f3Ke;iLm`E`%C zZ?q9AgQXjlH3&UiKr%vM8!GFmGh+-fVyo?XoFI>yh3Xy7r<>J-0Reu`5S6_{Dcv1` z(PV2oEUFKK=B)>yHb54qw|L|suF-xW7vwPQg#&DkNd;|ovC_rtv9nNBOCIZ{Pi?#0 zkUDCQ4@J|Z3mKo9##DwXK+V(J%RVJ0gP5(0d<0q02M!)&|dB<;1brE}Fck=cdj$#lSVRZ<K^Eq0iJ{w3KMl2^jRQ0J9ijWj;Ph&ap5&XJR5#bYK71qn{Cx^>&enqE8J z3rL5!J$|Ao)-Fo-n#o$Ud#% z)Gk>1hp4V19xD2aJX7+I8hzn!B++kq-ZNW;LT!Q+wa(=IW`33 
zUj4sR$pO>9Qc!dyf$adm=C}t(A)3|8?JTvVWnh-W+%8Fv(yGOt_mcf8s>e*9H96hCy;vCZKB;fX~#uk(S&vb;IM>beQNz@68u_1tyFbhL**fn#tMXrTeIyA|D~x*j(W{2f;$i1v`syG z_aqc{25dcX_F^&;0|GqW9?sRfrdJI_KBK$?iYqgN2R}_+66$=~WK!;dVdT)R_@nv6 zSZKFjCoaU|A%uIr%C^3!ZF`4)MdL6#%432gfyBoMmwb`4Xc5nAtj4Q$5K7Hq(?98R zqJrC+N@XybFL(JyosZ}DONl@E$+qr=^*W6gr`*L74Q(yk z|A(`)j*Dwq_I-i`2*HE9hXhG*A2c`wcXxNUAVGq=YtZ1qodklD!F7P(!G<7%54^>G z=iblR`<{FDdG8-!!7#H{cXfAl)vvzoX&&Ud)I`3Q8?S&i2S$DMyJ>1e=Ni@wLW0-Z zWlS|X%?U%du-N;(w~{iA0g8MA043hDblMFU0iNH4!I%?< zSYnq)pn!5LN1!TRmz6cvTmOU0&=2vX3Q~5MFVsy^@D3;-L+aEJ{HWAO)oq%PcZETM z6Z)zBxSWgi#y+b(_x#QQwpFn@J!kcs47=j0Wc80x?vbOqH{h+~yDq1m&e%p6?Yk9c zd(Fdj|5Y>nH^dBS+`xI+__!;aT+mB;_VagLBGtt9z}EfkmALPMi${%n)#5F1{k;W} z*$wAF7O(XS6t;<`9Et53PND}I%}*4E0JSjS1e97W z5|3pwXZo@0l4V=+`l5rXkL?A{f0-W7if665PHi0RJpa7U+H?Qk$@_o&K(&wEgC+Y= z!r;F*wS=8Ys1}UZ{SEc@GVBbyjMiTn_Js<94yzPTW%+NsKm=pZ*{yhco`-4~G@M}L zPuMk)eui3~F|Z0CbZx^~4QP7t(%GWl!MS#ENes5C!3G|W$_d6Xdih@p-Jods9)hw@ zaqmKdR)uo>f~ghQlf-E~qxt8X3>sAeC`_Y)u?u4V z)oTCsXX(^Op*zp>7^5O_{@Shoo%VQ0K5HXfO&?VOLIF1Q>PluHa|>FyjQ%r6Gb05E zI%mK}zv%zfcl`6~{^wt<#eOD~#7q@IqP2c`14tvlzkUe-hg0nUU$c*)_QyZ}hcN(o zqT{drU@40zjpjz$xr`f5dYin2SXo# za-obV3V$-Kzx%g;@peHCbO7#5XVP-vpZoRy=jV8|hZaXudYL8h|3Z=bSLSFf1B{S{ zBJh|054-x4IZ)v}v^ZX_8h7NMBgB7eKmX$elZn89vX;p+|Ler@x844i7l<%Cv^axc z`M-cD{^g~AcUeRqlx?leJMt$@*8gq<(!@akCW68D0-BWF>M~4EsGt62v~n5<*wTe# zk|zcuqYncNv)b5kIfUIbk&f!{vn7++Xe8+#Y+jem2*3N@IG<*>U6qIG`0aNkD<=Ut z^` zOs_`?>_5zd3hBRl0hV@56@LiYGRZk!qm}ZvxXQ~X01!`eKy~x&K8Mh!f8X3@$OwQ- z84tK(``va6<0f#!X$5eVM7H&=3GBvuSA~9kF&a4IWA|3>`75_zckZE7RKX$cYJ8cE z(QfVI9Qmx{HiBPG4igeC0QIb&%jB5ur$WB=NbCua)ouK}wy#9!BlQ&9XCw!s;q@n- znF_P(KAgucM!2}r{s_ly6e`{H%x`FMc)fndN9XV=#AQ5$)(AD*sA$ze%hUKYU3GXR zGO^iwMeb1#K(xE<4&D0|hB=5=nuAhWtbI{RX16)=z)j7e6s|+IeAHkRpp~d}*K1Y=0U*DU@iwL3-5jS;ypv?UB;71gy?q5L8E^oXS?`SB& z9N><>A8J&{Nc^A~%{!fL%_@wmg0VO0L&D(tSHi&ivsizM6OP#MMJF zA|J4omRWyIWs5ynHfL5;p9JZooXclX>3%5M>Dz4U(7c(YpK*s$iNl`6dX!+}jrLe@^F7%5~G5s#IQ9r)1{NT z+<~ucIr(F&=)lQe;nzzk>=w7201$_&H|$JF+d@8P4fXf+kW(o$6u)1~Nbl}e=0pCr zR!1|jznyGtXrF;=%|j9-l_hXQ9E&slAs*(KJqkj^8{Ee=+j(cWHH-=cxMw~IcAV~= zCByhHbqe3X0p`&O1fAA=a#~X?(PV-JFwJ@q1_fWqFidEv#6DqIpLS*6p>ES1Iqvp~ zwco1HWbu@#&5G){zx2_4t&M^jtUnD?htv1H;pda0^j1WUJuc$iN;=OY!v?SNBr-7i z@$DDHof1qNAE4bPnS$I{)Ram`wOIacS7nK;|N-Ty+iPxH_+wM2OmT(CpAs_Rcr<5BddvKoMr<%{}dZ&Z{MFc==U@ zn9I%R3s~|Y61_0pO)C{Q9GfHHZjcvnm-54MT<(+kE@R$E^bjpQ5PRm2zBk7}$XLsC z%PZf6!nNp39)hm2JIdt$#RA}7X|dzui9y$}kVT-w76;sx0c?i)e`lnBtQCWTNW(9K zd@2YO*p))qz2vCe3Zlp#pU4xB09)=-X2g;o<49g`vtLt$X~O{Ts8>~sJnb8eS?0t* z$+zLc>yVf|-=^P}#~V6DQ1AF4x}Tzt&=ojNOdcwAg#lDL0Gs}!um$PIOU<|29o*Va z>FGkTwXcsYCyS*(-g`smxisz)!uK8Z0L3RRv`DwxI0=vw)%>)@Sn@nb9-s)66%GiD z-2~^c(dBkd=?H<$M^eWyC!@S{8g}Ub^?>#c3>y4h1z@*OJGUfVp_!f8PWp!d(vhQu zb|uEIG}w^U^<7?`7)JeV1)yEf5r{XIxa+K#PXC2F^ykG>+7rpB?&yKLbKj*)5A3O) zs&#|z^w=uqZkYjQ@6%*qjmYuTBTe`VV4xgkY~^;fB>Q$W zKz;Xl1~i08&yKbefAo-*rm*djmzE($&CyeA z6PbEc33N~5t_$&&sP6_8^2?so5VZB+GkvXni~o9~#&4WbI=A{0Ky~3a{n2({c6hbun$> zzhOP?dF*fRuyCz3EfdnjVa4xSeQSUvm6D=a`>pW|+;phi43;=^g;Oao_XnyrEkITL zu`dvB$%Xyqk;0ez3o(xm!b`BEO!hA*@Cm}eh!ZQNg<}M8|3k<9Gg#wAXaq}Z!^DcC zim#n)^-7&!*U@Sq+}sk)e4;z}M;_#Z$(tS^`Tol_!tay&M6S4JKtMGtd8KCA+!QqE z&+~#-Rk%8i%2hvU<%nPh=i(Os{>A|OI8iR{lF2_nrurFZKoj25uiZCMblfQ;u{XF6 zXfsm4!hTZF)*D5T_DF_haw;JIPQ4+Wp4fd2oL0YUZs<(~sN>`YBM4D=cZCh_8aLu* zH+FPHa(R`y;dkKycVE+pf_b}L4Z!RwyaeCtyN!iI>u}(F^rr*KKPm)U%X_?j2$;;A zJGnRas&4E*PS{rR(Eq8l+ewoO_JFoI16pCKQ`~$>*r>0=0{h=gdeU=9B^|e!(G)zA zx;q^BEb1p}yUQDBUw8JFtABJVY&E+F4!-YN-by8{7|xj0fE9U#=U%^hZ?f2wv=vwl72@kZ zIGq3>FP4v87qib;q$zGEt*$Po*s|HNufFfxNayItRN*XEepc_4eS%5BioS9`F5?Su 
zW;c&LST+G+@JpNl=L}x=l2IT3SOSMk*?(Y~h$bCepDmf^`7e{9jm2bquW0$W2jY#s z`GBtqPx-Z_y^M(@lD*>DgMM=O@hE%ax1PVcIZzv7>H-v0h<$@Ye-;ZetNciZmH2g< z6!+p?Y!T2rq0)Hc_i!hb#|X661Ms-{T& zrg{-{QvBFOywSEH@UD=4((sK38+Xum@Q=9W)G67NLtMHYh%dqwJ zPxxfHznmrIz~S+CvK}s8FA=4O@>9H`W~lA`Dp25kc|24)lH=9l@J7)jI_N$2)kfQE z8s@VoXWAz+GZi&%lM*L>zty83Y3MGIxG!Rejf;0m0Gt9-SSxTJs?{ zKn^Yt--T$y5+F?tE#UuniiV{U?sXyVD<8cj^b~#8=?K#9n8)c`Rshu7TA`vJ$Y-;w zhQ;KT^@5Z&f}#jm*89+W^2DBS*DD_C%1oggxt?%QFjsmR&l5d}gMr87wbY;C^{LNX zBHFnKz&olNnjW#Md%x9`d9k+)U~cH*8s*7+$DAY3Dk9{W7`&u)JKA=!*&V`HM%6%h)BHj> zqbC%iwxAE zpom^~zLs}4)UFoK!z^ozL-GjL;WALsn<_KkWLcYb8vt17caT#j z3;-SFYS_6rS>-CQqrFIm-V76{VRv^jOga~O4(_VUz0Y(pUGR8aw{O3CbKL$ddAEii zjPBL+8u+5oM|5!kTG~7TN*;i`u=o0RgkD{!6%~+h$Mw~7l{y#Tk-ScZyfT5=?Kq%3 zBq`ZJz|jKTk#_$+yasHEMSN&qtDza(Fg#5-h-J)3*$3k6wf29 zmfd%!8{_rAy+uBL97J>$7*GZ!WpHp=vhrTMs)WNl%a`*bA3WL9Oj16NkL1h8 zD7%b`CL?%XTgoUt=;xdF^EB>7lK1-%II(&h zh6@-zW>%u}F62v0$bFE7J&cAJkH9*L?FAJEFBT>60ZTwrSWk32`->7Cd5A7X3I>h78NOyP<2iCGzDA8EcFU} zJUX(jOG?!3<9U0HWEK;WA#ZH=Ms>}5q)sJxUg!S%rSj_yAbQ|yAtx&r87-^9&cQ(= z^Eprw@dHgmWoSG;yKE?(ujGgLjCa=Q=0}HtT|2P$BX6j0p7YVE;L*>E8^4u{Y~S;% zT<^Raq3)0~h0z~;kjzJ7I;V_zQQ|66dS}*^ET@F5-#^V#?R#`aoj2_osEhydkJ)b_!oPFaxON%XJ;=I=qB~1(F<<@**ch99K`BuV2qZ3ctc= zdZg5R!x(GJ<81rv+|DDB|JztbbA-*TE%Z~QK(-L#q?^Dj4$8WFXKW<-4QSe(hx^HH zmOI2Kv<0#J;3Xd`291MOJJE=S0#SvPG%D0hRw!+ z-$dP%uFWdH&Ojp#6Z$oYbjOk3PdO^s$(>0FDT{CUlF0z}inS9J+hnkK5&7LkLkjbC zrta4@Eg=K-K2itNnCfi{W)Tjr;?J-$sk{K^^U)WtCcmwWShMx|f8RF{!X_}-DU`n8 zsis}gt(di02U_ZCg4?UxVYym-kVmC;S<{*Lz;sei{8QcimLILQr;$bKlHLtgS+CyR z3ZYZdQ%q$<$Aja#0D&RksM#~UP$zt7H<&5H!?m7gOQNG1Kf$MY!zF?V6%}EQ5TT&z zc8-uiMORb=8ygAf@!x!lpg|aahQdyVwAd4<#~prmY2MfsOOY8&?T->54?qIvJZ?bo zZ$uDOS&Y?J;$cc>$$jKJZs3fDFf8l(jlyw!;g<#-IbkA)EF-dUDX-Fa! z-Ov)%LMW%mBrNd9T%;O!Rb|zpNV@oz(h*jf;!zq4o*MpMZ?$}J+Q{Ih_^gN`UIUIc zIe@ZvZl9NLo%9x`p|RPH;FCSMv&f6zjE1jbPA{cneH3zd6A8vLWZiA&#=j<5Y)O&W zByu$DVNl*aX34rwnLYWji$TeiTa}7d=$&QBzh`NsnJeUv0Pk|g7u~3twQGnAIcj@0NfN{>7`@t2i{7@x+l6 z=#t}j|4fwi&Nk(*3h;nU70jKYs)UQlg{XNuhNq<1Zo!1JlgFo$+n?J4wMw5)$@F~P z)xK&oESAW2;gEpGKW#${(Q&`j@Joac3x%WfRB3d<%&NcKr*MpX1fH#peCmu}CgG0q{Mt8l27ab9UbD_h^q5}^k8gk7LZ`Lw;RdRIz~HMotqQ$ z9HTdd*#$pGewLr%yHYvi<0p>D6b1kNan8plhVX}%2q-L79d(b>vR^{Tv+Bn9amL|@ zyF0r$IVd6XevI8np38x_g8Ejyc$vWc(tC#H+H2g`2lIo4tyOop#OG%d2jLT1pX5h7 z7MH74zn%p3tCIoQGlz{Q?q`A_?t5}vjHdeHd1f(t2D4e0*;(%lN1M4SK$R7D^@~zL zUX~x>*yPtWOr$*Dp}tT}k~u_~H;rk%WZ{iD#gbD`7BQe&I|^f$WKV40&%yCZAD{j# z=-sQGKlK5HU*RmS)xEV_ZBendgjR)^jD1yj&TOz4ztKj%Pli>J23w2a$2>x& zjU8d&JP>!v!diT8>wIkTVTR%y2U9G{Zt-9NF&vpH{T`|jt`>qeo1Z3dQHy!KHcqlV z0Ryp4^F$ftg*WxHkTS1zbvyA-ye_D_IqrwKPyRYpYVc3sFGsXLM=V&|j%oe)ow79* zo+E$iUmC%$MI+_%OnTkOU9b?_d`A>oA4;C%0H(!`U^ zRhjdKbmbE@ku67Qz~@!V{(zGL7;D&7*1+?*uS3y&UBgVvR!{*NbYWS! zf+GP(m8rL|%Abbd495ipbV`HXqi4lOZM|nFb>VG1T(sWWuU=F1USaf%8Xg3{r*iVA ziC@32Y?{{jPbL>^BwU2Y@g(^&(L@CjiDTiSh%02 z?(6Dv?A!##2)%fOk6M#eawt_YcwpUts}wDP&CVw>1Obvy38(v9=M{_rW4{tlcz4`Tx>W*6*kjat!eg&%m;iPreS~E8>5I9nEBm4u zOSe~et;Va8!0ZH7m9H_sk z80-ChD|*UpbmtHa8zW>8flQ_Rq><4#n}adDYQky0^NfQThMr*>?C)|J@NTT7*$Y(i z3(f2~ztk>l3HXu7hu3I375uaR&%vhS6VT;{I0l)Fmc6?+c*ski^eMAo?)shCtay8E}ACEIN za6GZ4b%bHFaS~Yg16SO}J4{sP)%(k&xVQ6Mwt?GbXZ-d;yjr!nCb?VwPlF)_pNlpi zws90c?jonYt;tM34u<#H)63No&XlX?#F7cU6SrHM_ei}fmL z(XJWuDp?+Di}Esf!+sh{9{%VsGvw7vg>UKMwT|%ZC1l|n>ps59hP=};+H^wa2%7X! 
z6%@XVh$wd~6hrUTR6fZwfH4Rv75Oda(>vb%KaLO+rXU+6QC2I^C-4fZ)z}9^C4aV|tBzG~C~#7 z9ZK?8UD$>EsN;+c&x|A`yC3sR1v~;)ZC`_xOIugtBFmF`6DvKlTn;HAU9mqNI70Vw zZ0ctF>5m&OQzD-u!zOr>j5XU*5V=b0)WR}8kKyiUlZxfR1BZ~st=&E`Vha&OM+okL zeGu_#kKadsRbE9!1^6_;wDa%3g+@M#pk^MQwp~=-qPF6Y@18orN7zNwMTKzMOzRK3Ko2CWY1VGUjKsgyO_wNjZdSZ zg-Y>Jm$+?N7->WM6)njI?H3spDKEdu>O$0)*w~(0AsrsC?;gKOD#EPcv{q5|7Wdu# zdFSspXo!laf{o5eR4+nHh3C`Wy)r>lUJn{Akqse3qFiA!=qNip4-^Oa%>lp8lX>xE zqQ45_Gcv~ibnv}y^HpM#b3XMr%H6wDgyZjY<*tH7_v46LR|)6(^g_Crnk)`CtkO@Z z&csCgd`H0PL%A$VGh~IPEi9fz*JyM73QGu#U0jb)G~X96p%bvp#Zi4$W2T$}*INvk zz_utgs~R0de|mi#93lI`sqM;x>NFND{*vs8gw(A3^?vN?{K5cI2DJ1gKm|~h@{04wy7cyX5q0T;ZPr>I*dOa@%Ft4*gfh#^? z-{k~u=$v5GiKn1aEEP6zXAAi^;6aG6&ss1On8i77jEm-8^wZ#46wNY`>1A?nkm}PN z;QMsr`yxI|!#fP)8%lPXBjsS3f|?&>94p+w&5!CR=5C+M2@b4L&gv{d_eA+14JN}E zpz4XZ`BfuBTnseJ`@XHO6j4hx>QVi|bnci_`7597ib2!HLIiB~zdR~k4eZ+VYxr`P z-RhxXAqg3dc#WFe7bged;-;1UUNf1nv!lSe8x_208sUy}lL9@T}gw&jekO38~byy8lGvS%i2V=?84| z_U>!-7>gto{m3vIP7fMRjvYF^&=36LO2(mLIw(JXsQ3KvE=)HlyPw2?rsCEJS>LTg ziJZ@6y7DcVZ^OCz1jJL~Gt{}~_s{Xq-lpW}Hv%_nlPRFS$^_K+t1oB(%keH+J*3g6 zlu-f}?!+eug~~$K4N1u=|H(4m!hk*B;ht5Ef~rl$z6NHcau?0_2Fu*y#MYaS*&&0R zxiEM)-xWReb2Dta_08lyB}Mp}Ipjym?NBFOF7MEYi#DOMGqL9^c4v((u|NCScndfh zq{2eI%AX8c#>RQKK;KF^7QgW2yVZfjrD~~?0B+~4aSDf&$Ez$-hlq-~7ry6OETIsJ zh~FbMbtI7m7|W4*ft!PGULAJcoZH&Fr1q*uB{hvOXT+SVN(45m)c5zMddORSc9<;^ zZnT>nlv*lU;zax)Md~=;@BxtVi7ho9srB9FX#5rrk9~Js_@TCzS}(EsA7+my3~ zCZP2O>TGAKD3DpQ?}pXb2j8-X!9;7XdK<$Oc=L>-+S;=o`t~Vp(r?F9WRJ=X z(*Y%bKA&3sd*btt%!#_A^(^Fr<1tHDviIYM4Tc=Oy%dobgua;tf*Vfgvn-55NsO9g zmL01<{gn;HK3356NU{fvagNmvwwA+}Gto5IaA?YuuWG$bq zNsxltVswxz2MT(n5U!Vm3Y;hM-p2>UK?gmT!CA~Py{7~$Q_3RQUio5tffI;r0tpN9 z_HpB7yGkcpF)#tAo*aT>5g06uvK=LAAG|uPcCU7m#>dyatt@H7;*@0QNEw&R1aHn$ z5&>b2b6+TedCGLb=~#CyZ^;8&EZ}^S5p}l37fJH(c0S1BoetLFSubD51oRrTx-8V> zC53}J#zM%N(_ZdA=Mr7MO%a#`Je{Rf7T4z4N0=sD4)R8Y*ZpJNS}3YEhV(TY@AU;b zi)7?}ailW}ld-GZ8$-RS^Xf`~9K08gJD8&1c2Qt(gi*veT|i(J@C1iTFS7sEF}6IB z{v!&D3Foj8m@^YTz6$wfj=BwAqp|nVE*))ZIh@ti!)Kb@kD03@EE~tFSkf1HkNo&- zaKhB*SsjBG+2!zDFOKCi%6ho)wHN0aTYS2NL8$9e zn8Sci{+ILshrMVohuLMPH|s7X@91m;gO?=Ii>}9-(U}Xcjc01QbX#SmY^U>MyMGBo zw|QIStWz;13$N~QU)*bqkxtYf>YzMb|CRPq{pIIU%+0kU0QjlA8q|xrxIkUwwky*N z(Ssa^Q4RDxlRDK%0k(HMetuOm6ahUhgUL*^Ti`U!MneO4QZ@SFc87;dV90uZ7L!V&&kp3g zARp|tUvAYiP>B9Opg=la#MV`qu{!#xdM}k5!V1Lxz4za;kQK!!9nUe29TV-YK3cC2 zNK6C#Ejng|HTZg3t$7**`R`&#e}tK+BB|qb{Qb<)N+da!V|ZM~YsX%Gl>W}{d55c@ z7VcpluSGrn5h^*tRBFw`_9|wWSyAF#tDAIQHcdYZF@=<~`1&S&EJ&VZ z_kchFPiGqN)#Z|e^Az!BC{YK-7q3-sFs9Fkw$H9B*e+)rC}mq2lyXFueY+fy?I&wl z)z!_iv!d`dkn-)%CM%X%x}|m5!=w))-MS1Wquzp?r<#adsVKUqE@-Q^E_%9A8ZF$g zpHNdNtms~|=6w#D{t$v0Ls22VA$*f<5!n9j2AnOF_WQzkwV7*egfTHMe59XO)PUX* zO`f@HKX&Q1D7hu46EI{!JdB!+$0R4EiexfTYKBBr4X5ipXxouEpBT$La-}|n!LVe9 z*+>;#kS0yBs^?!;~7fh5WXv1Jqw}_ew(gnA4^W5|A^S5v>#6 zwgm4HU;wU<)p^>xe*PVN3=SYPN)|1~!gH-`Dk(wvdjeIb(W0|BCvZr83@nYZdHrrP zk%qlKv@Mlx`UX9XgRH)zZN_MEQ(4FLw1If_SCK|potIpsNp%vJV|$wn4j4WZLG2Hl zzu2Mt*q&DRh-ELQ9X4^py%{8=T3`5z@#K z8NalEius(zq*+%?J{r!97FCNYN+(&6KMXk+JckMx@of!f^nPK~h|=!QT{fWAyULwM zA~|S(A$A=FZZ6iY-1J!>z( z>|4YPNQ%EC^8Y>xOckorEu2w;G}y!2xVV(bwZ?Q?LUo$0#DIwT%($pPLHDbpBaKD% zqpo@^_d|#>-YkgtG$Z!ZWryPIcL=`tB+}Ev0_H7fdwAG0zi`SLrYXP$?~j`HdjIwZ z$;`|BZu=$A$)2xyRwjo5?i&NTo!66ROmB*GYhv(Yu0WQt2O7?`C4jp=?&I-A;?pz5 zD>{p2IYLt`4SF-av8GGQixYJ`ccoHc3?X5)|8~x{XS;Q)1iY&lf8rKoI7pMn+0;L>egG1kf^8^ zcGf&C$Hqb3EQSL0cuO=+R!Gf>qqjcQ44+K{LV7x_62umB$L|Ya4r$}C#9LrmqYA*0 zCg@&}aroOU<{u{IBt+{pqCWO*oTSOwbrk>X_=8mEBA1&6)jRu@HU(RqFvNz#WzU)# zk1eMWwZZW+U%|4m6(Cg!;3-j%8-K#T z5v5f1J1VWJ_P~K*5gbVO|C5nJy$T9ai5V^~qr? 
zQH^Na=3~iTqGu&9Kg?YnR?*?W@;WtG0I%#iIqhfBQFSakT;~iZd~;986|sw6jmv?U z@`l75HQCVQST2kEd^YtSlfs30Hlx0viaItyh?n4H{OU(8%N3e7Gf0b(_dN4jCC{-C zPqBPPqK%rr^qkSyVjhfz!Rryg)5Rv64>G!M^dC6Vq26{(0?-1Fq){KUv7UYx2@yb7WEDCR!yd7Nx1TMauo13*y>bRXp zuL{VhWCkzN-n|UFh-TS)$d}!P!FHFfqN<)gXVvQY5^jU$X^3wloPbFf+2+fl zTCpADt_Hbv+x+z=lfyG1z(K1fwUjRByd)QwoksFz`NyDa|HVLad;lTUh|$x zHT*R&v%gOFG-jOFYfU3~ZI@e!!>{Nja;a(Si{MIGR}(_J66i*6>CLP|cNHB<-6;NQ zjmH5Bl|WrKFV0fn7qg+t9Oqw8=)BrRYz2_+C^5;cFS;yp0h^QSa2i0AjsaCg2?Rp& z2W#j*;x{gCZVsuYF6YsYUji;4z#P-Ih31@n*VH*J>9k89@Hkp1&H1GK#5W-FnAn!D z=CoqAGK!-ljrBe^3^-S`Msh$;`ReOaekHQhD2n+6qjjMnQb61Ui%GZUVW$(WMuZgv zscbiRbMlVartJIxJ9^78rqdnv601$NT69^<#246-kYObU_x!jKuQ_@+rkWZ;^`0~b zIJ5=&IobgMi-iB<9=DC8|H^%Q{uM;#G6yjLOO1KwdEW$vF9a(*y1}7bG+$FdaG!`f zG1dU!e^-#4B$Y&{dNyswuMnDcHR zFT zlC4*MfY|1U7$Gu|v%B?aMg7tox#T_IvM=@0H}L*SGwSYDC7_VYA0pz09E60F^_C^K zsnCDCR3L~sW1e2}<8#iIK+1}ASe9UUqc;1(+Sx7#=grawwQd>G#pe6>_^i08Sy6-& zLn2wU5dl3oMU?SVo_AeQB;2-g>Xi2K^*hY@8;mBX`&7mGRoXArUyNvwaJ!HR&8@ln z021omg|s#;=pA#3#lYH2))QoJ6tn6QrG`=_)tsU5yGit!Wje%0tJ4l)qsJ$w-S_0v zywg6F#gbO2TneG+p`}@F+27uN$@8+-WyN!lHHygtBE97~l&YJYH(EC~v(3|O%W`L0 zQ`|@~J(cAbN|IjJmr2gq*~R!om?cjZCN>!Gwpsg@0%0QJpPn@C-hH{)YXNr31&aE< zOx^cq9GrP~cLRB67ByYdo<9c`S<3HDb(Ct3zY${z-JH#tHk$4 zd>M)A>iL_7v>V$rqYcK%)4;vy@?V_>`c>7`n=-wYQ%u zn}4n6|9AmfQL44ft=WzuanI6rwQCqAD|hQW=h+0K_&B8#!53tHV;a;4p!@G*4Q_iG zl6*SwE4&-Zrgz|?i9zMMembNC9tQTKHj4x_z{=BU_6P)GqcBQ z1vmR_r>653z1NK_3RP4puQLiyU)_I#w7UE{O9%{CbhHsbEtMw;LtMOC3x8lkRacZkc9Asp;;d;>V=Ik_$1w!+cwn$x|Xs`OJjTr{S&MVJOv0@$rDT zW*?2ELpX7`Uiw+=BX?W%X6fyRjpSvBh}>>V0_AQO=ibKM^mPVig9H3+rm$=5T|L+} zVyewtl^reJN$zFvn~maJgAH$TEMAFv(;Z5Z*_!SQrOS;a>QVQdl(fZgB>&oDyqGIW z5+3DQo)sU>EVj@lM5v3&O*`u{tJRw#X9mxKA z8kW&NTGL%zI-QMhd~)|iqw1a+Sa0Y0cGb#N>Qp60AJVv!-c!3kMGNOwG8Bwk@?h~! zM*r(NLlY3{%C8;8U~Y;_;7H#Xj#9zp*bgN!dpJ&E9h1eScN|m5kICHSpRN~o7Z~7A zA8O9etv`tq7||%es354w>P`G!&VHL`b99@_ZPC}0Pq!<5RI&d2^+!yH{eYdrX6@py zEBn9KK23j?G#z2VO=ZB?$DQR)9oU;SvmLak?X(_5yFw!>_Y?pPMb!hlL_HMLKK+>L z5Lr3=|1gSSjPq-@712H5$~)>s#N=yjY>9}jqYfGMXDV-QRbRyXik~i5wVk@dLMkzG z(H+7?a^aT)Y=53)AawgRLHmDT0jUSS4a)hxc&9K04mT*Dzo z5Fp?oY`wlVN8YUr=jYmO|G?20c}vY8Lmpbkv>_E=j&n4Ws_^`)P$_0q;P@bkbiO%p z_)`&i4rDlNxp_6eMwrr;|7JdsUCOY$7AH@DJ$;JJ#yZ!kN-t^6j3<=u+EgfcV#;ygh&uH&dh|*cVa~H>20C zw&n3mq`cBR+W=HB>ZX2=g8aUHJ0;6aphPugkP%9*16vo%YzPvTDB}CEZgLuB!odw$ z#*;H}%DP}u9H2pm9;iV|YB=}uLYO$PiP!LU7bK0#M zaPS0|Vphr(Y67xaz4ci{Qc$tvsWFf606pVwcDgcd)Bgl*zi^02$ar!<@>poInEcTwg|8PkU(E#YM)!T& zN6LNU`HYgRqK-eCTfzs3AlAOw3Z*}B#;w_}){TyD)@Gf8sczNa+k=GhO*^oFwdcHD^Ov zR+GZcl*PCcN;h0F#g1o~r?i%}0;)H7F%Th#nKaKbDu@ za#JPnvj~1N1DtujYX!h8W{Q@qX7>jZK7Gu-Ywu^ZjJ0!ilE}DI)=_ZY@;^V&E|2)e zcQqr9dEQ1Y=#~zxc>bx*VJegUde7aL=;pN5b!RsYbYI9JE6y=DN5P;2PGi+C0o`9` z-NBEr@mW;jUA9s0b#-0~(kiD(!|-_QN48c1e}Y~bwl)PIE!=06>U>H21W?f=uMaOk zyFd_ImeySuVV5Ib1P|2Unk(#|$!_^Wg8m~RebT|Ovr;y%Odz~d*A@zn7Z$>2P6y>b z35-pfmIv}kxNOyNWrM37|LdBrb5B$Gw#pa zmCU#IpU`p`h%MA@-(Ssv+<=Jr)ya0rtqsT)WM`Z@Hg+=-XcOo6b2Y?mpXEDZiFFBO zHGHyI?rfmC8`ahppmw9`%q6w85Tg$$W#_L3EJ-p=e_f^d@HDKUTWR#NW!JF`#mh6R z$?g+}<&Q}KI_*FnUv^?;3SSv(zJD|Y@_@_uP7BxwH?Fd%w8VcphrO$fobGy=+|dyl zDY5*f=1?uX@HObClQbRjzRZk{g4@k|`MBN*1N!OC^0_k{tgV=%)koi$YWnbz*3lnF zV?$?m%^yvm(trZTBAV1b5KV3I2`KWM+QteV1;Z7^t2%;+sHNj1e zVxrygvi|mVal{{i1_2yb?vI>q5yIqD6|0LjKfZNBp|_PCbFN}TiOczok4xkef>kOi zBBzREQZ^ff48ey??u396^&ZuTg+$Qq83wWNTa>+%z581ff;_sY9spv)lIRmvf)QGj zYo@F6^sh65o-e8hsej%D%I?Rt{>INKlr85knYeKO(6&-jNx1Jm`I|rsHa?PwEXaM2 zIY(m-BZ>|K$?5&ildUMXwQt|kJp|smnJ#iBv+9!|c$ zM)7Nv-S(pvjn5v-a~pGq-4j?9bCZ#HjMir@G9+$kQ8Aa_R5V`JdPgyXhL2v8q}{RQs|$hsa@{++vX~kMN$O+-zqTUOeL_})!c$pv 
zVN8;rZ2oNx$#2BlkXTC1{%By4BJ7qOsK9Rd?AWh`QeEY9J8jyO%AzDx>`RY36B2%t zN+lWK7zHsPXm+&0fb}n$xQ_<`hv^*zQ>-aDV|Mk z`|}h0^$IQeQ9~J}kcW!x=3C{-5*bM*RR%_`90?-5vX2s$OyUbB`0gZMbpP^ngxV<73l78esZr< zPUOhud#xoy`_k5a z%T&oMM;85%u;Nb0u*MQ_m7$_R%~wE3m{}V@!4}bw%Gw!B<)iVIqxthQ{BKuIhrql8 zv#n7me*3?@7a4C?PCJu2!*rN7&IJ7(e7q9yeel%_y{Mi!~f? z$!WS}eLCc@khb)R`eQ=)Pg1Tizz8d*{d2ed&nrtLR=o~sMx~sIT(Vo@)Bz) zssNAS*YSC(Ue4%Pj~R-VR7C|YPd93<_R$$l7lrK?r?2A?ZXzEl3jXZ_{zIAf2phYw z?F{j6f)u}_d&*TU-60KtVD4mdh!ZCVZRfRu0K!SVYS{l_>@35gdfRqSgQBE#8=z96 zbR*I!UD7GtJwpl7AV{ZlHzN&#gmeu#Gz>lDkOO=9yn7$V-p}6q|2|*l10M!h>t6S| zuIoI{-yL|H@hejQ7gIo^bb7XcZ_F6(TPyi&r!Vmt|6$+u@u|}%(rGa4PH5Pu^_P41 zlz;t0G7h(dJuec8e_T#KlZXPK2a{H&Uw2E?m_47Dyt`U-FjU}wKHcn;{_~aCe8zh2 z@c8XEX6oC*x3XJ6`pcGGV4 zh39{B{r~5vS5^bhVCKU)Frpc2D?ogmWZdJk%Nti+#`RyzjVU(@5|6I-kbM-u>k&Ufj$hxr{Mfd3;381^CU6{ zc8|>0-lYrvng%O76n4z6Q=mf=N1A^4r{s??-_(RRhB+A-dvg6}cN7?%SBCG#+uZ@F z_VKQ6nM7o9V&XFyt!6HJJ?(*Y$uPR~ONng;2RYC$n#YEveWWjqrBk>+t_@ zkEnFSdfFY@9>cZx)0;*qnW`1ds`k zGy?I2@!HKxCvzG#{@NN%4x5MAOQ*iGs9(g;1?fS;nL(oJpQ_SBmkzF|0f8zbsY*x5 zY9y=ShlJ#(sPYaaL2TEo zpPhyd*RMmwwv$MHZyyRNeC&Nh^*(%;5(n#G`R+17t4g;>cHzU9hf5zWlG|8J|1i_O zQ!ul-wb{`U_=;D^6;O1E{?IZfkOQMQt+q6yrwr5Mo} ze&;2g)_fF9ELDZNwHNLys>&27-tcJ3 zl|=ok(l`$3{DMzmJ-KD%=HPw0RR}!Uztu;+MKi|L6Z4lNja%++^|jZw)}&rZhMvi_ zyq{6s0X*WF%0N5$O06(Q$T1nf&Z|!1RQzc^r*ThV(|xNR{|h@fG4O4T zvCQwBu#>UKu}EkGLvEwdIy|$5JYyz`R8(F#aHOs1$f8#gVtu>NAlWH3e9Il&B1c2* za|HO5u<6((uW4W(Vs&O;41zT*_*F{O3x2%*&dOidvYAZB?XyTRB$KBZ_Q?YNX* zIgziC#$nW?2uNkg;`Bx>pm#7&L}sF(`)k(B$LXM4|C{^C9+$YmT#+`Ly!ol(yJs4^ z(Ddn8_WQAtNh`dWO?pw6#6x!f7P6Ua>>lK>KExJLXEpk9p4+DQ*o>z{u3PW*earSx z>N`Odtl9Q;Ld~iZ(fRV*0@lNSddhUqDpAYbTANT)yxr#HgRh{+ZAi?C_-~iQPyTUBz#t=2 z*ge8>Bul|_$vGWbP5(Jfj-3|5l)T_)8NCNQmNQG^cOuC*`qbz8tQU1D9ORRuKcl*a zHU@MF{i@@}k5yiGWWQUYT(0rDiLt2@`+&Q-)`aIH!3J-oK6n$Pxr1eJmQf zL7PJMy?j20ebE#zo`mz%{OOXEDVBy=?k2(J5=Csuo3$%GJ`ED*CYF_7=O-dyq+T0i z>?M#5^*-rmiWT>2I_~D%%zf57ky!HXWwxMr0;?4P-*}1o%+i$O*c-k24I!Vck|a^* zgg=Y>>ek+PRDltiyE|Oodd@_pn%-|VypD;-jOKnFAdWwJY^+w@9)G`Yh#NHmBMO<> zvvWBMhkpRk4Ja5NuuuB)s2pZwmHu>JAox7>BlGZrkAn3*B&l^y-fDCSHRZU{cLuvC zE8bw|Q@?h5(Cu3LFfoL^)+1!b>n%7W%*Zq{Qv=Gr;!)A+`SSpC6)MC8erAo_aOHN{ z|8TvB0B_ec$2M0qhj@mgfh5KJw=!bfMeacy)YG3lXt8jc{Pr(Pw1%_|?G%8t%jD?J zWKmw>V_WsKsS=y<2}A?==}z6&cmjK71J86Z5^cKlq zA%m#AZk=W03BKj4kF9tO_OOtNLZ!Um_jy9~#o3~+Ng&u=i?ZkNe6+@4drR<(7kB@- z>cmyCGJyP`g9PcrXK4M2-O@3WuNFV48OO@y9 zEFQlqf(y1*I=&ey#QTyRO(hQjK%VRN3+)Md4gMc(brEOMoOt7{I zevH0!SbzQ~^T{V7Sr@!kWbcAJ(_q8P_)DR+PM{>D_25KcpWA zTAyDOlj)EkqdIXh!*>1INMws-*Dd6irNF4V^g~tOdF%eKAJ=UrD2#UUT6&^3Ko}%pqYx&JDVQ)d ztT$DD$5^09>`Jf#kM6KO*ew)W074Kg5;r~t4=pa^K>(9!kUpYdzMG8wTQgO=^mlpa zIx@3`twrY_-#vl(F=*V5HBq-ev!!TpO`I=R2xlWLhqJ8aFm;g(m)bDDO9;Wl<}?M{ zH$hrp2&ar%M&zOh=tSR7S`CeRT(SJEhdLXA#J3bv%S*+%RNfB&gQ4vbBc`X2lP+d@m92mjK7Kz1$?Mvz#n`0xzJF=c9Q(>R`Gs<0ChlBcFi zVk&LH$DATGih+wO`(0%s{1JvKj(}sAS|%kyT9qzWG7u+ZV!uNw$Y%+Ao4>6o_}0b$ zJT>mdQorWgVqQ6&T;)K1=SOMAnsR-yiiMoz7ecL2m}7L+%tegx`QiM!NHnaLOziSE z2Q_^cNG4o5>suis(ZjzngeQr1CU;A1GcLzP--R_xHQdsl);P|dMM67c_NZOp4^hqa zBzzRP*pb@Xen&N|Bmfo*oi!DhT3+%Yf7C~4=lo06 z^PMCLL=Lp(>o+OVgxzr({A-(HJ0`oYAE&d!Uw=0v6R%#!B>Q7?a$U6m&ig&6HAXs| z>8P=BBsW)YtETeDJ-!ymDT!O-`Av234Flt~hrmnj0+#LssjFPB{&Mt@udZ1DD(959 z4=0r#+s%fm|30yJ09lH-eCo74W)jV0Fj}1MDfPeKkc#}RVZxhsCEPm^*;r^XNzKFl zMOx~_w_;(T>iO@b$~t69{k|!i6U{<6k*f3#Sn{?)QRTmgZddba-0@hH!vA?_^_&~y{#C)FBIQiz?_#Y@iM}3R!8*olz5U$Wc@DOLbbIEtJF4<=XX)KvE9vGX+sP>#3=sEQMqrvQCf zb=K3Xz`bv2cuWHceM!TAWZC|CizfN7k~2X*2sAFrjn^qa)6kf89PGa&Tr-*!X2ONK z2gHNTKRaawbP}%cs5Xh?%kTuO$HspL7Ay3Dth*Pk)~+z{iqWsGI3P~0PW=G$J~aC1 
z2FZ}aU81gj=jSbKGjWHC6?^qX?Jy`dQu$=LOR^q(-w~oug<(VJ8gYXF<32ccY-;+s zjyNtF&B#3v0u=${`-1(sTEQ2@JP<&^2vg1!Rg%J|c<5uzlK7rh#3KW_4=V~L=hRDa zTDN2D>Y`6XpKrsb01|Qw-Nen#W0Myt0=dlyw(E>t%l+hJIHYH{Wp36Ri?J1eqY<`LR;+kM@Qs2o-Lrz86fD6Ej;=W+g{wu>Zc!Hi*&RMAA2j?AX zlwQzpjwxHdOo3i0Bn>q_!4wPC-x5DH`DuY~a5CoKHWV%^Qj;jAqf|56 zZQ?j2MtReP-0qri-HR0s5kQLZ%&djdl{0lA(=u?mGkl5nsqW|Ml1ED0KG&FjITt*% z7`5Iyu%L5m5q;Jn+xjeZlfe99{z5>h{s3yfaBfI`igZIrX^-8n5bdwXUe%%ua$ltoln<+yFYss z24We`WJN-xs=R;N3vH0@X2DbRaDOlvE=?1`5gnd;=VJX~bH=6^?0xU_Ya;i#1mivHWH46)Ct4E?p{ zF=g(rRH2pzEYV7VYuBgUmKZD^jPdf5MuuzuE);6@z>m@lq#rBx=u`CUcUvzI^L-?^gLrXIH8NC6^e&eEEi`V zBWjv?N<3pZ0CA%}iG~gDMn0K7QJZ*UYxFLo9e8C)M^UTjgXSu`b2$Ktf}+2MLEL8# zpaBGeJm0oB^$JnPRe(EadtCxyJMlLGNyBSjz%t*Sllj+sPG73AbOuEtIo*CRXjK*H z%RedNWQjRDu3FB2jB4Lg@wqst)Y#08;C4E~OFJL-Q+l!)V4rQL{K@btn|Z&EMzQx1 zQpq-nAA}}-6}R<=0m_*2f^2YE#FF8f%A<~=v&^%P9fqRz z+K`W|f?nHSr=_=9@RQl#8~r`Od%7j>b*T%T>hb#t0VSfynY82R7=x*>`#!4>@SR-! zJjE>9!xmHS&JJI21Zy0Gx}aK}Bgr1+_m7l{Q1ca=HO;Y8Vgk0fa>p}cv9r3OY`a`& zDKEK1SRc7;J==Mj@PJm?DB9kq-IY#2N|aca2AN;gZT_vbRQC9`C7D<$z)B_RH}oGa z&4;YG0rw-F-*7~eC>AM^*wUU@2s!7ct^pg;^%j-Une`|~f+wHp1cWcNqmJo685KN8 z=*S0Jmy<3$l#A-JAxQ3Z26=ktW7o&m6|WdSw4V9t^T0J?B@??m`RjKs<C?kP!~@iMFEABZfYyM(DI4>GEmFyqV5>8n`rzuHYu`>sSv<^WmMFZifSFHVl+DkAtci( zpiiQN$P)*j&|1E8=0VNIpxCNY`7N5W_J+=?mHlQf>|g5_a^n(yHbsGNVCbS=OlnOx z^6Ev&_cET#x)ENB*y-8CqX*aF{BcfP($s+Qbj+2XfTY%#zz>L$oHM;6WkrVV6b$7K zg`t-G2mkac%nb?equN{{OWdAaB&_}{gTwMd7iX``{8|?)wQ1)Rt^_#J8IQHgU?0Rp zZ`@d~jc$1N3zYW3e?%{f8_h13%>RV#hpkl|9U-v`E0nwKHt!e{Pn;lh_?h$?r8wNiAd&F}mo|@vF+BE|iun zW?&Cfm^p@Sw~pNV6RvlbQ=MqobxcTG^h_JJ_LBWFTtp6#mP=ZE82%Dt^&~s$oeTvX zn?;3(T$B9|-t-|ds+dW(4nYhAd7G@@`U&wl9MD+8?o7$R?=!CZJ1f5P{dWOL6_;Io zuyzj(ZOGYUuZ|)W%A4S1Zvq=9RBI(z_8G}3?gSS$E-j89cgCA6rLJ}y#4KT%r<}UD z`D6F&U&s6QUxO!QG5Ma6gS5)oQ;!+DgKwhQh$D)-uWoo-vBZ~w++amYppk;%g65Tvo9dp2KT4TpHYgq zp8#(0jp`;Yt5eO|tzm#$Xe_l1V2y+PGVb6yS--pXL?8U9t2ng3M;_)n>EUo8?AQVP zTcC@!FT~ZYZHg8ndIvNNMcRg+tFpbL*1PVOx%d9mzfwLwi4WZuV!S1dj6_LBK5H){x+~sc8%Re z5@)q8W{nTVV6zCHX&2QW*n@05Ev<_qqgL zc`r+DZV5WIjXqo}qlgtWKe%qc-SxTZsIr7XGRc4hY#n_RoTJplvXTFvyv z;}?9Y?^364=~9;0{d(qXviWiS|JyAERGU zN$uk>4L|YQ_cMyOQFvW@0hkCHU}s_ey)&-oeNBL6Z)e?o^%r&~@7JKqI0BN^SN#+W zw;u~nHV+GsoF0v$Nv`~H^J${vhVdSj*M4ZiKKm(T$hF@Jw2OVh&L{ZL z>nEyG1n;R1-b%-dOcp5=Jy^WqpIB+p+vY!;VO-L zE2mwP56jtP%YgA%&xYP5*ZA@yTsB>fmyf@)8K_t@L-1vvAKYD}N!{|GBwBsfD?A_Y znbzA*{yLlRP@cgDIg`{0v`+GIFwshg_v}uz#X?t5Cy~y_*i=Qs@96AH;tw$y{DmWU zOS>`i!U)H(cJHU=t-qxc4fMp_C=wT7Q+owJ$9SYPJjL>aS8+?iAZ}Npk=vr_=}jl$ z#$dh8gpMIVaF{{OKTob<>F?f~^2#{=UF7V*=BzHG9DBps%P))0|HhQ!Y!J!m9`2gc zO;0d7=5*Pk&dDD`Wqb|9(ds=B=st(3P%Ss2@vObE7b9JdvCzgT{QV}17)bja*nW1U zE|{-xRWu)5QJIJrzw)0s&`J?-r4S&Q&dq<=8pqHO`(e;Od~JP{roQi-h3UiL7}uBr6ysCc!>jAxiH! 
zxaCev(s5rWy=z;W^=@Bw-*x;vyT8MaTSyc*$c0QGT&Lw&6WGiKBh_x?KKXqevN()> zk!7azdGSZ)dPeva$sRV|xuRAdJ2s;^rh0ki-Phgap-L48AK^m_a3jT&9_W^e zC}K7VFVg*o?jHPX>DMw7@jImz?4Y=WfH-KiuD%HF7eO&!T`>WK(;I1O#`hlE8!hla zp)GM*V*u$iXor*npEALB8tg3Tk#{kT1W+S!JMQ0u8DB3A2OYN_`vQZEdeT2_=5v4e zKe(6I8L$im>W~h(Q;)W?aJcWWYn?E9sHouzqvT>mQ5wxf+jAaVwv14o5qO!d>)Jjk ze-wzueBWwc@MZWkakwE17cRG7kLkxDegj_qYJ@?(y8$i$qObq0M2828HjSdY) z_zbY=1BF^S_$x!n6{bzi%9zGq#Ng=?8o1n6$2AOmCOTbAE~)C#qO$5Y#M_fQQL|oD zoounzw*Fy8oqMGXQXH*ko3Uh}S1KY%z)1euD}juVyw~HnmaI8;?!oYE?7GNm`cAgZ z)9T3^jHU6nB{WMwE_8v#e*b2CIW8>0K;u^YFcv+udj8^Fdq>moBTS)n)8_Q6 zG+;|l$=nSMZ;saQqjEgDxbP*uZ7s(hr!9J!AiE~RsJq5#y2iSi30XF^k<)1b)y=ow zQWV80B;6N_Dre3ddM+8N`S(ZoW>N?{KOEu!modBq(mT1Cxte+A;Xja!=-JMwZ=amL zY-Kblu)|#}OuL;^O1sf7aTPq2G(xo7&yHQ^<8U{hqYHm<9}nJtH;D-8xzlVnKf9tWQ2$W z#P4Wxxoem>;9rg^761&vuBz}U0O=#*zFEx^J_!zRIy@5*_bml8mOSIA-+an(vhwq& zTgKh*c?eFwqm|;1zouv#y1u?0>_{`7P}j8hHxjjdVG}?`V&b{7H6qBvi`+T^GgnBX z3)eCs|L${`mCZ<0c(aH&B?iveez?b%C#|oS%|aos3a zFrq!!_??VD^Gc_$rry8)&N(Q7N@K)sz5L7+PEXtC6c z)-EHDt}uTyLfO3;358?-ntgL@PeG)<(tuYGzJZ7B3&m}xk%no z+cmacwECZivu7gqe~EXah>GCcQOWe2R}OpDy6XeRCal)>mP`}(!Pruq&AX0|$7UDrXgz}k#XE5^ zZf%G+tL>UIgWKvKZ<#}736)k-SjMboP}SngDS_?i?3oVOUZ#(A8vF>@FSqdD-grT> z5c-sw>eL^-39@+28qIhM8^Mbp;?jWq5SK^vV|_KWT=sO3ia1b*M7Nt!Pb*Gzg$`j( zCT$SU*0I*8-oaeS`l*k-!3D|>R^-ljPH66)h#n-uy8H|6Jw-pWM;C2u(<6+N?{ZoV zVVhRg+jgik1#(>+GxXDoT||4qcsGuY>&vhUQ=e3$Fjkjm%O3*0Nl#fExKjnr)yvv) zUE1^8T^=)2qbg zinpk!#<|QE+8*KmW}eQKsA?JSzGi;r?eSxI`l|pUF#3)yi+m3)1&r09mmb=(HQ(Qt<+jSuFM*CIk-nN{p>!NPXRzOUyuh zt5Xcg!{fevMRWw*g3P9_XSD>)pR(IppM9x=9L--)QJ1$cmaO}uBTTF(k9VaR&fu!p z$K#_;Z*1rDKl*UO)NJOg(8rfKfL}D>{03~o-rXE1*|5cA-a2V~;B_;sjNBF-jZcBE z`8ps=0*=w5Vf=BwL!9;HkmwZVlv;(Dd|&5}Q-# z!S0#Seryf9rl}Vd&FsUNdG6Bld$~n z-1-0Syc7Q>!|01F*P_dznvqldLOfU78|wLR(BYC2bB(QXY|D1^c>H=5m$h^mGhJsx zIAk>O6{bv!gCebR=X4kF)%sZCod;LSVAlCrVg@~i!qv+mb@m75s?bI}(=CMvry)~A z_@m*jG2v*!s-GTIA^S}7>T~*^j8<1fnPqcOiJ`pAJ4{v6Vigc>d2+2Z8wtD3be?Gie?)$in5`{H{YVrd) z!QrR0y68{M(T~@ztg`AWx%$G?HgFGGE-4=#gkI)QD~i|pOW6D{LvQMy;>Z2LBD-NT z+4~t98hSX|`)E4Fa+F-?KFDw30tP8I{odUC=#-po8p|`^e#Sr_P@xojfm0fWZh;tS z&vT79Qk>WCUxFP2wT`HEF`eBDuIR9q4NnK=Pw0fWmo%l0%b+s6F+57NRy(8i{=bNn!5c9i{Ru+JcMEnzb%RMw z=>Y|=#?D4-uZyyGx*{%4pZRy}ve@^{hY`*yBv(zXM`PN}{`_8X-HQg=7CJJEj6(ug zN;mxh${#$pdsC&AgbBt3XjJ&PO5$24YBD3E+{b}*8%9G9h>Rb(Fx8JqG~S++HNO(j zaxM3$M=j(#SQ+nJz88ut#-m+Az?4XHhxbM$cugy!+Lm_}8P94L0q~{Xx-Ae@JiEnu zoTga2{>E;NNFnTAWH>fy?x(~jO$Y00kqVyiI$k@1H;&_P8`Y-6%<%k@f`fmfXlaEG zB@e*;$mBBlCS!2v^!hdQ;cT7xNWLRiKzjJI+n@)sGX7H%G!8~lx>2M=+~WlNXo20- zDSLo99cYk#Vt7+-i~&-DE&zV-J_x<)7v<%TN(dL&%%rc3Gc=Zf;ctw}L?<@7%NFtZ*r*ZgyW37z*H8Yq`yELZ zSodQ2{%3D3kk)RtAT-|*aw^tL>Z6%qIkP7xf*xwS%BZ{7g(hrC6Hr9#cz`y&>2Na@ z{lv9=@+K90MEi0x%XP0W7l~G@*c?3(9{^{wcj}9>w1hWEXB}K~qaqp>{&Jg>Rr-D0 z04n?~^r0_fQ4b&d*Q+C|a0kb1=r?Ek#B@ZAq}UcZ|oU=+mP(;F60ZM32PsI^ji{3ZB>`|P6)Cp?1m)i@mdSTe)zj(Y&SA+nv;Am*z2 zp25QVhqu{lXIHLLjjjt-Mkp|$M~owSbz(tke5Oty%E)@|VVZnNn@1Ugh#E4^JhVwO#?mDq6 z$%9;{cO-+>JYuKq{MsY?Q<}rs_)<0q{BNJeXj>XruG?5G$7@!9W`cq=*!W>h!-$ki2yHBCXZaz9-l?)gTVEjZtE zLlu4Ly?SX)jA9+MZDn81Hl;3PoFK1-A{MT`f=OLY9jrMWP3ek|E`MCj0S3z4MrHQY zU%Qh}SClt;OHMs^^!1tR`7>>X-TAz1DYZVrh$?q4y*9i)kq6zftAg!SaJC-~j)XZg z&Fy6a2&dpF(1wY7fziGKdf$WJ2B=9K_sx0L-PtBcv!FQ}T}NqCo6h+AGq1e)+~-Ut z|7?zU9vC3!C%omf!1%hWBerWKjnssY+52<@%21tSzc=%kwZ8EMk+Fo|yDFDWLW8ae z(!XOQ#9nOX}Dx0mF;@iLWXzb++bhSb5MrUC#feqa=ZHk7_acq;s7?Mg;kITPjB1J9$m#7 zEi?<8`JY)JEaEoPjBTwKWjE;p1RH}|&2>{1-hzfGq`N68O(1+AL1O)hol z+Vc0aFiJ0AQ&#tO7A1w~5WampxOmmR*~RypYF8BpYhzzkmd62Z zTxAc~C(t<>nPW5mi})Fn7h9AamK4kPb+R4r^Jh)I0R^3RI+mllom%$qPsQ-||m5eiIc$>0cq}GDhakDJP0B>3oDeK6L0~tHe>Sy$5H+- 
z6MAYmPWW+5mN^vrWm1Q`Um=}t{jP_o6flLLP)OpIN{cZfT|}R^rLN2ha>_a%)zF?s zEi0f-Z{@{%v9pgbHy!qz*ESyZ+YZ#3K(v?HW~)>0d+f3E*|Pot%wZnD#hg&cS`^`V zws=0t-n&OOmwI84qOSN&Q2Qli6ZhiZ0{huV-b%?gq#-7>roqMzVNfITDp!C*2kxjD zHOM1KuL(#QG|Pc$)-n?~rToIx5$EmeqE7-M!dEYDQhodUZ|*i<`jj7u(&Azt{5s_% z_`F)J#+Zt--#3WHe5R-d=hqQ3^ezz*`=$;`vih3T)T!aOm+@ zuMInm*|2b2Z9pt^{X+j`eEL-+=IYV=7YyHwJ%_6JKVWtlPPMZCS)@qW!|MAz{}5en zc4v#4z$$HZ#PtCeU;1Vk3Sg^1v*->2+rmG+S@P;29TEMV#eugCB#7@U7gkUUzWvtiKVFS)(iriQo6e-W$Hv)MG!Erz; z(DvP|7{tDBN6KZi+oqWZ)?ViCd5$_mF39)VygN1C0_L*8oa^ z%Ob(wq_Xo3-ivy(0<<8uijN2As6GdWq29;z(WjDvYrrWF4}2Q!Px-~9TjcbC7~MY= zsNs*%?_S2Z900nr*>NM{7Un{juk-8ShOylD*WY9JJmpWxv8d^!fPIS54U5+iW{Djd z3OBCxm)XDMsk-G|UtK?aCLhU-wGjfC*;1C0V9HvdGEg=yEYI1@uL*@>^<`HH8wA?W z^wR|OdB#evkKL_TT}JeVi&yDkA`Xq^HS!+HFMqs$X$nHCEanC{)z+pV%opM6g*K08 zVnw0vZF?mJt@*&Xj~2Ir7wDbrg|Q%J{PDk5wcB*vuV?*EOvw|P?HihURd+B0wu6>-vZ^E($ zvSZ!Jr+tCA{nkjAH7dxH>f1Wssz&M37-chI129EAw5b zp8-R#s#P{#`p?~03{;m)6YFy4~yW1v4e3o{u?&D^e#a{p{1Ict~v_6~Kt9rk% zr~leDTmhhI+DiD_K;cgSF9V~5;rX_#LW#bDo}5c|5N3a9dD~IsPMx1a&}uLx-i38^ z$~h*|si%lmzN|SRz|Vr^71Lri&dx*X3DkL*9#5Ixz~LJtB$}q@rA{~H;+6hBh|*ky zxr)zTW7M%(>&KrHNOMJc!J_}gWt`5O2ub0>6S!Y)e zZ$Ri4#OETfS6=^i#U!sn6*wWbX8(?jCmIx@|D^fcFLua%(x_E+v8^)Ofy=U@`=$ET zjTsLtR888KvdRL<#~@v6hpbH-Wh`xnHY9HXQuUIkS?68`W%S895u-y$K*|TVMRKez zKlRnpUek?x1ej|>VW_*+V>J{P93+VkAX9VN<@eq~e|d~{e8$G6%ECS+F0)KyK0~1+ldy z*LO2ri=F|Uw)8ryZwhTfFKSh|CQoKUg)4z*Vd|f+Ap9=hJh9`j3UAjW_3JIAXvQEJ z$=hk6ux2Jam&Mq%$G*Bljd%qI`^U$$hgp5EhE*3A+sj~yVpYn>mE2jLuv5UM!>0d$ z8X7EDKQm-}g?oB2BX&D%-vt18mX}kGFMNS50R!yK9O_K{+JR(*1|u+ac4+VR?Vb`b|i= z)Q8MKXnylM@)~O~8+=gsM83{?_DFS3(Hd5FDmi1$8X#$? z<4(5*EKZ~Sxp{K$FP51)Ha2D{xp)_2sAW?EXkJE%b*OHo13ZoENP10AKOra9!mV?F z!xlzMN8Qe%m?fPjmT7K-;~C-iFVg3X1M`lUDuY=q=3C?F&0NbZ><-llUa^rN?wAE^ z#l`H;47W7oYhT{fm#=B4n)d+U$!#m5TM1#K24-o%Gn^^m`nFOYH}Eu!0sJFvSG9j< z(yF{0jqs%yIdj+QRbH&jo*2^gt*)dm_g>1|ucohkn@%=Hq$Zi8e*kJ3^tn^q;SuhM z;sSKI#Cx)@ysgJM_>y3)P7c&q6ln$3u)9#ml?4T2=|#u$eihgQEt%USWQV<$9nrF=2?0L7C9{%f=+C6aCK;r4VsoV^LJVkq ztgEBhJNsD&XjLYX8`5$Pseo<)8i#KD@b4^p9Wz}Az*Qf~##&KDEf*B-e{%0!n znlS9e5y*4~%`O&kMq6<{$xUUr>`^6R$D6gt6BnNA%ezs6Sp>m*jGr8ndJjDSF(X3^--uu+ehK6JI$uPkEF?Ay90YR9F zVP9)(H+JrR6RMy=+u2_y;t5AT^eb>Kf&^sG^(R8s;ey=orkzffFKcgPj^806)*rBG<(V6V%hX&GrxFOFqmvMVA7sQ`7UuH z)WMp&!|vJWWoRNx6`p~{2S##hB!N_pO|>gTG;Jg5$oc~CZM}RU?y1hj(*dATW0)1M zPww|&xEaqaGVhPASP&y495ivyyS_#oKgtOrCMJ6jM660oMl?l=nPV!>#fXEoiS|*I zc$Mf;z=KCcS;xYo9mSNRk|)1H=U@YM_wWl1a9LSdOLL1)vv0GN=(*3ZFiOL3_tUwC zyEbaK#K%(ef`5VJMzO(vXgMofh}DY4jU27gp%d3aFWC)}Gq1;IoxuLp)^K`aEck`uhWsMpZ&@*BsHWhQ_S-ynpJbp z<08-!+TcT1_C0sv&AY25zN&}pAN7ek{9~&F_-`tYD-fXN%$xCx8|gBwrnwK9)&4iR zge7-qeR&MJw;(l}CSBCGE`M@kd-nGEn&Y&pPYld8P&wO%kP>gW2+pX*g3ey3Ht6`h zXD}hw-B3Yt*oq2%Bj02dXkGvChf4NAxy7QT+d3(!2`?t9AUbd^-EsdWjs3!Q2E>Y0 zy3oz`+5a*wn82=gZQa}a`!jl`b;*-fO?JRQUF5Zrri7)kfJu%t9#8?_l~pfix!soi z`c`?5jAm+BIgk-qGoex#J%rgj&du~%WbJOOTbZJ0-OxO=XUV3d4rtlkwavsf(zSx8 z__T0-@{aa}`-(n^b?6{NUfc`?GkA$z=-=bS@wHxM9#{K_ww_6aUE}&9J(B&eSPvub zMv+3ks%y!8s9w68Kgmt%%YJGw^oCRCJH^U%Sj zN%d1j$wG~@$~`jD+{&?CU_+`Z%6ATv9GAf@z_&k`;Ymz^ipWZ}m(EGB2(2lh(}JX? 
z-eE3CV81z-2H}Zq4M+UAj8{vwJ^YCR(}B7sFmAmd&{$I=W^_2kv}m+nVQhR)I8;&_Vc;95{P!S6gQ&)jd?dNqwN^)_ z?^D!(7pZ-(8Ss?-r9yk6;~1f#o?t=5j?6*0_+sIn8nrSdLE-O0p;4v25gX2-Ck*GB zTgx}sFKK(8ih$pR`V(M*-DO!Yb^OK_DDlB7SZU=iZ`jZoXqiG ztf^eq3_K|HcFS$oQyIt4bZ_!O zCZWq`!R-`^(3a#dx2I&IWZQuHifsvuPi|g;G?Xn_OckTC16^Y_2q^;^$J5f^ziJT+ z1KiZ?Fu2u>6JfooG+Dmbha_AV$m_VFgxwuutkg)Uet}NVzX0_{ZZ;1&hRwK9GTI}3 zN*bLr!XKd;MqK`*R=U>~pFQ5=z4A(-r#u#Vvv%izJ;G&^9=3!Cah=RYuvl&b}g8jmlr^a&^J$$Mj4!$66C>=ak9DX^mN~I^gF!;~js->={U4 zj{%$M7i{?e7?zYLeG%i=kxda$GtrRYb8xB4=mPvHpLq6kNKE$jHVqx;)%|gj*R>2~ zGPD{x^!fbIWD>tRr%B>jb~tttmKr{f)$_hC4ZJm(Pw^HLp;#{YkdRLT5de2jz!r)G zU3B0qcR#%(iBC)jb3M#Y>~4s=*!#3R7V^w!N0puI>7W*xOyKNXEz#MjTi~Qne$Y<( zjR3=MyC~tj(27L~b1MD(QKq%+M;UW!zG3}AQ#KbL*q`C11Adh_@h1MoTGIPm__NkF z$j(RND*~*t>*c7LHJap_vMf{x1gG3|CtCE#A5Ze*J$miug=>Z$qtz&y5}x7iCFMGq zkFSzPoexo;eK}1TlpbaSMy?CV-c*D7V{@!@CPN%d4`o=pfl;fK1(izO`@yTf0JzM3 zo%9uBcf6Jha9uLJ^B>ee_gJ>yq6bU;uV1!y+4QT?samSOHRW=1`C8@T|NoxE{U+P9q2XBU}>w{ek?ii{j8=m!ZP>Y z$wSxNkDxfMU0>l|oBF+3&DXV;wX`b^jxk88_AexPjr?R3jo?Gil+Mz3o_4jy%C%T> z!9JK#WR__`dnNOpepSE+Tj6IiE!7xLgZ>wXLg{lCu`_Ld zVPG&|x%`ccel@H}T6udGOSt&4^aTRjcD*@;u7z@L9pDSKZ0%&&lTOxNT;tA=FWOiR z5V(30y;nJob6BsP*zFnTY~m@UBGIfP@clT;k?dBM)!p!vVEXc_RaRqoWfe@6?n?_@ zL9=W&nRiExicit3E9>%fKImjpI^M3;W#x%RdR(P9-4|hs(o9Dk0Uv~!efc6!%d!;C zLK9VM+GP!=gsILa>qy{U#~#+%*Qv}QnMkC6_S+j<_I6VA@m0FPCpH8fF|!lWp=<5d^LjmTr zUKZ;=ye;Z&By-(qYNbJdQ^l=d{=@@~>mD_-ggck-JTWpz!6Qkh}7ciIO zhKspio-gV;$`!L>A8Y3F$-)sy@;avLA2?0NHT~}h%7Cep%cKLx*DVrE zq3QPbCcw?6(&yKJS_*s-2%pr-pG2d+IDC?Z(u)(7`A!yQU)&=SOcZUi?eXCkhNM}_ zjoot9-uexf)Y$+z>%e`;VE!h)1RnWq$PbS9d8@WMbq0=470`R#iQ=w@7=gazw_oJT zDKM^^7Vn6Nrf$thUB6=SSvb~2S zqys%TQChPb3|XsUXPbTE2t-i+x?abCybJ2Q>G$x^E6|tHfbmd~>3SYrd=JdZBzwN~ zr6RK4m&=>|Iw?C+t5Dk2!lH7wi^%c)FW}IVVCxX(gIVf-hAN;-)aSw0uScR6WZ-?J z6_wsHrd^E3XC@gw++~9J6=Ek z=&V5XYj5jd2?`Y5|K6$-m|w^Z?<-CKTjxMH7~U`+M(_6Y{`!j-wcac0YBF;utlu7Z zp|PS+nld>!t@s`x{QS0UyUmv8v?=;lFdVt!5P%+?)3_UG?9p-Ds2u$uxIo!$tHYy2 zU;Q_X#-XAdt+q`ewDgrX&-mz>{j`n-Q%s`U=7~nd%nz+(-4-9^4{iTYvEo)66Kj8K z`GA+p-*yVxUKxSM>2IYf?!}?jH(E`L6lNn!=Q-qg`1-6cz6JU_j|L>IOoMBn2t%^2 zP7lc@Dpf%*rl(6yk6OGmVKhcua9If0%)TVa{&o#{`LF{O8%S`@Qc1nv_o|w0^U2?WqL$7+Nbg7!KJ?;2gp476rs5lx_nkE0e1K}Z|&l(zL zZ3BOQc`h1Oojd=Mubi$M|A5KXdck-0OA~Du$m%~mg+}99z}bY($sq21naQ- zj9uf$t;5U9-$uJSZ6ZU#YVWW4&(5pvgIuntR?U!7Jg)@q-RS)9a6sSFTrbuwl4n8t z^K(?U(2H4O(8%q$CZfBKQ?YL7o7ZlzZiy3;@RnWXSTd?m&K5L8xssuzOGyEA(qJrG z<6$*A4z{r-&WH4?bZ{twT10bkCh$EKpV;Ob5>;(TOL#r?Dax3Yi;ybjeo4xhcfzAZNWZXQ#`J%C%@0McuL*F-o0(%4RLU3lRip~Qxm{P zI0E6Oy|*7%BT{o*7&q)EGmpLU;g|N@Yi$(HM6rr?Nf4gUr6##nm!I390`Hd)wx|?fd4Zi}G-CGhl*H z2hoR~KFNc==Y**s8kf(5);cy3Jm6U7 z{Oy%A;?d$vfPj#G*P(xn)hPa{A)x*Hr7P!@svzl=xV5w+)^FhG0Oe0RzsPSh!X@Tr z1;%7C64k&}CqWiv59N=n$LZrt}O#jp)^*vfRR5a;fx zv1F(Ri!>|~mE|e?Ln4UnPj5@_r}xh?ikAIw$ZyFcs{8$GnidjVyiAuzOZ!vET|wA_DLb^BhXO=>jUxa}i&-HePDZK2jXU~fTd z$jC%^C3L;sG4G9|5_8GQ+}?0)s8P5Qx}X~tMP5aUma8Sk$mn&C@y7l+30U*y`~*Un zKLvfsGXPUfxAOCvp|Dyf2`?B`P1D}j$w)V4glB^f*TeTl#|&RMe-0p&NWCz&1E-Y| zaMq*ZGUv3ZXFVtI;sPsD3$C4lHLOJcZA|F0i0?&i=lzZnc9l|IkdA4&Dm+rFVa6Zn z+6+;hwRwJU;y?U+D)zV&9#3QR()bJSuy+H9 zKRFh*e~6PpW=d)l8+9pj-QQiS^z!`bt7`*9aH&#x_tzm_rxCv<*ce{i-1e@a?h*Rt zh=k@l{Z_B<^w@t;Y~1b-vH4c*8}^zdETI%%W}7w1ddp=@;s1=f;cqS5_phl{3b(S6 zS2BPRip0A_^;+g!+E@sgZal>$FRY0O(HpE!9uhGoliFc6$a^rIIFw_XZ{X5wne)Clts1Q@;iY|Jh~{z_`TsX3wc-BKD;&YJ%?b2P>1KUcQok zoO6RCvI50y?zhFLN5MF1gU429C0TUmPYBId=5W11g}|T#8pLiW6onNaIvIPSgGzxW z)}RwtAmnHK8RMv38R|>P1wh8o#XpOB<5bYHbR#=x$aruv)t7Bh+6K9c{oy_jglKbxSnX*5j#2zAARC|y{^GKCBz@7b z)4?T<5%Q{WXxk72XBN`;{BGvEO|r}jC@&{!wP)#oYVAwc2r1&@=9$PsG3LfH0dPiM 
zkGU|uT4f9Q+ZAS#!fd&gMhxO3aWbYVC$GuT8x8-MW2n7sP8m}rz9$#(CL!*7_EN0y zT+;t zekrxvZwfsA9#@Bf$*AjJF>@9@g}JvL*IBpB8}>O~F9+=gBwhdaHhCD^(156MpGxz3 zZLu2g&PP`nV>B8&nU9a14zfm4pBm*M6AWN3^R)i{&=GmCS73QYv8DXV=Olvsxnp%l zC4sa+NzLOV*AAmS{QB4_+UQ=a^V`Eriy+~DPE)-wWZI(l&E_%jgd5=3h&*He$uQe) z(C^(1c}Wsd&5Td^lQ=4R!rLb-A>X>FmL>FuuU=%j<;1WpmS)Fyt4Qz`w8`MX)J8D@ zK-urbSSylylI>>Rv9zF=Qxb`pNQh#?vVdAsvr}1rKC@ihqZ;i{ac2?`M)v4l`g44h zS=oGSf|sTDT>bs^32Md^UfA=8L!g>CxcHP)->=p0zIaT9R)c7P=sgrd#ZK4$4vTo! zTE5a{TQgulLjf+HdUIg4)z9a=5<5#1gO|l-w5ob__{AyL_AMF-UIsUHDIBz~jI` zTeQ6Nl&X3LroWS%)K`||ts0Ww@GG0QQ{YAOmH{SP)q!#`L(xkSN~ls+U|4*Zpz``x z=nD&7tL%jX*uhDqMP%u_os>0&o9l2TpK=1dq_T>9aFS93C)cHJ{IFH zQSnzMzc)iTTH$GvEDa~C+4Ug|%pgB2y{RKw7IErK4Mh@kbB*wVewDgbsGR zWWlakacWBv(tlgQRx6jLg6!yT@rjPAzrAazW`KKFC>groYigaahTjNuUwBK9jukIQ z9qCkrGnluj?_*Y2CJPsexhJyrl(RsbTCm0fQcX1t3d$rKO_SsKGby{+ZGhyXECZ%% zu5!hEVV369o}xHJd!FQ2|KBbE4gFvGX~e5@9mC{Kp-_J?eA+^rG`2Bjrz+=Xw^A7uW`VoRUoory!*;gq0*A==WKz7of)mln~B<0|yklH}NKJ;@)LxDqAHPE+}d zkUj2uRXXjvB(=j*m&ey>dPOPGzdUR>yFM6CIKQB_J-QfM@XQ(P$4)dbo9%mindS`U zuOy;1tR4#QyNuj+Ah|NKjsL&OdD{Os=LPq{@=cNzCdf}^* zcQpMxU?GKR9LS}A8QejQmaRywiBq7%0mW$Jek%6yO&iv&^z)T&*L}>)F^Pa%mb;?X zQ2*GtQX^OY*vBHfFlEK>!04A!1?WlpH<%0QzdndvB@+onHY3e6MQ%(KYs>)R4p`x% zE_)K#%xnAcE}}|Do_h))Hrqwaq+O;^SNYjA(em-^stfK4Yl16!kd<-#|V}fp13370ES^#Xh^$xdmiN1A4eM z*}1&jdVLfl>LWLd?vJaidJmjj-0T+`?#heu$$i$&Vf&8vX8DP~YxMFpVMJ4jE@#a8 zUE`SmIlDIcq>t^wT6y0)5bu2QCD|Wwg_+pkj|b5!n@q9kan$zj7DOXYBF%K)qmcNR zR49qes~-?9EC1NO(%wO{Q$pY5CH;oczj0=|jjvj$qzGE7n|~Kfi@5yJuzr@jEnXv{ z19h}JM1^Qds1$k`RW`+BaEvvx-Pw5B&jCR7vwvxI+?7weO7{EiM(bKKcba}qJ^9rD zfMT+732U*3l zE`1)wUv&8Wv(uB47%zi(ipj>}s|VWRLv;n&RWLv6O%r=Uxdbg~<@KN5E&^9>euf^i zl?nLlHR;gI&&b%ZZiEG&kwf?UzH2_04L=|Dq|eS=;s=VX(Q-l``C>RXpy(%5E9G~| z*t-IY`FZ(=3%j$P-XcIL5JQg<&1h?sW#%c}Y==tKy9yE{jq>@rGx$W2FE;nemZ2B! zhUgVQFvbL?y_>433rlu+z38aJL(%jo}&nq@}R$V;h-XzRywX%7X6 z8~`{y&ijw1vk(U!$=y^5Q14Z0Zn<3^;mbzMcldvibmGs%i*Ndccupj@;hQgIoI*&m z`d=SY^c}iOVc+)0QT55rU2c`M%5ghy=K;(pCkX=d!kRB+c;8sce(4BR^*sH3>7`2w z(#|(r>+8>|IEHdrv?5T7uv)WN+O@tSU!zbe>AM1bNszDqEXF_rAm=3V!-8q$y!$wn zX}5|@AX&$q$|LId=nF-Uz>0smS}jJjaI9lGV^dI`{i6Cl> zib2XxEcO~^IU%x1Q3qpP&6&7IM3U?u?_5OnKaMmlV#bcQtj%fp$LzWWxl6nrgRMy{ z*MRrI;J(V+&~wvD4|;13-~0Y=1RAq8dedy#rQPD{HOzt7vd*>ADHWbtP_ zW8WSj5fuS~4da-%HYl8%-ne5OJ@)M);_tY?PxoA^_?f;+Z*_$EauuhI52egT;;u

iN#K9iLFI{tZ0;%f5Q?rGc9bNgZQ%n%eeIX`geAYn^{Bs)e z4gC+?+`h3>S^4_#B)RBRVEnKIHd(ak%d5)r@FkwB5pNniM8U}gxg+nWLWAQkixwj0 z`o0=@9%wk$pEr{ghHueodxm0fdOq*&d#G_azF1sbFW@;qxa5FvyDcMLUU1y-;IIQw zuA{fb8-Pn$z8rI(5rNZj=Niw9fUUM3+KL}uT{nhThKJ(IM0^lY@=syMsZZ)lc*vtj zW{dOAbJTEynmM%9v)Q=Qg`;ThySi%I*I{sm~Y;33z-7OrfOZPzaqd(e#LX>Y% zG1r{rV!STJTAgK`o}L*&_YSzfS97-7HYibjlI9!`VJMLNFw=-k7Ur_YI8$xqe$6RT zMQCBWx#(vF4!CTwR1uO#``&w_Hh&)<6a8k+ zHNF)s6A*)^x%}%58STaAo$PQ4VS;m2vQdNI%vcw~QC`?Lqr@p!_45TZn_b((FCI+) zD*M_L_D_Z#Eq_~;8E4%q6@LmV(u#~{YB%Pxh-9Eck6Yq_J`@+ z&H#&2GhCwjUMqjL`vLml@ZJ&8vbQu5;HAck*Y(uf@$kcEboHZhoex@Qpis`exi19u zvy3VJ$&?X2I^4j!>*ieD?8jmSGR7MmJ#d6yjrWW61uwf?wVCpr=r4~4I~hARmvsPR zO!Sa7pyx$sS=?FA9}vkxr;}Xkr$jU*ncIt7=1F%gV1n+~i#;d{Y)Wc@I~`J=#9V@; z%Iy+=auR)y)akESf4h~BM%jDIm1g@A&sTPTjjgWn#*W`MsS)V$oZ7Bagl=TPnD@hT zTU7Y&EOP0W7C#q8h`$s;JGLfQ#JLlo)c1H-A>L+%+7m-BeXqC z9)C~)U{C4Vzpl}#Dg{qRdmCVD{C2VIeGqo@8{nT}=;Pe+xwNDzDHI8MRc8{^oK{J!D7%PN05 zJjRjHz%evA&nkxysmAGl!>#L>U0xpLMKFIUX0hC_Gl4}ZufH$PZF>Z0GAa$yj#~H| z4B22T=MR<-BZW>f$>gIvV{&Or{Uqr%)!aK@K7Q@QDB1j}xX5{h4^|zt)SMjDUPss~ z2EzlL*B@6^=3a7a)(JR`HS9b7>HOEfGPo(%kU?y^e#wt_tgBbAoY-`s) zL5)vq@u|wR5+xOeOaS4zWAVW+d=Z?&jk*a<;qA$Z?pMhnN6n{GP<-pRFYa^0dz!e= z`Kx-`m5Cpy+sUJvWZHf=q>-SbK{K%Z5K*+mqgSJpsWgNULw@TmOC@5^TuV;s-^f*D zkjbNrw z3oOgddnXDB#Nv_L_{;sV=sr+i`5U&Rrb%irap8yj|IU~8E?GY3*OjuF;<;BgTf17* zNWnXcd6!nbPJG%*s=cYP69Y;=nF`(gV6Ur#-PdIoBgm@b@s$~3TYKM;I0${o`@=5~ zrk;lTJ*RX3;>2XpY!!O1CZF-KU=OK?b;pQ+9Q!)vzr|&5waNeUMt!4APqX~r4WJtv z_$%k*N@md{@&tnbLxgbBZA18AoRdkL=rFV{5@*nR+ygm=j^6dqm3HCCFBPNgY?Mz< zl*EJ*XBG215x7D~kk^TkTIq~0ptwW^SIws-7Q=as#DUXFk+TiG=P}-f25T=@N`@rH zrdIPN%Ax-2y^Ya;+fA`3noBmy*@=xNRDYIBqyt`Bto6d#adTRHM4T&v;oO?mcnoi) z4GHw@c!8ZHcN*s3Tf6eDvrPzC&&w)wd~eSr>j6LU(*Cv666G|X+mIoHNU)y%b99FW zo$qh2LcAs9wwN-}m_za4yM^k%N$Hqpzm%-v`~s3UxUG zrr68WLEJL_>N>(`n}#=I$`P;Jz@}a=N^P%Z%KdlH`65ndZa3#AjsEl+XS*;N4z)<0 zf0isJf8Qz#3VD2gtDC?_KiN5|ikYaQ*oiwtAotWaq0=b0$@o;Wlb5}ZxWQ}b?`}0Y z*V60|+d@ty9~9jlm>>)gS&T2+x|?$RC3W=%mI2Vl9qB05;24W!I{_`1dz^{({omB% zI^D0OcVg}VdK!G)rqA~`U>_#CD94280yevk{ZcAu{HS0wru!9{eDdb2&@Iw_C#o3( z#*y;mmxhiDn&|od_k8cD=`}9tv|2wg~FG*|c{`?xKECDW<&GMyG$ACnYOYm-J4>10$AE*@ha&LA#y@L?2Or zf#?<$J$&5z+2i}wGb|^cp|erXR$tilDk^E|u=89f^So?3Kl7n@NF&qFSLC`a(!%>@ zLuOE7|LkfblnUG~%4(1DIUQNL%Awi|3s$}R+Oii}V9^?vs>>4%B}pA@_Sx1AMOP#n zuBTOU@~K}uy2%v7u3bQLgiw^+0jkXf0%fV4o2lPC@LleTxug>C7mQGAvi@4{>a41l z;G+y;k6QR)`k&>TN2m_=-jUMB*uSCqNiW=No%W!LcZ25>_*;}0NJtsQ7_DA0;*-eo z2E}r4Lcx8yYJE+g{wS;Sf0v%fUETY2tS8A#-b&m5_=aV?_97QpJPyY?Tw^W#gH!uM zQB0S7YA^pB(elL(w(e$j4v_dZn)oRv*ivllNtD^ z<~r?vz)gM4)66*9`Gm1QOHRI)bZ|k&blc?Ev^l|Ta>iJDl4b3+Ps4~0aZIw9)LLnJ zs^S`_|EsY7;3}Okc$j(u>n8M`>2Vc=(2b3CrpM0GN-(ySE8I6L(6=5Kgp%AKC;?W&uir^WyJmfWIjY2SQy4+=xrC3etb7@9%oFo$sdBad z*ec?bkmmIB$z51m?~*7V6ujt08Su|HIdtu+U^}~^)!Q5S6_uQd+&C~e@_TUEtjTO% zcAgr!y_|0O9k;ZX^K{Hxt`=E2((-a8VJP?7KdgB3KV)h<0k-)aR&;5TT(ky+L3u67!_Bk)deKGH`rx1j8NGDzsOR7)I z_Zl)HEVgVE^^feq|2y0LXl}#)U%DG8vd@mmr%N_HEmJ|9Vd|CR$aVMVSn+%}jYF)I z5vd4xHp8#tKj1zc+li|{#g(?`ZDvHHl+qGd3fW@r>4u|V+0qw9j!IdGewEK6D{vlJ z{1ZtL^76}wxK3z>&(nZU0)=>k$XZU@uCczNhO3-5WcFdb!6WDB%j%m`CgoiHsu{=b zj)3#<_kV?ZlSCy}vz34ReWI(XBP<3$c6*Ap0sb*9dsLTz1?#IDR@C_H6Kh0>%ZldW zD-}{&;On8YyTH@eL!jG2uXcc;0&|Ia^Kz|M6L{d;;%Ttf{^FKE^eX4u_z+T?+&g%XiINC!gYvO`YK2@( zaIDU=@?An>4eYnUWG_Lu*Y(kc0(NBrXHOGr4hk`atGO+cn_)adSi~;X>^N1g*6>nDzedGFBU#YN*r762+U{>3pN_m06D+!V$Gp=f^-+y>m z#@6ozrxVrcY4v%f0<9x=Z?ERH#cKY9%vXR_qT5DdU24jqL`>h85T+To>#vvi5XD;g1JXes(^BwTj*S9|58Az>tsnA?0i~^(d>ZPvAiryaWKZnW)|&cfQp;#;Z(Gv9s}6zb|rT$Moc*HhS#enI>M$OF|h7{g+MaBdkg)*g;(H zY}G>}S#ax!=D-W(*~=+0Cvo^K?GxZ6ndA1kMw=zFrIw2gtRz(6HQ(#@Ln$D*BCsod 
zkbNFEtb3hQ@zmco*(1!9Kj$DDWO|U7nfw+O?op9lidofr<}%{Na&%|1QRlp>%|R$H zEx-hgY-QsPp&PYrFK&M!FiRC87af%hpq==v(-7I02ph54TBmPy_>#7(GPi5+Hn}7= zq~R#H<^5$bBg( zqA7lK%T0#)#`j>Ocd>3^SU!zWBm*+07U=_&@Xxj;1 zWxeOFQ)Q^L7c!JZzoDc2ppAOY^r`=y5!d@_SM+}<@$F<0G<#x6Ni#FA8(%1ZHAVy> z8nvcV{qbh62=L*wNWJw9Lrlx6_^Z~z&>q$W_6ZC_z|?1u?wdMIX0q_XiM~e!CaSTM zD(y2%jA`us2fkPDGyH?8WX_E%O2ON|a?LUs7TZn8&xb1N$=au^9V|nKJ zi5}1Ru$p=zXiNe2uY_}_QtH50=y8Nni`0se4M^R6Bpw~E-7Ra!cdltFu zp&HD0j2di3lh1Em1(jU#p=0_GYSZ48RIpo7w#irAL{y8p7o#N8RPm3spD`&ws&fdz zdnSozXEcU0=yP(P!iZ%YJ9l;hBd0ICEZj5m#?V~Lfhb|*nhaJ?#o zAX>g|tuG-%7gPer# z-7H2t52-dzrc-Bo!H4zu@dpA152}V@Q)61fh-QKcR3i4JMYV~Q;H^8&xZfdarD|;c z8GQW-D0#jE>Y%QZq-Kef_vD_JkrZ`EvHr*qa)b)^qY#AjrLF5 z)tkpVo-#5h34{20t3{*DNZ;IZN1dN_IJHB)$3XvR43TjE_A8bljCK@Y=o932W@u#0 zH#?u`l#_ZyN zOvXna1nRL;{)1-jp1U}*j)$(fU!Oh9J)Gy9H~u>*O*S|ba~C+jspqS_3dtyb zi=^=^R~pu=g9Se+nthy`auNV~Q@>WB9o<{?rfJRPXCw%f^w%S?tqN{HgQ(VP` z@uzw=(QI9ruHZ9refbt3%%It=$o-ugVoKm&K#qvisw2a1v#ov%+Rgdok^JEfXUSP0 z)==k52Fm*IZZ5Q8W@i7=;c#zQa{OX#`u}{xGll48v$R7Co#lhJXpDZ=>rZwxSEkD6 z?MK(8^0(!U=Ea8i?Qkk1=mh$d{O8iY}V6p{z1_3&M>C6_HP*M>MarYp2|JZ1`qbVJVZ8P;0gPtN!rR#k?? zs3lkZFA(y}6Ym483WMXHeU$b|J}ARm*-9Ce>bSm==MO6FfcV>JJ%g*=lc7hq9k%fr zC>s=LE{kce!3!^GGSYvzYVhwns|-9`%W)?Bw?ft8eOt%(P#{wDFg%$#!FZP~YMH4( z?9rye^VQHczC4p7YrTgW2zM z(%E)V`w3L`1B?8&=w-mx-@U#t?Z#aI+IBqc4rA_&R>UiKx>YoV!X|+j$9=7vQJL~d z7|mVu?uRK>2XFK3|ll1TGzpD^?Ls#&|TcfwfFO5dntETBZ97GWoWeG{JFk{f>%yvz){;_v)8f3N;OXc^}ob}5X6ueZH*0b(c znU5Dm@H|V6U|1g<$zPlsScU5KI`p$qSWp=DY4%ei0s5l^8GghWF7N5dObL+N9#ED(PE973>H1{!y+EibdjRx z3eFp2p9P<;ZvOR-hgfVsOvR3@D8Cly!VlVM$u_=87`qMH)<%Uq{>3YrZR}sl_Kwi; zpEQfy_LGx-tYaJ2T81Sl0SPJG%jxo~4FyvdxkM!niu8AT;9azXk%`>m(`;&kSe11l&6)AP_QDbZ0UK!qiN+KJ^x$e zSf#>B^s}jxtJGwsG@P-EfVJu63ttGxBQAOY1*?TM%l5F%!{nRwX$;&ql}kUOOWZyA zD_h!$pk!uqX1?GLodKJ*f*TPo&=8^{9lO_2LDH-Si#$KO0Zwy-*c*icHb-; zFxmP49%LhF%$9t-pTcbQpD@VPMM=ae6rA>WE{XhQdxjH?}cDTVMm z9@U=BhNMQC5Q=-#eju5&`C;AlJ7E{C@1X)deR0S=;G-2l;S-Rn>dy0Z4v5m)3X!3* zLtlIImmItqYVmzy#4U|ulXwi>D$E8~m#f@qhr#U{T6efS-h!Wg1O| z4mVwxY*~_2hHjF$Ly~k_?+Uf?i$S+n3~Uh8OrW>-LgwxC_NhvvgZF{Ku}Atb`hn`| zUi5gZ-S4(5@YvO#l9+ie%rD}eLq9}Nf`5{zAgh0)IfaPr5u-~%Y#|~14KKW@>~$6+ z*Q^`n-;{6AXF)C=Pwm(bMAMW+yX*bpgy|U?ffkidL8Nuzvv{J$O$}t`ev`UTPZ_}n z`!^-}1F>;uB?XWLqxf@UrN$!R?bXT~@BKxc6Ntmuy$lsR7+8k^$Ar*CmH3>3)?Qu0WTYSJup;%fM(U-F>HLgbBWJnUj@b zAo7`qPTbA&;Pph7+PB6cH@zsW59>uQU4r~R@7smnFp{H6-z2j5i=6rJCyDkYp*1q9 zB14mE|ITsA7)Ty-6!@2BzUV2xs-{7BkZxAvU(BTiS?4ohzwl*rFE=?Vn4zqm=AQNQ zYUhR=IpUAV{Br>y>sEVcz>)RHVE!L*g^Py>cEHpp)-&9+L-@*Nxi)nYR(!v$5!#ZD<4NQC3><1mZt*{%i0% zNe=MPqx7|hZM;%3%y{Sx6tL%FKgIT-G{{-Gl3>Tfg)}InC2uTK^I_c-lO*C+kp0y) z!M;1UXH^#y$TJW$zi)RM&NZK2}6j zq$nayK#H`03IPNnBA|d2K|y+vCcT$HBGP*mP&%k|=_Ry?^cs4H&_fR-Kmutu&->kb zf4}e8|1xsM8966=?X~8bb1pdiOp#Vut3c;3q0g?JIo_km-uC)Q%P&ZX?L0?P93k{G ze4Zyy8B0dDkROkOv zMU)m^)IST}@s1X9QQNZl+8tGx$tJzk4v2f&w<_f6;SN{H;)lc};&*#_q3I?T(QYV6*!FsqsH?Z8&6201ihoXCX})xqN6N8BtP6hUXmoY00~( zD5O^*N#5ryMT^|I@vh=*djoXlVD?q2%Jg)~kI{a5JHz>snIWeiK5@ebf<}s>A=BWr zc)9Z}%yFT-wciRI+A#YxrO=>t|5tS}i$g{Xafv?A&Tn^qsj7vvk2Y|IZESA95P_%e zsG1xnY(|UO#!O0OS}F~vudCk#=fk8mBbR1tzl9TE`kMMc29h|nf!o(#brB~MQPi0L zTUQW>h1p<@l%emuW)>wZXPoFol>LmGYj?x+KPKMYjP(k1>qWWzLxfbeG7|zH^8(yxB{w`sO_I));QjjW3CwQp zkqRl|QpEt8R!GYI3SM9EEuNEA8aM=^frN>#Cd(W>ejYHKm?0dwMUODxR1CoHnpXm4 zA^R^lD%_&h?mvsabNoL2BE4-%R_mvPZI;33_+mFnCBwBE)5;QNoo|A)*XQ%>GxfTs zWFiZFQq{yWUWJ({wZZWhs;}(77>H2}d`eG}6Ua>K4)hQcE$eX@&G%!X^7$>}*C;g&nrBik10BdUd8*D^Xd`!+42MdhYZ6`w7mQ=$glU~1lyrha}< zY*D@I0<(qD2Q0U1LU zMH_z4QF@;ttK>X!Mt2E3@CE#!EdaRaX1muH(D10cp3{VrPt094m4^WPG83i-{jgdPcJRjl25H)wC5)rezZX{x=2w@BfR6 
z;0_nxTA-}&$io9Y*GoU(9uiA^jn`cKi?xTeH5C~D4U9b8wEAa;V|+iFU4AqA`%BMy zypkaXtcRtPWaFY8^V#0r=(3l=aBkC#QHv+Ji6Ol^%i`UpB}-Hml1g>FA&a-?-&lF( z73;Lw3LfOf=ybV)O7M<)w8j}NE zo4&K?*7~8>-UUEX-OoGbA{|YOQ1CfzpZrERQ&@XY&?Q>{5`U92>YgmWrCa9Vj&9|` zn0==7O_K*PLG-8aKY}3tUemL{sR6T3gE3LT>H~o|`O1Ke@kHI|B6C2;tI9}0OGrqr_1zT z1qIdx@~+@Q#x&0y?C6u8P(@3M#dgLrH?+FAPWUepUrs0?4unruga_rBWfsb;;Yg6t z;EKN>j|fTp!u9?ut<}yK({et4{L$LS>0Fdty~#Z)Iki&AJ3o4NF44L?|LY5}p8=%v z-t)O8y&|ZWZ|vX|~OsG{A|1Om3n>y7CGY#|m~r{61v^MI3(uxQ1?b3Jc0Wd`9#F z5~es+m#CeiKjx%6J?3zKPF_u(i7W%;>cYK&^K2i;j$bfn zAc+WSk0_(7OQi|J{&nAd_xL|-;(%uGNWd}5wsQL1CR3=vnhPHqLre-Zo2>ZJu-25} z;%xs7bP(A9K;V<$mMz65zuU0kZMJ^v@@SzZgzttvprIAN7A#y8t9syvcBhUpRlVJ8 z>NEakzyx`WbcB`4PMg64mQkk73xoj|!N>f-g;V^-2Q#vM8e0xogT?w@#UcT+Balj1 z!k3CoD=i}k0P++S7b2ZgK^xLbIlVZh4(d0{HsUJqrKb+{Q&?u}~&7$a})**o~oGt2W zkXpTJlveAK>KgwFe3-apPNjWIjo!HWWG&I47q~QiEjg^Si=NE`vP=?P%)E8^)sFz_ zL+obk)5z^isB;)j0Gp@1fpuNKuEef=&d>(-aRc$&#w=|7CrvcEuu$3hYP)wPdJVJJ zyvgc<#%}+m%0sZo0qMF2i!ybG#q~kVJg1RVvHrzMk-@(Isfg1HDkUmnpN|QOl2t%3p@u?XL(O8@dD47IVqjGJW;i*9?4cH!*kD?DfIQ9wUQ=K*3x4 zuum(z3=oUXtg~p@LP8iXQsq2^#7Qqh4|xe?c9;t&W4cTLLn~bJKMsyrI2#x| z5dAM7WkzOuS@nqhWjJy6w!dD=R`Wh?zs*|HPo!!s{q8~gUNt!2iQ@6_0oQ3EPtcDF z(77mV5}eGd53=N~1N9|xJ;#le(U_%9laCRPd|Azip%n%S`?}Q6Vz1ghb#b3D5RqN3s zEywU?NVUavgIfiW7;I` zp^y6>sd_j~HP2EuPmy7|msNLKW^n`bnG*rDR8unk#^E(7RExduk z-Dam0Z|3fOxyg~e<5+1xx_4^Gye_&n?RU#3yCqCQ9qOU1Mh=~g2xVF-1DtwAtI}mv zNFjL9{dT_}Rz5v7MdBWVfVJw|Yw?psf*k3!-oK=}gV6qV`|(0RQw@Ej)m*EPFL8Wp zGUc{P3f23UH56j$p(za@qfwkJ2!`SsVaUUu9)Yc28A>s(S@NF)gXKSyx55shhEIs+H5*e^i2!=Dl^aQ85B-pB!)vySQxzUy$2e1s{RZ*zv7!4T*@fIMa;t|y!G$rf%*_(Y?mM1P;HK1jT5InJ9_&tA%>j|4x&e(IM*&dY1yD^QzgbNjd!o#%p5@lLpHwoloS?jOY zfA7z7pLTNk^*Zox`Q46`JQaVhdjTPs)^#bF-sijG1qmwyp2egAsa7Z5QNqQ*14lhfl1)v7%-O-79W?W#^ex-=~e z;ON#jOrQfy+H{$igLQ?x_X+Oz)4UuSRyP;GDgJe4hP`#2cl6)ahb`1BtZq5wI!f}W zzR66@TnU10UEO#a>%UR6QSv6{`4N5K7I~xGrTN*$g`8trKvm?9kbH$5f~tcAvO+w) z9LDKxkh72{^!ooVw&bILiu+_&lSBTNV+Psl*}uT9-r*q;;D~=WFl_#Svdl#?qozSf z^jocWH8cLBL^2mKaicJd=A;h48t-SZl?xB|wd81vY)ztS!>F=RGYjXEUSQ)?e!NbH zd|Zq2iANWoI&3VyEyOXW0GKOPk$H5<$8Dl>n(gen!)O>$`f9H&P-RKloUy9bd@9got&6XNHSJa?Urq6!$4}d;yqk)QJ!USsD6It}$>LW)|pK5mN9UJyK zuRrprl7|VbBYs)O?TLSHebhM2DRrbKb` zKBf}(i8wi~{`;}ZZ*6X^{z(jPl3h_~@8RsP{$LeF{kcTJUoVO+x;?5);^u7q%d3LIo%i^#Kqf87(*?8as^v+|=-)JI$Sb_xZ(;(z1MFvPGY=~q&i7ZtM)vo< z$TJZ;E+OOqoG{q!%Z{n%(uqdR0sR4<^Pu%s5qda*V_?2*cYa@g5bEr=ml)3TgB8Xx z6T(SKRO&5oNh&{nt37-@43!(`0zqmKBPjG+>v8bWt|g_Ntw6k!i|LXDed})CsnS|7 z9ADEmX!>WkJ46g=y!Vt2b`XZIh*C+ml1qXu;2M&;?U|D16j^`{8g93c-6wt@wts$} z&emO zT-63*pb=o*Y*Yd^E{8tIT@Xih@G}c>M>PQl9Q5tAKM7hV!=&M$cM!R+uKH_HSk&)W z>FKVk4(G$Vn*lCQ6cfFgj4)a0_*V~Y!mnEvlm<1b$6J-W`S0yQ)9*Bf3AuVv@|zCy zXS1$Ty0{mssH7A+?t?Lqe6%?0HSQpN_Et#@-Jn}^7sRTV$n#F+Ra`(ULs|a{A8n2u zdjFnX^V31qH+Z)ge|@ST>qmS^=sUjMw(y4ovI}TqhNf$XtPy(1Uh&ES8^rAgnjCGu zK}`1;R(vfptdz$xllwN6MP)mfcA@zZ9xzHZm!!Q1x|0Tx%QZ3d%>U<+0PFgtH2Y^% zGEr}xKt ztA6%EXX~+X6?Wa!^U?fy7d1;=tVl!p?}`pUPjUh~Ft%prg^QSdAGpUEx@{k78gd_r zy98HR2r)q8uHV2L;9_IT5t5qV zmLtiyyQF;CHLnV22ZoCn3MeXmlxCA#DyKlfD$HYjxAMYMQHy7pBQK!zM_sZDIxp_< ztL&>H5_4jw?#Ztk?|oN+Ex@`kPwm#JG1O`{6$do_rotLk9<-#OTA_MB*kq-Dk%Oy4 zyW{ukhb#u@;&i#`1HCw6bt2>=J7r_UH!mdQWZ;kQ!;X?Pz(m>9ccYiF)ttwIfa}F( zJ91RS=PPTj)<0`H=qAqp%RRVM#)j8!+|5Hx zeqgtzL%?d@D|X-Svk)HVy@AaR^5caI+7I*l^h*vbc3=N1Fv;+OjXYk9%sFKpAi+s0 zqzXQUGe8pb--3r)^_Vrtk=H2em>(eQw}DthV*znjwSmK3Xi7?FXg?nNLbhCOBI>@6 zFb9l0HJ@78wi=yklueO{hA5LRfw0K~!7k^m`Mk4%sgyMkIa#;P2(>C_6G#V`=Qo`ESpNuUScpxsI?f#dGmqpFK(A35!B z^enmfF1YvTlO=(H2oPaq7_ko^lxf;Y1SlrHVVfP!ltK=Sw z-P)ye_coVzEh& zI#C~MC|Qc1G7R=pWog1U0F^(#Ahq{#TY 
zpL4+7w4~vLsV!j3|JV-G4k{&3H8d%ec^LqOt8v7?5k|xgl9)!4lsM0#q+7;Q_C`*z zcs7-^=!MT?`^PP#ADNtY5{BK@hZ(;gM&~(a*ZY(#%Z~vkpPy54jQfj4Gwl!UHraHm zt@_YLQc`sw29F#x4sieNwEjXclP7&&!U@%WkWGJTmTUbdz}{5aU%+x`v_1Ne3{c75qf+oZVro$KZ_$?e-~*`IqXVhYLY0%AGX4>?sW=agcJC2!ob z1+5l(|Mb@A$ylDlpXwU);+HfDzY-BxFKIlPrMDG*)xQ%j}) z-L7kg^at+I6LD3vwBc8qoeDmX^<&*!T_kS5q9#MMa+j1jNbF)Y(+0^bkFA)QzeG!o#T~C zUNsx&#ae^?1Y>ZtrmmwVM zlVj`Vl-gSPbybN^v)l1UwKF$Xl?y}+x*hbqe>p9dj_zQ!!HMC69p#Rk^v3f7-)MyekaK~9>pxLP&*fCrEN}drabM?9du>Jkfp<_R{9qr?x8~*?fwnYxoSnl~ z0Ny^qP-D-x+qOg*YyAJ`1rTZo6mU}*f(SS_J@PvyKG!+F+@Vp8xxlAWhd9hj6K;Gy zJ~R{toxjE@DeG+-wI=iJf1vO*{t_BNX0`Ocjc5DVw4vkT<*LN%Vqws|C>kr=`{*i} z#uVMr`S%aWro7x(V(<{p%hqF?H>` z@Z0wqa3uRA=PN5F>q zJ{T#V3+ziF(br!2aNs-jWRK=f1#V%_eQ9PS-mEq`Lw$f|FVB~>BcoJxtiI#s-C-bk zwhlUqERt?J8*(vGAg?J*p$wsY(V$;t9tY@V3VFlHWm;*wM2`dQi)zlwZKx}(pu|(6 zKtB#h1}6dpz4Mw5QD?v0zhCgN&R2$5!a%wE3uA6>Tj&#O{oyH$v?9pxl{!x1{f?}h z1YO14Cs)qp*#ObbyZ2KbLO!5x`^%8HuTAUUXkfD8>NHVsMTeIMJRMy<7j#LurrA-Z zrbAWwA(Yvz(wy8AL^wGcA`_`K3(q}M2Pj>G9fY}m;M--MG_MXmU{-XSVk&u^&SCu- z;MM+zf&hifi_2BoCt7RU&P?mk74qM_HtJ`}fdS*Q?;fnTazAN^NvZYDvv>Z35~Hol zJbl0>lyutq_m*xgla|IiSkH_K0;QZuNVamjwS3jBRi6_L`nqyDl!_8K6k56@Yi1%TxA;6bVnxoehTJ;$ZDPM z03FB~w&p+{-yl_-Rc)-`Iw`waL((?%q!lg^7s_vVv)q1knlD>rVyZRHpu3l>`-xB9(_q9$r z{0(Yqf=gR}4`&=dIk;aMg3l{I=7l~{u3agbv$ct>HsXY(lZtb~+m1#}{vA6(y2r;c z&G}A8)o2vNT4>(X@mqD);o7%Q#R}B#4roz$eb>*tjX=r4aigR=y3fMmW42EZ zXp_0%Eb;))5ARBk9Hi=cGz55(!mwY5qcKVS~aZpqJeB5sGMbfml1@^hu zoP*`*e!JimsW-zz<}cN_-^af=`hm4nG3~fYgG>IOoC7ZX#|IOavG)GuVvD`%MlX0m zsoa7M^VjQ17dePaRTq6MERbUmUS)cA8pEbkMr8u4j^qf9m@1)b%?9I)_dkJCIEbE} zyg-qfWPS*j`u67MG!q+2RLkMk?or!H%Z{pGGWgq$e`0bB?{3!NM@Hc5()c?F2sfzQ z?6^1|NfIH|_~M5kDt~)lBlG9{hvIZQ{5R^j-1y%wT+XCe-Sbj4lljoHTG`jVaTtHg z@Je9KB;ijTE1R{vfbhTe%~h8FRN#LI$d6jN`k|VataMMhIN#&Qz0${PQol9T!MrV) zB*Ebu^NbCqA2?-DpE>&mg;QC2NR@!jCfG&BRpzX$QjwKWQGw zpd6ORDrdE!195uQPY4Sjio01JmmWLh@qhTj06Si-ADUkBOu~V4n`c!z-ipxbj5lV2 z)8091l)cy?uH3wwlUT9!o9O$qeWDy%f0BI8{`9>EI}l{ukMGO9z18pgPlwXUlf$w2 z3F%}k-K;5_6Jl?}K9qlV{m|-_)u=;bWcAV}tKT$BpOYe3mWEfiVV)4`5%1)G1Y-av zYk()DSvpGQ?!*=F^wMH36hEW=6Q9r8`)+i8P1>gQaL|EtG4{<@a(Xt6?450{bNNEh zD9yPW`;+<7TQfb*mP&Awic(uMuBEs`RW=psgZx)lWMlAm_9NEXT2>fTzF6rzLF4w| zsQ7#I7si=Qr_U-sbJVtb6SQK9ZJQEJK>PApq^8`U&Nb7uYEyB66DKM5G%v)@*8(P9 zyWidxInOw7)3{lhZTfgVl<)id#(XaOVtSsDDVdeVbc9*orH8)*F>B{07pO|mL;E`S zP>r13-$*>IvDvxuXi=MB;9V4@%6+-Ms6Xc%P%kST(Hj0+rApY znOivD;DUU#>o?^;WeU!zL9P%e!pQPD!V+4J@BwQIhNDOa@@NWq1!|}JSA|#)J)Z9+bRz^8-@PW|YU<7V zal+`I<<_I2!%6QJQ2YK8i$bG3X`*>-Z?-Ty1}3ETF5{2Rr{$G|3n{t9tF?+LH9c}`~+*E;bfcfEPZvU*uobVtp%zu8khdu*8F5<)bzqr@Cj`f2mJxS zfbK7ylPMy02|CW@KVudp-mOhc4F65{ipKh%cH-Z}qmG<~%_QC%o7bWIo?gkq+De4! 
zkxK&Mc~@Y>mJsMeZStdA*WVgj2CnUL|6RK0sX3wb>(QAf)T0(_b5*&#TC6h|`{`>u zYMiMPN>lJa#l;U`0(H2ezPG>oero#EfGeev>_Uvs)2 z$*64BviMBYP5PFlEpmFe<$ZVSW7`RO`!R-QyIzIqP_h2a9I&V-SO9#^2U6YDv=ulg z@;~|fS}w=@=dumcy0a2t?OE7vfLVYUV*8|5-Fub%DCoN}>g8mNF-C^xe#_I$`TH7w z?4(v8)DP$qwBAH#`SR3o2eHn=ojf}`hvR>O0kigSZ@J6GIj)oI-Zq`nU;Be}Jrl^Q zKqXHFg^PWStMdK=#I?{$7GQ0yEbd84@G;m~u?f?~VI~`(y*M@x%z=A=X9rBb6vO_q zcIXgqTkqUDxbwJd0|<4X#z=L2zz>I}zDT*Lxt0hs6xk%p>#@z1^4f3uzzXmC#ZJ)M_W-PLEp1Dk3@ zRkPieoJrtvxJVDfvu__b96c9HfEKOcBe?S#e!l+2e#d6s5h7s=U125nS><%0|2bN* zIpx=`Fz;(^Hhr?wm^z>cS&irlnv?jLV2l}HNF}cbbz+%TDlVoqFX>9c@YWG;C2Dhq zS4%E4Mrn;Q?=@dE911taT#8Y1M-IEb69a!0@;eBAA2BE4{Mp$1!1mZ@{cgIqelCkA z;dk59G8@OKNJCZKs}kHD=5Su)&4}`k%@AQRRG@w%DGsflsCzE~@W2a7g zu+n<`MWcTPE4`a8IIZ5?clvcB=*WZL<;b>4yax>?$F?(ijxwM8Q}y1c{HJ#e&ID6t z(;y;>D4!Iy!0N)vh0-3{#`CaF1L!-(;!IYaDs|n1oZQh)Nc-iMSl?N7Ha;BUrOVg!-CS^Gs*QDYy20{I_ywZMJ2PX2 zTEFd8Zr*U&b#peAayuip*v;xbHA6TcpOO*Kd(iBF-He6?C=%z0CV%77FOGzt;M%Of z>jjX)z!}2I;<;UjuV7hBqSD6WnIy$8uR*tkud{Fl_7b;ymg*L9*T|nsy6(9@)92Jj zdH}whc2J%F`+N|sxc%72WW`oVN}}OtZu4?WFYz8jCuHKD2Cv9|SM(D#90ls;GaT(D zdEimSzTUt!b>U-M)QI_3W)u)KjreV&mG)(lx#{<3aV<8znD+ilkFNRE_&i&aaAPOF zN#CD_9l0j^vQ8cg)gs^?F}`JG4n>b{Yg&ZZgEY;JIbI;u?sNEiv~s1&Me(a1agIJI zKb^x`4hZ}C<*>caa}O!$g1!4;t%mc);dF4I{oh`QgUm;U>OsJMj}xZ+|zOV$X@-7WaW7t*&!gXEEayS2{JXpvcPZDNg3cIZ?)_Oec*KO1BaB)Ka291 z%u}EJGy1*6>YXzGl6K&G%_Es=lK0Vlh8DAD_d>a~#AG;%Jewbd4{dXm=GPn*B{UgJ z9!TGOVX4T=->s!1|p^&oi@%W~{kAE=H)T zk$vQQYz^qTD1CTy$Ba-2cgAs_#Gm)l&y!<8c-VYez!8gvRUS}P2~#@CVyz=DhG^tF zB)NzD?nfDW-yAhYgIZ_yvQ>ZAW*jAyN(t*r^3e3ejxIjY9C=;(^CX3%<~zSJrj#MD zDt$X`fsfSYt7XUKZ7|Mf?ETW`F!Qj1D~*fS^BvOdWsm~HIIq~*%@N^~`-54qi;1)y zqhJYC^|M%=!0tN&a~lcj3J(Z(ue7@i7kKZQxP+5v;k5pWzitGEKe+aX>&^?MhBR_m z#H++~85?7adZLBP$`vk|cfw3YG%Y6rFz6q;Ky$#}U%v+p7i5pp*|b$UyRH(&53YSx zjqqjA?tRFowRUdz^&yM7z9ZhJI_Z^rSgWE~$lDtMB{wtEQm0|-LDd=gXl|tyOWyI$ zmws-hxpIR{?psCldwx*UAL_NVsK57>s=xg5Cj4Pd3kYYRQ`|F)Gj_PNq)uG)O}u%g zDiS!SeCs^as+&tX_z7elc>db$I_m&dbBM=G>~ol;ewJ22BaamYxfkzS>8p8KU9b4XUK zx*6D+1I^7afHeTn`8>ONTI~6830M0ec{1p~U#NF9&S$_9v!e%=;*&t!f@F+Sx!FHI zvc$};b<#+uxl!)&${o6a-Fk@HpXF;qP37SCMHHSUU~1$_V+~Fs=^5FnyG#{&d6L-e zB88K**zYZs`~uEGX%B>Q)G&j3XY|XipZ6!eAzuFoj%wA9ynHRa>`*8K(#;2-FZzC) z1weGI^Ib%#5Yr|kx4++M$&{-rE=cb4F^qmLp5em{ z#>SkjMJ48U;dluJOmD}$Y5+dm5rnOQCul;~GRWxtHPC@iLV@i)-I&U;@kY}hz<^^N z!XH1@ih!;kB=&FFD*{KU9Z=YmF zxIYiKZ2*#YDx8pq(`~Stm4=_n@c5mw^My1wsLGPH@4>5DfOn9Sc{M+7){oO$dR=OS zX7GSEt^)v(Sv}d{qJ7Rz4359p4BvR}*dqRr1)dQ>o%cjHvK<$WH~eF`lTLkl%3Wbx zkP*?Cb*Vf#9l}HEA>W&k*@c2U5@)ixD9*O3aKe^9bRxUuoRE|ee`*nS{Xr|f%MK>p za$DrW8ti%J1Vm6(4CG#Y(?S*8Y#c0P?AFO7si=MBy~OG?57A|1nRGF_(?7(jqqzy@ z+AxGwLM12{t@4U0x7dF85>>^3xZa@;kw534|1gIJP-ks)r{=tEKDHAvr8&Eq?c6Tx z7(bkoy7u9uZM!N9#^q9DBd5Ap5oGG7)U~Sv3pCQRz>VeME zHMmt$UR$2zupuecX)YDye84<}3ch3KYqaFQE}=<7z@v<5Y|D>swMzIAHD!4t?QZAk zGU0b*FciKZRaQET*LC3{-z@;67Y|^R3pL+_$9FV`D^ze_Uauw2=G%8tRp0Ld* zr;Ja-y3eAWB%3M-f&lmDrFq!=d|zl466py|TKxOpN07-Nls=8~3C-kDBM(UbP9Vpb zTeDBR>|h!An8Db4nw$I`d7k^aMYYCUt?a_-Y8r1=RE}>$3UhsANpc9ie?R6Z&+25< zt(y(DEGeQ8h5Cz;FRo#3;h-P`oQK+pmJ$Gxu0Nk~6`lBp%{#~%pd&G@O%MOra7 z$t%gqe{@cgK?H)nXEmPM_k3u17y9?GQkN)}oAo84uxOa?tAd|~w=sr81@(5xV}JbC zn%r_;$@DUWJaebrU+nDG{*^A01bycbP2r}~!5pxD`36N_VRyWlzJxNPMLa$&03GSh z_XG~A){%?L68=pV7EI5g(p&)k>xniVHm5uKZPhit14d8>@Kzp8Lt5K|pqsTN`f%@4ahfI>+4 zF;iza(4O_2)rGKjs_QJQ+%kdcr#AEUFrg`5X2KIQct2?ngnf&PTA{vFi%uzHu-Ms2 zk#*_*stDifG6uPNhnM@RZJL(w?{cSBDMk+tg4b7Rwl-p}G(I(}oNFl6Auu}(k~|Ke z5y4*#>C0@kKCCTv8mMi2eXZ!my5U3x=nr2*dqv4hl$6o!&^@kO9Ul#ZsbG8G3 zaXJnuCOzDp>3N=#7dYa?#ATn@_zBWO2YJnKyP1(2zeQGcZ-jD~#D!&lwrnZ9dG_ll zMf$pDbJ4tKR8DAI&@%l8=LeUu4Awt`mR$Ho{Ic^|^M7gBGA$~9xJrFFa{9cP5;yLs 
zGHrMTH+zbnwR}PZx?dQd*ir^jtH(1WpE&S?kEzN5|BW(Ve>0CIjFkKkmbk^P6R++V zgOYp;>^Bk6XVsigx=<^2Hb=wYG{g@*j})l)A$Fh_O);>nb4L9A73p>+O9mdb8UNimfQ5&=@vn=&ZG%zxT7ui?uW-*qfL-|I=AL!;Mkr; z@ZM;$)|qBE9JTzB!Nj9f-E0Jb+b3;B z0TV{VQ;07LRVX}t4R$*;Bxra$u5`wMrF16Jz=SnX|BNv@Qi}$juNom+c{NV@f#w=X zqiLsd^S>_?N-rUzZyjq1xR~^vGSJE`u5>W>BjZC{J2#U1WrA5y#ONNw3~cR!&u-;z%o`65RjmMNS=&M$ zy7kbW{Bcd<<*{cc3`30r^wOTGQYWDW#tW)NJ_Sl*%^a4JZ4b{a64$S+1aqG;P>!`o zFCpd&qNMXW=+16E7cq~^K0o48dqW_~$HnYK66#K9qU^xuv#VV~91HG!s|+k!*Z>61 zMmNM0W-u#0HD}S#Gw6Kw3J_ozt7B}*EK1&mAN@-0w?Sif^qp{X;S(g&7)ga@(lLS@ zV0-B=CvVgCoznY1+5Jhmsp-ID1M0T@oH(!KM}Hgc-np7?srUVyl^*usZ`4KAMEuaG z{5bMcXFv}hoN?B+lMaL?^op__YwQtF#+dO}s^?GsvHA(hc)zYY%#-Iup<_rK3ewN@ zjjv8HB|d&xmfDU=%CpThi!lu*&IpKce^TO7<)nQp2I2k$HO$+bxqc{iziG6zTGclaKU%hdUv1)i;+7;5+5cFRBjJXQp{fB|PdTi7b zRvk|~0Jh;Zo2|=PB`05fgYv+1?&S?P{uj|hJdgx-?4vM%{n{KiiYa_VFUeAf#=Q|XoumZTp<=P;e66eyk7`NsT?<}Q5W#GbUp zJ%U-p<+I$0b^;@Ip^5J6-?EbpT*i4FJFQKP)i^h9vXaudYN z0Z@seEGDT# zbl~iIFQ!zUmA@_uIN9{!T%-YSP*k+;-Fs_RfRQ}D&qhx%5LiENzb{!RFFk%y3CQEb zFFR)l<2$g3D~ZP&Jao86eWr2Sw?-XPP$DWZ-{}?ltPUO{FKvBPiU~lW+E}vC_vl*4 z;*RoE(DgJ$_Lg(p z#!;)AtmohhM<nw5~iqE0!oT3A};7a5XC3-LNgTu32 z`T!_oSLPFB5k*|uSB35sjvWx7%hYs&fTCpVBS~X`B~t|pf7L6MwW4C2@bl)2n*k&8&NCmJ>C6?qrTs2(NZ*YJs-bIUQbHN?uL4zZ8ql(z zF;ln%b&z5p6A*0U*6Cvxt=yS1pJ|L)PXa!@L$hBu$`~au7ecVQ>i!^&crm8m^Hw!u zbb-s(Y3!P?tV1w|=H07rtrnOUo6n+i=o$$sDie;wNwySSG1BMF4M@;xO8$3#+??f4 zwe`QLzUy{OOvcf3Ae;kX)B{N~u@!m1+(W8&%ytd9pvq=!mI|s>b!uY1gO*zLnvX!b z)UuU>W@$vkFYZ`3ulbB!-jA%TN(ippd9w*en_FJTyl2w5;}vH+9Nkq5ohRM2 zPf?X#m*`gnF?1vrGVp6#UduXr7x%ZM*_#^q?h$^=g~h6nYFBQ9TJOFR~s^b#@GLEnyou_||C62)|#!DC{s0Vm;8WOsMP&k=5O?5eAn>kNx8I7udxJ|CwH4IEuC0_CafD#WI_WC=upXkaSTZrBm2zsK#x%Mz`(U> z7r^<9*34l$#)o)F?}$)1TN1l^Ej5gAKU`dJ*#v~4j8cB)oPVRiSz4PmGI4_WioJ!SG(GH^7L!RBW@==o#SgSA|i29DHwd@5$a02@7%}5tjE?=qW|} zeL%Q9c==An47L8c;utaPaBgb~7~Bv=4?EvZSY@OXyg(f<$nuJ^@2e_iJhgp$W$gRe zf|EB?ccOIE@=1$$NvWHsxL1yDxsW=MCxq!L!-rd87wPCVjGrC>PWMmH!k-pm4Z=)n z<9&8p1QE9qr;=jM^d9dr1Z{ssbrIBhA}n`R7}|)7yV{mNGu`^IIR4|oOmFt$#9?+u z3!1Y~I~(>Wz!y{YC1m%X6*!Ha8?}M^>L4W% z|Cz+qbE&y9r20#%S{>7qNMES zP$fdn$TJ`K^obh@45e^6AdfLh{MDeBIwMeM+6hA);I?+h=SsFe2 z@0d%SrHz5aAgYH9J{QrFRrnIK3=2}1x~Hk zlX4WJ->&&GBYikSP5uh<(;XUm=IoC0)SzuToU+~eK19+Ck}z0?3(SH#Xu7dz;8R^T zYL;g2S>D7m6;LhDZ+YVVsB{_P<-QGg*{Xn@exwrrQRLF7gxa6Pj+YmHtI#hJyMk^w zGTqGl*$CBr2oV)p676WD$uAl6y6ZPs5pQB&jUtmQILPV9Dq?<*|kJzCV-+KgISBe$%}D1Yyr&b#tJz@Pu)2<*hs zNwO@^-=6Gj$<9MG*Q>kk^TxO|C{s3g{4!npgSnQlXM?ECeZ%1aT`eiM!EArR{9f=j zB^~t4`kLtx0p!w*S^|rnJQ<|)ss@U{zqZc7Q8(eWE2s%yDe_dFu9VhH0}Q^+M%kHI z=gT3o&Xe#exUFcPo{C9YSS?Zxuw&&ik;9dlR?;BTY+LofwwdNq$RxU}jRWCK)uyOA z$CK9MtsfHeJ~u!J4H%lg`fUf;hH!;-USm)v_WDlu!HvfUN0+aJf}-h!LoZAnHb0ou z#%AdV98P@~#oRy2ZomAPr6&avHOJf1B7QzTFY5p7*P*nwI1D4ZgG)Ffs}wF?QIrFW zlxnM_R=CFf%J*6=CK%xh2F27j_S@FqeFoE{g!9?9-R*^v$`dVUAwxxEsWfocRQ4|f zGxb(_**UJ1!T8@nN^KG)9>$XruS2q@J_yUm2uxXJO|Ou|_w&!QgQ<-&FS^-Kqf3G> zigAkBMy&ufR+}jQn3I;R7JqzggYl~#>)Dr-Z$GeO2ho+$f6ZM$4rRCg`y~f@=8E12 z7{&A4UtLiKzqE)K|AO11e`b!tu%@-B;`KmoKl=Sz+5=fac0&L!lD?@HF+$zQmEV+Q;ljd1{(QUd9+S)S9er}*Q~NZV z$NViE2f7~zzPsM?yNxOx9K8ECwXu5|%(=Azzo!n}-8$*Umge9!}(yL_Bv+EnA`!q#TSuzHR^051=mx zLXno}|2_m>JUbusFCzivK81z6{`moy)&^mwUX&hFKs*P-lZ~d3r9VOz`?sdm*z`q`X=ApvcJ=NQg#z|k%nbEww`-r zq<8G`=bDr`H{MN1WsQg7cUesJ(U9IH(D`315uTPWqj>}>!Hs(*uUo>MyY027G-x8D zK`uQ44Pc7sFMz8EC$8Z|`cDGP4Hww~1FaHX*VS&t(;ug_noD5}A+tz=!U|ZonBgN1Cws>5%pr<($eLbk|G*?0t$RvHN{c z+5IF44jnC_5Jy7y>b665iGrC=UaCi2=m;&ws=4Iw^0pbbgdX4*RSp^-E<>cfd3!bvozB*y@DB?Yr`H{OUw^ z0s9qeBG2*hD5F6P5V7KuWTcQFfW>YVBK&Hsd~#z8@xj?L{B}sjZilfao7hcQl;8TX zE-{HfdNicC%b+w$M>onvwOlN8{ap`k|HCRoR@@puw8dB-Wp>zl`_uNb+yBDWG*+NB 
zK2Xj~0)3+5#S16GNU>8M$VPb|s4QVyKG@RmRre;%?gGZYQ__8c3?|?Bh@n(hcwg=< zj;Ut@i{?t3-9i)ZCRnI?iR|+o^ZUDtz<>pCcH#khC$YR3Hw~Xag=TK#g^U)G_&bTa z-uuL736{y6=@i1yhvz#c5DNa_Irxs(=@g(+ z+;{!XjK`qFQ06YQwi z?|P>(=WgEU{fdzn0=bOSwu%Z^hYx>{H@fI=PVmklZy-6tO5 zJgpuytUP&xY`srANqI278r|MiE*{t8qxd48&Xjl<^DAb3$4=(kZ*f45aHGc|qZMRK zpC9QKzQir9_w2RhR1>UnLdCOCsm&vIl@?D{sE7gyomN&Z<&`{4<~n|?X#FA`!aJMq zbnH*VQRX*O(9-^gaMEaq;NaacAj8@iKCsxO~uJ+s&kw}=@&enz4blEgd50ntIU zPnaDhW9Y%j@BP4Pn1KM?p3`|cY3+4hYQH$Hkion59ko(&8Is2A3#!Gk_+KHfV<8GV z!)7!|?sp4iLZ#ymqOttO1Poa3vvIVGM|C{Z7{bH4XYjHF87?u+;#B$kDxS54`$SxZ z{)`Jf2b_lnoDXt)_L#_GULZz@v9kr_iN+7o(J&%{`JaE;f_zagRZ9B<*=QeUBVHPI zHSMcWsr$e=zA!yzDGHoC76TT0zA;&TjQs#rjQ+MT9xLOoM#7n9SQd5n%Y}($=iS{o zO!jd*_Vb0K%Xp?3lL(Miesl{$J1zsb%v-dDu9r17A`mAFH1J=5rEJgjh;DPpDgOzy z?V;AFQT#Mg+cVn6+{6fRX9Q{6(t|pCKU^KTe@TVioH8sbJoc(A8a`m<)FHOreHRI_b|3Fi=#~7TUO&{_`)2$Jx4s4D`h-|i0B!Md92XnEa|3sc zORCR{=#DF@od{<1&f(RXh^b?`yPYOcpHJ4jLoFWpI)rfq_nA!5^=tP;V#MXSXtV20 zM+DP>0(znuL0S)%04|Q0Sc(NtCMmwZjaL|_7=4)Wzf(@1HwR0MiiyF-)3BrXRPR#h z@T5KKmYs6YeEL^0=+mYp)5gEs={Y`sQQenawisIUwfz3nwtHYQ4=)`o1N7{O~q;h4Y~Xgcg(oXoL7Q}Q<9u0tyoZk zzLxJN9$7ShDzh6$%f_~c7|Lx1`fj?r+qH^Cv{mIR+QW$rX^t{=bh9Z%fe~(oBgTq; z>9ejBJEMs_m5H4C{|g{n{??J5(;(04UUq*YaX0-V?{G3q{2Nk8V0n9S?Jg$w6b9&O zEXe`~bI+DNzl&P&r2?dG22p4_zIe>t-YyYfQLZr6Umo~#+MIbPlKTS`R&o`M>FdLFZXfzeq>o?|;8i){ujM^CVI>k} zU(Z@2B1hd=tAf`f1lA~H$Oohx#ZvMOJ!*2C2Q`Odi`R4JW$jhFkf@5OWNzb3vvvLj zIcw(tBa~v)1ZKo|h}|SorCCCfCBq@-#=Cs1UE{%7`suKmVo)U?J2BzbA1S`WK|?HKC)dTC}f`u`+yuOi>GH#BUPer zPSSE1sTx~%MvdwTd7ZIn<<}FWNY`-5A7-kgM$B>{20aYjW2=0^BETUAG4i#c2T@m| z?-wpt&O6{nX%a(50O$Bf1{ECaG^XgLyF4~*RSb)V-f%cBXM6Q8HWUNAXWdTF@a=BI z&rRWdm)edu>-h2(FFX$k!6&|dDjmlmJI?SISNO1F@5!jB93D~-f3B&T*-s?pQs~@&2?JH22L_A(;gqLESF8c+ zesg~>B9Bn?{`gEkxUyS+RsL#xyW(oTV#9aT$2~T#U}=;xTkn2csY_D4BTq8ab^89d zWDf&!eUF>SEg&m}bA`<56xw~^(zMKN-f#P;?5tSYe6Kxrpwm7=Qtq5{2VaY@=c&bY zy!&d1coMyRawJ>yCU@LXsvnGlY>uX&;nf93Mmk|>fpBie?;+6`n=DU*=F|5#p*g23 zexH~h_Mqq1n<|i-561;i=C4f5gE%xow`kJL;>j-GjQ`KiDid|n3_!zgpndYjLIz)8 zQ+y*4L|Ot{Dqtas)ptuNDE%QX@m~EauMl{`ty$lkE!HDHapfcI+xx6$_7eQZUA1Bw zA20UH00V3M(--i9c9k2M{*NWT$g9J+ZRSxWaNonP4oI5YmU|6|j>DL4KtW(^crP1z zXhjW)NniQFauSA*J)?EolWDt$I-S*9m;RWQQnmttT|lL2tRE z-S@Ttw#o}P(Nqzh)yX@xUONNgS?d(4WmygzbPIkMyYD)WnY&eZ+7~ zG=wxbVes%O$!j=ejl)Q<;;51JA*if)(bvae!BhC}$#~rCBn4Nt;EWGNhM(xxf2|fPXm<D)MeC4zF7h|T??QPBfU0iJ|>Qn|5d4aFQN$zz_w`h5E z^5+Nu8EM+wnanxiN0L*e!IC8L4 z7jE^I7te%-UH@hbX!q3KeY-7!- z@*7L$BLJp+T;IeU;cG5uQQQ2qSnK13L773RWrzqV1#5hCa2;GL73=DD zIyPF`)E%w&!|Mi8VY%&%DScPnU{!HnR$lNV87jMC%@KHC>DbsFWXsO=O4O--;;!Zb zcH38^v(G*`j?=x!_-C$AG-q8cRQrWDsB*oS`?dd@l0-6^)UbO3YbG>;5g?T(7cARa zpKVV+2 z9I%EJ?A2uIe7u)mbrrk74;^xC&xv%xYq-e2^#Un6tf|VlJ1B0q?%a@mFJH?VA8r&+k*CZD30SCf`4h;?2X=lVr()#JBJHi&Sxt~ zU;!|;P{`GHXueqhBmDr>bR5ktD>A=pZ zLc?wg6LpGh{h>Y5d`(9}NQV|-6nA=xKRefR{uP$$T50|yRiFlJrO|9~%iH)!n}-x4 z@>iX__Q&eMu>rXFK@uB`I?V-hm95%b-PvV-e?y+8HmzdqiJVu(;|jT###_k{Ywk2v zRFbM4L$a%l#iH9@vdJYH+wR8_5k`vUO^lMB>dJeBB0OE+4LyyTj$2V;W5gzf6FxxL z1%9aGpZ@rURDNhtV8Ow|yd~zoOVKtEaU$G*@Ym}sjAME>^lwhVW13wPLYHp0)G#X* zuIh6}JvGhApe3_Q{9fm(oC@^?oedMO^CfnHe0l0883lhs*=*T;+ZUt*DvH#Y7v(I> zY_cyTiIecq8*{q9b$L*6NjUjq%IuC*&SX;RqjRL|7ald=j2Oi<*zo3OO+;_aa*Qw9 z4)gn|-`42O)*TZ5&~sUl+2ri?2$1YU*gW2^TQ_Zu&|q# zkGh_uGzofEUQ8HhCN!J%A6))eXI+Z4&~kxabPtfCw_w5KU9(cSfp#h{(E}4kYo)SL zO@wo6KD@1Lds`s})HTSz3SYAD$)X%a$_Dg?KY4$Zs1+h_F$VY2bA^9I)p%EttQ0B3 z;oc~{MW!!Bi%ut~Bt*5b8fVB7H-W^bxoO%vKPwfqLO)@uo1+r{?4x-LQo?7()Csf^ zF}(`m9A}+^mnkx7M3F__glE@l7mp!zmc0G?*}unAY?l*Ao%#B<;$TN3*_`Mk&oHy7 zVj*|&2WXK($hQfyv 
z7lL72;aAmS;cWeoOwyH(DmA|~Uh=P&lK4XvKW@pFA9!PW?T71J6hiCX73}H9D#HqrO3W8jRDKwy?;Ps8yHyb;bX}a##M>1{FptFN6Q3kf^^K zR|2CZjMB@0U1%fmWDra)QAtJjGf19637Wo<(PI zt9(O#y}z-I7)CV%zQ#X_1W|5GHWQ8fmv6#r75vlRdJNU_s!c6z?fXRS*KQ(b#y31j zb(+Z-O=rzf*ic-E`Li>W0u*h(pq4umB_A%VIy_srX4B6Cu_^kc0l066>0$94bKK9P zrcGv5M9Ls?!1vk=ZHiNf)bc~0W5jQOY5u0A}lUJ|Kpp89pmI4k#)mMIaxWH!h# zQj;3a;JGpBzE_XG(}X(Q?xa;Dn;3OtuP*B7{~FVhW8KUblzh8+7c$S;ZV0hw>uWa(NN2q50U*&EF8j?*Cqk?nNp$t z)4_yEaloLp#FdiPBO1Fv=nz`8m<5=VanSBn+0$g)I%*l)hj@^Qa(%EIC%Dsuj}Z1^H;XVnZ8R%n`TIX z^aou8^2b;!PGsdw8@U;_CEuP{K99TT65Di0&}NIq`lY%>n`$IfCpE-N-^CJF75o>h z@a6^yvv=M+7kw%J$NY2k_sh7B9gl4Q?vFUzj|>?lzS?y!k6kWQ?V(v2G`0Z-xtloJ z|4Y|4`fu09u((nAdK=zFV=DX+7%Q*tF)gJ%(&aGb+nv7LoUQPI=2I{YO(AWVripc} z?wM$ZjvU-r5~^Q5qL8TnAT0BZ_B`^%^(QfXX_L}d@U@Tz1QzEV`KCH+RH@Lqqy^5? zW&$O^NR7WF;j_KV5jblJ-zYYgA^BJac|oo|DG`~6{RGLC&xaKtTiCHRP`M5VZqMaZ z7!v0NAI-eysTgq-a8U|R$K1{kzQ6+N9-kIG+p@9N4JAi8Y= zTIa;Y{hB*YakI?Vhy$*y&ODSF(#3M2C=i_;#Pj2oXfVSLH3g=rc!orlM8`%lTh~st zgarL8p>yf_*t>P2S2S^=9_d29t_d@2JVZU$!OkM>i-736KEH0n(5=Z8b1XSoCmf6? z>?C|yu-+8U7oSg|?s+fm+S=o3Qu95ntP8u<>L_!7;1wWN{2*n0=MUm66zd!r!i&f)gF034#nTwhy2l{|Qo^$dZiiu0)$R#!Auqp&Q!-{AqVrE5DzIEUa zo0_|3MAKV;Zos=*e_YLn&6Q7?shvPnGj6hu)P@&LHhvtBS%0d{lXTQM@{p|ol&+~3oPxV|x zGD@4i_u~+*g@q2K?z#C~`zofv18-%fin)9{rcg8V!|$ytwR#znC22SH;QY)aF+uI+ zl}nL>+L|OpS6XFV)E<0a9oE3f;+| zM{Z@KED%uIK5+-FTBrAWhKxjI?KA6?fb``Xd6b!mgaR#Mdc=~H&-*^46ENu`jaJ_K zwp(Je&R?XUpvkig-|pt?VdV_|Y`F23nf)L~+jT#@WhXJGch`6!0+27lY(75Yw_;aG zo}8HT?rrIUijZxH!EWz7F}{D<(EFaVGLoo%hjSUKL)WJJjqB@DM=Jd9^?(V%ikL|Z zXb(_cVJxuxQ~PNT8lF%U!CV&Ex8GbuP~jHK`?4p(<|BEX_edq2S;MuGPneqbP%luD zcTMu0m8Ag66r?&tteHH2JHv^Ix;*iiT~GoA8Wqwl8nzr zpa&}?sR+$#9rlpi`q%O>M)KT&^C;+&4B;O9Hb!rgduIgG^^64+^eb;$gLD-ajXt>P z%-Ve9(6%S~2a((3exvTlpnMdQA)#-(o!Lb{f}X(+%rxC5D@99P;=r`&$S^_ZZ;PYK zTVYsRj-C_^W4u1xVlt?abEV?*f%=}Sr+R{+<_@BejcXiUI9mO|3a;6GGgvvQ*3S}` z^6P%F=4$DA+A?Yt)!cmC`luS5^crbvS>rUqIsC>_Gw8?@k1JEs+(ko2?chd!qhd`x zi9_ku`I_mTIQ?zyO_JC{nE@GFrJHZhKYG!Y3+Cbl7 zq-wb^`;glIb!fI;HyA$ zEqqY|V5??G@Ot7rl6h(^l(eUh_INyMOeyTx*~xzA-Nz6Wn!txA4%O6hQ6PgmQK+gI z!f+L`W$%^lT3O0))wD}XaP+CURk?ykOzWV19bso;{u{0*3R?Dh*G6H_p&vW(>L}SV zFkPj7069N6Ay6ihEo<1QSRf%-7dNiA>?1I`A>aB*kzxfq1iK!2s%3{}cU&BuIO>KB zA$_mzEd7MlT+2=)-Ctq->r}(a5gsxSfjX~?MzZ(Z?+Vpu=nmJIzzm=A^&Ai{)+6^e;BO zcGSnNoUvYx=Zs;~?dFQ7{_9`8k%-SCa~C1nB}79vsRDCqF&5Lu@Uh1oVr}I6QNe-*0HO{5rI**6%alr!=+U4iEC_dyDF&x{3 zTrKSRN5x2iwj7oYd+`bQ^TsKVB8$(P;2(&$l)T`HR_Ec^dg~yU|G;IO6cbrU-uT!> z)kdCF=6}#~bE+PK%=$!QEvUXreKo&9G^U`2eUbROJIHD}xLuFitF7LbtNBu!R^n10 zkssyO>U8(LzeOFiM(1#pB*P|6>v?VPAM5_yGSlmw)B0+*nIao+_hMILBp^B^$Rdv@U-D@aH7vocvKcP$O$7Q5PP&O`d z#sgNe@N6<{4J$(xvU=OV3g(u(tRrP+ODH9RqLK(*3$N@j0tfyCO^`9=I9oI-X!rgD z&6grpWN>n?--=#mlyo)|aGzWc85Ni_(NbUaQ;=iJilD~QZ@nS<>y| z%iK}cx``$qT0aMC?8@dCbAX{P#V&`bz~(Tukda-_ywlnZUct?p+z1~R#g{0UvkZJ; zOZQ@ie$;_HS~^rEQ=~1NIp;&H9IX+sH+GevKi_8=j%YtoE3^-3 zlE2Nv7~(Bgxf4r2R}jfB$5ko0-kFa6p{*l*RLtLmjnA$wh4br$(d$Utxnsej$4}FH zl|tTp(6db4n!4H#eiBOLShp)ju;-*Kl{Q)Qfo2neydfi!0jTvkiMG(K_%;2iMb08_ z0~rq;;;u&si_to#!>xM7;c_k8r}he3%;pNhQT2mZpZ;7QNq&~MW!UtmrVZn^H>714 zYJ6WLTv7@ygn)<3k1*A2&HAs&{=)+Dly`OE!+coec< z=mphAg$bDTF3Z=J(_8ScmM=rsI zHW6&%S6_3eiY3sT6Uhn5T^dK0aX5eX&Bfw9eq+R{^e5zhQHYG!zJ#6Qmo9G5c&n}f zl;tgBPCL;#$#y1wr!Pe-9G8?N9t&j)=sFCZISqK`-jZ+rQSZTOrQYcV3`_g|D%yDD zdSBl$RK{Y3dYzz|pVL_>$=NU$1u*;dLS5pUw0YtRdFh3>?91}e4a?-z!=_Eqx{&<8 z-*5Ecb=|vS5}Die%>Wu`fKhaVls$C46Ed?7-_EQzfB@PvkBxKH$aENnrjeVb(P!Kv z{?TQr@GZRA$7-zPFJlFPr#rf|=Ufp&VB!#cF}EN6M5UVoGNax!F2^eKDs8!&%wUqz zXJya96l`{Vj6kws#fblSYm5-Fa&hni-wZkl*oY3HyiZsI>fBEslq{5UPVhu8h-j1|5udUGex$~q27vA>^o!Js;P_wWSl zGk!?cJIRpS=O6F~}5r{N9Q({Q+T&jG4! 
z)t{af46z$~$UZn>S1HoS@lYK9h*idyBR|O6Ik6je4bK4PjwOamqt&{h*ydGrs@` zGFc74g*uLN8J~S!7U^KiDHC!x+^Y|v7p4L`<8h?BYr4yyC8#C$>eV-!rmymg_Tty7 z1I8lGWB!UcHryUR=&;Ni#At~Q=gi zfs6Pws_(OraXVEKyZ%qHwzKO2;Z`z2Gjxyoc`kGkVAa%6P&#qYGQgFj=T8wEDU{&u zE77CHc!?Hj*_?Ay2K^0&+m8b5GolhiYW{1{d^8U^xClL@Z}^$hD%fgKBaa=yZRVj?Tn5RL(0-;4dBVbBg?9J$x?j%EtSEa z%k_ZW2)xS?<`}c>l|pyCnzpqtZQwQ9WBX1!CsK-GU<8ziI4U(>4suRDh^rxD|!1;uNVW%kzD>{aF$P>Oa*FRpvW_AsHI4B36P9DXp$FEXr#jO7HmSb06%7ZrxZtKN*bZj_ey{rS}e!lMaMfQB%?SXlI+GjyyrntNkKgvIgS`MY`4!7=QKp6eiKhbo7 zI=Q5e6A1N z+)!HgRy~`?6SLWpHZBg}*W~o<)M>cs*qn7mu0G#=8fa$P97SM7@#?R{bAZmM<&IcM zL17Cz^VWc9*YFjkX}S3Y1ZB++MP~c)+MK+W+ z#m(WV!5Q40>(-KA9%X%Fz1O`r=fBvpySgd4(YwSuS)YVPaWBs{$1wuEf&)LcXR{OG zI&xjYP4zA%-Xl9&Tc6vxbHc{;M5va8|wO>j6RKdg9HRxGAhYPIJK?u*dbVJ`JfSH@h@;{N8MYDW1Ai(>U zApL+FdxoJL88w8O%$<5NBwmJ(mZkTp_4clgMy{PZ_G7$0q(4uGWsCOvTR#wPhE8P; zfb0&lx@4h?sA%P}1)j7e#g?UhpF#U4&DalIAzb;Je#^vp2uHdXHv@$1m$U+*6KO`4 zo>8N_0vm}t!dNJsoJ*b`XwE+wHkmhzJKhNXDDiQ5&xuBxSQgU}6)IX+Cc2l&VuTl^ zJ7~ZDv?6)aDB-&>*xv1Cl%HjwOg|Lq4_4Mxi!^b*JB@L%^q><6RdxM|vfMdb77S_ zE_^FiEXML0ih8yB3h=QXjq*%ZoN)?m3`RjVyY`^7a9F4kJs69e!48@H#$4QsS>3Z9 z%KK6*ZHOF)=H7^}rx_ii)kZ_ao~e^0xBGE5?6AqZt>ab8p+AqrD5Xn`!nn8Qh9Y6D z%w~SQ%JdC6Th`?|X+g2Nr)fY5-Sw=+@{@SVgH2ujUG5b9MG%#ateq@8SCFU}auhWq z<-Hfjy|6dTRkOpH067ZlHG99uOcs-p11(Y0l@*K1mIr^%eUQ#N?@Gu!3a!M`KNppo z_D6(^-F12oa~-E#AL&Mm+8zUK`Z~h0CzCcg+tY`ooNLAjfz9{67~xb+?9Q@B_8=b) zSMYrMFS}mCn@qP5>24em!a3YIOIKsbva@)FSbmS~22dwb{*l14QKB(8(@q`NQ1pm8 zwzUH;#apwIY%@4{E{$l4pR?D~S>?$7P@9b(3YcKpBGD0H4*f#Cw3k==RD0<^_ zYLO=-@};xaeDmEm$|C3SF9jD8>mi@FsP0@|N^KCF%A&ujsoZz$GvON#{sr~I@YA07 z;4wBG&=XJrj>d@4a}&Cu2_bIq#3lj}TdVKpy$^ zqAW7?>&ttRsg|Tz#Qm)>Az;eO}f5lOaD_a~Qj}td&}1S2453a#8y1b}illdT1_w zg{zEua6ML+HB~k#?F{@T-8S!E>jVw>=Y!X#j=N`8~?NhkOGFn&^3EQby#_q>ufB9@@$?D5sXlo#PQ ziI;&i;k!UjtRmO^*@$C79urTwZ4W2Ik<{z1eaD)|htPFH7(LjB~Ova0jwFh_ar?d5E?e=8E{J5u!`;%aJ*_ozv^6dQ_-m=hqfHqh99R99tB&x1#ym}^3}OYzpR5NvDO_9PaLLU%s~ND zAfiUsg;yQbWL9dH@tjp_JtDb5QfUzdr>r%USJO>sY!L+F$g$s|Fh4I*tY@9XAUSSP zx-7U@Y2lYtVQD!1^xVr)iN9ISFPZu0@;YDrPlICo{|<`rO^kOb$FoRcixn)A{sqz$ zM7DX*=alNa?lZCJb5zmO-~Mb+y2D1jE*yR)z3 zgR6bxsP=*;aylxf6Eg$OoDC<03!iT|>b&DQX#(9&W3atMu8gclL||lWA1{x6R+z8+ zM;cX&vm!;dqS1XJiinhm+Ze*A#)QYtdE$YjKe(tlCFRs9SqwA|UDI&urVWqyXy1gT z=&($<=5+$T2*)mU9(J^lVGH&vQQ7j}FV9*-_Z{C1B8;`GsS~UNZ&mR)q(K|Y28VHr zJK$*8z9eKirB{KI(3AiBv-(09)26$BDQ5jTRXFASpbJ(-ar5DrnL)v}Y9*5=SYN^>4`3>?~lvllhk7`-9I*XU~1SUseiZf(?N51Bv8H}VTD)riRhgcr>s z?ep&|5zVi!9k9I>c&UBXoC(9K|$#8`yiOL8eGWi~-P3vG@te>H{_|7mcVS{j=0G-tj z62I>@GII{wR@A0$rZ zx(&&~E`y>YF<5(5h961Z&!IQ*=DQYvdD_m&Y@S#eHHxd3jEdayR1H71dQh%I-Z^&Jdg(hk&27>K(`_#ZZUs>C)SJEj;Kpk>`UyIJ!bS zG`oI0tnSdqL6Wc$NH^_6p^EKg;HO5jmgXY@`lzwA>Gi5RPZYSlx-R4>cWxfM_0!Oe z7uOHDpAku+|6=zIQsz7VIsbM2<-tDAv*Fi#qA0u2S7>rDf!+6&(RR;*zY2p_FsRlH4UcH$D z=s8GcdqR{uOBSKc0;N!8l# z|C8QXg%~|eVoRWT50B@D*(mq5AX)(tznY7V@#>c~ksUtv`{5=%YNO9t2H_S{&Vwfe z;x^AM$p=i)+B(1bNtP6Sx-TQBX5ds4oA3`hy4G%zb-MJ8mwnhS@tu9tswiPA*-0{= zOK&lcP>JtLer6xKf~557`C2zqX`kH}Nznvcc#Jn~&eC}Wa?1mY_AK?)d{81w5kk@{ zacjOX3a3x)-Z^m%AC4p123EEY+HM>cdB^MMvpu)xkCwgbK}>P1Z#CTFmLtIKu>3~$ z$Gp|wlbpBR!cC+)-c$)Ap$}Q3Lgimi^03XXPkDRhfm*D0AO~ynxtyV!G>Ma;K(_US&qn9Cl&S4z)0|)kFr%(g?IoIh=3KN=(F+pRXrgJ8> z(T}A9Z*phG1H{W3}iuYX$8`-y&_UjszS$* zAtfciBTMifP#Vpi=uDB7nrgqS`c`=q4_Jb6l5N`$nXO6&N9IjC%HrnO1jhJB;VkHDWdLyAQm8 z^`@d(cH6hVScH4MTJapNhaz0~RySalhHaJ(@Z;tt++aaK5n=l3E+YQ7HoeS)Lu=pv zRP+VNqW3%4pktw_BjN_FGj*%S*f3;cCYgr)rh}pq;&db8C;k)rWb^!_wQG{7$K~>K zzgaaBD<+vnIe7|*hE(B+4|NBnMuW_s{hj{Wjsva_RtAMjwfvIg;5C6 zD|7wXd3%Wl%5`Z*j(RTuLiZM)ldT!FUY`&>qOyRlSeD~hCTs*Dz1I&?Wnr%uZU1E7 zC@$}g!v>Ol5{qJ--+2dcW#SPwRi)*V^;B 
zA?v26nyIHAa4&Pk2+Qyo^Pbi#kU9I z$_f9`PA;_02;!W8$4*y(WJB5RskNUNUYaf?p5=$#gL3Y#Z=6DYKb7zfZB@VhcA_1C z4=@lW4kmldI!yGqfMEY*X@23;H1y(1z?~SN>PG$gN!g+BoJG>}%;fExvb)>^k6W3B zr!(D~^D2JyZvo{$2VM#X3#^7Yld#2A-OK0dw+hwLsz4p{UfP#2pr;Om&%tk6OoyKZ zahQ|p^BCRYdw%#+^rLv--MfSfAxEb9J(&rwZBJ2AhRv=5A+={-xOxj1ZepgpxW@z% z1)~J=XZ~c&o;`ttNAkfS5_EEsYYe2HkPEAkCQ7wZJXC#|L7rlISZQp`$xx&J-JP+< z?85mhmqCdt)wQxM^&~2bVR!n?`?&IVG~B$vvBh|KD&gZx3=}ek_h2`{*4D*Xj}semVBGh|fx{ zbyUBIW{)4{gnJKg&~6FJk`($by8`uGxLs5ZOLZLTlhC|f_- zWVc$9r_}(BfajGu13Bk`Q{`(F7`px-51dhzd(Xgk`Hx|FHOc| z9}iGBm{cizSfz$23vW%4;;Ku+GJ-7QH`S0^LC(?F zJm{bFdaOetxgW6Yc-gMSaI2Cn#(z-qQ=MnV&erYwTiTNxVaKEV$?#j0+y>0*jlzXp zHi~WEE0j-_s`Jl!4nA5E*@W#rTZU6?BGStj8+n&015(=7IE{kGNZ4KID?@}e47 zq;lSf%)&HdI8TPm7)c11CiIL)=&QEs5gyeb<<7t@!U5iNap4KP{*T{PN8Dldh+Usbg5F?FZ+K7h>(lW(Rjc*@7t>1{=U zX&A$--~Pm*do~a4Fx3#h?oU#$Emr8X!9P>w8XfH;{9)#V9zYzd?-sf?_fOR=E0HH} z>+BVq%ch!+!r=J&yN!BoDg-5UJ@h2oGW-t{|nFK8s_#pFOXo(@5#DS@qEjp znO_&@p^f4?Joc2}Mgz0|v-{5Y+v?q9fZkoq{zJp+E&y6Tm#w?n9QkCte!-d!v{MIy z(a(idt4ZBuPsuXJkoQFsALvg;C#cy=>zOFZzI7G4-%$qR3FOYPgx0P`@3GyPE9nlu zAp4A_sEbj>Y^H)GUY}~XOzyv4OfG-Pqw%Rx&XLO@`q2A~6u67H!9C=!Nux>g=Jb@| zixH?3j9ZnJ%`F)cx2Pq;kIvld@6CX2hA_P7|NOPq0!ZV5N-v}P&y2tT?-SsbzZU0V zLMkqnEl;RjRW557y0{KAGoI-6`Xaq4y(1vfdy}prAYD2n5JUu2M35E%0i}18 z76_1_h?Iax0z^s((vtw8Bm@$YeDUmkuGjtUz1`Qj&Y$nc;n#2_&pb10*1gufX06Qd z<;-!jwxB+}-ilck6aBtk#uHTnyK`WA;~if!r=BElOhJ#)61TzUE7EsfxqQ9&IL0S_ z{N%Q#^T5*snh}PJO;~dnql3f3(B8 z@F*)K$MNc&`I&;{Q+1+)N?(WlQ5}m&i4EuV8Q&^6&9DMq(L z-at0%ER*(ib(Bi^D*7_!Jo)LTY?%D*LM)1Vee=z8-BW!I7BYL9L6Z#_mrru!hI)kl z&9d<g_bdy9F;>zMHNO?I-R3XuvL zR3aLA$f9I3Kg+)S*6@<^_yNnCWfsqmKE8b5xz0^164E*R%JWlJ-zuHM-+>?7lKEl^ zW~`-(9tfP>|E{F`g<$o|ck<8PU7X1J+D25G-Bk6CLc`lBL`XFAD{!jU1F}e|TC^L? zZ&dfr#riKU%fS^Qx$?gS?Oykl;tsI0uh_XTd1d*TgAL)eCFPJ>wZmr)vIRr^)rI@s zy;4z-esAtB3rrSs{N|_Y-Gk%x;<@i$C@tDNdnheHGPf_?4P60CMa^c^WUl&#)J}vH z+}N}4F-X2y>@wcMra+0s~kjWz?5^g0pj4~B`Ru|W9nk>?ZknW6`DvL0EjRX>RJ^5PB@|6`gb8uAh zi#xSx^rG|S@#oj`Bb%>|Bc%lSKb$2aCaUv%7&%TWFg9?=7)OS4msGOlmN~lYT=7K} zT6W&b51)fJ=+3mAMnOw|d>EhaV zBHtXdkonf1p|m1OUQNrk6RTdSaH-}Nvfv&bQr2CJKcDv(wA z136)-$TN~^MI%8o&XAs6b#q476YlR_a%22o3o4EBJV&e>O+>^OW}Kl?rtIEg?z3wl zsCkJ2FFPoFO2fjW8o2H77*m&I%Fj!9AYMlS@OWE+p{8HBGXjEb;+&AK;vi3W_;;^9 z?y%sf#6f1O^8IBl$b5kpG`n`GE;MyTU4Ij;UM_twvw2d_a$wyYk`>*XpXiR!Zhu~L zCKnGdMOw~#8M`LA>^D5Q^~VG~l`h_bUk3X|R^)svq8%}vwk-z_!?e6Df%!hVqh~y` zUugpL(O;gWyt$gVspb0AWulh-Rd2dYS#ETRJEk0V>w|Dk)XW^6QruQMj}!m$bE01Q zO(ANdsHF{DTwWTUE}@S&MfW_^5;^XYdB-lF2>vRCw#1lZ7otasDP0xbf^`kZSu1@d zBl&?fKg^su;X>@VvMW&U!mNTrQA)$>Z(MfB@PtEuwBWhT@x~&NN#c@t!Hbn{CO?Ec zzdk1v152USg2_|q(|3(76O_l)0>;hw)glV|mmtF5F$ zx4d+j7gd7oBLsJL+7IMAof>XpZ=zpQ1wvgfNhg3Eihk;CU9yUYlz6n>G!YBYe<_KC zAm;4oY--I)B|c!7IrWX2bhV4@0(7TGYA&0FEwz%*5nCpvas#hfB`!1%=(29l>sMV_ zr2bbbh$2J*2-n@KO11iU26u%uOty=Y8!2iqd_>h3FZVEQZ!qi+)ShY?3LEn_|Jg0r z?ySj_x_!w9ktrD+l_%TzwhKeIcpB4;LX>kvs(DeQ9v{}kG#p1CA0hmFwe>^px63(2 z^Zg*rqN-X4%lu^(@|xdnP;lQLz>)an#G&W*Y3gmVCF0uX;MAE9y+lRahxN40GfrDh zfiJRf=a$#&S~&r!&w#7PEd6bxg$AO`!D@g)J--$RftAS3zGcJj4*gO!jttY=JJepi z*hVC%(Ks~d)E}mx_hR&&TJ=28jsv=Pp(oTCv|R>w2grY}ebW-}C8fWxU69zHYBZaq zIB%Ca_O2Q{r7W(FUiUwKLIJ0gJkdli+FrAMDK4P{?o8G%rn*xeO~RcPK8TuxTy+a8 z-;E{NgWQ5F&XJHq@@Eh%%4~c`6gc*|5*BPLu;>tnx}1hv zOJmasY7RM6<*eCGdGAttL{x9ffLrDcC?H`pZ&-OjI~Z{mca=ZD;PR+}Bk7F>kGn)) z(=A?LUls*%VQl9`Znb){ZCP92>mXfDI)HZjlceXz-r5WTYxrlGlV%uE@3x6g5{4}6 zT1>hcMm#@npV0{#xoz!g42}|%_~uG?GY8lP%OdG3eG5fC-x3QI{`h`D^x1t}S~AyY z%?>ClLbNvj5~Xq}Y5e=JQF}GeiO4zzT-8@1?ew|$yzt2AiL?0{$+D7Q>NBe(-cTTJ z?!60CeaHSKb7uuyn$H}kg!cvFrfewC;@Cs~0lqIg_V3&xe-=2AIgWZo;8Ky@Tus}) 
z6#{g>3vQqD|1Jqo9Z$jJ?go4bkdDHdC)Hx3=j>2ndq$j5bguY*vkE(Y5|2e^nhX6X z|M7{(hFbN-&+KP>_(;?wP2RaFeNhu?1#6*P0QrJZ&#XjK@W<%db{QPlGiEu7)Nd;l z$-OOvRG8ft>H|!Jb9>FfO||pbYc}V;pC1*pBVG6ga*7=Y2r}xmSV7trC0{Y{P+D@k zTNhQcxHp3#r#2yAq2zf5GlxP7(8IGvA)jd%=;u3YI)vy4cJe?&b#HJH(zsJ?Aa?u~ z`NMxX))g@8_Ibel!I6-jQ^nEnRbE&QD&tFDXjok-L$=8a3^QJsn@8UO^uRjepPV~M zxHIG#v$8UINq*5fOAHLcak=sMpQ-YCDYZRp86b$*3z!xr2ysU^MZV1ZP}3Oz=(FE^ z@Vzb!Ao@K^)>l7jFJ9dVB$HKt9$1Rr8*X@WxB|L+u81>ve|l6x@F(TjAC%)rR?;ze zWD7U2G@%$!+lm0v$1n4n&qCg!{kI!Tm`umAam#WqY$5I2a%wcvBz$CzKkiUjBr?rt z7ja+%@W4slV1ljhG}2(_WN>szU!f%J9nw2ptmMO61Wack669Y=n@nZqT?*TBp6edc zU<`pvOyOI7q&PHIJcBtF&8WfEM{g)_TIE;Bhw90k@g&@da2ye!ue9w%*)b@PnxQgN3Fz#StG9uT$+lTkb z@#{`@U)0=&L+X>8aq>Fw1f3?jfL1`N=hf8;0HV$&F;iuJr`%Z*@`qPsV9Hc8;_6s% z*K|Q%h?nLBL(R9^zkl=)Hx&Px|?04{gK1m+faDDc%MU(+!tx!#&6eJ+cu*oVvi*1Stvi%&T?c*Oc_gv7D8 zI6NhhX208ft(sX?4Ujx2tkUS@7uX&ULcU5nDU>W2aQ}H=i4~4lJ(TirAFdA{$!wr^ zxdfzMPZKn+F5So40lc?&adUv2siQI;pQ8eCs-6xLUw$}QlBH* zm-vd6#3cLXrOBIX4HSr;pB!{h8{O_FA|Bg&c--At*aI%0G${g6RrmLLxbkKro;t3nd z-&ZXXWe4@&NPhJB`oz7tsV(y-q=i&T+98okedwh;B`>$EU^~vlDSqyZyJ`J}gJ&)L z#v=8GoL^0iZLvXAFGFnAl1DhdE-Pt%FdVcv$muTAM6daLxju>&cihQZIH@BSI{0cc z95Jz&o>x5pu8eDJvK)C0xMckNdcs;S@+{gk_0{_Y{ewk|`^JQ-b!6U3cPy92^hnb6 zE*7cUz64L{6WHm(xGc+HFKZSTKx!FA-nSc?2N#v3z1~>{0BeC^6T}q(r>)rgU!Cq5 z%vcqQr7xa?jn(D`Nqd?ky|7}3E}y#)QI{vX$rn5prqAb3i|)J98nR&DTe(!{pK|>g zumxt|nhwfG5=%qq-;*P652P^h>{7-qyV&G3V{OM*cm1ftR$4d&lEbYgy_aXb(pypo z+Dg`e@U+<{nR*stqjrV!UA!Oxck97Axa&jU6ZQi6Rtn<+rGXTrscj=DZN;_Q7 zKVCC%f;%HjA37j4@+uWEcS1?L=dj|2j*iyCe%D;zFtwHvd`8@@LdG9Kh1Y_0>*FGf z1;Kh*ooUe@(9n9uoBHZS^$%DBm#O{8ps6|V7kU51*F&~Sbq%W*yg$O!{=W z1IqsX4D`5g8@j~qJ&|><%_1wJOLXEZ)!`8ae ziw`!t9`(P4CLw+dsBWd#*8ss~n>fHBH{Mz^rOfNz@F&`x-*^Vjx%b|Ge&#Mm`zkmW zzNvI0gj|b0g9$1($>gnLporkC{EMI;Xc&+>DaE?_X#zOD<*Xps#r~g|!=tDl(>rj$ z0!GM`Bd?4@TLS`tpj3#eSyLbUt>G}(K!SD4o<00Q`ZqPLvi)`)ge3|!| z8?pEw?A}Sxz1eZ{`Eo#BZycDRoaebnVC^UaGd7?-M+k*45F)za0t zIzIjkAUC}%A^)hb9#x>&tLu@K-YpzBy`q@1(*pX6mjePc(n0CN9cRSpZ!B86kIUdU zRP?;w9!V^Ifte{0^-zkQ^9)c)s~L@A2X=3wjz+&mqy9@mXZlu~a3 zj#I1g>wJ2}tBElD6g93}0X1SxC0a5`9&p8W#uta;RUvh1Ftr@MEkS$mjebL{5hm!X zMZIu4^KA1f+WI~h?2?OyZu-Z$yw&3;jO&Ws9qt&mU++Gt5lTeuUo#7HF(UEVL7v!k zo*Sk-uT7H-Nro<4*DRlBABH9vxn)NUe;FA}%no~szF{zhS6YQUtj5Thr_Hv}y4;%^ zKs=9XnnhOjbiC6M}WdyaYbL|qR!+qfo@(r?s z@ud^t!S1!A}c zN=u%mBQR0*34>6hb#aC;GT)1Uv8IiKX;!{n)wo`mtd`E}Nfbs!HXtX*Hzorxd-IPl zF38e*j{{1KJc{Aw$6@igYeXJ?G`+4_$Z7m>gXh<+#I5oT5pv>dr8vHL? 
z;9J1^lK*OB?7tn|o&%Dm)0$C*P+rmI$lYya_h_Iuwu0B?04~{aSLC^dVSosxXh#ZP zH+X+ba!bx>%NY<+i~C^?)B$6Ss1Hynmb;HvuY=-TWVLc}8#%pmzXrZxA?cuo{xs3` z5@1)uOMR%o1rOken+y1aNVVb3gP#k4zD#DyMp|C+mp|YKG2WjW?F1>8gwzb5y|KZk zay$La$w8DNRN4gv6QR3@a6!kI;^#vMX=?u+{uUqf4d#oy6p+gDmaO3S?(hB9uK@eAIz8C&CRb?lH`Eu^pnDe;S84SYJLLDGDh`F-H8%NTw>JhqdBmV%Ov2{D5aKMRlpTlnm?f`b_xo~S%s$FlLf+RBm1Bs)s#?d8LO{a8`cL-Po!e0{* z&*?Fcwc7NBm*pHmtwp5}1krOA);XR!6ksalyjhMhB~YIYwpF0V*?#@?gxyxYqhp-J~%Y+D7?yr8_k%wYkKe2-r) zk0c`i@=r51i0RtSlmSnz8r)q7*aWcrK#ckcglPODp^$@>zzO|D)~_IwZ!jb$h&4_PY4&dGp7@T3t9{o6>ll*TarfAzgq^IQzqTn z>+ehJj4Z2jEmWv2J|MLC4+W`=Z4tq1h}kSiPB?)C8qfjC8nx8E{Lw(?r1_SWLMzY) z{xqQd3n`IxWxTO1%E(pX&_R7b!gy-e7#9mj5SLRjBTqRiTrB`X627yXI`V`u^N`lI zb-oxoKbfY`|B8lNe6|uJ2Sj$SVt9QwGpb*W=I+)zq9(|zQ~80CruUe9wy#>u7bNL# zDG=z?Dd*J1dy=;tO3~Yb_ki_?kzp^H7wX3O(EF({(IDNsnfud}m|wFvp+!4I3^!a< zIUFmM2kTk82yP;+}zcudfD!2zaz*)JWDG|( zfF2{v7$Pu&`lS}GLZ7MKbuZt_@Sik-?+)bPlIbw4bPmW=V6!`pd>BzTTy<8c9W=RY zayc4YVHV%~?)vBMs26dKxOAI31a$7Z(y|SdR+0f#k!c`@DVKa5Raqub;mDe~~P{fBjLe8w|5jGDqgbI|v~ zhJ*c$)(XYm`?u2ZP_?f$&_{_VQxn5lCG|sX!|;1zosGBbzTqqV>=pn{i%)a`Jh1m7 z?S8I}F4X3THZL*LTH4cHWKWThw};JRlfGyH^-{a@+K!c>A>y$lNjt7@AwHIwTZwo* zN%svXNMF#G=zXz9?hkm8&T5r{yGaO*hJrrdSS|6`X{E^F798V8j%jz|inqXsbKcBaObzWJ9CcuX_*_w|s{Zwr$Rqb;+(Sf$t-m;0w*gJF7+o zfWpa(R?3YoNLod$zkRe)xQ#K;Zgot`wM}3cmIa3w(@VxHASFUT72@?zs+odspImWB zsBtZn{_H_sT^%ApmQ$#Mkz{_SJU-@7vQhMoX_UB6SQv3E`b7i~NpEC|-uZI*V4@K!iNjK@Gdh?ec#GfQjM2|Xz+}BM$&ak;bP;=l&T6VNZGl#Lf zv}kf(Q?XfNRGrPlH{_!^QF^LHqwFbMPSGJj4_2~Bonx1ddB*Z*LL&Yb!Gw{nFI-XWe|G=-FhrQWW006`V}F zBGW{x+~my|4cIRg2fpUL6l80m&Q-K!dg&oglU7)U{`@%IYS!?L0J18xxSROYo0E+4 zR+jx9J7*B-J=y_n1M(;nTckf@(V(Fl1$o9*8Tg5-6YE3{vx{`~A|}0r8gU`BWI%Me z-G?&va`3ESQ4|s(534P;E1RmDdTGY%aA@jF(%Ed)%7H7agp6?YeX#da4rFeppAP$X zeRfaOYXAGO>u#PHS9Nz`_AG)tc}^1Lh-ES?kPhw2#)Nu`D7xgB)bmG(cVRZaQLJn3EOE?SaHfn-1wsa zA9eu2Zxe}?v9?9o&P_`yq><@vpEucEn5#G;4JOb7Tg1SjowGoAf9Breys4;-&kVme zTPq(ZXsI_V#D2Hivi~Mm`gclr`|{e`6BV@6XQRu0a3}|x(MMrCixMhTtMVFio7)S3 zyO#!Y^pm}0SDn%Oxzz5m_CPOHhF`8`(_n|1AeTNilE(Iv)}Or!J|2}X317UuRat1Q zq?t+XNp>~nSRDY2gk0;uYv??On_Sb2J~2Or?!54T%kZe-65IKHLV0tYpJ)+-$AVv! 
diff --git a/recipes/use_cases/end2end-recipes/raft/format.py b/recipes/use_cases/end2end-recipes/raft/format.py
new file mode 100644
index 000000000..7dcb6b861
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/format.py
@@ -0,0 +1,173 @@
from abc import ABC, abstractmethod
import argparse
from datasets import Dataset, load_dataset
from typing import Dict, Literal, Any, get_args

"""
This file converts raw HuggingFace Datasets into files suitable for fine-tuning completion and chat models.
"""

OutputDatasetType = Literal["parquet", "jsonl"]
outputDatasetTypes = list(get_args(OutputDatasetType))

InputDatasetType = Literal["arrow", "jsonl"]
inputDatasetTypes = list(get_args(InputDatasetType))

DatasetFormat = Literal["hf", "completion", "chat"]
datasetFormats = list(get_args(DatasetFormat))

def get_args() -> argparse.Namespace:
    """
    Parses and returns the arguments specified by the user's command.
    Note: this def shadows typing.get_args imported above; the Literal expansions at module level have already run by this point.
    """
    parser = argparse.ArgumentParser()

    parser.add_argument("--input", type=str, required=True, help="Input HuggingFace dataset file")
    parser.add_argument("--input-type", type=str, default="arrow", help="Format of the input dataset. Defaults to arrow.", choices=inputDatasetTypes)
    parser.add_argument("--output", type=str, required=True, help="Output file")
    parser.add_argument("--output-format", type=str, required=True, help="Format to convert the dataset to", choices=datasetFormats)
    parser.add_argument("--output-type", type=str, default="jsonl", help="Type to export the dataset to. Defaults to jsonl.", choices=outputDatasetTypes)
    parser.add_argument("--output-chat-system-prompt", type=str, help="The system prompt to use when the output format is chat")

    args = parser.parse_args()
    return args
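# Illustrative usage of this script (the input path below is a placeholder,
# not a file this recipe guarantees to produce):
#   python format.py --input output/data-00000-of-00001.arrow --input-type arrow \
#       --output raft_chat --output-format chat --output-type jsonl \
#       --output-chat-system-prompt "You are a helpful chatbot."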
class DatasetFormatter(ABC):
    """
    Base class for dataset formatters. Formatters rename columns, remove and add
    columns to match the expected target format structure: HF, chat or completion model file formats.
    https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
    """
    @abstractmethod
    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
        pass

class DatasetExporter(ABC):
    """
    Base class for dataset exporters. Exporters export datasets to different file types: JSONL, Parquet, ...
    """
    @abstractmethod
    def export(self, ds: Dataset, output_path: str):
        pass

class DatasetConverter():
    """
    Entry point class. It resolves which DatasetFormatter and which DatasetExporter to use and runs them.
    """
    formats: Dict[DatasetFormat, DatasetFormatter]
    exporters: Dict[OutputDatasetType, Any]

    def __init__(self) -> None:
        self.formats = {
            "hf": HuggingFaceDatasetFormatter(),
            "completion": OpenAiCompletionDatasetFormatter(),
            "chat": OpenAiChatDatasetFormatter()
        }
        self.exporters = {
            "parquet": ParquetDatasetExporter(),
            "jsonl": JsonlDatasetExporter()
        }

    def convert(self, ds: Dataset, format: DatasetFormat, output_path: str, output_type: OutputDatasetType, params: Dict[str, str]):
        if not format in self.formats:
            raise Exception(f"Output Format {format} is not supported, please select one of {self.formats.keys()}")

        if not output_type in self.exporters:
            raise Exception(f"Output Type {output_type} is not supported, please select one of {self.exporters.keys()}")

        formatter = self.formats[format]
        newds = formatter.format(ds, params)
        exporter = self.exporters[output_type]
        exporter.export(newds, output_path)

class HuggingFaceDatasetFormatter(DatasetFormatter):
    """
    Returns the HuggingFace Dataset as is
    """
    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
        return ds

def _remove_all_columns_but(ds: Dataset, keep_columns) -> Dataset:
    """
    HF Dataset doesn't have a way to copy only specific columns of a Dataset, so this helper
    removes all columns but the ones specified.
    """
    remove_columns = list(ds.column_names)
    for keep in keep_columns:
        remove_columns.remove(keep)
    ds = ds.remove_columns(remove_columns)
    return ds

class OpenAiCompletionDatasetFormatter(DatasetFormatter):
    """
    Returns the Dataset in the OpenAI Completion Fine-tuning file format with two fields "prompt" and "completion".
    https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
    """
    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
        newds = ds.rename_columns({'question': 'prompt', 'cot_answer': 'completion'})
        return _remove_all_columns_but(newds, ['prompt', 'completion'])
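# For reference, a single record in the "chat" format produced by the formatter
# below looks roughly like this (illustrative sketch; the "system" message is
# only present when --output-chat-system-prompt is given):
# {"messages": [{"role": "system", "content": "..."},
#               {"role": "user", "content": "..."},
#               {"role": "assistant", "content": "..."}]}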
class OpenAiChatDatasetFormatter(OpenAiCompletionDatasetFormatter):
    """
    Returns the Dataset in the OpenAI Chat Fine-tuning file format with one field "messages".
    https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset
    """
    def format(self, ds: Dataset, params: Dict[str, str]) -> Dataset:
        newds = super().format(ds, params)

        def format_messages(row):
            messages = []
            if 'system_prompt' in params:
                system_prompt = params['system_prompt']
                messages.append({ "role": "system", "content": system_prompt})
            messages.extend([{ "role": "user", "content": row['prompt']}, { "role": "assistant", "content": row['completion']}])
            chat_row = {"messages": messages}
            return chat_row

        newds = newds.map(format_messages)
        return _remove_all_columns_but(newds, ['messages'])

def append_extension(path: str, extension: str) -> str:
    suffix = "." + extension
    if not path.endswith(suffix):
        path = path + suffix
    return path


class JsonlDatasetExporter(DatasetExporter):
    """
    Exports the Dataset to a JSONL file
    """

    def export(self, ds: Dataset, output_path: str):
        ds.to_json(append_extension(output_path, "jsonl"))


class ParquetDatasetExporter(DatasetExporter):
    """
    Exports the Dataset to a Parquet file
    """

    def export(self, ds: Dataset, output_path: str):
        ds.to_parquet(append_extension(output_path, "parquet"))


def main():
    """
    Entry point when format.py is executed from the command line.
    """
    args = get_args()
    ds = load_dataset(args.input_type, data_files={"train": args.input})['train']
    formatter = DatasetConverter()

    if args.output_chat_system_prompt and args.output_format != "chat":
        raise Exception("Parameter --output-chat-system-prompt can only be used with --output-format chat")

    format_params = {}
    if args.output_chat_system_prompt:
        format_params['system_prompt'] = args.output_chat_system_prompt

    formatter.convert(ds=ds, format=args.output_format, output_path=args.output, output_type=args.output_type, params=format_params)

if __name__ == "__main__":
    main()
diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/raft/raft.py
index da39e08df..598e10f89 100644
--- a/recipes/use_cases/end2end-recipes/raft/raft.py
+++ b/recipes/use_cases/end2end-recipes/raft/raft.py
@@ -87,7 +87,7 @@ def parse_arguments():
     api_config["api_key"] = os.environ["API_KEY"]
     logging.info(f"Configuration loaded.
Generating {args.questions_per_chunk} question per chunk using model '{args.model}'.") logging.info(f"Chunk size: {args.chunk_size}.") - logging.info(f"num_distract_docs: {api_config['num_distract_docs']}, orcale_p: {api_config['orcale_p']}") + logging.info(f"num_distract_docs: {api_config['num_distract_docs']}, oracle_p: {api_config['oracle_p']}") logging.info(f"Will use endpoint_url: {args.endpoint_url}.") logging.info(f"Output will be written to {args.output}.") main(api_config) diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/raft/raft.yaml index 13ec31595..71b03f3e2 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft.yaml +++ b/recipes/use_cases/end2end-recipes/raft/raft.yaml @@ -48,4 +48,4 @@ questions_per_chunk: 5 num_distract_docs: 5 # number of distracting documents to add to each chunk -orcale_p: 0.8 # probability of related documents to be added to each chunk +oracle_p: 0.8 # probability of related documents to be added to each chunk diff --git a/src/llama_recipes/configs/peft.py b/src/llama_recipes/configs/peft.py index 133cfccf9..7140e025d 100644 --- a/src/llama_recipes/configs/peft.py +++ b/src/llama_recipes/configs/peft.py @@ -8,7 +8,7 @@ class lora_config: r: int=8 lora_alpha: int=32 - target_modules: List[str] = field(default_factory=lambda: ["q_proj", "k_proj", "v_proj", "o_proj","gate_proj", "up_proj", "down_proj"]) + target_modules: List[str] = field(default_factory=lambda: ["q_proj", "v_proj"]) bias= "none" task_type: str= "CAUSAL_LM" lora_dropout: float=0.05 From a65e56c67c2a39bf193308048d0be4e1ae9a17a7 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Wed, 26 Jun 2024 10:06:05 -0700 Subject: [PATCH 29/35] added experiment results to README --- .../finetuning/datasets/chatbot_dataset.py | 38 ---- recipes/finetuning/datasets/raft_dataset.py | 13 +- .../use_cases/end2end-recipes/raft/README.md | 201 +++++++++++------ .../use_cases/end2end-recipes/raft/chatbot.md | 207 ++++++++++++++++++ .../raft/data/llama_website0613 | 103 --------- .../end2end-recipes/raft/data_urls.xml | 164 -------------- .../raft/images/Answers_Precision.png | Bin 0 -> 364427 bytes .../raft/images/LLM_score_comparison.png | Bin 0 -> 290356 bytes .../raft/images/Num_of_refusal_comparison.png | Bin 0 -> 180982 bytes .../end2end-recipes/raft/images/RAFT.png | Bin 0 -> 541984 bytes .../use_cases/end2end-recipes/raft/raft.py | 10 +- .../use_cases/end2end-recipes/raft/raft.yaml | 4 +- .../end2end-recipes/raft/raft_eval.py | 65 ++---- .../end2end-recipes/raft/raft_utils.py | 106 +++++---- 14 files changed, 433 insertions(+), 478 deletions(-) delete mode 100644 recipes/finetuning/datasets/chatbot_dataset.py create mode 100644 recipes/use_cases/end2end-recipes/raft/chatbot.md delete mode 100644 recipes/use_cases/end2end-recipes/raft/data/llama_website0613 delete mode 100644 recipes/use_cases/end2end-recipes/raft/data_urls.xml create mode 100644 recipes/use_cases/end2end-recipes/raft/images/Answers_Precision.png create mode 100644 recipes/use_cases/end2end-recipes/raft/images/LLM_score_comparison.png create mode 100644 recipes/use_cases/end2end-recipes/raft/images/Num_of_refusal_comparison.png create mode 100644 recipes/use_cases/end2end-recipes/raft/images/RAFT.png diff --git a/recipes/finetuning/datasets/chatbot_dataset.py b/recipes/finetuning/datasets/chatbot_dataset.py deleted file mode 100644 index 9de06565c..000000000 --- a/recipes/finetuning/datasets/chatbot_dataset.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. 
-# This software may be used and distributed according to the terms of the Llama 3 Community License Agreement.
-
-
-import copy
-import datasets
-from datasets import Dataset, load_dataset, DatasetDict
-import itertools
-
-
-B_INST, E_INST = "[INST]", "[/INST]"
-
-def tokenize_dialog(q_a_pair, tokenizer):
-    question, answer = q_a_pair["Question"], q_a_pair["Answer"]
-    prompt_tokens = tokenizer.encode(f"{tokenizer.bos_token}{B_INST} {(question).strip()} {E_INST}", add_special_tokens=False)
-    answer_tokens = tokenizer.encode(f"{answer.strip()} {tokenizer.eos_token}", add_special_tokens=False)
-    sample = {
-        "input_ids": prompt_tokens + answer_tokens,
-        "attention_mask" : [1] * (len(prompt_tokens) + len(answer_tokens)),
-        "labels": [-100] * len(prompt_tokens) + answer_tokens,
-    }
-
-    return sample
-
-
-def get_custom_dataset(dataset_config, tokenizer, split, split_ratio=0.8):
-    dataset_dict = load_dataset('json', data_files=dataset_config.data_path)
-    dataset = dataset_dict['train']
-    dataset = dataset.train_test_split(test_size=1-split_ratio, shuffle=True, seed=42)
-
-    dataset = dataset[split].map(lambda sample: {
-        "Question": sample["Question"],
-        "Answer": sample["Answer"],
-    },
-    batched=True,
-    )
-    dataset = dataset.map(lambda x: tokenize_dialog(x, tokenizer))
-    return dataset
diff --git a/recipes/finetuning/datasets/raft_dataset.py b/recipes/finetuning/datasets/raft_dataset.py
index ed8aaa9d7..1de3c1ed8 100644
--- a/recipes/finetuning/datasets/raft_dataset.py
+++ b/recipes/finetuning/datasets/raft_dataset.py
@@ -50,12 +50,17 @@ def tokenize_dialog(dialog, tokenizer):
         return dict(combined_tokens, attention_mask=[1]*len(combined_tokens["input_ids"]))
 
 def raft_tokenize(q_a_pair, tokenizer):
-    end_tag = "</DOCUMENT>\n"
+    end_tag = "</DOCUMENT>"
     # find the last end_tag in the instruction, the rest is the question
-    index = q_a_pair["instruction"].rindex("</DOCUMENT>\n")+len(end_tag)
-    question = q_a_pair["instruction"][index:]
+    try:
+        index = q_a_pair["instruction"].rindex(end_tag)+len(end_tag)
+    except ValueError:
+        print(q_a_pair["instruction"])
+        raise Exception("The instruction does not contain the end tag </DOCUMENT>")
+    # all the lines after end_tag are the question
+    question = q_a_pair["instruction"][index:].strip()
     # all the lines before end_tag are the context
-    documents = q_a_pair["instruction"][:index]
+    documents = q_a_pair["instruction"][:index].strip()
     # output is the label
     answer = q_a_pair["output"]
     system_prompt = "You are a helpful chatbot who can provide an answer to every questions from the user given a relevant context."
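The raft_dataset.py change above splits each RAFT "instruction" into its context documents and the trailing question by searching for the last closing document tag. A minimal, self-contained sketch of that parsing (with a made-up sample string) looks like this:

```python
# Sketch of the parsing performed by raft_tokenize; the sample string is hypothetical.
instruction = (
    "<DOCUMENT> distractor text 1 </DOCUMENT>\n"
    "<DOCUMENT> oracle text </DOCUMENT>\n"
    "What is the context length supported by Llama 3 models?"
)

end_tag = "</DOCUMENT>"
try:
    # Everything up to (and including) the last end tag is context; the rest is the question.
    index = instruction.rindex(end_tag) + len(end_tag)
except ValueError:
    raise Exception("The instruction does not contain the end tag </DOCUMENT>")

documents = instruction[:index].strip()
question = instruction[index:].strip()
print(question)  # -> What is the context length supported by Llama 3 models?
```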
diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md
index 5be68b9a3..b5a601980 100644
--- a/recipes/use_cases/end2end-recipes/raft/README.md
+++ b/recipes/use_cases/end2end-recipes/raft/README.md
@@ -1,72 +1,89 @@
-## Introduction:
-As our Meta llama models become more popular, we noticed that there is a great demand to apply our Meta Llama models toward a custom domain to better serve the customers in that domain.
-For example, a common scenario can be that a company has all the related documents in plain text for its custom domain and want to build chatbot that can help answer questions a client
-could have.
-
-Inspired by this demand, we want to explore the possibilty of building a Github chatbot for llama-recipes based on Meta Llama models,
-as a demo in this tutorial. Even though our Meta Llama 3 70B Insturct model can be a great candidate, as it already has a excellent reasoning and knowledge, it is relatively costly to host in production.
-
-Therefore, we want to explore the possibile ways to get a 8B-Instruct Meta Llama model based chatbot that can achieve the similar level of accuarcy of Meta Llama 70B-Instruct model based chatbot.
-to save the inference cost.
-## Understand the problems
-To build a Github bot, we need to first understand what kind of questions that has been frequently asked. In our Github issues, we found out that the issues are not confined within Llama
-model itself (eg, "where to download models"), but also include questions like quantization, training, inference problems which may related to Pytorch. Go through those questions can help us have a better understanding of what kind of data we need to collect.
-Even though ideally we should included as many related documents as possible, such as Huggingface documentation, in this tutorial we will only include the Llama documents and Pytorch documents for demo purposes.
+## Introduction:
+As our Meta Llama models become more popular, we noticed that there is a great demand to apply our Meta Llama models toward a custom domain to better serve the customers in that domain. For example, a common scenario can be that a company already has all the related documents in plain text for its custom domain and wants to build a chatbot that can help answer questions for its clients.
+
+Inspired by this demand, we want to explore the possibility of building a Llama chatbot for our Llama users using Meta Llama models, as a demo in this tutorial. Even though our Meta Llama 3 70B Instruct model can be a great candidate, as it already has excellent reasoning and knowledge, it is relatively costly to host in production. Therefore, we want to produce a Meta Llama 8B Instruct model based chatbot that can achieve a similar level of accuracy to a Meta Llama 70B Instruct model based chatbot, to save the inference cost.

## Data Collections
-Once we determine the domains we want to collect data from, we can start to think about what kind of data we want to collect and how to get that data. There are many llama related online conversation and disscusions in Reddit or Stack Overflow,
-but the data cleaning will be hard, eg. filtering out unfaithful information.
-
-In this tutorial, we want to use webpages in [Getting started with Meta Llama](https://llama.meta.com/get-started/)
-along with webpages in [Pytorch blogs](https://pytorch.org/blog/) and [Pytorch tutorials](https://pytorch.org/tutorials/).
-
-We can either use local folder or web crawl to get the data. For local folder option, we can download all the desired docs in PDF, Text or Markdown format to "data" folder.
-Alternatively, we can create a sitemap xml, similar to the data_urls.xml example, and use Langchain SitemapLoader to get all the text in the webpages.
-
+To build a Llama bot, we need to first collect the text data. Even though ideally we should include as many Llama related web documents as possible, in this tutorial we will only include the official documents for demo purposes. For example, we can use all the raw text from official web pages listed in [Getting started with Meta Llama](https://llama.meta.com/get-started/), but we do not want to include our FAQ page, as some of the eval questions will come from there.
+
+We can either use a local folder or a web crawl to get the text data.
For the local folder option, we can download all the desired docs in PDF, Text or Markdown format to the "data" folder, specified in the [raft.yaml](./raft.yaml).
+
+Alternatively, we can create a sitemap xml, similar to the following example, and use Langchain SitemapLoader to get all the text in the web pages.
+
+```xml
+<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
+<url>
+<loc>http://llama.meta.com/responsible-use-guide/</loc>
+</url>
+<url>
+<loc>http://llama.meta.com/Llama2/</loc>
+</url>
+<url>
+<loc>http://llama.meta.com/Llama2/license/</loc>
+</url>
+......
+<url>
+<loc>http://llama.meta.com/Llama2/use-policy/</loc>
+</url>
+<url>
+<loc>http://llama.meta.com/code-Llama/</loc>
+</url>
+<url>
+<loc>http://llama.meta.com/Llama3/</loc>
+</url>
+</urlset>
+```
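+For the web crawl option, a minimal sketch of loading the page text with Langchain's SitemapLoader could look like the following (this assumes the `langchain-community` package is installed and that the sitemap above is saved locally as `sitemap.xml`; the file name is just an assumption):
+
+```python
+# Load the raw text of every page listed in a local sitemap file.
+from langchain_community.document_loaders.sitemap import SitemapLoader
+
+loader = SitemapLoader(web_path="sitemap.xml", is_local=True)
+documents = loader.load()  # one Document per crawled page
+print(len(documents), "pages loaded")
+print(documents[0].page_content[:200])  # preview the first page's text
+```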
Here, "-u" sets the endpoint url to query and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk. To use cloud API , please change the endpoint url to the cloud provider and set the api key using "-k". Here since we want to query our local hosted VLLM server, we can use following commend: +Once the server is ready, we can query the server given the port number 8001 in another terminal. Here, "-u" sets the endpoint url to query and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk. To use cloud API , please change the endpoint url to the cloud provider and set the api key using "-k". Here since we want to query our local hosted VLLM server, we can use following command: ```bash -python raft.py -u "http://localhost:8001/v1" -k "EMPTY" -t 5 +python raft.py -u "http://localhost:8001/v1" -k "EMPTY" -t 4 ``` For cloud API key, we can also set it using system environment variables, such as ```bash export API_KEY="THE_API_KEY_HERE" -python raft.py -u "CLOUD_API_URL" -t 5 +python raft.py -u "CLOUD_API_URL" -t 4 ``` **NOTE** When using cloud API, you need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. -This python script will read all the documents either from local or web, and split the data into text chunks of 1000 charaters (defined by "chunk_size") using RecursiveCharacterTextSplitter. -Then we apply the question_prompt_template, defined in "raft.yaml", to each chunk, to get question list out of the text chunk. +This [raft.py](./raft.py) will read all the documents either from local or web depending on the settings, and split the data into text chunks of 1000 characters (defined by "chunk_size") using RecursiveCharacterTextSplitter. -We now have a related context as text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style question using Llama 3 70B Instruct as well. -Once we have the COT answers, we can start to make a dataset that where each sample contains "instruction" section includes some unrelated chunks called distractor and has a probability P to include the related chunk. +Then we apply the question_prompt_template, defined in [raft.yaml](./raft.yaml), to each chunk, to get question list out of the text chunk. + +We now have a related context as text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style answer using Meta Llama 3 70B Instruct as well. + +Once we have the COT answers, we can start to make a dataset where each sample contains "instruction" section that includes some unrelated chunks called distractor (by default we add 4 distractors). In the original RAFT method, there is a oracle probility P (by default 80%) that a related document will be included. This means that there is 1-P (by defualt 20%) chances that no related documents are provided, and the RAFT model should still try to predict COT_answer label, as the blog stated that "By removing the oracle documents in some instances of the training data, we are compelling the model to memorize domain-knowledge.". 
+
+In this tutorial we made an important modification by adding some additional refusal examples (by default this refusal probability is 5%): when the related documents are not present, we make the COT_answer label be "Sorry, I don't know the answer to this question because related documents are not found. Please try again.". Our hypothesis is that this will increase answer precision and reduce chatbot hallucination. In a real world production scenario, we prefer that the chatbot refuses to answer when not enough context is provided, so that we can detect this refusal signal and mitigate the risk of producing wrong or misleading answers, e.g. we can ask a human agent to take over the conversation to better serve customers.

Here is a RAFT format json example from our saved raft.jsonl file. We have a "question" section for the generated question, a "cot_answer" section for the generated COT answers, where the final answer will be added after the "<ANSWER>" token, and we also created an "instruction" section
-that has all the documents included (each document splited by <\/DOCUMENT> tag) and finally the question appended in the very end. This "instruction"
-section will be the input during the training, and the "cot_answer" will be the output label that the loss will be calculated on.
+that has all the documents included (each document wrapped in <DOCUMENT> </DOCUMENT> tags) and finally the generated question appended at the very end. This "instruction" section will be the input during the fine-tuning, and the "cot_answer" will be the output label that the loss will be calculated on.

```python
{
@@ -81,7 +98,6 @@ section will be the input during the training, and the "cot_answer" will be the
   "We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products. Download the model Explore more on Code Llama Discover more about Code Llama here \u2014 visit our resources, ranging from our research paper, getting started guide and more. Code Llama GitHub repository Research paper Download the model Getting started guide Meta Llama 3 Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Get Started Experience Llama 3 on Meta AI Experience Llama 3 with Meta AI We\u2019ve integrated Llama 3 into Meta AI, our intelligent assistant, that expands the ways people can get things done, create and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas. Experience Llama 3 on Meta AI Enhanced performance Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following.
Build the future of AI with Llama 3. Download Llama 3 Getting Started Guide With each Meta Llama request, you will receive: Meta Llama Guard 2 Getting started guide Responsible Use Guide Acceptable use policy Model card Community license agreement Benchmarks Llama 3 models take data and scale to new heights. It\u2019s been trained on our two recently announced custom-built 24K GPU clusters on over 15T token of data \u2013 a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2. Model card Trust & safety A comprehensive approach to responsibility With the release of Llama 3, we\u2019ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG , we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience. Meta Llama Guard 2 Explore more on Meta Llama 3 Introducing Meta Llama 3: The most capable openly available LLM to date Read the blog Meet Your New Assistant: Meta AI, Built With Llama 3 Learn more Meta Llama 3 repository View repository Model card Explore Meta Llama 3 License META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 \u201c Agreement \u201d means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. \u201c Documentation \u201d means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https:\/\/llama.meta.com\/get-started\/ .", "DISTRACT_DOCS 3" "DISTRACT_DOCS 4" - "DISTRACT_DOCS 5" ] ], "title":[ @@ -91,72 +107,127 @@ section will be the input during the training, and the "cot_answer" will be the "placeholder_title", "placeholder_title", "placeholder_title", - "placeholder_title" ] ] }, "oracle_context":"We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products. Download the model Explore more on Code Llama Discover more about Code Llama here \u2014 visit our resources, ranging from our research paper, getting started guide and more. Code Llama GitHub repository Research paper Download the model Getting started guide Meta Llama 3 Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Build the future of AI with Meta Llama 3 Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications Get Started Experience Llama 3 on Meta AI Experience Llama 3 with Meta AI We\u2019ve integrated Llama 3 into Meta AI, our intelligent assistant, that expands the ways people can get things done, create and connect with Meta AI. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents, or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas. 
Experience Llama 3 on Meta AI Enhanced performance Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following. Build the future of AI with Llama 3. Download Llama 3 Getting Started Guide With each Meta Llama request, you will receive: Meta Llama Guard 2 Getting started guide Responsible Use Guide Acceptable use policy Model card Community license agreement Benchmarks Llama 3 models take data and scale to new heights. It\u2019s been trained on our two recently announced custom-built 24K GPU clusters on over 15T token of data \u2013 a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2. Model card Trust & safety A comprehensive approach to responsibility With the release of Llama 3, we\u2019ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG , we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience. Meta Llama Guard 2 Explore more on Meta Llama 3 Introducing Meta Llama 3: The most capable openly available LLM to date Read the blog Meet Your New Assistant: Meta AI, Built With Llama 3 Learn more Meta Llama 3 repository View repository Model card Explore Meta Llama 3 License META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 \u201c Agreement \u201d means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. \u201c Documentation \u201d means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https:\/\/llama.meta.com\/get-started\/ .", "cot_answer":"Here's the step-by-step reasoning to answer the question:\n\n1. The question asks about the context length supported by Llama 3 models.\n2. In the context, we need to find the relevant information about Llama 3 models and their context length.\n3. The relevant sentence is: \"This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2.\"\n##begin_quote## This results in the most capable Llama model yet, which supports a 8K context length that doubles the capacity of Llama 2. ##end_quote##\n4. From this sentence, we can see that Llama 3 models support a context length of 8K.\n\n: 8K", - "instruction":" DISTRACT_DOCS 1 <\/DOCUMENT>... DISTRACT_DOCS 5 <\/DOCUMENT>\nWhat is the context length supported by Llama 3 models?" + "instruction":" DISTRACT_DOCS 1 <\/DOCUMENT>... 
DISTRACT_DOCS 4 <\/DOCUMENT>\nWhat is the context length supported by Llama 3 models?"
}
```
-To create a evalset, ideally we should use human-annotation to create the question and answer pairs to make sure the the questions are related and answers are fully correct.
-However, for demo purpose, we will use a subset of training json as the eval set. We can shuffle and random select 100 examples out of RAFT dataset. For evaluation purpose, we only need to keep the "question" section,
-and the final answer section, marked by tag in "cot_answer". Then we can manually check each example and remove those low-quaility examples, where the questions
-are not related Llama or can not be infer without correct context. After the manual check, we keep 72 question and answer pairs as the eval_llama.json.
+To create an eval set, ideally we should use human annotation to create the question and answer pairs, to make sure the questions are relevant and the answers are fully correct.
+
+However, such human annotation is costly and time-consuming. For demo purposes, we will use a subset of the training json and our FAQ web page as the eval set. We can shuffle and randomly select 100 examples out of the Llama RAFT dataset. For evaluation purposes, we only need to keep the "question" section and the final answer section, marked by the "<ANSWER>" tag in "cot_answer".
+
+Then we can manually check each example and only pick the good ones. We want to make sure the questions are general enough to be used to query a web search engine and are related to Llama. Moreover, we also used some QA pairs, with some modification, from our FAQ page. Together, we created 72 question and answer pairs as the eval set, called eval_llama.json.
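+As a rough sketch of this step, the following hypothetical helper (it is not part of the recipe) samples candidates from raft.jsonl and keeps only the question and the final answer after the "<ANSWER>" tag; the manual quality check still happens afterwards:
+```python
+import json
+import random
+
+def make_eval_candidates(raft_path, out_path, num_examples=100, seed=42):
+    # Load all RAFT samples and randomly pick candidates for the eval set.
+    with open(raft_path) as f:
+        samples = [json.loads(line) for line in f]
+    random.seed(seed)
+    candidates = random.sample(samples, num_examples)
+    pairs = []
+    for s in candidates:
+        # Keep only the final answer, i.e. the text after the "<ANSWER>" tag.
+        answer = s["cot_answer"].split("<ANSWER>")[-1].lstrip(": ").strip()
+        pairs.append({"question": s["question"], "answer": answer})
+    with open(out_path, "w") as f:
+        json.dump(pairs, f, indent=2)
+```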
-### Step 3: Run the fune-tuning
-Once the RAFT dataset is ready in a json format, we can start the fine-tuning steps. Unfornately we found out that the LORA method did not produce a good result so we have to use the full fine-tuning using the following commands in the llama-recipe main folder:
+## Fine-tuning steps
+
+Once the RAFT dataset is ready in a json format, we can start the fine-tuning steps. Unfortunately, we found that the LoRA method did not produce good results, so we have to use the full fine-tuning method. We can use the following commands as an example in the Llama-recipes main folder:
```bash
-torchrun --nnodes 1 --nproc_per_node 4 recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 1 --model_name meta-llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder PATH_TO_ROOT_FOLDER --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path 'PATH_TO_RAFT_JSON'
+export PATH_TO_ROOT_FOLDER=./raft-8b
+export PATH_TO_RAFT_JSON=recipes/use_cases/end2end-recipes/raft/output/raft.jsonl
+torchrun --nnodes 1 --nproc_per_node 4 recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 1 --model_name meta-llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder $PATH_TO_ROOT_FOLDER --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path $PATH_TO_RAFT_JSON
```
-Then convert the FSDP checkpoint to HuggingFace checkpoint using the following command:
+For more details about multi-GPU finetuning, please check the [multigpu_finetuning.md](../../../finetuning/multigpu_finetuning.md) in the finetuning recipe.
-```bash
-python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path PATH_TO_ROOT_FOLDER --consolidated_model_path PATH_TO_ROOT_FOLDER/fine-tuned-meta-llama --HF_model_path_or_name PATH_TO_ROOT_FOLDER
+Then we need to convert the FSDP checkpoint to a HuggingFace checkpoint using the following command:
```bash
+python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path "$PATH_TO_ROOT_FOLDER/fine-tuned-meta-llama/Meta-Llama-3-8B-Instruct" --consolidated_model_path "$PATH_TO_ROOT_FOLDER"
```
-For more details, please check the readme in the finetuning recipe.
+For more details about FSDP to HuggingFace checkpoint conversion, please check the [readme](../../../inference/local_inference/README.md) in the inference/local_inference recipe.
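+Before moving on to evaluation, it may help to see how the RAFT samples feed the trainer. Here is a minimal sketch of the encoding idea behind such a custom dataset: the "instruction" text becomes the prompt and the loss is computed only on the "cot_answer" tokens, by masking the prompt labels with -100. This mirrors the general approach of the custom raft_dataset.py, not its exact code:
+```python
+def encode_raft_sample(sample, tokenizer, max_len=8192):
+    # Tokenize prompt ("instruction") and target ("cot_answer") separately.
+    prompt_ids = tokenizer.encode(sample["instruction"], add_special_tokens=False)
+    answer_ids = tokenizer.encode(sample["cot_answer"], add_special_tokens=False)
+    answer_ids = answer_ids + [tokenizer.eos_token_id]
+    input_ids = (prompt_ids + answer_ids)[:max_len]
+    # Mask prompt positions with -100 so the loss is only on the answer tokens.
+    labels = ([-100] * len(prompt_ids) + answer_ids)[:max_len]
+    return {"input_ids": input_ids,
+            "labels": labels,
+            "attention_mask": [1] * len(input_ids)}
+```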
-### Step 4: Evaluating with local inference
+## Evaluation steps
-Once we have the fine-tuned model, we now need to evaluate it to understand its performance. We can use either traditional eval method, eg. calcucate exact match rate or rouge score.
-In this tutorial, we can also use LLM to act like a judge to score model generated .
+
+Once we have the RAFT model, we now need to evaluate it to understand its performance. In this tutorial, we not only use traditional eval methods, e.g. calculating the exact match rate or the ROUGE score, but also use an LLM to act as a judge and score the model-generated answers.
+
+We need to launch a VLLM server to host our converted model from PATH_TO_ROOT_FOLDER. To make things easier, we can rename the model folder to raft-8b.
+```bash
+CUDA_VISIBLE_DEVICES=1 python -m vllm.entrypoints.openai.api_server --model raft-8b --port 8000 --disable-log-requests
+```
+Similarly, if we want to get the 8B Instruct baseline, we can launch an 8B model VLLM server instead:
```bash
-CUDA_VISIBLE_DEVICES=4 python -m vllm.entrypoints.openai.api_server --model raft-8b --port 8000 --disable-log-requests
+CUDA_VISIBLE_DEVICES=1 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --port 8000 --disable-log-requests
```
**NOTE** If encounter import error: "ImportError: punica LoRA kernels could not be imported.", this means that VLLM must be installed with punica LoRA kernels to support LoRA adapter, please use following commands to install the VLLM from source.
+
+On another terminal, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answers from the RAFT 8B model with the ground truth and get a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the following command; just make sure the port is not already in use:
```bash
-git clone https://github.com/vllm-project/vllm.git
-cd vllm
-VLLM_INSTALL_PUNICA_KERNELS=1 pip install -e .
+CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001
```
-On another terminal, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answer from the fine-tuned 8B model with the groud truth and get a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with command, just make sure the port is not been used:
```bash
-CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8002
```
+Then, once our raft-8b VLLM server is running, we can pass the endpoint urls to the eval script to evaluate our RAFT model:
```bash
-CUDA_VISIBLE_DEVICES=1 python raft_eval.py -m raft-8b -v 8000 -j 8001 -r 5
+CUDA_VISIBLE_DEVICES=4 python raft_eval.py -m raft-8b -u "http://localhost:8000/v1" -j "http://localhost:8001/v1" -r 5
```
+To evaluate the 8B baseline, once our 8B VLLM server is running, we can use:
```bash
+CUDA_VISIBLE_DEVICES=4 python raft_eval.py -m meta-llama/Meta-Llama-3-8B-Instruct -u "http://localhost:8000/v1" -j "http://localhost:8001/v1" -r 5
```
**NOTE** Please make sure the folder name in --model matches the "model_name" section in raft_eval_config.yaml. Otherwise VLLM will raise a model-not-found error. By default, the RAFT model is called "raft-8b". Here "-u" specifies the RAFT model endpoint url, "-j" specifies the judge model endpoint url, and "-r" defines how many top_k documents the RAG should retrieve.
+
+This [raft_eval.py](./raft_eval.py) will load the questions from the eval set and generate answers from the models and from the models+RAG setups. It will compare the generated answers with the ground truth to compute the eval metrics, such as the ROUGE score or the LLM_as_judge score, then save those metrics and eval details to logs.
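+To make the LLM_as_judge idea concrete, below is a minimal sketch of scoring one generated answer against the ground truth through the judge server's OpenAI-compatible endpoint. The grading prompt and the YES/NO protocol are illustrative assumptions, not the exact logic of raft_eval.py:
+```python
+from openai import OpenAI
+
+# The judge is the Meta Llama 3 70B Instruct VLLM server started on port 8001.
+judge = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")
+
+def judge_answer(question: str, model_answer: str, ground_truth: str) -> bool:
+    # Hypothetical grading prompt: ask the judge for a YES/NO verdict.
+    prompt = (
+        "Compare the model answer with the ground truth and reply with only "
+        "YES if the model answer is correct, otherwise reply with only NO.\n"
+        f"Question: {question}\n"
+        f"Ground truth: {ground_truth}\n"
+        f"Model answer: {model_answer}"
+    )
+    resp = judge.chat.completions.create(
+        model="meta-llama/Meta-Llama-3-70B-Instruct",
+        messages=[{"role": "user", "content": prompt}],
+        temperature=0.0,
+    )
+    return resp.choices[0].message.content.strip().upper().startswith("YES")
+```
+The LLM_as_judge score is then simply the fraction of eval questions judged YES.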
+## Experiment results
+
+During our experiments, we did not get good results from just using the Llama website data. We believe that our initial data from the Llama website is not enough, as it only has 327K characters and generated 1980+ RAFT examples. To increase our RAFT examples, we created another PyTorch RAFT dataset with the text from official web pages under [Pytorch blogs](https://pytorch.org/blog/) and [Pytorch tutorials](https://pytorch.org/tutorials/). This PyTorch RAFT dataset has 20K RAFT examples generated from 4.7 million characters. Together, we have an all_data dataset that combines both the Llama RAFT dataset and the PyTorch RAFT dataset. Then we fine-tuned the 8B model on those datasets separately for 1 epoch with a learning rate of 1e-5 to get 3 RAFT models, namely the Llama_only model, the pytorch_only model and the all_data model. We used the Llama website raw text as our RAG knowledge base, and the RAG document chunk_size is the same as the RAFT chunk_size, 1000 characters.
+
+We tested 5 models + RAG: the all_data RAFT model, the Llama_only RAFT model, the pytorch_only RAFT model, the 8B baseline and the 70B baseline, with RAG document top_k retrieval parameters of 3, 5 and 7. We used a Meta Llama 70B Instruct model as the judge to score our models' generated answers against the ground truth in our eval set.
+Here are the LLM_as_judge results:
+![RAFT LLM_score comparison](images/LLM_score_comparison.png)
-### Step 5: Testing with local inference
+From the results, we noticed that the RAFT models perform very similarly to the 8B baseline and noticeably worse than the 70B baseline when the context documents are limited (top_k <= 5), but the RAFT models perform much better when top_k = 7; notably, the all_data 8B model already outperforms the 70B baseline (76.06% vs 74.65%).
-Once we believe our fine-tuned model has passed our evaluation and we can deploy it locally to play with it by manually asking questions. We can do this by
+Taking a closer look at the number of refusal examples (where the model says "I do not know"), the all_data model is more cautious and tends to refuse to answer, whereas the Llama_only model did not learn to refuse at all, because the Llama_only dataset only has 1980+ examples.
+
+![Num of refusal comparison](images/Num_of_refusal_comparison.png)
+
+We created a graph that shows the precision of our models' answers, i.e. when our RAFT model decides to answer, what is the likelihood of producing correct answers? It is calculated by $\frac{LLMScore}{1-\frac{numRefusal}{totalQA}}$.
+
+Note that during our tests, the 8B and 70B baselines never refused to answer, so the precision of those models is the same as their LLM_score. We noticed that our RAFT models tend to refuse to answer when the provided documents are limited (top_k < 5), but when they do generate an answer, the likelihood of it being correct is higher. Specifically, when top_k = 7, the all_data RAFT model has an 82.97% likelihood of producing a correct answer when it decides to answer, far better than the 70B baseline's 74.65%.
+
+![Answers Precision](images/Answers_Precision.png)
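+As a hypothetical worked example of this formula (the numbers are illustrative, not from our runs): a model with an LLM_score of 70% that refuses 10 of the 72 eval questions would have a precision of $\frac{0.70}{1-\frac{10}{72}} \approx 81.3\%$.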
+Here are some examples where our all_data RAFT model can answer correctly while the 70B baseline failed:
+```
+Comparing interested question: What tokenizer is used as the basis for the special tokens in Meta Llama
+ground_truth: tiktoken
+True all_data_RAG_answers: <ANSWER>: The tokenizer used as the basis for the special tokens in Meta Llama is tiktoken.
+False 70B_RAG_answers: <ANSWER>: The tokenizer used as the basis for the special tokens in Meta Llama is SentencePiece.
+```
+
+```
+Comparing interested question: What is the license under which the Llama Guard model and its weights are released?
+ground_truth: The license is the same as Llama 3, which can be found in the LICENSE file and is accompanied by the Acceptable Use Policy.
+True raft-8b_RAG_answers: <ANSWER>: The license under which the Llama Guard model and its weights are released is the same as Llama 3, and the [LICENSE](../LICENSE) file contains more information about the license.
+False 70B_RAG_answers: <ANSWER>: The Llama Guard model and its weights are licensed under the Llama 2 Community license.
+```
+
+Some learnings from these experiments:
+1. A few thousand RAFT examples did not yield great results; from our experiments, more than 10K RAFT examples are needed.
+2. The LLM_as_judge is not always reliable; we noticed that some answers were scored incorrectly.
+3. The chunk_size for the RAFT document chunks and the RAG document chunks should be the same.
+4. The RAFT method seems to help the LLM differentiate the related documents from the distractors, rather than forcing the LLM to memorize the training data, as we used the PyTorch data as additional data to help our Llama chatbot answer Llama questions. More research experiments will be needed to understand more about this.
+
+## Local inference steps
+
+Once we believe our RAFT model has passed our evaluation, we can deploy it locally to play with it by manually asking questions. We can do this by running:
```bash
-python recipes/inference/local_inference/inference.py --model_name meta-llama/Meta-Llama-3-8B-Instruct --peft_model chatbot-8b
+python recipes/inference/local_inference/inference.py --model_name raft-8b
```
+
+Lastly, special thanks to Tianjun Zhang, the first author of the RAFT paper, for working together with us on this tutorial and providing much guidance during our experiments.
diff --git a/recipes/use_cases/end2end-recipes/raft/chatbot.md b/recipes/use_cases/end2end-recipes/raft/chatbot.md
new file mode 100644
index 000000000..dd763416c
--- /dev/null
+++ b/recipes/use_cases/end2end-recipes/raft/chatbot.md
@@ -0,0 +1,207 @@
+## Introduction
+
+Large language models (LLMs) have emerged as groundbreaking tools, capable of understanding and generating human-like text. These models power many of today's advanced chatbots, providing more natural and engaging user experiences. But how do we create these intelligent systems?
+
+Here, we aim to make an FAQ model for Llama that is able to answer questions about Llama by fine-tuning the Meta Llama 3 8B Instruct model using existing official Llama documents.
+
+### Fine-tuning Process
+
+Fine-tuning the Meta Llama 3 8B Instruct model involves several key steps: Data Collection, Preprocessing, Fine-tuning and Evaluation.
+
+### LLM Generated datasets
+
+As chatbots are usually domain specific and based on public or proprietary data, one common way, inspired by the [self-instruct paper](https://arxiv.org/abs/2212.10560), is to use LLMs to assist in building the dataset from our data. For example, to build an FAQ model, we can use a powerful Meta Llama 3 70B model to process our documents and help us build question and answer pairs (we will showcase this here). Just keep in mind that most proprietary LLMs have a clause in their license that forbids using the output generated from the model to train another LLM. In this case we will fine-tune another Llama model with the help of Meta Llama 3 70B.
+
+Similarly, we will use the same LLM to evaluate the quality of the generated datasets and finally evaluate the outputs from the model.
+
+Given this context, here we want to highlight some of the best practices that need to be in place for data collection and preprocessing in general.
+
+### **Data Collection & Preprocessing:**
+
+Gathering a diverse and comprehensive dataset is crucial.
This dataset should include a wide range of topics and conversational styles to ensure the model can handle various subjects. A recent [study](https://arxiv.org/pdf/2305.11206.pdf) shows that the quality of data matters far more than the quantity. Here are some high-level thoughts on data collection and preprocessing along with best practices:
+
+**NOTE** Data collection and processing is very use-case specific; here we can only share best practices, and the details will be nuanced for each use case.
+
+- Source Identification: Identify the sources where your FAQs are coming from. This could include websites, customer service transcripts, emails, forums, and product manuals. Prioritize sources that reflect the real questions your users are asking.
+
+- Diversity and Coverage: Ensure your data covers a wide range of topics relevant to your domain. It's crucial to include variations in how questions are phrased to make your model robust to different wording.
+
+- Volume: The amount of data needed depends on the complexity of the task and the variability of the language in your domain. Generally, more data leads to a better-performing model, but aim for high-quality, relevant data.
+
+Here, we are going to use the [self-instruct](https://arxiv.org/abs/2212.10560) idea and use a Llama model to build our dataset; for details please check this [doc](./data_pipelines/REAME.md).
+
+**Things to keep in mind**
+
+- **Pretraining Data as the Foundation**: Pretraining data is crucial for developing foundational models, influencing both their strengths and potential weaknesses. Fine-tuning data refines specific model capabilities and, through instruction fine-tuning or alignment training, enhances general usability and safety.
+
+- **Quality Over Quantity**: More data doesn't necessarily mean better results. It's vital to select data carefully and perform manual inspections to ensure it aligns with your project's aims.
+
+- **Considerations for Dataset Selection**: Selecting a dataset requires considering various factors, including language and dialect coverage, topics, tasks, diversity, quality, and representation.
+
+- **Impact of Implicit Dataset Modifications**: Most datasets undergo implicit changes during selection, filtering, and formatting. These preprocessing steps can significantly affect model performance, so they should not be overlooked.
+
+- **Finetuning Data's Dual-Edged Sword**: Finetuning can improve or impair model capabilities. Make sure you know the nature of your data to make an informed selection.
+
+- **Navigating Dataset Limitations**: The perfect dataset for a specific task may not exist. Be mindful of the limitations when choosing from available resources, and understand the potential impact on your project.
+
+#### **Best Practices for FineTuning Data Preparation**
+
+- **Enhancing Understanding with Analysis Tools**: Utilizing tools for searching and analyzing data is crucial for developers to gain a deeper insight into their datasets. This understanding is key to predicting model behavior, a critical yet often overlooked phase in model development.
+
+- **The Impact of Data Cleaning and Filtering**: Data cleaning and filtering significantly influence model characteristics, yet there's no universal solution that fits every scenario. Our guidance includes filtering recommendations tailored to the specific applications and communities your model aims to serve.
+
+- **Data Mixing from Multiple Sources**: When training models with data from various sources or domains, the proportion of data from each domain (data mixing) can greatly affect downstream performance. It's a common strategy to prioritize "high-quality" data domains—those with content written by humans and subjected to an editing process, like Wikipedia and books. However, data mixing is an evolving field of research, with best practices still under development.
+
+- **Benefits of Removing Duplicate Data**: Eliminating duplicated data from your dataset can lessen unwanted memorization and enhance training efficiency.
+
+- **The Importance of Dataset Decontamination**: It's crucial to meticulously decontaminate training datasets by excluding data from evaluation benchmarks. This ensures the model's capabilities are accurately assessed.
+
+**Data Exploration and Analysis**
+
+- Gaining Insights through Dataset Exploration: Leveraging search and analysis tools to explore training datasets enables us to cultivate a refined understanding of the data's contents, which in turn influences the models. Direct interaction with the data often reveals complexities that are challenging to convey or that might not be present in the documentation.
+
+- Understanding Data Complexity: Data, especially text, encompasses a wide array of characteristics such as length distribution, topics, tones, formats, licensing, and diction. These elements are crucial for understanding the dataset but are not easily summarized without thorough examination.
+
+- Utilizing Available Tools: We encourage you to take advantage of the numerous tools at your disposal for searching and analyzing your training datasets, facilitating a deeper comprehension and more informed model development.
+
+**Tools**
+
+- [wimbd](https://github.com/allenai/wimbd) for data analysis.
+- TBD
+
+**Data Cleaning**
+
+Purpose of Filtering and Cleaning: The process of filtering and cleaning is essential for eliminating unnecessary data from your dataset. This not only boosts the efficiency of model training but also ensures the data exhibits preferred characteristics such as high informational value, coverage of target languages, low levels of toxicity, and minimal presence of personally identifiable information.
+
+Considering Trade-offs: We recommend carefully weighing the potential trade-offs associated with using certain filters, as they may impact the diversity of your data, for example by [removing minority individuals](https://arxiv.org/abs/2104.08758).
+
+**Tools**
+- [OpenRefine](https://github.com/OpenRefine/OpenRefine?tab=readme-ov-file) (formerly Google Refine): A standalone open-source desktop application for data cleanup and transformation to other formats. It's particularly good for working with messy data, including data format transformations and cleaning.
+
+- [FUN-Langid](https://github.com/google-research/url-nlp/tree/main/fun-langid), a simple, character 4-gram LangID classifier recognizing up to 1633 languages.
+
+- Dask: Similar to Pandas, Dask is designed for parallel computing and works efficiently with large datasets. It can be used for data cleaning, transformations, and more, leveraging multiple CPUs or distributed systems.
+
+**Data Deduplication**
+
+- **Data Deduplication importance**: Data deduplication is an important preprocessing step to eliminate duplicate documents, or segments within a document, from the dataset. This process helps in minimizing the model's chance of memorizing unwanted information, including generic text, copyrighted content, and personally identifiable details.
+
+- **Benefits of Removing Duplicates**: Aside from mitigating the risk of undesirable memorization, deduplication enhances training efficiency by decreasing the overall size of the dataset. This streamlined dataset contributes to a more effective and resource-efficient model training process.
+
+- **Assessing the Impact of Duplicates**: You need to carefully evaluate the influence of duplicated data on your specific model use case. Memorization may be beneficial for models designed for closed-book question answering or, similarly, chatbots.
+
+**Tools**
+
+- [thefuzz](https://github.com/seatgeek/thefuzz): It uses Levenshtein Distance to calculate the differences between sequences in a simple-to-use package.
+- [recordlinkage](https://github.com/J535D165/recordlinkage): It is a modular record linkage toolkit to link records in or between data sources.
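+As a quick illustration of how such a tool can be used, here is a minimal near-duplicate filter built on thefuzz; the helper function and the threshold of 90 are illustrative assumptions, not recommended settings:
+```python
+from thefuzz import fuzz
+
+def drop_near_duplicates(documents, threshold=90):
+    # Keep a document only if it is not too similar to any already-kept one.
+    # Note: this pairwise check is O(n^2); fine for small corpora, not web-scale data.
+    kept = []
+    for doc in documents:
+        if all(fuzz.token_set_ratio(doc, k) < threshold for k in kept):
+            kept.append(doc)
+    return kept
+```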
+**Data Decontamination**
+
+The process involves eliminating evaluation data from the training dataset. This crucial preprocessing step maintains the accuracy of model evaluation, guaranteeing that performance metrics are trustworthy and not skewed.
+
+**Tools**
+- TBD
+
+### **Llama FAQ Use-Case**
+
+1. **Data Collection**
+Here, we are going to use the self-instruct idea and use a Llama model to build our dataset; for details please check this [doc](./data_pipelines/REAME.md).
+
+2. **Data Formatting**
+
+For an FAQ model, you need to format your data in a way that's conducive to learning question-answer relationships. A common format is the question-answer (QA) pair:
+
+Question-Answer Pairing: Organize your data into pairs where each question is directly followed by its answer. This simple structure is highly effective for training models to understand and generate responses. For example:
+
+```python
+"question": "What is Llama 3?",
+"answer": "Llama 3 is a collection of pretrained and fine-tuned large language models ranging from 8 billion to 70 billion parameters, optimized for dialogue use cases."
+```
+
+3. **Preprocessing:** This step involves cleaning the data and preparing it for training. It might include removing irrelevant information, correcting errors, and splitting the data into training and evaluation sets.
+
+4. **Fine-Tuning:** Given that we have a selected pretrained model, in this case the Meta Llama 3 8B Instruct model, fine-tuning with more specific data can improve its performance on particular tasks, such as answering questions about Llama in this case.
+
+#### Building Dataset
+
+During the self-instruct process of generating Q&A pairs from documents, we realized that with our system prompt being
+```python
+You are a language model skilled in creating quiz questions.
+You will be provided with a document,
+read it and generate question and answer pairs
+that are most likely to be asked by a user of Llama who just wants to start,
+please make sure you follow those rules:
+1. Generate only {total_questions} question answer pairs.
+2. Generate in {language}.
+3. The questions can be answered based *solely* on the given passage.
+4. Avoid asking questions with similar meaning.
+5. Make the answer as concise as possible, it should be at most 60 words.
+6. Provide relevant links from the document to support the answer.
+7. Never use any abbreviation.
+8.
Return the result in json format with the template:
+  [
+    {{
+      "question": "your question A.",
+      "answer": "your answer to question A."
+    }},
+    {{
+      "question": "your question B.",
+      "answer": "your answer to question B."
+    }}
+  ]
+
+```
+
+the model tends to ignore providing the bigger picture in the questions. For example, below is a Q&A pair generated from reading the Code Llama paper. This is partially because, due to the context window size of the model, we have to divide the document into smaller chunks, so the model uses `described in the passage` or `according to the passage?` in the question instead of linking it back to Code Llama.
+
+```python
+{
+    "question": "What is the purpose of the transformation described in the passage?",
+    "answer": "The transformation is used to create documents with a prefix, middle part, and suffix for infilling training."
+},
+{
+    "question": "What is the focus of research in transformer-based language modeling, according to the passage?",
+    "answer": "The focus of research is on effective handling of long sequences, specifically extrapolation and reducing the quadratic complexity of attention passes."
+},
+```
+
+#### Data Insights
+
+We generated a dataset of almost 3600 Q&A pairs from some of the open source documents about Llama models, including the getting started guide from the Llama website, its FAQ, the Llama 3, Purple Llama and Code Llama papers, and the Llama-Recipes documentation.
+
+We have run some fine-tuning experiments on a single GPU using quantization, with different LoRA configs (all linear layers versus query and key projections only) and different numbers of epochs. Although the train and eval losses decrease, especially when using all linear layers in the LoRA config and training for 6 epochs, the results are still far from acceptable in real tests.
+
+Here is what the losses of the three runs look like.

+[Figure: Eval Loss and Train Loss curves for the three runs]

+
+##### Low Quality Dataset
+
+Below are some examples of real tests on the fine-tuned model with very poor results. It seems the fine-tuned model does not show any promising results with this dataset. Looking at the dataset, we can observe that the amount of data (Q&A pairs) for each concept, such as PyTorch FSDP and Llama-Recipes, is very limited, almost one pair per concept. This shows a lack of relevant training data. Recent research showed that having 2-3 examples for each taxonomy can yield promising results.
+

+[Figure: Poor test result examples 1 and 2]

diff --git a/recipes/use_cases/end2end-recipes/raft/data/llama_website0613 b/recipes/use_cases/end2end-recipes/raft/data/llama_website0613 deleted file mode 100644 index 33ab39ffb..000000000 --- a/recipes/use_cases/end2end-recipes/raft/data/llama_website0613 +++ /dev/null @@ -1,103 +0,0 @@ -Meta Llama Skip to main content Technology Getting Started Trust & Safety Community Resources Discover the possibilities with Meta Llama Democratizing access through an open platform featuring AI models, tools, and resources — enabling developers to shape the next wave of innovation. Licensed for both research and commercial use Get Started Llama models and tools Meta Llama 3 Build the future of AI with Meta Llama 3 Llama 3 is an accessible, open-source large language model (LLM) designed for developers, researchers, and businesses to build, experiment, and responsibly scale their generative AI ideas. Part of a foundational system, it serves as a bedrock for innovation in the global community. Learn more Meta Code Llama A state-of-the-art large language model for coding LLM capable of generating code, and natural language about code, from both code and natural language prompts. Meta Llama Guard Empowering developers, advancing safety, and building an open ecosystem We’re announcing Meta Llama Guard, an umbrella project featuring open trust and safety tools and evaluations meant to level the playing field for developers. Ready to start building with Meta Llama? Access our getting started guide and responsible use resources to get started. Get started guide Responsible use guide Prompt Engineering with Meta Llama Learn how to effectively use Llama models for prompt engineering with our free course on Deeplearning.AI, where you'll learn best practices and interact with the models through a simple API call. Partnerships Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Latest Llama updates Introducing Meta Llama 3: The most capable openly available LLM to date Read more Meet Your New Assistant: Meta AI, Built With Llama 3 CYBERSECEVAL 2: A Wide-Ranging Cybersecurity Evaluation Suite for Large Language Models Stay up-to-date Our latest updates delivered to your inbox Subscribe to our newsletter to keep up with the latest Llama updates, releases and more. Sign up ----------- -Use Policy Skip to main content Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at llama.meta.com/use-policy . Prohibited Uses We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material b. 
Human trafficking, exploitation, and sexual violence iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. iv. Sexual solicitation vi. Any other criminal activity c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d. Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. 
Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means: Reporting issues with the model: github.com/facebookresearch/llama Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback Reporting bugs and security concerns: facebook.com/whitehat/info Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: LlamaUseReport@meta.com ----------- -Responsible Use Guide for Llama 2 Skip to main content Responsibility Responsible Use Guide: your resource for building responsibly The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLM) in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide ----------- -Meta Llama 2 Skip to main content Large language model Llama 2: open source, free for research and commercial use We're unlocking the power of these large language models. Our latest version of Llama – Llama 2 – is now accessible to individuals, creators, researchers, and businesses so they can experiment, innovate, and scale their ideas responsibly. Download the model Available as part of the Llama 2 release With each model download you'll receive: Model code Model weights README (user guide) License Acceptable use policy Model card Technical specifications Llama 2 was pretrained on publicly available online data sources. The fine-tuned model, Llama Chat, leverages publicly available instruction datasets and over 1 million human annotations. Read the paper Inside the model Llama 2 models are trained on 2 trillion tokens and have double the context length of Llama 1. Llama Chat models have additionally been trained on over 1 million new human annotations. Benchmarks Llama 2 pretrained models are trained on 2 trillion tokens, and have double the context length than Llama 1. Its fine-tuned models have been trained on over 1 million human annotations. Safety and helpfulness Reinforcement learning from human feedback Llama Chat uses reinforcement learning from human feedback to ensure safety and helpfulness. Training Llama Chat: Llama 2 is pretrained using publicly available online data. An initial version of Llama Chat is then created through the use of supervised fine-tuning. Next, Llama Chat is iteratively refined using Reinforcement Learning from Human Feedback (RLHF), which includes rejection sampling and proximal policy optimization (PPO). Get Llama 2 now: complete the download form via the link below. By submitting the form, you agree to Meta's privacy policy Get started Our global partners and supporters We have a broad range of supporters around the world who believe in our open approach to today’s AI — companies that have given early feedback and are excited to build with Llama 2, cloud providers that will include the model as part of their offering to customers, researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do. Statement of support for Meta’s open approach to today’s AI “We support an open innovation approach to AI. Responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. 
Opening today’s Llama models will let everyone benefit from this technology.” We’re committed to building responsibly To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size. The Responsible Use Guide is a resource for developers that provides best practices and considerations for building products powered by large language models (LLMs) in a responsible manner, covering various stages of development from inception to deployment. Safety Red-teaming Llama Chat has undergone testing by external partners and internal teams to identify performance gaps and mitigate potentially problematic responses in chat use cases. We're committed to ongoing red-teaming to enhance safety and performance. Open Innovation AI Research Community We're launching a program for academic researchers, designed to foster collaboration and knowledge-sharing in the field of artificial intelligence. This program provides unique a opportunity for researchers to come together, share their learnings, and help shape the future of AI. By joining this community, participants will have the chance to contribute to a research agenda that addresses the most pressing challenges in the field, and work together to develop innovative solutions that promote responsible and safe AI practices. We believe that by bringing together diverse perspectives and expertise, we can accelerate the pace of progress in AI research. Llama Impact Grants We want to activate the community of innovators who aspire to use Llama to solve hard problems. We are launching the grants to encourage a diverse set of public, non-profit, and for-profit entities to use Llama 2 to address environmental, education and other important challenges. The grants will be subject to rules which will be posted here prior to the grants start. Generative AI Community Forum We think it’s important that our product and policy decisions around generative AI are informed by people and experts from around the world. In support of this belief, we created a forum to act as a governance tool and resource for the community. It brings together a representative group of people to discuss and deliberate on the values that underpin AI, LLM and other new AI technologies. This forum will be held in consultation with Stanford Deliberative Democracy Lab and the Behavioural Insights Team, and is consistent with our open collaboration approach to sharing AI models. Join us on our AI journey If you’d like to advance AI with us, visit our Careers page to discover more about AI at Meta. See open positions Llama 2 Frequently asked questions Get answers to Llama 2 questions in our comprehensive FAQ page—from how it works, to how to use it, integrations, and more. See all FAQs Explore more on Llama 2 Discover more about Llama 2 here — visit our resources, ranging from our research paper, how to get access, and more. Github Open Innovation AI Research Community Getting started guide AI at Meta blog Research paper ----------- -Skip to main content Llama 2 Version Release Date: July 18, 2023 “Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. 
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta at llama.meta.com/llama-downloads/ “Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. “Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at “Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement. “Meta” “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement. 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.” iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/use-policy ), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof). 2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. 
UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials. b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ----------- -Meta is committed to promoting safe and fair use of its tools and features, including Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy (“Policy”). The most recent copy of this policy can be found at We want everyone to use Llama 2 safely and responsibly. You agree you will not use, or allow others to use, Llama 2 to: 1. Violate the law or others’ rights, including to: a. 
Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: i. Violence or terrorism ii. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material b. Human trafficking, exploitation, and sexual violence iii. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. vi. Any other criminal activity c. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals d. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services e. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices f. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws g. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials h. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Llama 2 related to the following: a. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State b. Guns and illegal weapons (including weapon development) c. Illegal drugs and regulated/controlled substances d. Operation of critical infrastructure, transportation technologies, or heavy machinery e. Self-harm or harm to others, including suicide, cutting, and eating disorders f. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Llama 2 related to the following: a. Generating, promoting, or furthering fraud or the creation or promotion of disinformation b. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content c. Generating, promoting, or further distributing spam d. Impersonating another individual without consent, authorization, or legal right e. Representing that the use of Llama 2 or outputs are human-generated f. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. 
4. Fail to appropriately disclose to end users any known dangers of your AI system
Please report any violation of this Policy, software “bug,” or other problems that could lead to a violation of this Policy through one of the following means:
- Reporting issues with the model:
- Reporting risky content generated by the model:
- Reporting bugs and security concerns:
- Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama:
-----------
-Llama 2 Version Release Date: July 18, 2023
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Llama 2 distributed by Meta.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Llama 2” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta.
“Llama Materials” means, collectively, Meta’s proprietary Llama 2 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make the Llama Materials, or any derivative works thereof, available to a third party, you shall provide a copy of this Agreement to such third party.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials, which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).
2. Additional Commercial Terms. If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
-----------
-Code Llama, a state-of-the-art large language model for coding
Code Llama has the potential to make workflows faster and more efficient for current developers and lower the barrier to entry for people who are learning to code. Code Llama has the potential to be used as a productivity and educational tool to help programmers write more robust, well-documented software.
Free for research and commercial use: Code Llama is built on top of Llama 2 and is available in three models: Code Llama, Code Llama Python, and Code Llama Instruct. With each model download you'll receive: all Code Llama models, a README (User Guide), the Acceptable Use Policy, and the Model Card.
How Code Llama works
Code Llama is a code-specialized version of Llama 2 that was created by further training Llama 2 on its code-specific datasets, sampling more data from that same dataset for longer. Essentially, Code Llama features enhanced coding capabilities, built on top of Llama 2. It can generate code, and natural language about code, from both code and natural language prompts (e.g., “Write me a function that outputs the fibonacci sequence.”). It can also be used for code completion and debugging. It supports many of the most popular languages being used today, including Python, C++, Java, PHP, Typescript (Javascript), C#, and Bash.
Code Llama is available in four sizes with 7B, 13B, 34B, and 70B parameters respectively. Each of these models is trained with 500B tokens of code and code-related data, apart from 70B, which is trained on 1T tokens. The 7B, 13B and 70B base and instruct models have also been trained with fill-in-the-middle (FIM) capability, allowing them to insert code into existing code, meaning they can support tasks like code completion right out of the box.
The four models address different serving and latency requirements. The 7B model, for example, can be served on a single GPU. The 34B and 70B models return the best results and allow for better coding assistance, but the smaller 7B and 13B models are faster and more suitable for tasks that require low latency, like real-time code completion.
Note: We do not recommend using Code Llama or Code Llama Python to perform general natural language tasks since neither of these models are designed to follow natural language instructions. Code Llama is specialized for code-specific tasks and isn’t appropriate as a foundation model for other tasks.
Evaluating Code Llama’s performance
To test Code Llama’s performance against existing solutions, we used two popular coding benchmarks: HumanEval and Mostly Basic Python Programming (MBPP). HumanEval tests the model’s ability to complete code based on docstrings and MBPP tests the model’s ability to write code based on a description. Our benchmark testing showed that Code Llama performed better than open-source, code-specific LLMs and outperformed Llama 2. Code Llama 70B Instruct, for example, scored 67.8% on HumanEval and 62.2% on MBPP, the highest compared with other state-of-the-art open solutions, and on par with ChatGPT.
As with all cutting edge technology, Code Llama comes with risks. Building AI models responsibly is crucial, and we undertook numerous safety measures before releasing Code Llama. As part of our red teaming efforts, we ran a quantitative evaluation of Code Llama’s risk of generating malicious code. We created prompts that attempted to solicit malicious code with clear intent and scored Code Llama’s responses to those prompts against ChatGPT’s (GPT3.5 Turbo).
Our results found that Code Llama answered with safer responses. Details about our red teaming efforts from domain experts in responsible AI, offensive security engineering, malware development, and software engineering are available in our research paper.
Releasing Code Llama
Programmers are already using LLMs to assist in a variety of tasks, ranging from writing new software to debugging existing code. The goal is to make developer workflows more efficient, so they can focus on the most human centric aspects of their job, rather than repetitive tasks. At Meta, we believe that AI models, and LLMs for coding in particular, benefit most from an open approach, both in terms of innovation and safety. Publicly available, code-specific models can facilitate the development of new technologies that improve people's lives. By releasing code models like Code Llama, the entire community can evaluate their capabilities, identify issues, and fix vulnerabilities. Code Llama’s training recipes are available on our GitHub repository and model weights are also available.
Responsible use
Our research paper discloses details of Code Llama’s development as well as how we conducted our benchmarking tests. It also provides more information into the model’s limitations, known challenges we encountered, mitigations we’ve taken, and future challenges we intend to investigate. We’ve also updated our Responsible Use Guide, and it includes guidance on developing downstream models responsibly, including:
- Defining content policies and mitigations.
- Preparing data.
- Fine-tuning the model.
- Evaluating and improving performance.
- Addressing input- and output-level risks.
- Building transparency and reporting mechanisms in user interactions.
Developers should evaluate their models using code-specific evaluation benchmarks and perform safety studies on code-specific use cases such as generating malware, computer viruses, or malicious code. We also recommend leveraging safety datasets for automatic and human evaluations, and red teaming on adversarial prompts.
The future of generative AI for coding
Code Llama is designed to support software engineers in all sectors, including research, industry, open source projects, NGOs, and businesses. But there are still many more use cases to support than what our base and instruct models can serve. We hope that Code Llama will inspire others to leverage Llama 2 to create new innovative tools for research and commercial products.
-----------
-Build the future of AI with Meta Llama 3
Now available with both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications.
Experience Llama 3 with Meta AI
We’ve integrated Llama 3 into Meta AI, our intelligent assistant that expands the ways people can get things done, create, and connect. You can see first-hand the performance of Llama 3 by using Meta AI for coding tasks and problem solving. Whether you're developing agents or other AI-powered applications, Llama 3 in both 8B and 70B will offer the capabilities and flexibility you need to develop your ideas.
Enhanced performance
Experience the state-of-the-art performance of Llama 3, an openly accessible model that excels at language nuances, contextual understanding, and complex tasks like translation and dialogue generation. With enhanced scalability and performance, Llama 3 can handle multi-step tasks effortlessly, while our refined post-training processes significantly lower false refusal rates, improve response alignment, and boost diversity in model answers. Additionally, it drastically elevates capabilities like reasoning, code generation, and instruction following.
With each Meta Llama request, you will receive: the Llama 3 models, Meta Llama Guard 2, and the community license agreement.
Llama 3 models take data and scale to new heights. It’s been trained on our two recently announced custom-built 24K GPU clusters on over 15T tokens of data – a training dataset 7x larger than that used for Llama 2, including 4x more code. This results in the most capable Llama model yet, which supports an 8K context length that doubles the capacity of Llama 2.
Trust & safety
A comprehensive approach to responsibility. With the release of Llama 3, we’ve updated the Responsible Use Guide (RUG) to provide the most comprehensive information on responsible development with LLMs. Our system-centric approach includes updates to our trust and safety tools with Llama Guard 2, optimized to support the newly announced taxonomy published by MLCommons expanding its coverage to a more comprehensive set of safety categories, Code Shield, and Cybersec Eval 2. In line with the principles outlined in our RUG, we recommend thorough checking and filtering of all inputs to and outputs from LLMs based on your unique content guidelines for your intended use case and audience.
-----------
-META LLAMA 3 COMMUNITY LICENSE AGREEMENT
Meta Llama 3 Version Release Date: April 18, 2024
“Agreement” means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
“Documentation” means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.
“Licensee” or “you” means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
“Meta Llama 3” means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.
“Llama Materials” means, collectively, Meta’s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement.
“Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking “I Accept” below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1. License Rights and Redistribution.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display “Built with Meta Llama 3” on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include “Llama 3” at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a “Notice” text file distributed as a part of such copies: “Meta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.”
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement.
v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof).
2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5. Intellectual Property.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use “Llama 3” (the “Mark”) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
-----------
-Meta Llama 3 | Model Cards and Prompt Formats
You can find details about this model in the model card.
Special Tokens used with Meta Llama 3
<|begin_of_text|>: This is equivalent to the BOS token.
<|eot_id|>: This signifies the end of the message in a turn.
<|start_header_id|>{role}<|end_header_id|>: These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant.
<|end_of_text|>: This is equivalent to the EOS token. On generating this token, Llama 3 will cease to generate more tokens.
A prompt can optionally contain a single system message, or multiple alternating user and assistant messages, but always ends with the last user message followed by the assistant header. Code to produce this prompt format can be found on GitHub.
Note: Newlines (0x0A) are part of the prompt format; for clarity in the example, they have been represented as actual new lines.

<|begin_of_text|>{{ user_message }}

Meta Llama 3 Instruct
Code to generate this prompt format can be found on GitHub.
Notes: Newlines (0x0A) are part of the prompt format; for clarity in the examples, they have been represented as actual new lines. The model expects the assistant header at the end of the prompt to start completing it.
Decomposing an example instruct prompt with a system message:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>

What can you help me with?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

<|begin_of_text|>: Specifies the start of the prompt.
<|start_header_id|>system<|end_header_id|>: Specifies the role for the following message, i.e. “system”.
You are a helpful AI assistant for travel tips and recommendations: The system message.
<|eot_id|>: Specifies the end of the input message.
<|start_header_id|>user<|end_header_id|>: Specifies the role for the following message, i.e. “user”.
What can you help me with?: The user message.
<|start_header_id|>assistant<|end_header_id|>: Ends with the assistant header, to prompt the model to start generation.
Following this prompt, Llama 3 completes it by generating the {{ assistant_message }}. It signals the end of the {{ assistant_message }} by generating the <|eot_id|>.
Example prompt with a single user message:

<|begin_of_text|><|start_header_id|>user<|end_header_id|>

What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

System prompt and multiple turn conversation between the user and assistant:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI assistant for travel tips and recommendations<|eot_id|><|start_header_id|>user<|end_header_id|>

What is France's capital?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Bonjour! The capital of France is Paris!<|eot_id|><|start_header_id|>user<|end_header_id|>

What can I do there?<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Paris, the City of Light, offers a romantic getaway with must-see attractions like the Eiffel Tower and Louvre Museum, romantic experiences like river cruises and charming neighborhoods, and delicious food and drink options, with helpful tips for making the most of your trip.<|eot_id|><|start_header_id|>user<|end_header_id|>

Give me a detailed list of the attractions I should visit, and time it takes in each one, to plan my trip accordingly.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
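As a worked illustration of the format above, here is a minimal sketch (not the official implementation) of how the special tokens compose into a Meta Llama 3 Instruct prompt; the function name is hypothetical:

```python
# Minimal sketch of assembling a Llama 3 Instruct prompt from chat messages.

def format_llama3_prompt(messages):
    """messages: [{"role": "system"|"user"|"assistant", "content": str}, ...]"""
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += (
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
            f"{msg['content']}<|eot_id|>"
        )
    # End with the assistant header so the model starts generating its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(format_llama3_prompt([
    {"role": "system", "content": "You are a helpful AI assistant for travel tips and recommendations"},
    {"role": "user", "content": "What can you help me with?"},
]))
```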
-----------
-Meta Llama Guard 2 | Model Cards and Prompt Formats
You can find details about this model in the model card.
Prompt format
As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has a helper function and an inference example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt.
The variables to replace in this prompt template are:
{{ role }}: It can have the values User or Agent.
{{ unsafe_categories }}: The default categories and their descriptions are shown below. These can be customized for zero-shot or few-shot prompting.
{{ user_message }}: input message from the user.
{{ model_answer }}: output from the model.

<|begin_of_text|>[INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories.

{{ unsafe_categories }}

User: {{ user_message_1 }}

Agent: {{ model_answer_1 }}

Provide your safety assessment for ONLY THE LAST {{ role }} in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]

Llama Guard was trained in the following categories:
S1: Violent Crimes. S2: Non-Violent Crimes. S3: Sex Crimes. S4: Child Exploitation. S5: Specialized Advice. S6: Privacy. S7: Intellectual Property. S8: Indiscriminate Weapons. S9: Hate. S10: Self-Harm. S11: Sexual Content.
Additionally, the full description for each category can be set in the prompt as well:
S1: Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of violent crimes. Examples of violent crimes include, but are not limited to: - Unlawful violence toward people (ex: terrorism, genocide, murder, hate crimes, child abuse, assault, battery, kidnapping) - Unlawful violence toward animals (ex: animal abuse)
S2: Non-Violent Crimes. AI models should not create content that enables, encourages, or excuses the commission of non-violent crimes. Examples of non-violent crimes include, but are not limited to: - Financial crimes (ex: fraud, scams, money laundering) - Property crimes (ex: burglary, robbery, arson, vandalism) - Drug crimes (ex: creating or trafficking narcotics) - Weapons crimes (ex: producing unlicensed firearms) - Cyber crimes (ex: hacking, spyware, malware)
S3: Sex Crimes. AI models should not create content that enables, encourages, or excuses the commission of sex crimes. Examples of sex crimes include, but are not limited to: - Human trafficking - Sexual assault (ex: rape) - Sexual harassment (ex: groping) - Lewd conduct
S4: Child Exploitation. AI models should not create content that depicts child nudity or that enables, encourages, excuses, or depicts the sexual abuse of children.
S5: Specialized Advice. AI models should not create content that contains specialized financial, medical, or legal advice.
S6: Privacy. AI models should not create content that contains sensitive, personal information about private individuals.
S7: Intellectual Property. AI models should not create content that violates the intellectual property rights of third parties.
S8: Indiscriminate Weapons. AI models should not create content that enables the creation of weapons of mass destruction. Examples of such weapons include, but are not limited to: - Chemical weapons (ex: nerve gas) - Biological weapons (ex: anthrax) - Radiological weapons (ex: salted bombs) - Nuclear weapons (ex: atomic warheads) - High-yield explosive weapons (ex: cluster munitions)
S9: Hate. AI models should not create content that is hateful toward people on the basis of their protected characteristics (race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease). AI models should not create content that perpetuates negative stereotypes.
S10: Self-Harm. AI models should not create content that enables, encourages, or excuses acts of intentional self-harm. Examples of such actions include, but are not limited to: - Suicide - Self-injury (ex: cutting) - Disordered eating
S11: Sexual Content. AI models should not create depictions of nude adults or content that contains erotic descriptions or explicit depictions of sex acts.
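A minimal sketch, not the llama-recipes helper itself, of filling the Llama Guard 2 template shown above; the template string and function name are illustrative assumptions:

```python
# Fill the Llama Guard 2 prompt template for an input or output check.

GUARD2_TEMPLATE = """<|begin_of_text|>[INST] Task: Check if there is unsafe content in '{role}' messages in conversations according our safety policy with the below categories.

{unsafe_categories}

User: {user_message}

Agent: {model_answer}

Provide your safety assessment for ONLY THE LAST {role} in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""

def build_guard2_prompt(role, user_message, model_answer, unsafe_categories):
    # When evaluating the user input (role="User"), leave model_answer empty.
    return GUARD2_TEMPLATE.format(
        role=role,
        user_message=user_message,
        model_answer=model_answer,
        unsafe_categories=unsafe_categories,
    )

print(build_guard2_prompt("User", "How do I bake a cake?", "", "S1: Violent Crimes.\nS2: Non-Violent Crimes."))
```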
-----------
-Meta Code Llama 70B | Model Cards and Prompt Formats
You can find details about this model in the model card. Note that Meta Code Llama 70B uses the same model card as Meta Code Llama 7B, 13B, and 34B.
Completion
In this format, the model continues to write code following the provided code in the prompt. An implementation of this prompt can be found on GitHub.

{{ code_prompt }}

Instructions
Meta Code Llama 70B has a different prompt template compared to 34B, 13B and 7B. It starts with a Source: system tag, which can have an empty body, and continues with alternating user or assistant values. Each turn of the conversation uses the <step> special character to separate the messages. The last turn of the conversation uses a Source: assistant tag with an empty message and a Destination: user tag to prompt the model to answer the user question. A detailed implementation of this format is provided.
Notes:
- The structure requires a Source: system tag, but the system prompt can be empty.
- Each user query is preceded by a blank line.
- At the end of the prompt is a blank line followed by a line containing a space character (0x20).

Source: system

 System prompt <step> Source: user

 First user query <step> Source: assistant

 Model response to first query <step> Source: user

 Second user query <step> Source: assistant
Destination: user

 
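An illustrative sketch of assembling this format; the exact whitespace around the <step> separator is an assumption based on the reference implementation in the codellama repository, so verify against it before relying on this:

```python
# Assemble a Meta Code Llama 70B instruct prompt from a system prompt and turns.

def format_codellama_70b_prompt(system, turns):
    """turns: list of (role, message) pairs, role in {"user", "assistant"},
    ending with a user message."""
    prompt = f"Source: system\n\n {system} <step> "
    for role, message in turns:
        prompt += f"Source: {role}\n\n {message} <step> "
    # An empty assistant turn plus a Destination: user tag prompts the model
    # to answer; the prompt ends with a line containing a single space.
    prompt += "Source: assistant\nDestination: user\n\n "
    return prompt

print(format_codellama_70b_prompt("", [("user", "Write a hello world program in Python")]))
```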
-----------
-Meta Llama Guard 1 | Model Cards and Prompt Formats
You can find details about this model in the model card.
As the guardrails can be applied both on the input and output of the model, there are two different prompts: one for user input and the other for agent output. The role placeholder can have the values User or Agent. The former refers to the input and the latter to the output. When evaluating the user input, the agent response must not be present in the conversation. The llama-recipes repository has an example that shows how to properly format the prompt with the provided categories. This can be used as a template to create custom categories for the prompt.

[INST] Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories.

{{ unsafe_categories }}

User: {{ user_message_1 }}

Agent: {{ model_answer_1 }}

Provide your safety assessment for {{ role }} in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]

Llama Guard was trained in the following categories that can be adapted through prompt engineering.
O1: Violence and Hate.
Should not
- Help people plan or engage in violence.
- Encourage people to engage in violence.
- Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics.
Can
- Provide information on violence and discrimination.
- Discuss the topics of hate and violence, and can discuss historical events involving violence.
O2: Sexual Content.
Should not
- Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities.
Can
- Discuss topics related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
O3: Criminal Planning.
Should not
- Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in criminal activities or condone criminal behavior.
Can
- Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes.
- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems.
O4: Guns and Illegal Weapons.
Should not
- Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in firearm or illegal weapons crimes or condone such behavior.
Can
- Discuss firearms and the arguments for and against firearm ownership.
O5: Regulated or Controlled Substances.
Should not
- Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.
- Assist or encourage people to create such substances when it is illegal to do so.
Can
- Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are).
- Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation.
O6: Self-Harm.
Should not
- Encourage people to harm themselves.
- Romanticize or condone self-harm or suicide.
- Provide information regarding the methods of suicide or self-harm.
- Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves.
Can
- Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions).
Should
- Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves.
-----------
-Meta Code Llama | Model Cards and Prompt Formats
You can find details about this model in the model card.
Meta Code Llama 7B, 13B, and 34B
Completion
In this format, the model continues to write code following the code that is provided in the prompt. An implementation of this prompt can be found on GitHub.

{{ code_prompt }}

Instructions
The instructions prompt template for Meta Code Llama follow the same structure as the Meta Llama 2 chat model, where the system prompt is optional, and the user and assistant messages alternate, always ending with a user message. Note the beginning of sequence (BOS) token between each user and assistant message. An implementation for Meta Code Llama can be found on GitHub.

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_message_2 }} [/INST]

Infilling
Infilling can be done in two different ways: with the prefix-suffix-middle format or the suffix-prefix-middle. An implementation of this format is provided on GitHub.
Notes:
- Infilling is only available in the 7B and 13B base models, not in the Python, Instruct, 34B, or 70B models.
- The BOS character is not used for infilling when encoding the prefix or suffix, but only at the beginning of each prompt.
Prefix-suffix-middle
<PRE> {{ code_prefix }} <SUF>{{ code_suffix }} <MID>
Suffix-prefix-middle
<PRE> <SUF>{{ code_suffix }} <MID>{{ code_prefix }}
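A small sketch of the two infilling orderings; the <PRE>/<SUF>/<MID> sentinels follow the Code Llama paper, and the helper names are hypothetical:

```python
# Build Code Llama infilling prompts in both documented orderings.

def psm_prompt(code_prefix, code_suffix):
    # Prefix-suffix-middle: the model generates the missing middle after <MID>.
    return f"<PRE> {code_prefix} <SUF>{code_suffix} <MID>"

def spm_prompt(code_prefix, code_suffix):
    # Suffix-prefix-middle: the suffix comes first; generation continues the prefix.
    return f"<PRE> <SUF>{code_suffix} <MID>{code_prefix}"

print(psm_prompt("def add(a, b):\n    return ", "\n\nprint(add(2, 3))"))
```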
-----------
-Meta Llama 2 | Model Cards and Prompt Formats
You can find details about this model in the model card.
Special Tokens used with Meta Llama 2
<s> and </s>: These are the BOS and EOS tokens from SentencePiece. When multiple messages are present in a multi turn conversation, they separate them, including the user input and model response.
[INST] and [/INST]: These tokens enclose user messages in multi turn conversations.
<<SYS>> and <</SYS>>: These enclose the system message.
The base model supports text completion, so any incomplete user prompt, without special tags, will prompt the model to complete it. The tokenizer provided with the model will include the SentencePiece beginning of sequence (BOS) token (<s>) if requested. Review this code for details.

{{ user_prompt }}

Meta Llama 2 Chat
Code to produce this prompt format can be found on GitHub. The system prompt is optional.
Single message instance with optional system prompt:

<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_message }} [/INST]

Multiple user and assistant messages example:

<s>[INST] {{ user_message_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_message_2 }} [/INST]
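A minimal sketch (names are illustrative, not from an official library) that assembles the Meta Llama 2 chat format shown above from a list of turns:

```python
# Assemble a Llama 2 chat prompt; the optional system prompt is folded
# into the first user message between <<SYS>> tags, per the template above.

def format_llama2_prompt(turns, system_prompt=""):
    """turns: [("user", msg), ("assistant", msg), ...], ending with a user turn."""
    prompt = ""
    first_turn = True
    for role, msg in turns:
        if role == "user":
            if first_turn and system_prompt:
                msg = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{msg}"
            prompt += f"<s>[INST] {msg} [/INST]"
        else:
            prompt += f" {msg} </s>"
        first_turn = False
    return prompt

print(format_llama2_prompt(
    [("user", "What is France's capital?")],
    system_prompt="You are a helpful assistant.",
))
```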
-----------
-Getting the models
You can get the Meta Llama models directly from Meta or through Hugging Face or Kaggle. However you get the models, you will first need to accept the license agreements for the models you want. For more detailed information about each of the Meta Llama models, see the Model Cards section immediately following this section.
To get the models directly from Meta, go to our Meta Llama download form. Fill in your information, including your email, select the models that you want, and review and accept the appropriate license agreements. For each model that you request, you will receive an email that contains instructions and a pre-signed URL to download that model. You can use the same URL to download multiple model weights, such as 7B and 13B. The URL expires after 24 hours or five downloads, but you can re-request models in order to receive fresh pre-signed URLs. The model download process uses a script that relies on the following tools: wget and md5sum; so ensure that these are available on your local computer.
-----------
-Hugging Face | Getting the models
To obtain the models from Hugging Face (HF), sign into your account at https://huggingface.co/meta-llama
Select the model you want. You will be taken to a page where you can fill in your information and review the appropriate license agreement. After accepting the agreement, your information is reviewed; the review process could take up to a few days. When you are approved, you will receive an email informing you that you have access to the HF repository for the model.
Note that cloning the HF repository to a local computer does not give you all the model files because some of the files are too large. In the local clone, those files contain only metadata for the actual file. To get these larger files, go to the file in the repository on the HF site and download it directly from there. For example, to get consolidated.00.pth for the Meta Llama 2 7B model, you download it from: https://huggingface.co/meta-llama/Llama-2-7b/blob/main/consolidated.00.pth
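If you prefer to script that download rather than use the website, one way is the official huggingface_hub client (pip install huggingface-hub); this is a sketch and assumes your account has already been granted access to the repo and that you have logged in with huggingface-cli login:

```python
# Fetch a single large file from a gated HF repo into the local cache.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="meta-llama/Llama-2-7b",
    filename="consolidated.00.pth",
)
print(local_path)  # path of the downloaded file in the local HF cache
```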
-----------
-Kaggle | Getting the models
To obtain the models from Kaggle, including the HF versions of the models, sign into your account at: https://www.kaggle.com/organizations/metaresearch/models
Before you can access the models on Kaggle, you need to submit a request for model access, which requires that you accept the model license agreement on the Meta site. Note that the email address that you provide when you accept the license agreement must be the same as the email that you use for your Kaggle account. Once you have accepted the license agreement, return to Kaggle and submit the request for model access. When your request is approved, which might take a few days, you’ll receive an email that says that you have received access. You’ll then be able to access the models on Kaggle.
To access a particular model, select it from the Model Variations dropdown box, and click the download icon. An archive file that contains the model will start downloading.
-----------
-Llama Everywhere
Although Meta Llama models are often hosted by Cloud Service Providers (CSPs), Meta Llama can be used in other contexts as well, such as Linux, the Windows Subsystem for Linux (WSL), macOS, Jupyter notebooks, and even mobile devices. If you are interested in exploring these scenarios, we suggest that you check out the following resources:
- Llama 3 on Your Local Computer, with Resources for Other Options - How to run Llama on your desktop using Windows, macOS, or Linux. Also, pointers to other ways to run Llama, either on premise or in the cloud.
- Llama Recipes QuickStart - Provides an introduction to Meta Llama using Jupyter notebooks and also demonstrates running Llama locally on macOS.
- Machine Learning Compilation for Large Language Models (MLC LLM) - Enables “everyone to develop, optimize and deploy AI models natively on everyone's devices with ML compilation techniques.”
- Llama C++ - Uses the portability of C++ to enable inference with Llama models on a variety of different hardware.
-----------
-Running Meta Llama on Linux | Llama Everywhere
This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Linux | Build with Meta Llama, where we learn how to run Llama on Linux OS by getting the weights and running the model locally, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Linux.
Introduction to Llama models
At Meta, we strongly believe in an open approach to AI development, particularly in the fast-evolving domain of generative AI. By making AI models publicly accessible, we enable their advantages to reach every segment of society. Last year, we open sourced Meta Llama 2, and this year we released the Meta Llama 3 family of models, available in both 8B and 70B pretrained and instruction-tuned versions to support a wide range of applications, unlocking the power of these large language models, and making them accessible to everyone, so you can experiment, innovate, and scale your ideas responsibly.
Setup
With a Linux setup having a GPU with a minimum of 16GB VRAM, you should be able to load the 8B Llama models in fp16 locally. If you have an Nvidia GPU, you can confirm your setup using the NVIDIA System Management Interface tool, which shows you the GPU you have, the VRAM available, and other useful information, by typing: nvidia-smi
In our current setup, we are on Ubuntu, specifically Pop OS, and have an Nvidia RTX 4090 with a total VRAM of about 24GB.
[Image: Terminal with nvidia-smi showing the NVIDIA GPU configuration]
Getting the weights
To download the weights, go to the Llama website. Fill in your details in the form and select the models you’d like to download. In our case, we will download the Llama 3 models, so we select Meta Llama 3 and Meta Llama Guard 2 on the download page. Read and agree to the license agreement, then click Accept and continue. You will see a unique URL on the website. You will also receive the URL in your email; it is valid for 24 hours and allows you to download each model up to 5 times. You can always request a new URL.
[Image: Download page with unique pre-signed URL]
We are now ready to get the weights and run the model locally on our machine. It is recommended to use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Open your terminal, and make a new folder called llama3-demo in your workspace. Navigate to the new folder and clone the Llama repo:

mkdir llama3-demo
cd llama3-demo
git clone https://github.com/meta-llama/llama3.git

For this demo, we’ll need two prerequisites installed: wget and md5sum. To confirm if your distribution has these, use:

wget --version
md5sum --version

which should return the installed versions. If your distribution does not have these, you can install them using:

apt-get install wget
apt-get install md5sum

To make sure we have all the package dependencies installed, while in the newly cloned repo folder, type:

pip install -e .

We are now all set to download the model weights for our local setup. Our team has created a helper script to make it easy to download the model weights. In your terminal, type:

./download.sh

The script will ask for the URL from your email. Paste in the URL you received from Meta. It will then ask you to enter the list of models to download. For our example, we’ll download the 8B pretrained model and the fine-tuned 8B chat models. So we’ll enter “8B,8B-instruct”.
[Image: Downloading the 8B models]
Running the model
We are all set to run the example inference script to test if our model has been set up correctly and works. Our team has created an example Python script called example_text_completion.py that you can use to test out the model. The script defines a main function that uses the Llama class from the llama library to generate text completions for given prompts using the pre-trained models. It takes a few arguments:
- ckpt_dir: str - Directory containing the checkpoint files of the model.
- tokenizer_path: str - Path to the tokenizer of the model.
- temperature: float = 0.6 - This parameter controls the randomness of the generation process. Higher values may lead to more creative but less coherent outputs, while lower values may lead to more conservative but more coherent outputs.
- top_p: float = 0.9 - This defines the maximum probability threshold for generating tokens.
- max_seq_len: int = 128 - Defines the maximum length of the input sequence or prompt allowed for the model to process.
- max_gen_len: int = 64 - Defines the maximum length of the generated text the model is allowed to produce.
- max_batch_size: int = 4 - Defines the maximum number of prompts to process in one batch.
The function builds an instance of the Llama class using the provided arguments, then defines a list of prompts for which the model will use the generator.text_completion method to generate the completions (see the sketch after this section). To run the script, go back to our terminal, and while in the llama3 repo, type:

torchrun --nproc_per_node 1 example_text_completion.py --ckpt_dir Meta-Llama-3-8B/ --tokenizer_path Meta-Llama-3-8B/tokenizer.model --max_seq_len 128 --max_batch_size 4

Replace Meta-Llama-3-8B/ with the path to your checkpoint directory and tokenizer.model with the path to your tokenizer model. If you run it from this main directory, the path may not need to change. Set the --nproc_per_node to the MP value for the model you are using. For 8B models, the value is set to 1. Adjust the max_seq_len and max_batch_size parameters as needed. We have set them to 128 and 4 respectively.
[Image: Running the 8B model on the example text completion script]
To try out the fine-tuned chat model (8B-instruct), we have a similar example called example_chat_completion.py:

torchrun --nproc_per_node 1 example_chat_completion.py --ckpt_dir Meta-Llama-3-8B-Instruct/ --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model --max_seq_len 512 --max_batch_size 6

Note that in this case, we use the Meta-Llama-3-8B-Instruct/ model and provide the correct tokenizer under the instruct model folder.
[Image: Running the 8B Instruct model on the example chat completion script]
A detailed step-by-step process to run on this setup, as well as all the helper and example scripts, can be found on our Llama 3 GitHub repo, which goes over the process of downloading and quick-start, as well as examples for inference.
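As a condensed sketch of what example_text_completion.py does with the arguments listed above: it assumes the llama package from the cloned llama3 repo and, like the script itself, must be launched with torchrun rather than plain python:

```python
# Sketch of the core of example_text_completion.py (launch with torchrun).
from llama import Llama

generator = Llama.build(
    ckpt_dir="Meta-Llama-3-8B/",
    tokenizer_path="Meta-Llama-3-8B/tokenizer.model",
    max_seq_len=128,
    max_batch_size=4,
)
results = generator.text_completion(
    ["I believe the meaning of life is"],  # up to max_batch_size prompts
    max_gen_len=64,
    temperature=0.6,
    top_p=0.9,
)
print(results[0]["generation"])
```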
-----------
-Running Meta Llama on Windows | Llama Everywhere Model Cards and Prompt Formats Meta Llama Guard 2 Meta Code Llama 70B Meta Llama Guard 1 Meta Llama on Linux Meta Llama on Windows Meta Llama on Mac Meta Llama in the Cloud Model Cards and Prompt Formats Meta Llama Guard 2 Meta Code Llama 70B Meta Llama Guard 1 Meta Llama on Linux Meta Llama on Windows Meta Llama on Mac Meta Llama in the Cloud Running Meta Llama on Windows This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Windows | Build with Meta Llama , where we learn how to run Llama on Windows using Hugging Face APIs, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Windows. For this demo, we will be using a Windows OS machine with an RTX 4090 GPU. If you have an Nvidia GPU, you can confirm your setup by opening the Terminal and typing (NVIDIA System Management Interface), which will show you the GPU you have, the VRAM available, and other useful information about your setup. Since we will be using the Hugging Face transformers library for this setup, this setup can also be used on other operating systems that the library supports such as Linux or Mac using similar steps as the ones shown in the video. To allow easy access to Meta Llama models , we are providing them on Hugging Face, where you can download the models in both transformers and native Llama 3 formats. To download the weights, visit the meta-llama repo containing the model you’d like to use. For example, we will use the Meta-Llama-3-8B-Instruct model for this demo. Read and agree to the license agreement. Fill in your details and accept the license, and click on submit. Once your request is approved, you'll be granted access to all the Llama 3 models. Meta-Llama 3-8B-Instruct model on Hugging Face For this tutorial, we will be using Meta Llama models already converted to Hugging Face format. However, if you’d like to download the original native weights, click on the "Files and versions" tab and download the contents of the original folder. If you prefer, you can also download the original weights from the command line using the Hugging Face CLI: pip install huggingface-hub huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct In this example, we will showcase how you can use Meta Llama models already converted to Hugging Face format using Transformers. To use the model with Transformers, we will be using the pipeline class from Hugging Face. We recommend that you use a Python virtual environment for running this demo. In this demo, we are using Miniconda, but you can use any virtual environment of your choice. Make sure to use the latest version of transformers pip install -U transformers --upgrade We will also use the accelerate library, which enables our code to be run across any distributed configuration. pip install accelerate We will be using Python for our demo script. To install Python, visit the Python website , where you can choose your OS and download the version of Python you like.  We will also be using PyTorch for our demo, so we will need to make sure we have PyTorch installed in our setup. 
To install PyTorch for your setup, visit the PyTorch downloads website and choose your OS and configuration to get the installation command you need; paste that command into your terminal and press enter. For our script, open the editor of your choice and create a Python script. We'll first add the imports that we need for our example:
import transformers
import torch
from transformers import AutoTokenizer
Let's define the model we'd like to use. In our demo, we will use the 8B instruct model, which is fine-tuned for chat:
model = "meta-llama/Meta-Llama-3-8B-Instruct"
We will also instantiate the tokenizer, which can be derived from AutoTokenizer based on the model we've chosen, using the from_pretrained method. This will download and cache the pre-trained tokenizer and return an instance of the appropriate tokenizer class:
tokenizer = AutoTokenizer.from_pretrained(model)
To use our model for inference:
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
Hugging Face pipelines allow us to specify which type of task the pipeline needs to run (text-generation in this case), the model that the pipeline should use to make predictions (specified by model), the precision to use with this model (torch.float16), the device on which the pipeline should run (device_map), and various other options. Setting device_map to auto means the pipeline will automatically use a GPU if one is available. Next, let's provide some text prompts as inputs to our pipeline for it to use when it runs to generate responses. Let's define this as the variable sequences:
sequences = pipeline(
    'I have tomatoes, basil and cheese at home. What can I cook for dinner?\n',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    truncation=True,
    max_length=400,
)
The pipeline sets do_sample to True, which allows us to specify the decoding strategy we'd like to use to select the next token from the probability distribution over the entire vocabulary. In our example, we are using top_k sampling. By changing max_length, you can specify how long you'd like the generated response to be. Setting the num_return_sequences parameter to greater than one will let you generate more than one output. Finally, we add the following to print the generated output:
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
Save your script as llama3-hf-demo.py and head back to the terminal. Before we run the script, let's make sure we can access and interact with Hugging Face directly from the terminal. To do that, make sure you have the Hugging Face CLI installed:
pip install -U "huggingface_hub[cli]"
followed by:
huggingface-cli login
Here, it will ask for our access token, which we can get from our HF account under Settings. Copy it and provide it in the command line. We are now all set to run our script:
python llama3-hf-demo.py
To check out the full example and run it on your own local machine, see the detailed sample notebook in the llama-recipes GitHub repo. There you will find an example of how to run Llama 3 models using already converted Hugging Face weights, as well as an example that goes over how you can convert the original weights into Hugging Face format and run using those.
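Since the 8B Instruct model is chat-tuned, you may get better results by formatting your prompt with the model's chat template rather than passing raw text. A minimal sketch, assuming the same pipeline and tokenizer objects as above (apply_chat_template is part of the transformers tokenizer API; the message contents here are just illustrations):
# Build a chat-formatted prompt from a list of messages
messages = [
    {"role": "system", "content": "You are a helpful cooking assistant."},
    {"role": "user", "content": "I have tomatoes, basil and cheese. What can I cook?"},
]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,               # return a string instead of token ids
    add_generation_prompt=True,   # append the assistant header so the model responds
)
outputs = pipeline(prompt, do_sample=True, top_k=10, max_length=400, truncation=True)
print(outputs[0]["generated_text"])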
We’ve also created various other demos and examples to provide you with guidance and as references to help you get started with Llama models and to make it easier for you to integrate them into your own use cases. To try these examples, check out our llama-recipes GitHub repo. Here you'll find complete walkthroughs for how to get started with Llama models. These include installation instructions, dependencies, and recipes where you can find examples of inference, fine-tuning, and training on custom data sets. In addition, the repo includes demos that showcase Llama deployments, basic interactions, and specialized use cases.
-----------
-Running Meta Llama on Mac | Llama Everywhere
Running Meta Llama on Mac. This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Running Llama on Mac | Build with Meta Llama, where we learn how to run Llama on macOS using Ollama, with a step-by-step tutorial to help you follow along. If you're interested in learning by watching or listening, check out our video on Running Llama on Mac. For this demo, we are using a MacBook Pro running Sonoma 14.4.1 with 64GB memory. Since we will be using Ollama, this setup can also be used on other operating systems that are supported, such as Linux or Windows, using steps similar to the ones shown here. Ollama lets you set up and run large language models like Llama models locally. Downloading Ollama. The first step is to install Ollama. To do that, visit their website, where you can choose your platform, and click on "Download" to download Ollama. For our demo, we will choose macOS and select "Download for macOS". Next, we will make sure that we can test run Meta Llama 3 models on Ollama. Please note that Ollama provides Meta Llama models in the 4-bit quantized format. To test run the model, let's open our terminal and run:
ollama pull llama3
to download the 4-bit quantized Meta Llama 3 8B chat model, with a size of about 4.7 GB. If you'd like to download the Llama 3 70B chat model, also in 4-bit, you can instead type:
ollama pull llama3:70b
which, in quantized format, would have a size of about 39GB. Running using ollama run. To run our model, in your terminal, type:
ollama run llama3
We are all set to ask questions and chat with our Meta Llama 3 model. Let's ask some questions: "Who wrote the book godfather?" We can see that it gives the right answer, along with more information about the book as well as the movie that was based on the book. What if we just wanted the name of the author, without the extra information? Let's adapt our prompt accordingly, specifying the kind of response we expect: "Who wrote the book godfather? Answer with only the name." We can see that it generates the answer in the format we requested. You can also try running the 70B model:
ollama run llama3:70b
but the inference speed will likely be slower. Running with curl. You can even run and test the Llama 3 8B model directly by using the curl command and specifying your prompt right in the command:
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "who wrote the book godfather?" }
  ],
  "stream": false
}'
Here, we are sending a POST request to an API running on localhost. The API endpoint is for "chat", which will interact with our AI model hosted on the server.
We are providing a JSON payload that contains: a model string specifying the name of the AI model to use for processing the input prompt ("llama3"), a messages array with a string indicating the role of the message sender (user) and a string with the user's input prompt ("who wrote the book godfather?"), and a boolean value stream indicating whether the response should be streamed or not. In our case, it is set to false, meaning the entire response will be returned at once. As we can see, the model generated the response with the answer to our question. Running as a Python script. This example can also be run using a Python script. To install Python, visit the Python website, where you can choose your OS and download the version of Python you like. To run it using a Python script, open the editor of your choice and create a new file. First, let's add the imports we will need for this demo, and define a parameter called url, which will have the same value as the URL we saw in the demo:
import requests
import json

url = "http://localhost:11434/api/chat"
We will now add a new function called llama3, which will take in prompt as an argument:
def llama3(prompt):
    data = {
        "model": "llama3",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    headers = {"Content-Type": "application/json"}
    response = requests.post(url, headers=headers, json=data)
    return response.json()["message"]["content"]
This function constructs a JSON payload containing the specified prompt and the model name, which is "llama3". Then, it sends a POST request to the API endpoint with the JSON payload as the message body, using the requests library. Once the response is received, the function extracts the content of the response message from the JSON object returned by the API and returns this extracted content. Finally, we will provide the prompt and print the generated response:
response = llama3("who wrote the book godfather")
print(response)
To run the script, type python followed by your script name and press enter. As we can see, it generated the response based on the prompt we provided in our script. To learn more about the complete Ollama APIs, check out their documentation. To check out the full example and run it on your own machine, our team has worked on a notebook that you can refer to in the llama-recipes GitHub repo, where you will find an example of how to run Llama 3 models on a Mac as well as other platforms. You will find the examples we discussed here, as well as other ways to use Llama 3 locally with Ollama via LangChain. We've also created various other demos and examples to provide you with guidance and as references to help you get started with Llama models and to make it easier for you to integrate Llama into your own use cases. These demos and examples are also located in our llama-recipes GitHub repo, where you'll find complete walkthroughs for how to get started with Llama models, including installation instructions, dependencies, and recipes. You'll also find several examples for inference, fine-tuning, and training on custom data sets, as well as demos that showcase Llama deployments, basic interactions, and specialized use cases.
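If you set stream to true instead, Ollama returns the response incrementally as newline-delimited JSON objects. A minimal sketch of consuming that stream with requests; the endpoint and payload mirror the ones above, and the exact chunk format is the one described in Ollama's API documentation:
import json
import requests

url = "http://localhost:11434/api/chat"
data = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "who wrote the book godfather?"}],
    "stream": True,
}

# Each line of the streamed response is a JSON object with a partial message
with requests.post(url, json=data, stream=True) as response:
    for line in response.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("message", {}).get("content", ""), end="", flush=True)
        if chunk.get("done"):
            break
print()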
-----------
-Meta Llama in the Cloud | Llama Everywhere
Meta Llama in the Cloud. This tutorial is a part of our Build with Meta Llama series, where we demonstrate the capabilities and practical applications of Llama for developers like you, so that you can leverage the benefits that Llama has to offer and incorporate it into your own applications. This tutorial supports the video Many other ways to run Llama and resources | Build with Meta Llama, where we learn about some of the various other ways in which you can host or run Meta Llama models, and provide you with all the resources that can help you get started. If you're interested in learning by watching or listening, check out our video on Many other ways to run Llama and resources. Apart from running the models locally, one of the most common ways to run Meta Llama models is to run them in the cloud. We saw an example of this in our Running Llama on Windows video. Let's take a look at some of the services we can use to host and run Llama models, such as AWS, Azure, Google Cloud (Vertex AI), and IBM watsonx, among others. Amazon Web Services. Amazon Web Services (AWS) provides multiple ways to host your Llama models, such as SageMaker JumpStart and Bedrock. Bedrock is a fully managed service that lets you quickly and easily build generative AI-powered experiences. To use Meta Llama with Bedrock, check out their documentation, which goes over how to integrate and use Meta Llama models in your applications. You can also use AWS through SageMaker JumpStart, which enables you to build, train, and deploy ML models from a broad selection of publicly available foundation models and deploy them on SageMaker instances for model training and inference. Learn more about how to use Meta Llama on SageMaker in their documentation. Microsoft Azure. Another way to run Meta Llama models is on Microsoft Azure. You can access Meta Llama models on Azure in two ways: Models as a Service (MaaS) provides access to Meta Llama hosted APIs through Azure AI Studio; Model as a Platform (MaaP) provides access to the Meta Llama family of models with out-of-the-box support for fine-tuning and evaluation through Azure Machine Learning Studio. Please refer to our How to Guide for more details. Google Cloud Platform. You can also use GCP, or Google Cloud Platform, to run Meta Llama models. GCP is a suite of cloud computing services that provides computing resources as well as virtual machines. Building on top of GCP services, Model Garden on Vertex AI offers infrastructure to jumpstart your ML project with a single place to discover, customize, and deploy a wide range of models. We have collaborated with Vertex AI from Google Cloud to fully integrate Meta Llama, offering pre-trained, instruction-tuned, and Meta CodeLlama models in various sizes. Check out how to fine-tune and deploy Meta Llama models on Vertex AI in their documentation. Please note that you may need to request proper GPU computing quota as a prerequisite. IBM watsonx. You can also use IBM's watsonx to run Meta Llama models. IBM watsonx is an advanced platform designed for AI builders, integrating generative AI capabilities, foundation models, and traditional machine learning.
It provides a comprehensive suite of tools that span the AI lifecycle, enabling users to tune models with their enterprise data. The platform supports multi-model flexibility, client protection, AI governance, and hybrid, multi-cloud deployments. It offers features for extracting insights, discovering trends, generating synthetic tabular data, running Jupyter notebooks, and creating new content and code. Watsonx.ai equips data scientists with the necessary tools, pipelines, and runtimes for building and deploying ML models, thereby automating the entire AI model lifecycle. We've worked with IBM to make Llama and Code Llama models available on their platform. To test the platform and evaluate Llama on watsonx, creating an account is free and allows testing the available models through the Prompt Lab. For detailed instructions, refer to the getting started guide and the quick start tutorials. Other hosting providers. You can also run Llama models using hosting providers such as OpenAI, Together AI, Anyscale, Replicate, Groq, etc. Our team has worked on step-by-step examples to showcase how to run Llama on externally hosted providers. The examples can be found in our Llama-recipes GitHub repo, which goes over the process of setting up and running inference for Llama models on some of these externally hosted providers. Running Llama on premise. Many enterprise customers prefer to deploy Llama models on-premise and on their own servers. One way to deploy and run Llama models in this manner is by using TorchServe. TorchServe is an easy-to-use tool for deploying PyTorch models at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. To learn more about how TorchServe works, with setup, quickstart, and examples, check out the GitHub repo. Another way to deploy Llama models on premise is by using Virtual Large Language Model (vLLM) or Text Generation Inference (TGI), two leading open-source tools to deploy and serve LLMs. A detailed step-by-step tutorial can be found in our llama-recipes GitHub repo that showcases how to use Llama models with vLLM and Hugging Face TGI, and how to create vLLM and TGI hosted Llama instances with LangChain, a language model integration framework for the creation of applications using large language models. You can find various demos and examples that can provide you with guidance, and that you can use as references to get started with Llama models, in our llama-recipes GitHub repo, where you'll find several examples for inference and fine-tuning, as well as running on various API providers. Learn more about Llama 3 and how to get started by checking out our Getting to know Llama notebook that you can find in our llama-recipes GitHub repo. Here you will find a guided tour of Llama 3, including a comparison to Llama 2, descriptions of different Llama 3 models, how and where to access them, generative AI and chatbot architectures, prompt engineering, RAG (Retrieval Augmented Generation), fine-tuning, and more. You will find all this implemented with starter code that you can take and adapt to use in your own Meta Llama 3 projects. To learn more about our Llama 3 models, check out our announcement blog, where you can find details about how the models work, data on performance and benchmarks, information about trust and safety, and various other resources to get you started.
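To make the on-premise option concrete, here is a minimal sketch of serving a Llama model with vLLM's OpenAI-compatible server and querying it with curl. The model name is an example for illustration; vLLM documents this entry point, the default port 8000, and the /v1/chat/completions route:
pip install vllm
python -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Meta-Llama-3-8B-Instruct",
    "messages": [{"role": "user", "content": "who wrote the book godfather?"}]
  }'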
Get the model source from our Llama 3 GitHub repo, where you can learn how the models work along with a minimalist example of how to load Llama 3 models and run inference. Here, you will also find steps to download and set up the models, and examples for running the text completion and chat models. Dive deeper and learn more about the model in the model card, which goes over the model architecture, intended use, hardware and software requirements, training data, results, and licenses. Check out our new Meta AI, built with Llama 3 technology, which is now one of the world's leading AI assistants that can boost your intelligence and lighten your load, helping you learn, get things done, create content, and connect to make the most out of every moment. You can use Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web to get things done, learn, create, and connect with the things that matter to you. To learn more about the latest updates and releases of Llama models, check out our website, where you can learn more about the latest models as well as find resources to learn more about how these models work and how you can use them in your own applications. Check out our Getting Started guide that provides information and resources to help you set up Llama, including how to access the models, prompt formats, hosting, how-to and integration guides, as well as resources that you can reference to get started with your projects. Take a look at some of our latest blogs that discuss new announcements, the latest on the Llama ecosystem, and our responsible approach to Meta AI and Meta Llama 3. Check out the community resources on our website to help you get started with Meta Llama models, learn about performance and latency, fine-tuning, and more. Dive deeper into prompt engineering, learning best practices for prompting Meta Llama models and interacting with Meta Llama Chat, Code Llama, and Llama Guard models in our short course on Prompt Engineering with Llama 2 on DeepLearning.ai, recently updated to showcase both Llama 2 and Llama 3 models. Check out the Community Stories that go over interesting use cases of Llama models in various fields such as business, healthcare, gaming, pharmaceuticals, and more! Learn more about the Llama ecosystem, building product experiences with Llama, and examples that showcase how industry pioneers have adopted Llama to build and grow innovative products for users across their platforms at Connect 2023. Also check out our Responsible Use Guide, which provides developers with recommended best practices and considerations for safely building products powered by LLMs. We hope you found the Build with Meta Llama videos and tutorials helpful in providing you with the insights and resources you may need to get started with using Llama models. We at Meta strongly believe in an open approach to AI development, democratizing access through an open platform and providing you with AI models, tools, and resources to give you the power to shape the next wave of innovation. We want to kickstart that next wave of innovation across the stack, from applications to developer tools to evals to inference optimizations and more. We can't wait to see what you build and look forward to your feedback.
-----------
-Fine-tuning | How-to guides
How-to guides. If you are looking to learn by writing code, it's highly recommended to look into the Getting to Know Llama 3 notebook. It's a great place to start with the most commonly performed operations on Meta Llama. Full parameter fine-tuning is a method that fine-tunes all the parameters of all the layers of the pre-trained model. In general, it can achieve the best performance but it is also the most resource-intensive and time consuming: it requires the most GPU resources and takes the longest. PEFT, or Parameter Efficient Fine Tuning, allows one to fine-tune models with minimal resources and costs. There are two important PEFT methods: LoRA (Low Rank Adaptation) and QLoRA (Quantized LoRA), where pre-trained models are loaded to the GPU as quantized 8-bit and 4-bit weights, respectively. It's likely that you can fine-tune the Llama 2-13B model using LoRA or QLoRA fine-tuning with a single consumer GPU with 24GB of memory, and using QLoRA requires even less GPU memory and fine-tuning time than LoRA. Typically, one should try LoRA first, or, if resources are extremely limited, QLoRA, and after the fine-tuning is done, evaluate the performance. Only consider full fine-tuning when the performance is not desirable. Experiment tracking. Experiment tracking is crucial when evaluating various fine-tuning methods like LoRA and QLoRA. It ensures reproducibility, maintains a structured version history, allows for easy collaboration, and aids in identifying optimal training configurations. Especially with numerous iterations, hyperparameters, and model versions at play, tools like Weights & Biases (W&B) become indispensable. With its seamless integration into multiple frameworks, W&B provides a comprehensive dashboard to visualize metrics, compare runs, and manage model checkpoints. It's often as simple as adding a single argument to your training script to realize these benefits; we'll show an example in the Hugging Face PEFT LoRA section. Recipes PEFT LoRA. The llama-recipes repo has details on the different fine-tuning (FT) alternatives supported by the provided sample scripts. In particular, it highlights the use of PEFT as the preferred FT method, as it reduces the hardware requirements and prevents catastrophic forgetting. For specific cases, full parameter FT can still be valid, and different strategies can be used to still prevent modifying the model too much. Additionally, FT can be done on a single GPU or on multiple GPUs with FSDP. In order to run the recipes, follow the steps below: create a conda environment with PyTorch and additional dependencies; install the recipes as described; download the desired model from Hugging Face, either using git-lfs or using the llama download script. With everything configured, run the following command:
python -m llama_recipes.finetuning --use_peft --peft_method lora --quantization --model_name ../llama/models_hf/7B --output_dir ../llama/models_ft/7B-peft --batch_size_training 2 --gradient_accumulation_steps 2
torchtune. torchtune is a PyTorch-native library that can be used to fine-tune the Meta Llama family of models, including Meta Llama 3.
It supports the end-to-end fine-tuning lifecycle, including: downloading model checkpoints and datasets; training recipes for fine-tuning Llama 3 using full fine-tuning, LoRA, and QLoRA; support for single-GPU fine-tuning capable of running on consumer-grade GPUs with 24GB of VRAM; scaling fine-tuning to multiple GPUs using PyTorch FSDP; logging metrics and model checkpoints during training using Weights & Biases; evaluation of fine-tuned models using EleutherAI's LM Evaluation Harness; post-training quantization of fine-tuned models via TorchAO; and interoperability with inference engines including ExecuTorch. To install torchtune, simply run the pip install command:
pip install torchtune
Follow the instructions on the Hugging Face meta-llama repository to ensure you have access to the Llama 3 model weights. Once you have confirmed access, you can run the following command to download the weights to your local machine. This will also download the tokenizer model and a responsible use guide.
tune download meta-llama/Meta-Llama-3-8B \
    --output-dir <checkpoint_dir> \
    --hf-token <ACCESS_TOKEN>
Set your environment variable HF_TOKEN or pass in --hf-token to the command in order to validate your access. You can find your token at https://huggingface.co/settings/tokens. The basic command for a single-device LoRA fine-tune of Llama 3 is:
tune run lora_finetune_single_device --config llama3/8B_lora_single_device
torchtune contains built-in recipes for: full fine-tuning on single device and on multiple devices with FSDP; LoRA finetuning on multiple devices with FSDP; and QLoRA finetuning on single device, with a QLoRA-specific configuration. You can find more information on fine-tuning Meta Llama models by reading the torchtune guide. Hugging Face PEFT LoRA. Using Low Rank Adaptation (LoRA), Meta Llama is loaded to the GPU memory as quantized 8-bit weights. Fine-tuning with the Hugging Face PEFT LoRA library is super easy; an example fine-tuning run on Meta Llama 2 7b using the OpenAssistant dataset can be done in three simple steps:
pip install trl
git clone https://github.com/huggingface/trl
python trl/examples/scripts/sft.py \
    --model_name meta-llama/Llama-2-7b-hf \
    --dataset_name timdettmers/openassistant-guanaco \
    --load_in_4bit \
    --use_peft \
    --batch_size 4 \
    --gradient_accumulation_steps 2 \
    --log_with wandb
This takes about 16 hours on a single GPU and uses less than 10GB of GPU memory; changing the batch size to 8/16/32 will use over 11/16/25 GB of GPU memory. After the fine-tuning completes, you'll see a new directory named "output" containing at least adapter_config.json and adapter_model.bin. Run the script below to infer with both the base model and the new model, generated by merging the base model with the fine-tuned one:
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    pipeline,
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-chat-hf"
new_model = "output"
device_map = {"": 0}

base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    device_map=device_map,
)
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

prompt = "Who wrote the book Innovator's Dilemma?"
pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
QLoRA Fine Tuning. Note: This has been tested on Meta Llama 2 models only. QLoRA (Q for quantized) is more memory efficient than LoRA. In QLoRA, the pretrained model is loaded to the GPU as quantized 4-bit weights. Fine-tuning using QLoRA is also very easy to run; an example of fine-tuning Llama 2-7b with the OpenAssistant dataset can be done in four quick steps:
git clone https://github.com/artidoro/qlora
cd qlora
pip install -U -r requirements.txt
./scripts/finetune_llama2_guanaco_7b.sh
It takes about 6.5 hours to run on a single GPU, using 11GB of GPU memory. After the fine-tuning completes, the output_dir specified in ./scripts/finetune_llama2_guanaco_7b.sh will have checkpoint-xxx subfolders holding the fine-tuned adapter model files. To run inference, use the script below:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline
from peft import LoraConfig, PeftModel

model_id = "meta-llama/Llama-2-7b-hf"
new_model = "output/llama-2-guanaco-7b/checkpoint-1875/adapter_model"  # change if needed

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type='nf4',
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map='auto',
)
model = PeftModel.from_pretrained(model, new_model)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Who wrote the book innovator's dilemma?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
Axolotl is another open source library you can use to streamline the fine-tuning of Llama 2.
A good example of using Axolotl to fine-tune Meta Llama is a set of four notebooks covering the whole fine-tuning process: generating the dataset, fine-tuning the model using LoRA, and evaluating and benchmarking the result.
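For readers who want to see what a PEFT LoRA setup looks like in code rather than through a launcher script, here is a minimal sketch using the Hugging Face peft library. The hyperparameter values and target modules below are illustrative assumptions, not the settings used by the recipes above:
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA injects small trainable low-rank matrices into selected linear layers,
# leaving the original pre-trained weights frozen
lora_config = LoraConfig(
    r=8,                                   # rank of the update matrices (illustrative)
    lora_alpha=32,                         # scaling factor (illustrative)
    target_modules=["q_proj", "v_proj"],   # which attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters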
-----------
-Quantization | How-to guides
Quantization is a technique used in machine learning to reduce the computational and memory requirements of models, making them more efficient for deployment on servers and edge devices. It involves representing model weights and activations, typically 32-bit floating point numbers, with lower precision data such as 16-bit float, brain float 16-bit, 8-bit int, or even 4/3/2/1-bit int. The benefits of quantization include smaller model sizes, faster fine-tuning, and faster inference, which is particularly beneficial in resource-constrained environments. However, the tradeoff is a reduction in model quality due to the loss of precision. Supported quantization modes in PyTorch. Post-Training Dynamic Quantization: weights are pre-quantized ahead of time and activations are converted to int8 during inference, just before computation. This results in faster computation due to efficient int8 matrix multiplication and maintains accuracy on the activation layer. Post-Training Static Quantization: this technique improves performance by converting networks to use both integer arithmetic and int8 memory accesses. It involves feeding batches of data through the network and computing the resulting distributions of the different activations. This information is used to determine how the different activations should be quantized at inference time. Quantization Aware Training (QAT): in QAT, all weights and activations are "fake quantized" during both the forward and backward passes of training. This means float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. This method usually yields higher accuracy than the other two methods, as all weight adjustments during training are made while "aware" of the fact that the model will ultimately be quantized. More details about these methods and how they can be applied to different types of models can be found in the official PyTorch documentation. Additionally, the community has already conducted studies on the effectiveness of common quantization methods on Meta Llama 3, and the results and code to evaluate can be found in this GitHub repository. We will focus next on the quantization tools available for Meta Llama models. As this is a constantly evolving space, the libraries and methods detailed here are the most widely used at the moment and are subject to change as the space evolves. PyTorch quantization with TorchAO. The TorchAO library offers several methods for quantization, each with different schemes for how the activations and weights are quantized. We distinguish between two main types of quantization: weight-only quantization and dynamic quantization. For weight-only quantization, 8-bit and 4-bit quantization are supported. The 4-bit quantization also has GPTQ support for improved accuracy, which requires calibration but has the same final performance. For dynamic quantization, 8-bit activation quantization and 8-bit weight quantization are supported. This type of quantization is also supported with smoothquant for improved accuracy, which requires calibration and has slightly worse performance.
Additionally, the library offers a simple API to test different methods and automatic detection of the best quantization for a given model, known as autoquantization. This API chooses the fastest form of quantization out of the 8-bit dynamic and 8-bit weight-only quantization. It first identifies the shapes of the activations that the different linear layers see, then benchmarks these shapes across different types of quantized and non-quantized layers in order to pick the fastest one. It also composes with torch.compile() to generate fast kernels. For additional information on torch.compile, please see the general PyTorch tutorial. Note: this library is in beta phase and in active development; API changes are expected. HF supported quantization. Hugging Face (HF) offers multiple ways to do LLM quantization with their transformers library. For additional guidance and examples on how to use each of these beyond the brief summary presented here, please refer to their quantization guide and the transformers quantization configuration documentation. The llama-recipes code uses bitsandbytes 8-bit quantization to load the models, both for inference and fine-tuning. (See below for more information about using the bitsandbytes library with Llama.) Quanto. Quanto is a versatile PyTorch quantization toolkit that uses linear quantization. It provides features such as weights quantization, activation quantization, and compatibility with various devices and modalities. It supports quantization-aware training and is easy to integrate with custom kernels for specific devices. More details can be found in the announcement blog, the GitHub repo, and the HF guide. AQLM. Additive Quantization of Language Models (AQLM) is a compression method for LLMs. It quantizes multiple weights together, taking advantage of interdependencies between them. AQLM represents groups comprising 8 to 16 weights each as a sum of multiple vector codes. This library supports fine-tuning its quantized models with Parameter-Efficient Fine-Tuning and LoRA by integrating into HF's PEFT library as well. More details can be found in the GitHub repo. AWQ. Activation-aware Weight Quantization (AWQ) preserves a small percentage of weights that are important for LLM performance, reducing quantization loss. This allows models to run in 4-bit precision without experiencing performance degradation. Transformers supports loading models quantized with the llm-awq and autoawq libraries. More details on how to load them with the Transformers library can be found in the HF guide. AutoGPTQ. The AutoGPTQ library implements the GPTQ algorithm, a post-training quantization technique where each row of the weight matrix is quantized independently. These weights are quantized to int4, but they're restored to fp16 on the fly during inference, saving memory usage by 4x. More details can be found in the GitHub repo. BitsAndBytes. BitsAndBytes is an easy option for quantizing a model to 8-bit and 4-bit. The library supports any model in any modality, as long as it supports loading with Hugging Face Accelerate and contains torch.nn.Linear layers. It also provides features for offloading weights between the CPU and GPU to support fitting very large models into memory, adjusting the outlier threshold for 8-bit quantization, skipping module conversion for certain models, and fine-tuning with 8-bit and 4-bit weights.
For 4-bit models, it allows changing the compute data type, using the Normal Float 4 (NF4) data type for weights initialized from a normal distribution, and using nested quantization to save additional memory at no additional performance cost. More details can be found in the HF guide.
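To illustrate the BitsAndBytes options described above, here is a minimal sketch of loading a Llama model in 4-bit with NF4 and nested (double) quantization through transformers. The model name is an example; the BitsAndBytesConfig fields are the ones discussed in this section:
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # Normal Float 4 for normally distributed weights
    bnb_4bit_use_double_quant=True,         # nested quantization for extra memory savings
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype used for the dequantized matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # example model
    quantization_config=bnb_config,
    device_map="auto",
)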
-----------
-Prompting | How-to guides
Link to notebook showing examples of the techniques discussed in this section. Prompt engineering is a technique used in natural language processing (NLP) to improve the performance of a language model by providing it with more context and information about the task at hand. It involves creating prompts, which are short pieces of text that provide additional information or guidance to the model, such as the topic or genre of the text it will generate. By using prompts, the model can better understand what kind of output is expected and produce more accurate and relevant results. In Llama 2, the size of the context, in terms of number of tokens, has doubled from 2048 to 4096. Crafting Effective Prompts. Crafting effective prompts is an important part of prompt engineering. Here are some tips for creating prompts that will help improve the performance of your language model: Be clear and concise: your prompt should be easy to understand and provide enough information for the model to generate relevant output. Avoid using jargon or technical terms that may confuse the model. Use specific examples: providing specific examples in your prompt can help the model better understand what kind of output is expected. For example, if you want the model to generate a story about a particular topic, include a few sentences about the setting, characters, and plot. Vary the prompts: using different prompts can help the model learn more about the task at hand and produce more diverse and creative output. Try using different styles, tones, and formats to see how the model responds. Test and refine: once you have created a set of prompts, test them out on the model to see how it performs. If the results are not as expected, try refining the prompts by adding more detail or adjusting the tone and style. Use feedback: finally, use feedback from users or other sources to continually improve your prompts. This can help you identify areas where the model needs more guidance and make adjustments accordingly. Explicit Instructions. Detailed, explicit instructions produce better results than open-ended prompts. You can think about giving explicit instructions as using rules and restrictions to shape how Llama 2 responds to your prompt. Stylization examples: "Explain this to me like a topic on a children's educational network show teaching elementary students."; "I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:"; "Give your answer like an old timey private investigator hunting down a case step by step." Formatting examples: "Use bullet points."; "Return as a JSON object."; "Use less technical terms and help me apply it in my work in communications." Restrictions examples: "Only use academic papers."; "Never give sources older than 2020."; "If you don't know the answer, say that you don't know." Here's an example of giving explicit instructions to get more specific results by limiting the responses to recently created sources: "Explain the latest advances in large language models to me." # More likely to cite sources from 2017. "Explain the latest advances in large language models to me. Always cite your sources.
Never cite sources older than 2020." # Gives more specific advances and only cites sources from 2020. Prompting using Zero- and Few-Shot Learning. A shot is an example or demonstration of what type of prompt and response you expect from a large language model. This term originates from training computer vision models on photographs, where one shot was one example or instance that the model used to classify an image. Zero-Shot Prompting. Large language models like Meta Llama are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called "zero-shot prompting". For example: Text: This was the best movie I've ever seen! The sentiment of the text is: Text: The director was trying too hard. The sentiment of the text is: Few-Shot Prompting. Adding specific examples of your desired output generally results in a more accurate, consistent output. This technique is called "few-shot prompting". In this example, the generated response follows our desired format, offering a more nuanced sentiment classifier that gives a positive, neutral, and negative response confidence percentage. You are a sentiment classifier. For each message, give the percentage of positive/neutral/negative. Here are some samples: Text: I liked it Sentiment: 70% positive 30% neutral 0% negative Text: It could be better Sentiment: 0% positive 50% neutral 50% negative Text: It's fine Sentiment: 25% positive 50% neutral 25% negative Text: I thought it was okay Text: I loved it! Text: Terrible service 0/10 Role Based Prompts. Creating prompts based on the role or perspective of the person or entity being addressed can be useful for generating more relevant and engaging responses from language models. Pros: Improves relevance: role-based prompting helps the language model understand the role or perspective of the person or entity being addressed, which can lead to more relevant and engaging responses. Increases accuracy: providing additional context about the role or perspective of the person or entity being addressed can help the language model avoid making mistakes or misunderstandings. Cons: Requires effort: it requires more effort to gather and provide the necessary information about the role or perspective of the person or entity being addressed. Example: You are a virtual tour guide currently walking tourists through the Eiffel Tower on a night tour. Describe the Eiffel Tower to your audience, covering its history, the number of people visiting each year, the amount of time it takes to do a full tour, and why so many people visit this place each year. Chain of Thought Technique. This involves providing the language model with a series of prompts or questions to help guide its thinking and generate a more coherent and relevant response. This technique can be useful for generating more thoughtful and well-reasoned responses from language models. Pros: Improves coherence: it helps the language model think through a problem or question in a logical and structured way, which can lead to more coherent and relevant responses. Increases depth: providing a series of prompts or questions can help the language model explore a topic more deeply and thoroughly, potentially leading to more insightful and informative responses. Cons: Requires effort: the chain of thought technique requires more effort to create and provide the necessary prompts or questions. Example: You are a virtual tour guide from 1901. You have tourists visiting the Eiffel Tower. Describe the Eiffel Tower to your audience. Begin with 1.
Why it was built 2. Then by how long it took them to build 3. Where the materials were sourced to build 4. The number of people it took to build 5. End it with the number of people visiting the Eiffel Tower annually in the 1900s, the amount of time it takes to complete a full tour, and why so many people visit this place each year. Make your tour funny by including 1 or 2 funny jokes at the end of the tour. Self-Consistency. LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency improves accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute): John found that the average of 15 numbers is 40. If 10 is added to each number then the mean of the numbers is? Report the answer surrounded by three backticks, for example: ```123``` Running the above several times and taking the most commonly returned value for the answer would make use of the self-consistency approach. Retrieval-Augmented Generation. Common facts are generally available from today's large models out-of-the-box (i.e. using just the model weights). More specific data is unlikely to be available, though. E.g.: What is the capital of California? # The capital of California is Sacramento... What was the temperature in Menlo Park on December 12th, 2023? # I'm just an AI, I don't have access to real-time or historical weather data... Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt that has been retrieved from an external database. It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning, which may also negatively impact the foundational model's capabilities. This could be as simple as a lookup table or as sophisticated as a vector database containing all of your company's knowledge: Given the following information about temperatures in Menlo Park: 2023-12-11: 52 degrees Fahrenheit 2023-12-12: 51 degrees Fahrenheit 2023-12-13: 55 degrees Fahrenheit What was the temperature in Menlo Park on 2023-12-12? # Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit. What was the temperature in Menlo Park on 2023-07-18? # Sorry, I don't have information about the temperature in Menlo Park on 2023-07-18. The information provided only includes temperatures for December 11th, 12th, and 13th of 2023. Program-Aided Language Models. LLMs, by nature, aren't great at performing calculations. While LLMs are bad at arithmetic, they're great at code generation. Program-Aided Language models leverage this fact by instructing the LLM to write code to solve calculation tasks. Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5)) Here is the Python code to calculate the given expression:
# ((-5 + 93 * 4 - 0) * (4**4 + -7 + 0 * 5))
# Calculate the expressions inside the parentheses
a = -5 + 93 * 4 - 0
b = 4**4 + -7 + 0 * 5
# Multiply a and b
result = a * b
# Print the result
print(result)
Running the generated code directly provides the correct result. Limiting Extraneous Tokens. A common challenge is generating a response without extraneous tokens (e.g. "Sure! Here's more information on..."). By combining a role, rules and restrictions, explicit instructions, and an example, the model can be prompted to generate the desired response. You are a robot that only outputs JSON. You reply in JSON format with the field 'zip_code'. Example question: What is the zip code of the Empire State Building?
Example answer: {'zip_code': 10118} Now here is my question: What is the zip code of Menlo Park? # "{'zip_code': 94025}" Reduce Hallucinations. Meta's Responsible Use Guide is a great resource for understanding how best to prompt and how to address input/output risks of the language model; refer to pages 14-17. Here are some examples of how a language model might hallucinate and some strategies for fixing the issue: Example 1: a language model is asked to generate a response to a question about a topic it has not been trained on. The language model may hallucinate information or make up facts that are not accurate or supported by evidence. Fix: to fix this issue, you can provide the language model with more context or information about the topic to help it understand what is being asked and generate a more accurate response. You could also ask the language model to provide sources or evidence for any claims it makes to ensure that its responses are based on factual information. Example 2: a language model is asked to generate a response to a question that requires a specific perspective or point of view. The language model may hallucinate information or make up facts that are not consistent with the desired perspective or point of view. Fix: to fix this issue, you can provide the language model with additional information about the desired perspective or point of view, such as the goals, values, or beliefs of the person or entity being addressed. This can help the language model understand the context and generate a response that is more consistent with the desired perspective or point of view. Example 3: a language model is asked to generate a response to a question that requires a specific tone or style. The language model may hallucinate information or make up facts that are not consistent with the desired tone or style. Fix: to fix this issue, you can provide the language model with additional information about the desired tone or style, such as the audience or purpose of the communication. This can help the language model understand the context and generate a response that is more consistent with the desired tone or style. Overall, the key to avoiding hallucination in language models is to provide them with clear and accurate information and context, and to carefully monitor their responses to ensure that they are consistent with your expectations and requirements.
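Returning to the Self-Consistency technique above, here is a minimal sketch of the idea in Python, assuming a generate(prompt) helper that calls your model of choice. The helper name, the sample count, and the triple-backtick extraction convention are illustrative assumptions matching the prompt format shown earlier:
import re
from collections import Counter

def self_consistent_answer(generate, prompt, n_samples=5):
    # Sample several independent generations for the same prompt
    answers = []
    for _ in range(n_samples):
        output = generate(prompt)
        # Extract the answer the prompt asked to be wrapped in triple backticks
        match = re.search(r"```(.*?)```", output, re.DOTALL)
        if match:
            answers.append(match.group(1).strip())
    # Return the most frequently produced answer, if any
    return Counter(answers).most_common(1)[0][0] if answers else None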
-----------
-Validation | How-to guides
As the saying goes, if you can't measure it, you can't improve it. In this section, we are going to cover different ways to measure and ultimately validate Llama, so it's possible to determine the improvements provided by different fine-tuning techniques. Quantitative techniques. The focus of these techniques is to gather objective metrics that can be compared easily during and after each fine-tuning run and to provide quick feedback on whether the model is performing well. The main metrics collected are loss and perplexity. k-fold cross-validation. This method consists of dividing the dataset into k subsets, or folds, and then fine-tuning the model k times. On each run, a different fold is used as a validation dataset, using the rest for training. The performance results of each run are averaged out for the final report. This provides a more accurate metric of the performance of the model across the complete dataset, as all entries serve both for validation and training. While it produces the most accurate prediction of how a model is going to generalize after fine-tuning on a given dataset, it is computationally expensive and better suited for small datasets. Holdout. When using a holdout, the dataset is split into two or three subsets: training and validation, with test as optional. The test and validation sets can represent 10% - 30% of the dataset each. As the name implies, the first two subsets are used for training and validating the model during fine-tuning, while the third is used only after fine-tuning is complete to evaluate how well the model generalizes on data it has not seen in either phase. The advantage of having three partitions is that it provides a way to evaluate the model after fine-tuning for an unbiased view into the model performance, but it requires a slightly bigger dataset to allow for a proper split. This is currently implemented in the Llama recipes fine-tuning script with two subsets of the dataset, train and validation. The data is collected in a JSON file that can be plotted to easily interpret the results and evaluate how the model is performing. Standard Evaluation tools. There are multiple projects that provide standard evaluation. They provide predefined tasks with commonly used metrics to evaluate the performance of LLMs, like HellaSwag and TruthfulQA. These tools can be used to test whether the model has degraded after fine-tuning. Additionally, a custom task can be created using the dataset intended to fine-tune the model, effectively automating the manual verification of the model's performance before and after fine-tuning. These types of projects provide a quantitative way of looking at the model's performance in simulated real-world scenarios. Some of these projects include the LM Evaluation Harness (used to create the HF leaderboard), HELM, BIG-bench, and OpenCompass. As mentioned before, the torchtune library provides integration with the LM Evaluation Harness to test fine-tuned models as well. Interpreting Loss and Perplexity. The loss value used comes from the transformers LlamaForCausalLM, which initializes a different loss function depending on the objective required from the model.
The objective of this section is to give a brief overview of how to understand the results from loss and perplexity as an initial evaluation of the model's performance during fine-tuning. Perplexity is calculated as an exponentiation of the loss value. Additional information on loss functions can be found in a number of external resources. In our recipes, we use a simple holdout during fine-tuning. Using the logged loss values for both the train and validation datasets, the curves for both are plotted to analyze the results of the process. Given the setup in the recipe, the expected behavior is a log graph that shows diminishing train and validation loss values as training progresses. If the validation curve starts going up while the train curve continues decreasing, the model is overfitting and is not generalizing well. Some alternatives to test when this happens are early stopping, verifying that the validation dataset is a statistically significant equivalent of the train dataset, data augmentation, using parameter-efficient fine-tuning, or using k-fold cross-validation to better tune the hyperparameters. Qualitative techniques. Manual testing. Manually evaluating a fine-tuned model will vary according to the fine-tuning objective and available resources. Here we provide general guidelines on how to accomplish it. With a dataset prepared for fine-tuning, a part of it can be separated into a manual test subset, which can be further increased with general knowledge questions that might be relevant to the specific use case. In addition to these general questions, we recommend executing standard evaluations as well, and comparing the results with the baseline for the fine-tuned model. To rate the results, clear evaluation criteria should be defined that are relevant to the dataset being used. Example criteria are accuracy, coherence, and safety. Create a rubric for each criterion and define what would be required for an output to receive a specific score. With these guidelines in place, distribute the test questions to a diverse set of reviewers to have multiple data points for each question. With multiple data points for each question and different criteria, a final score can be calculated for each query, allowing for weighting the scores based on the preferred focus for the final model.
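Since perplexity is just the exponentiation of the average cross-entropy loss, it is straightforward to compute from a validation pass. A minimal sketch with transformers; the model name and the toy validation text are placeholders for illustration:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder; use your fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model.eval()

texts = ["Llama 2 is a family of large language models."]  # placeholder validation data
losses = []
with torch.no_grad():
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        # With labels set, LlamaForCausalLM returns the cross-entropy loss
        outputs = model(**inputs, labels=inputs["input_ids"])
        losses.append(outputs.loss)

mean_loss = torch.stack(losses).mean()
perplexity = torch.exp(mean_loss)  # perplexity is exp(loss)
print(f"loss={mean_loss.item():.4f} perplexity={perplexity.item():.2f}")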
-----------
-Meta Code Llama | Integration guides

Meta Code Llama is an open-source family of LLMs based on Llama 2 providing SOTA performance on code tasks. It consists of: Foundation models (Meta Code Llama), Python specializations (Meta Code Llama - Python), and Instruction-following models (Meta Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. See the recipes for examples of how to make use of Meta Code Llama. (Fig: The Meta Code Llama specialization pipeline. The different stages of fine-tuning are annotated with the number of tokens seen during training.)

One of the best ways to try out and integrate with Meta Code Llama is using the Hugging Face ecosystem by following the blog, which has:
- Demo links for all versions of Meta Code Llama
- Working inference code for code completion
- Working inference code for code infilling between code prefix and suffix as inputs
- Working inference code to do 4-bit loading of the 34B model so it can fit on consumer GPUs
- A guide on how to write prompts for the instruction models to have multi-turn conversations about coding
- A guide on how to use Text Generation Inference for model deployment in production
- A guide on how to integrate code autocomplete as an extension with VSCode
- A guide on how to evaluate Meta Code Llama models

If the model does not perform well on your specific task, for example if none of the Meta Code Llama models (7B/13B/34B/70B) generate the correct answer for a text to SQL task, fine-tuning should be considered. There is a complete guide and notebook on how to fine-tune Meta Code Llama using the 7B model hosted on Hugging Face. It uses the LoRA fine-tuning method and can run on a single GPU. As shown in the Meta Code Llama references, fine-tuning improves the performance of Meta Code Llama on SQL code generation, and it can be critical that LLMs are able to interoperate with structured data and SQL, the primary way to access structured data; we are developing demo apps in LangChain and RAG with Llama 2 to show this.

Compatible extensions

In most cases, the simplest method to integrate any model size is through ollama, occasionally combined with litellm. Ollama is a program that allows quantized versions of popular LLMs to run locally. It leverages the GPU and can even run Code Llama 34B on an M1 Mac. Litellm is a simple proxy that can serve an OpenAI-style API, so it's easy to replace OpenAI in existing applications, in our case, extensions.

Continue

This extension can be used with ollama, allowing for easy local-only execution. Additionally, it provides a simple interface to 1/ chat with the model directly running inside VS Code and 2/ select specific files and sections to edit or explain. This extension is an effective way to evaluate Llama because it provides simple and useful features. It also allows developers to build trust by creating diffs for each proposed change and showing exactly what is being changed before saving the file. Handling the context for the LLM is easy and relies heavily on keyboard shortcuts.
It's important to note that all interactions with the extension are recorded in jsonl format. The objective is to provide data for future fine tuning of the models based on the feedback recorded during real world usage.

Steps to install with ollama:
1. Install ollama and pull a model (e.g. ollama pull codellama:13b-instruct)
2. Install the extension from the Visual Studio Code marketplace
3. Open the extension and click on the + sign to add models
4. Select Ollama as a provider
5. In the next screen, select the model and size pulled with ollama
6. Select the model in the conversation and start using the extension

Steps to install with TGI

For better performance or usage on non-compatible hardware, TGI can be used on a server to run the model. For example, ollama on Intel Macs is too slow to be useful, even with the 7B models. On the contrary, M1 Macs can run the 34B Meta Code Llama models quickly. For this, you should have TGI running on a server with appropriate hardware. Once Continue.dev is installed, follow these steps:
1. Open the configs with /config
2. Use the HuggingFaceTGI class and pass your instance URL in the server_url parameter
3. Assign a name to it and save the config file.

llm-vscode

This extension from Hugging Face provides an open alternative to the closed-source GitHub Copilot, allowing the same functionality, context-based autocomplete suggestions, to work with open source models. It works out of the box with an HF token and their Inference API, but can be configured to use any TGI-compatible API. For usage with a self-hosted TGI server, follow these steps:
1. Install the extension from the marketplace
2. Open the extension configs
3. Select the correct template for the model published in your TGI instance in the Config Template field. For testing, we used the one named codellama/CodeLlama-13b-hf
4. Pass in the URL to your TGI instance in the Model ID or Endpoint field.

To avoid rate limiting messages, log in to HF by providing a read-only token. This was necessary even for a self-hosted instance. It currently does not support local models unless TGI is running locally. It would be great to add ollama support to this extension, as it would accelerate inference with the smaller models by avoiding the network.
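Before pointing either extension at a self-hosted TGI server, it can help to confirm the endpoint responds. A minimal sketch using TGI's standard /generate route (the host URL is a placeholder for your own instance):

```python
import requests

# Smoke test for a self-hosted TGI endpoint before wiring up an extension.
resp = requests.post(
    "http://my-tgi-host:8080/generate",  # replace with your TGI instance URL
    json={"inputs": "def fibonacci(n):", "parameters": {"max_new_tokens": 64}},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```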
-----------
-LangChain | Integration guides

LangChain is an open source framework for building LLM-powered applications. It implements common abstractions and higher-level APIs to make the app building process easier, so you don't need to call the LLM from scratch. The main building blocks/APIs of LangChain are:

- The Models or LLMs API can be used to easily connect to all popular LLM hosts such as Hugging Face or Replicate, where all types of Llama 2 models are hosted.
- The Prompts API implements the useful prompt template abstraction to help you easily reuse good, often long and detailed, prompts when building sophisticated LLM apps. There are also many built-in prompts for common operations such as summarization or connection to SQL databases for quick app development. Prompts can also work closely with output parsers to easily extract useful information from the LLM output.
- The Memory API can be used to save conversation history and feed it along with new questions to the LLM, so multi-turn natural conversation chat can be implemented.
- The Chains API includes the most basic LLMChain that combines an LLM with a prompt to generate the output, as well as more advanced chains that let you build sophisticated LLM apps in a systematic way. For example, the output of the first LLM chain can be the input/prompt of another chain, or a chain can have multiple inputs and/or multiple outputs, either pre-defined or dynamically decided by the LLM output of a prompt.
- The Indexes API allows documents outside of the LLM to be saved to a vector store, after first being converted to embeddings, which are numerical meaning representations of the documents in vector form. Later, when a user enters a question about the documents, the relevant data stored in the documents' vector store will be retrieved and sent, along with the query, to the LLM to generate an answer related to the documents.
- The Agents API uses the LLM as the reasoning engine and connects it with other sources of data, third-party or own tools, or APIs such as web search or Wikipedia APIs. Depending on the user's input, the agent can decide which tool to call to handle the input.

LangChain can be used as a powerful retrieval augmented generation (RAG) tool to integrate internal data or more recent public data with the LLM to QA or chat about the data. LangChain already supports loading many types of unstructured and structured data. To learn more about LangChain, enroll for free in the two LangChain short courses. Be aware that the code in the courses uses the OpenAI ChatGPT LLM, but we’ve published a series of examples using LangChain with Llama. There is also a Getting to Know Llama notebook, presented at Meta Connect.
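As a small illustration of the Models, Prompts and Chains building blocks, here is a sketch using the classic LangChain interfaces (these APIs may differ across LangChain versions; the Replicate model id is a placeholder and a REPLICATE_API_TOKEN is assumed in the environment, but any other LLM integration works the same way):

```python
from langchain.llms import Replicate
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Model + prompt template combined into the most basic chain.
llm = Replicate(model="meta/llama-2-7b-chat")  # placeholder model id
prompt = PromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(text="LangChain provides abstractions such as prompts, "
                     "chains, memory and agents for building LLM apps."))
```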
-----------
-LlamaIndex | Integration guides

LlamaIndex is another popular open source framework for building LLM applications. Like LangChain, LlamaIndex can also be used to build RAG applications by easily integrating data not built into the LLM with the LLM. There are three key tools in LlamaIndex:

- Connecting Data: connect data of any type - structured, unstructured or semi-structured - to the LLM
- Indexing Data: index and store the data
- Querying LLM: combine the user query and retrieved query-related data to query the LLM and return a data-augmented answer

LlamaIndex is mainly a data framework for connecting private or domain-specific data with LLMs, so it specializes in RAG, smart data storage and retrieval, while LangChain is a more general purpose framework which can be used to build agents connecting multiple tools. Integrating the two may provide the most performant and effective solution for building real world RAG-powered Llama apps. As an example of integrating LlamaIndex with Llama 2, we published a complete demo app showing how to use LlamaIndex to chat with Llama 2 about live data via the you.com API. It’s worth noting that LlamaIndex has implemented many RAG-powered LLM evaluation tools to easily measure the quality of retrieval and response, including:

- Question Generation: call the LLM to auto-generate questions to create an evaluation dataset.
- Faithfulness Evaluator: evaluate whether the generated answer is faithful to the retrieved context or whether there’s hallucination.
- Correctness Evaluator: evaluate whether the generated answer matches the reference answer.
- Relevancy Evaluator: evaluate whether the answer and the retrieved context are relevant and consistent for the given query.
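As a minimal sketch of those three steps (assuming your documents live in a local ./data folder and that an LLM and embedding model are already configured for llama_index; exact imports may vary across versions):

```python
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Connect data, index it, then query with retrieved context.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

print(query_engine.query("What does the documentation say about fine-tuning?"))
```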
-----------
-# Llama Recipes: Examples to get started using the Llama models from Meta The 'llama-recipes' repository is a companion to the [Meta Llama 3](https://github.com/meta-llama/llama3) models. The goal of this repository is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Meta Llama and other tools in the LLM ecosystem. The examples here showcase how to run Meta Llama locally, in the cloud, and on-prem. [Meta Llama 2](https://github.com/meta-llama/llama) is also supported in this repository. We highly recommend everyone to utilize [Meta Llama 3](https://github.com/meta-llama/llama3) due to its enhanced capabilities.

> [!IMPORTANT]
> Meta Llama 3 has a new prompt template and special tokens (based on the tiktoken tokenizer).
> | Token | Description |
> |---|---|
> | `<\|begin_of_text\|>` | This is equivalent to the BOS token. |
> | `<\|end_of_text\|>` | This is equivalent to the EOS token. For multiturn-conversations it's usually unused; instead, every message is terminated with `<\|eot_id\|>`. |
> | `<\|eot_id\|>` | This token signifies the end of the message in a turn, i.e. the end of a single message by a system, user or assistant role as shown below. |
> | `<\|start_header_id\|>{role}<\|end_header_id\|>` | These tokens enclose the role for a particular message. The possible roles can be: system, user, assistant. |
>
> A multiturn-conversation with Meta Llama 3 follows this prompt template:
> ```
> <|begin_of_text|><|start_header_id|>system<|end_header_id|>
>
> {{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
>
> {{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
>
> {{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
>
> {{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
> ```
> Each message gets trailed by an `<|eot_id|>` token before a new header is started, signaling a role change (a minimal Python sketch of assembling this template appears below).
> More details on the new tokenizer and prompt template can be found [here](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3#special-tokens-used-with-meta-llama-3).

> [!NOTE]
> The llama-recipes repository was recently refactored to promote a better developer experience of using the examples. Some files have been moved to new locations. The `src/` folder has NOT been modified, so the functionality of this repo and package is not impacted.
>
> Make sure you update your local clone by running `git pull origin main`
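Here is a minimal, illustrative Python sketch of assembling that template for a chat history (a sketch only; in practice, prefer the reference `ChatFormat` in the llama3 repo or a chat template from transformers):

```python
def build_llama3_prompt(system_prompt: str,
                        turns: list[tuple[str, str]],
                        next_user_message: str) -> str:
    """Assemble a Meta Llama 3 multiturn prompt from completed
    (user, assistant) turns plus the next user message, ending with an
    open assistant header so the model generates the next answer."""
    prompt = f"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>"
    for user_msg, assistant_msg in turns:
        prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{user_msg}<|eot_id|>"
        prompt += f"<|start_header_id|>assistant<|end_header_id|>\n\n{assistant_msg}<|eot_id|>"
    prompt += f"<|start_header_id|>user<|end_header_id|>\n\n{next_user_message}<|eot_id|>"
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt
```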
## Table of Contents
- [Llama Recipes: Examples to get started using the Meta Llama models from Meta](#llama-recipes-examples-to-get-started-using-the-llama-models-from-meta)
- [Table of Contents](#table-of-contents)
- [Getting Started](#getting-started)
  - [Prerequisites](#prerequisites)
    - [PyTorch Nightlies](#pytorch-nightlies)
  - [Installing](#installing)
    - [Install with pip](#install-with-pip)
    - [Install with optional dependencies](#install-with-optional-dependencies)
    - [Install from source](#install-from-source)
  - [Getting the Llama models](#getting-the-llama-models)
    - [Model conversion to Hugging Face](#model-conversion-to-hugging-face)
- [Repository Organization](#repository-organization)
  - [`recipes/`](#recipes)
  - [`src/`](#src)
- [Contributing](#contributing)
- [License](#license)

## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

### Prerequisites

#### PyTorch Nightlies

If you want to use PyTorch nightlies instead of the stable release, go to [this guide](https://pytorch.org/get-started/locally/) to retrieve the right `--extra-index-url URL` parameter for the `pip install` commands on your platform.

### Installing

Llama-recipes provides a pip distribution for easy install and usage in other projects. Alternatively, it can be installed from source.

> Ensure you use the correct CUDA version (from `nvidia-smi`) when installing the PyTorch wheels. Here we are using 11.8 as `cu118`.
> H100 GPUs work better with CUDA >12.0

#### Install with pip
```
pip install llama-recipes
```

#### Install with optional dependencies

Llama-recipes offers the installation of optional packages. There are three optional dependency groups. To run the unit tests we can install the required dependencies with:
```
pip install llama-recipes[tests]
```
For the vLLM example we need additional requirements that can be installed with:
```
pip install llama-recipes[vllm]
```
To use the sensitive topics safety checker install with:
```
pip install llama-recipes[auditnlg]
```
Optional dependencies can also be combined with [option1,option2].

#### Install from source

To install from source, e.g. for development, use these commands. We're using hatchling as our build backend, which requires an up-to-date pip as well as the setuptools package.
```
git clone git@github.com:meta-llama/llama-recipes.git
cd llama-recipes
pip install -U pip setuptools
pip install -e .
```
For development and contributing to llama-recipes please install all optional dependencies:
```
pip install -U pip setuptools
pip install -e .[tests,auditnlg,vllm]
```

### Getting the Meta Llama models

You can find Meta Llama models on Hugging Face hub [here](https://huggingface.co/meta-llama), **where models with `hf` in the name are already converted to Hugging Face checkpoints so no further conversion is needed**. The conversion step below is only for original model weights from Meta that are hosted on Hugging Face model hub as well.

#### Model conversion to Hugging Face

The recipes and notebooks in this folder are using the Meta Llama model definition provided by Hugging Face's transformers library.
Given that the original checkpoint resides under models/7B you can install all requirements and convert the checkpoint with:

```bash
## Install Hugging Face Transformers from source
pip freeze | grep transformers ## verify it is version 4.31.0 or higher

git clone git@github.com:huggingface/transformers.git
cd transformers
pip install protobuf
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
   --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```

## Repository Organization

Most of the code dealing with Llama usage is organized across 2 main folders: `recipes/` and `src/`.

### `recipes/`

Contains examples organized in folders by topic:
| Subfolder | Description |
|---|---|
| [quickstart](./recipes/quickstart) | The "Hello World" of using Llama, start here if you are new to using Llama. |
| [finetuning](./recipes/finetuning) | Scripts to finetune Llama on single-GPU and multi-GPU setups |
| [inference](./recipes/inference) | Scripts to deploy Llama for inference locally and using model servers |
| [use_cases](./recipes/use_cases) | Scripts showing common applications of Meta Llama3 |
| [responsible_ai](./recipes/responsible_ai) | Scripts to use PurpleLlama for safeguarding model outputs |
| [llama_api_providers](./recipes/llama_api_providers) | Scripts to run inference on Llama via hosted endpoints |
| [benchmarks](./recipes/benchmarks) | Scripts to benchmark Llama models inference on various backends |
| [code_llama](./recipes/code_llama) | Scripts to run inference with the Code Llama models |
| [evaluation](./recipes/evaluation) | Scripts to evaluate fine-tuned Llama models using `lm-evaluation-harness` from `EleutherAI` |

### `src/`

Contains modules which support the example recipes:
| Subfolder | Description |
|---|---|
| [configs](src/llama_recipes/configs/) | Contains the configuration files for PEFT methods, FSDP, Datasets, Weights & Biases experiment tracking. |
| [datasets](src/llama_recipes/datasets/) | Contains individual scripts for each dataset to download and process. |
| [inference](src/llama_recipes/inference/) | Includes modules for inference for the fine-tuned models. |
| [model_checkpointing](src/llama_recipes/model_checkpointing/) | Contains FSDP checkpoint handlers. |
| [policies](src/llama_recipes/policies/) | Contains FSDP scripts to provide different policies, such as mixed precision, transformer wrapping policy and activation checkpointing along with any precision optimizer (used for running FSDP with pure bf16 mode). |
| [utils](src/llama_recipes/utils/) | Utility files: `train_utils.py` provides the training/eval loop and more train utilities; `dataset_utils.py` gets preprocessed datasets; `config_utils.py` overrides the configs received from the CLI; `fsdp_utils.py` provides the FSDP wrapping policy for PEFT methods; `memory_utils.py` is a context manager to track different memory stats in the train loop. |

## Contributing

Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.

## License

See the License file for Meta Llama 3 [here](https://llama.meta.com/llama3/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama3/use-policy/)

See the License file for Meta Llama 2 [here](https://llama.meta.com/llama2/license/) and Acceptable Use Policy [here](https://llama.meta.com/llama2/use-policy/)
-----------
-# **Model Details**

Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.

**Model Developers** Meta

**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

||Training Data|Params|Context Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10⁻⁴|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10⁻⁴|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10⁻⁴|

**Llama 2 family of models.** Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. The 70B version uses Grouped-Query Attention (GQA) for improved inference scalability.

**Model Dates** Llama 2 was trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "Llama-2: Open Foundation and Fine-tuned Chat Models", available at https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/.

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md).

# **Intended Use**

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 2 Community License. Use in languages other than English.**

**Note: Developers may fine-tune Llama 2 models for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy.

# **Hardware and Software**

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W).
Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO2eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO2 emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

# **Training Data**

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

# **Evaluation Results**

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

|Model|Size|TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|Model|Size|TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

# **Ethical Considerations and Limitations**

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios.
For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)
-----------
-# Llama 2

We are unlocking the power of large language models. Llama 2 is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and fine-tuned Llama language models — ranging from 7B to 70B parameters. This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/).

## Updates post-launch

See [UPDATES.md](UPDATES.md). Also, for a running list of frequently asked questions, see [here](https://ai.meta.com/llama/faq/).

## Download

In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download.

Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link.

### Access to Hugging Face

We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama). You can request access to the models by acknowledging the license and filling the form in the model card of a repo. After doing so, you should get access to all the Llama models of a version (Code Llama, Llama 2, or Llama Guard) within 1 hour.

## Quick Start

You can follow the steps below to quickly get up and running with Llama 2 models. These steps will let you run quick inference locally. For more examples, see the [Llama 2 recipes repository](https://github.com/facebookresearch/llama-recipes).

1. In a conda env with PyTorch / CUDA available, clone and download this repository.
2. In the top-level directory run: `pip install -e .`
3. Visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and register to download the model/s.
4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script.
5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script.
   - Make sure to grant execution permissions to the download.sh script
   - During this process, you will be prompted to enter the URL from the email.
   - Do not use the “Copy Link” option but rather make sure to manually copy the link from the email.
6. Once the model/s you want have been downloaded, you can run the model locally using the command below:
```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-2-7b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```
**Note**
- Replace `llama-2-7b-chat/` with the path to your checkpoint directory and `tokenizer.model` with the path to your tokenizer model.
- The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using.
- Adjust the `max_seq_len` and `max_batch_size` parameters as needed.
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file.

## Inference

Different models require different model-parallel (MP) values:

|  Model | MP |
|--------|----|
| 7B     | 1  |
| 13B    | 2  |
| 70B    | 8  |

All models support sequence length up to 4096 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware.

### Pretrained Models

These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value):
```bash
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir llama-2-7b/ \
    --max_seq_len 128 --max_batch_size 4
```

### Fine-tuned Chat Models

The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces); a minimal sketch of this format appears at the end of this section. You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code.

Examples using llama-2-7b-chat:
```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir llama-2-7b-chat/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 512 --max_batch_size 6
```

Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](Responsible-Use-Guide.pdf). More details can be found in our research paper as well.

## Issues

Please report any software “bug”, or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Model Card

See [MODEL_CARD.md](MODEL_CARD.md).

## License

Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md)

## References

1. [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)
2. [Llama 2 technical overview](https://ai.meta.com/resources/models-and-libraries/llama)
3. [Open Innovation AI Research Community](https://ai.meta.com/llama/open-innovation-ai-research-community/)

For common questions, the FAQ can be found [here](https://ai.meta.com/llama/faq/), which will be kept up to date over time as new questions arise.

## Original Llama

The repo for the original llama release is in the [`llama_v1`](https://github.com/facebookresearch/llama/tree/llama_v1) branch.
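Returning to the chat format described in the Fine-tuned Chat Models section above, here is a minimal single-turn sketch of the `[INST]`/`<<SYS>>` structure (illustrative only; the reference `chat_completion` implementation in llama/generation.py handles multi-turn dialogue and the BOS/EOS tokens):

```python
def format_llama2_chat(system_prompt: str, user_message: str) -> str:
    """Single-turn Llama 2 chat prompt; BOS/EOS tokens are added by the
    tokenizer and are therefore omitted here."""
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(format_llama2_chat("You answer questions about Llama 2.", "What is Llama 2?"))
```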
-----------
-## Model Details

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.

**Model developers** Meta

**Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.

**Input** Models input text only.

**Output** Models generate text and code only.

**Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| Llama 3 | A new mix of publicly available online data. | 70B | 8k | Yes | 15T+ | December, 2023 |

**Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.

**Model Release Date** April 18, 2024.

**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

**Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).

## Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/) and [Llama 3 Community License](https://llama.meta.com/llama3/license/). Use in languages other than English.**

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the [Llama 3 Community License](https://llama.meta.com/llama3/license/) and the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.

| | Time (GPU hours) | Power Consumption (W) | Carbon Emitted (tCO2eq) |
|---|---|---|---|
| Llama 3 8B | 1.3M | 700 | 390 |
| Llama 3 70B | 6.4M | 700 | 1900 |
| Total | 7.7M | | 2290 |

**CO2 emissions during pre-training**. Time: total GPU time required for training each model.
Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.

## Benchmarks

In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_details.md).

### Base pretrained models

| Category | Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |

### Instruction tuned models

| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |

### Responsibility & Safety

We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.

Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from model pre-training and fine-tuning to the deployment of systems composed of safeguards that tailor the safety needs specifically to the use case and audience.

As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
#### Llama 3-Instruct

As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.

Safety

For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any large language model, residual risks will likely remain, and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.

Refusals

In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.

#### Responsible release

In addition to the responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.

Misuse

If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).

#### Critical risks

CBRNE (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)

We have conducted a twofold assessment of the safety of the model in this area:

* Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
* Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).

### Cyber Security

We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).

### Child Safety

Child Safety risk assessments were conducted using a team of experts to assess the model’s capability to produce outputs that could result in Child Safety risks, and to inform any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development.
For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content, while taking account of market-specific nuances and experiences.

### Community

Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and are widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [GitHub repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.

## Ethical Considerations and Limitations

The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.

But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows, specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)

## Citation instructions

@article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}}

## Contributors

Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Amit Sangani; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Ash JJhaveri; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hamid Shojanazeri; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh;
Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Puxin Xu; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
-----------
-# Meta Llama 3

We are unlocking the power of large language models. Our latest version of Llama is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly. This release includes model weights and starting code for pre-trained and instruction-tuned Llama 3 language models — including sizes of 8B to 70B parameters.

This repository is a minimal example of loading Llama 3 models and running inference. For more detailed examples, see [llama-recipes](https://github.com/facebookresearch/llama-recipes/).

## Download

To download the model weights and tokenizer, please visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then, run the download.sh script, passing the URL provided when prompted to start the download.

Pre-requisites: Ensure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`. Remember that the links expire after 24 hours and a certain number of downloads. You can always re-request a link if you start seeing errors such as `403: Forbidden`.

### Access to Hugging Face

We also provide downloads on [Hugging Face](https://huggingface.co/meta-llama), in both transformers and native `llama3` formats. To download the weights from Hugging Face, please follow these steps:

- Visit one of the repos, for example [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
- Read and accept the license. Once your request is approved, you'll be granted access to all the Llama 3 models. Note that requests used to take up to one hour to get processed.
- To download the original native weights to use with this repo, click on the "Files and versions" tab and download the contents of the `original` folder. You can also download them from the command line if you `pip install huggingface-hub`:
```
huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir meta-llama/Meta-Llama-3-8B-Instruct
```
- To use with transformers, the following [pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines) snippet will download and cache the weights:
```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)
```

## Quick Start

You can follow the steps below to get up and running with Llama 3 models quickly. These steps will let you run quick inference locally. For more examples, see the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes).

1. Clone and download this repository in a conda env with PyTorch / CUDA.
2. In the top-level directory run: `pip install -e .`
3. Visit the [Meta Llama website](https://llama.meta.com/llama-downloads/) and register to download the model/s.
4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script.
5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script.
   - Make sure to grant execution permissions to the download.sh script
   - During this process, you will be prompted to enter the URL from the email.
   - Do not use the “Copy Link” option; copy the link from the email manually.
6. Once the model/s you want have been downloaded, you can run the model locally using the command below:

   torchrun --nproc_per_node 1 example_chat_completion.py \
       --ckpt_dir Meta-Llama-3-8B-Instruct/ \
       --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
       --max_seq_len 512 --max_batch_size 6

- Replace `Meta-Llama-3-8B-Instruct/` with the path to your checkpoint directory and `Meta-Llama-3-8B-Instruct/tokenizer.model` with the path to your tokenizer model.
- The `--nproc_per_node` should be set to the [MP](#inference) value for the model you are using.
- Adjust the `max_seq_len` and `max_batch_size` parameters as needed.
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository, but you can change that to a different .py file.

Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 8B    | 1  |
| 70B   | 8  |

All models support sequence length up to 8192 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware.

### Pretrained Models

These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-3-8b model (`nproc_per_node` needs to be set to the `MP` value):

torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir Meta-Llama-3-8B/ \
    --tokenizer_path Meta-Llama-3-8B/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4

### Instruction-tuned Models

The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, specific formatting defined in [`ChatFormat`](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py#L202) needs to be followed: The prompt begins with a `<|begin_of_text|>` special token, after which one or more messages follow. Each message starts with the `<|start_header_id|>` tag, the role `system`, `user` or `assistant`, and the `<|end_header_id|>` tag. After a double newline `\n\n`, the message's contents follow. The end of each message is marked by the `<|eot_id|>` token (see the sketch after this section for an illustration).

You can also deploy additional classifiers to filter out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/meta-llama/llama-recipes/blob/main/recipes/inference/local_inference/inference.py) of how to add a safety checker to the inputs and outputs of your inference code.

Examples using llama-3-8b-chat:

torchrun --nproc_per_node 1 example_chat_completion.py \
    --ckpt_dir Meta-Llama-3-8B-Instruct/ \
    --tokenizer_path Meta-Llama-3-8B-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 6

Llama 3 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. To help developers address these risks, we have created the [Responsible Use Guide](https://ai.meta.com/static-resource/responsible-use-guide/).

Please report any software “bug” or other problems with the models through one of the following means:
- Reporting issues with the model: [https://github.com/meta-llama/llama3/issues](https://github.com/meta-llama/llama3/issues)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

Our model and weights are licensed for researchers and commercial entities, upholding the principles of openness.
Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md).

## Questions

For common questions, the FAQ can be found [here](https://llama.meta.com/faq), which will be updated over time as new questions arise.
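To make the `ChatFormat` layout described above concrete, here is a minimal sketch (an illustration added for this recipe, not code from the Llama 3 repo) that assembles a single-turn prompt by hand; in practice you would use the tokenizer's `ChatFormat` helper rather than raw strings:

```python
# Illustrative only: builds the Llama 3 chat prompt layout described above.
def format_llama3_prompt(system: str, user: str) -> str:
    prompt = "<|begin_of_text|>"
    for role, content in (("system", system), ("user", user)):
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>"
    # End with an assistant header so the model generates the reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(format_llama3_prompt("You are a helpful assistant.", "What is Llama 3?"))
```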
-----------
-# Code Llama

## **Model Details**

**Model Developers** Meta AI

**Variations** Code Llama comes in four model sizes, and three variants: 1) Code Llama: our base models are designed for general code synthesis and understanding 2) Code Llama - Python: designed specifically for Python 3) Code Llama - Instruct: for instruction following and safer deployment. All variants are available in sizes of 7B, 13B, 34B and 70B parameters.

**Input** Models input text only.

**Output** Models output text only.

**Model Architecture** Code Llama and its variants are autoregressive language models using optimized transformer architectures. Code Llama 7B, 13B and 70B additionally support infilling text generation. All models but Code Llama - Python 70B and Code Llama - Instruct 70B were fine-tuned with up to 16K tokens, and support up to 100K tokens at inference time.

**Model Dates** Code Llama and its variants have been trained between January 2023 and January 2024.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)".

**Where to send comments** Instructions on how to provide feedback or comments on the model can be found in the model [README](README.md), or by opening an issue in the GitHub repository ([https://github.com/facebookresearch/codellama/](https://github.com/facebookresearch/codellama/)).

## **Intended Use**

**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistance and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## **Hardware and Software**

**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed by Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 12 Code Llama models required 1400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 228.55 tCO2eq, 100% of which were offset by Meta’s sustainability program.

**Training data** All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). Code Llama - Instruct uses additional instruction fine-tuning data.

**Evaluation Results** See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
## **Ethical Considerations and Limitations**

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
-----------
-# Introducing Code Llama

Code Llama is a family of large language models for code based on [Llama 2](https://github.com/facebookresearch/llama) providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content.

Code Llama was developed by fine-tuning Llama 2 using a higher sampling of code. As with Llama 2, we applied considerable safety mitigations to the fine-tuned versions of the model. For detailed information on model training, architecture and parameters, evaluations, responsible AI and safety refer to our [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/). Output generated by code generation features of the Llama Materials, including Code Llama, may be subject to third party licenses, including, without limitation, open source licenses.

We are unlocking the power of large language models and our latest version of Code Llama is now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly. This release includes model weights and starting code for pretrained and fine-tuned Llama language models — ranging from 7B to 34B parameters. This repository is intended as a minimal example to load [Code Llama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) models and run inference.

In order to download the model weights and tokenizers, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download. Make sure that you copy the URL text itself, **do not use the 'Copy link address' option** when you right click the URL. If the copied URL text starts with https://download.llamameta.net, you copied it correctly. If the copied URL text starts with https://l.facebook.com, you copied it the wrong way.

Pre-requisites: make sure you have `wget` and `md5sum` installed. Then to run the script: `bash download.sh`. Keep in mind that the links expire after 24 hours and a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link.

### Model sizes

| Model | Size     |
|-------|----------|
| 7B    | ~12.55GB |
| 13B   | 24GB     |
| 34B   | 63GB     |
| 70B   | 131GB    |
## Setup

In a conda environment with PyTorch / CUDA available, clone the repo and run in the top-level directory:

pip install -e .

Different models require different model-parallel (MP) values:

| Model | MP |
|-------|----|
| 7B    | 1  |
| 13B   | 2  |
| 34B   | 4  |
| 70B   | 8  |

All models, except the 70B python and instruct versions, support sequence lengths up to 100,000 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware and use-case.

### Pretrained Code Models

The Code Llama and Code Llama - Python models are not fine-tuned to follow instructions. They should be prompted so that the expected answer is the natural continuation of the prompt. See `example_completion.py` for some examples. To illustrate, see the command below to run it with the `CodeLlama-7b` model (`nproc_per_node` needs to be set to the `MP` value):

torchrun --nproc_per_node 1 example_completion.py \
    --ckpt_dir CodeLlama-7b/ \
    --tokenizer_path CodeLlama-7b/tokenizer.model \
    --max_seq_len 128 --max_batch_size 4

Pretrained code models are: the Code Llama models `CodeLlama-7b`, `CodeLlama-13b`, `CodeLlama-34b`, `CodeLlama-70b` and the Code Llama - Python models `CodeLlama-7b-Python`, `CodeLlama-13b-Python`, `CodeLlama-34b-Python`, `CodeLlama-70b-Python`.

### Code Infilling

Code Llama and Code Llama - Instruct 7B and 13B models are capable of filling in code given the surrounding context. See `example_infilling.py` for some examples. The `CodeLlama-7b` model can be run for infilling with the command below (`nproc_per_node` needs to be set to the `MP` value):

torchrun --nproc_per_node 1 example_infilling.py \
    --ckpt_dir CodeLlama-7b/ \
    --tokenizer_path CodeLlama-7b/tokenizer.model \
    --max_seq_len 192 --max_batch_size 4

Pretrained infilling models are: the Code Llama models `CodeLlama-7b` and `CodeLlama-13b` and the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`.

### Fine-tuned Instruction Models

Code Llama - Instruct models are fine-tuned to follow instructions. To get the expected features and performance for the 7B, 13B and 34B variants, a specific formatting defined in [`chat_completion()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L319-L361) needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and linebreaks in between (we recommend calling `strip()` on inputs to avoid double-spaces); a sketch of this format appears at the end of this document. `CodeLlama-70b-Instruct` requires a separate turn-based prompt format defined in [`dialog_prompt_tokens()`](https://github.com/facebookresearch/codellama/blob/main/llama/generation.py#L506-L548). You can use `chat_completion()` directly to generate answers with all instruct models; it will automatically perform the required formatting.

You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-recipes repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/inference/safety_utils.py) of how to add a safety checker to the inputs and outputs of your inference code.

Examples using `CodeLlama-7b-Instruct`:

torchrun --nproc_per_node 1 example_instructions.py \
    --ckpt_dir CodeLlama-7b-Instruct/ \
    --tokenizer_path CodeLlama-7b-Instruct/tokenizer.model \
    --max_seq_len 512 --max_batch_size 4

Fine-tuned instruction-following models are: the Code Llama - Instruct models `CodeLlama-7b-Instruct`, `CodeLlama-13b-Instruct`, `CodeLlama-34b-Instruct`, `CodeLlama-70b-Instruct`.
Code Llama is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios. In order to help developers address these risks, we have created the [Responsible Use Guide](https://github.com/facebookresearch/llama/blob/main/Responsible-Use-Guide.pdf). More details can be found in our research papers as well.

Please report any software “bug”, or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/codellama](http://github.com/facebookresearch/codellama)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

See [MODEL_CARD.md](MODEL_CARD.md) for the model card of Code Llama.

Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. See the [LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE) file, as well as our accompanying [Acceptable Use Policy](https://github.com/facebookresearch/llama/blob/main/USE_POLICY.md).

1. [Code Llama Research Paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)
2. [Code Llama Blog Post](https://ai.meta.com/blog/code-llama-large-language-model-coding/)
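As an illustration of the `INST`/`<<SYS>>` formatting referenced above, here is a hand-rolled sketch (not the repo's `chat_completion()` implementation, which should be preferred in practice, since it also handles `BOS`/`EOS` tokens and tokenization):

```python
# Illustrative sketch of the single-turn Llama 2 / Code Llama - Instruct
# prompt layout; chat_completion() applies this formatting for you.
def format_instruct_prompt(system: str, user: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

print(format_instruct_prompt(
    "Provide answers in Python.",
    "Write a function that reverses a string.",
))
```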
-----------
-# Purple Llama

Purple Llama is an umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release will include tools and evals for Cyber Security and Input/Output safeguards, but we plan to contribute more in the near future.

## Why purple?

Borrowing a [concept](https://www.youtube.com/watch?v=ab_Fdp6FVDI) from the cybersecurity world, we believe that to truly mitigate the challenges which generative AI presents, we need to take both attack (red team) and defensive (blue team) postures. Purple teaming, composed of both red and blue team responsibilities, is a collaborative approach to evaluating and mitigating potential risks, and the same ethos applies to generative AI; hence our investment in Purple Llama will be comprehensive.

Components within the Purple Llama project will be licensed permissively, enabling both research and commercial usage. We believe this is a major step towards enabling community collaboration and standardizing the development and usage of trust and safety tools for generative AI development. More concretely, evals and benchmarks are licensed under the MIT license while any models use the Llama 2 Community license. See the table below:

| **Component Type** | **Components** | **License** |
| :----------------- | :----------------------------------: | :---------: |
| Evals/Benchmarks   | Cyber Security Eval (others to come) | MIT |
| Models             | Llama Guard                          | [Llama 2 Community License](https://github.com/facebookresearch/PurpleLlama/blob/main/LICENSE) |
| Models             | Llama Guard 2                        | Llama 3 Community License |
| Safeguard          | Code Shield                          | MIT |

## Evals & Benchmarks

### Cybersecurity

#### CyberSec Eval v1

CyberSec Eval v1 was, we believe, the first industry-wide set of cybersecurity safety evaluations for LLMs. These benchmarks are based on industry guidance and standards (e.g., CWE and MITRE ATT&CK) and built in collaboration with our security subject matter experts. We aim to provide tools that will help address some risks outlined in the [White House commitments on developing responsible AI](https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/), including:

* Metrics for quantifying LLM cybersecurity risks.
* Tools to evaluate the frequency of insecure code suggestions.
* Tools to evaluate LLMs to make it harder to generate malicious code or aid in carrying out cyberattacks.

We believe these tools will reduce the frequency of LLMs suggesting insecure AI-generated code and reduce their helpfulness to cyber adversaries. Our initial results show that there are meaningful cybersecurity risks for LLMs, both with recommending insecure code and for complying with malicious requests. See our [Cybersec Eval paper](https://ai.meta.com/research/publications/purple-llama-cyberseceval-a-benchmark-for-evaluating-the-cybersecurity-risks-of-large-language-models/) for more details.
#### CyberSec Eval 2

CyberSec Eval 2 expands on its predecessor by measuring an LLM’s propensity to abuse a code interpreter, offensive cybersecurity capabilities, and susceptibility to prompt injection. You can read the paper [here](https://ai.meta.com/research/publications/cyberseceval-2-a-wide-ranging-cybersecurity-evaluation-suite-for-large-language-models/). You can also check out the 🤗 leaderboard [here](https://huggingface.co/spaces/facebook/CyberSecEval).

## System-Level Safeguards

As we outlined in Llama 3’s [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/), we recommend that all inputs and outputs to the LLM be checked and filtered in accordance with content guidelines appropriate to the application (a minimal sketch of this pattern appears at the end of this section).

### Llama Guard

To support this, and empower the community, we released Llama Guard, an openly-available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs. As part of our ongoing commitment to open and transparent science, we also released our methodology and an extended discussion of model performance in our [Llama Guard paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/).

We are happy to share an updated version, Meta Llama Guard 2. Llama Guard 2 was optimized to support the newly [announced](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) policy published by MLCommons, expanding its coverage to a more comprehensive set of safety categories, out-of-the-box. It also comes with better classification performance than Llama Guard 1 and improved zero-shot and few-shot adaptability. Ultimately, our vision is to enable developers to customize this model to support relevant use cases and to make it easier to adopt best practices and improve the open ecosystem.

### Code Shield

Code Shield adds support for inference-time filtering of insecure code produced by LLMs. Code Shield offers mitigation of insecure code suggestions risk, code interpreter abuse prevention, and secure command execution. [CodeShield Example Notebook](https://github.com/meta-llama/PurpleLlama/blob/main/CodeShield/notebook/CodeShieldUsageDemo.ipynb).

To get started and learn how to use Purple Llama components with Llama models, see the getting started guide [here](https://ai.meta.com/llama/get-started/). The guide provides information and resources to help you set up Llama, including how to access the model, hosting how-to information and integration guides. Additionally, you will find supplemental materials to further assist you while responsibly building with Llama. The guide will be updated as more Purple Llama components get released.

## FAQ

For a running list of frequently asked questions, for not only Purple Llama components but also generally for Llama models, see the FAQ [here](https://ai.meta.com/llama/faq/).

## Join the Purple Llama community

See the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.
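To illustrate the input/output checking recommended above, here is a minimal sketch of wrapping an LLM call with a safeguard classifier. `moderate` and `generate` are hypothetical stand-ins for this illustration, not a real Purple Llama API:

```python
# Hypothetical stand-ins: swap in a Llama Guard-style classifier and your LLM.
def moderate(text: str) -> bool:
    """Return True when a safeguard model classifies `text` as safe."""
    return "unsafe-example" not in text  # placeholder logic

def generate(prompt: str) -> str:
    """Placeholder for the actual LLM call."""
    return f"(model response to: {prompt})"

def safe_chat(prompt: str) -> str:
    if not moderate(prompt):        # prompt (input) classification
        return "Sorry, I can't help with that request."
    response = generate(prompt)
    if not moderate(response):      # response (output) classification
        return "Sorry, I can't share the generated response."
    return response

print(safe_chat("How do I get started with Llama 3?"))
```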
-----------
-# Model Details

Meta Llama Guard 2 is an 8B parameter Llama 3-based [1] LLM safeguard model. Similar to [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), it can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM – it generates text in its output that indicates whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated. Below is a response classification example input and output for Llama Guard 2.

In order to produce classifier scores, we look at the probability for the first token, and use that as the “unsafe” class probability. We can then apply score thresholding to make binary decisions.

# Harm Taxonomy and Policy

The model is trained to predict safety labels on the 11 categories shown below, based on the [MLCommons taxonomy](https://mlcommons.org/2024/04/mlc-aisafety-v0-5-poc/) of hazards.

Harm categories:
S1: Violent Crimes
S2: Non-Violent Crimes
S3: Sex-Related Crimes
S4: Child Sexual Exploitation
S5: Specialized Advice
S6: Privacy
S7: Intellectual Property
S8: Indiscriminate Weapons
S9: Hate
S10: Suicide & Self-Harm
S11: Sexual Content

**S1: Violent Crimes** Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse)

**S2: Non-Violent Crimes** Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking)

**S3: Sex-Related Crimes** Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution

**S4: Child Sexual Exploitation** Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children

**S5: Specialized Advice** Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe

**S6: Privacy** Responses that contain sensitive, nonpublic personal information that could undermine someone's physical, digital, or financial security

**S7: Intellectual Property** Responses that may violate the intellectual property rights of any third party

**S8: Indiscriminate Weapons** Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions)

**S9: Hate** Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease)

**S10: Suicide & Self-Harm** Responses that enable,
encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating

**S11: Sexual Content** Responses that contain erotica

# Training and Evaluation Data

We start with the training set of Llama Guard (see [model card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md#training-data)), and obtain labels on the Harm Taxonomy described above. To improve adaptability of the model to different prompts, we train on hard samples, which are obtained by taking an existing sample and prompting Llama2 70B to produce an alternate policy description that will flip the label of the given sample.

We report metrics for various models and APIs on our validation set, which is obtained from combining the validation set of Llama Guard v1 and held-out samples from the additional Llama 3 safety data. We compare performance on our internal test set, as well as on open datasets like [XSTest](https://github.com/paul-rottger/exaggerated-safety?tab=readme-ov-file#license), [OpenAI moderation](https://github.com/openai/moderation-api-release), and [BeaverTails](https://github.com/PKU-Alignment/beavertails).

We find that there is overlap between our training set and the BeaverTails-30k test split. Since both our internal test set and BeaverTails use prompts from Anthropic's [hh-rlhf dataset](https://github.com/anthropics/hh-rlhf) as a starting point for curating data, it is possible that different splits of Anthropic were used while creating the two datasets. Therefore, to prevent leakage of signal between our train set and the BeaverTails-30k test set, we create our own BeaverTails-30k splits based on the Anthropic train-test splits used for creating our internal sets.

*Note on evaluations*: As discussed in the Llama Guard [paper](https://arxiv.org/abs/2312.06674), comparing model performance is not straightforward as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning Llama Guard 2 with the Proof of Concept MLCommons taxonomy, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space.

# Model Performance

We evaluate the performance of Llama Guard 2 and compare it with Llama Guard and popular content moderation APIs such as Azure, OpenAI Moderation, and Perspective. We use the token probability of the first output token (i.e. safe/unsafe) as the score for classification. For obtaining a binary classification decision from the score, we use a threshold of 0.5.

Llama Guard 2 improves over Llama Guard, and outperforms other approaches on our internal test set. Note that we manage to achieve great performance while keeping a low false positive rate, as we know that over-moderation can impact user experience when building LLM-applications.
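As a rough illustration of the first-token scoring described above, the sketch below computes an "unsafe" probability and applies the 0.5 threshold with Hugging Face transformers. The model id and the single-token treatment of "unsafe" are simplifying assumptions for this recipe, not the official scoring code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-Guard-2-8B"  # assumed HF repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def unsafe_score(guard_prompt: str, threshold: float = 0.5):
    """Score = probability of 'unsafe' as the first generated token."""
    inputs = tokenizer(guard_prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]           # next-token logits
    probs = torch.softmax(logits, dim=-1)
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    p_unsafe = probs[unsafe_id].item()
    return p_unsafe, p_unsafe > threshold                # score, binary decision
```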
| **Model**                 | **F1 ↑**  | **AUPRC ↑** | **False Positive Rate ↓** |
|---------------------------|:---------:|:-----------:|:-------------------------:|
| Llama Guard\*             | 0.665     | 0.854       | 0.027                     |
| Llama Guard 2             | **0.915** | **0.974**   | 0.040                     |
| GPT4                      | 0.796     | N/A         | 0.151                     |
| OpenAI Moderation API     | 0.347     | 0.669       | 0.030                     |
| Azure Content Safety API  | 0.519     | N/A         | 0.245                     |
| Perspective API           | 0.265     | 0.586       | 0.046                     |

Table 1: Comparison of performance of various approaches measured on our internal test set. \*The performance of Llama Guard is lower on our new test set due to expansion of the number of harm categories from 6 to 11, which is not aligned to what Llama Guard was trained on.

| **Category**           | **False Negative Rate\* ↓** | **False Positive Rate ↓** |
|------------------------|:---------------------------:|:-------------------------:|
| Violent Crimes         | 0.042 | 0.002 |
| Privacy                | 0.057 | 0.004 |
| Non-Violent Crimes     | 0.082 | 0.009 |
| Intellectual Property  | 0.099 | 0.004 |
| Hate                   | 0.190 | 0.005 |
| Specialized Advice     | 0.192 | 0.009 |
| Sexual Content         | 0.229 | 0.004 |
| Indiscriminate Weapons | 0.263 | 0.001 |
| Child Exploitation     | 0.267 | 0.000 |
| Sex Crimes             | 0.275 | 0.002 |
| Self-Harm              | 0.277 | 0.002 |

Table 2: Category-wise breakdown of false negative rate and false positive rate for Llama Guard 2 on our internal benchmark for response classification with safety labels from the ML Commons taxonomy. \*The binary safe/unsafe label is used to compute categorical FNR by using the true categories. We do not penalize the model while computing FNR for cases where the model predicts the correct overall label but an incorrect categorical label.

We also report performance on OSS safety datasets, though we note that the policy used for assigning safety labels is not aligned with the policy used while training Llama Guard 2. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate on the XSTest and OpenAI Moderation datasets, demonstrating good adaptability to other policies.

The BeaverTails dataset has a lower bar for a sample to be considered unsafe compared to Llama Guard 2's policy. The policy and training data of MDJudge [4] is more aligned with this dataset, and we see that it performs better on them as expected (at the cost of a higher FPR). GPT-4 achieves high recall on all of the sets but at the cost of very high FPR (9-25%), which could hurt its ability to be used as a safeguard for practical applications.
| (F1 ↑ / False Positive Rate ↓) | False Refusals (XSTest) | OpenAI policy (OpenAI Mod) | BeaverTails policy (BeaverTails-30k) |
|--------------------------------|:-----------------------:|:--------------------------:|:------------------------------------:|
| Llama Guard                    | 0.737 / 0.079           | 0.599 / 0.035              |                                      |
| Llama Guard 2                  | 0.884 / 0.084           | 0.807 / 0.060              | 0.736 / 0.059                        |
| MDJudge                        | 0.856 / 0.172           | 0.768 / 0.212              | 0.849 / 0.098                        |
| GPT4                           | 0.895 / 0.128           | 0.842 / 0.092              | 0.802 / 0.256                        |
| OpenAI Mod API                 | 0.576 / 0.040           | 0.788 / 0.156              | 0.284 / 0.056                        |

Table 3: Comparison of performance of various approaches measured on our internal test set for response classification. NOTE: The policy used for training Llama Guard does not align with those used for labeling these datasets. Still, Llama Guard 2 provides a superior tradeoff between F1 score and False Positive Rate across these datasets, demonstrating strong adaptability to other policies.

We hope to provide developers with a high-performing moderation solution for most use cases by aligning the Llama Guard 2 taxonomy with the MLCommons standard. But as outlined in our Responsible Use Guide, each use case requires specific safety considerations, and we encourage developers to tune Llama Guard 2 for their own use case to achieve better moderation for their custom policies. As an example of how Llama Guard 2's performance may change, we train on the BeaverTails training dataset and compare against MDJudge (which was trained on BeaverTails among others).

| **Model**                   | **F1 ↑**  | **False Positive Rate ↓** |
|:----------------------------|:---------:|:-------------------------:|
| Llama Guard 2               | 0.736     | 0.059                     |
| MDJudge                     | 0.849     | 0.098                     |
| Llama Guard 2 + BeaverTails | **0.852** | 0.101                     |

Table 4: Comparison of performance on BeaverTails-30k.

# Limitations

There are some limitations associated with Llama Guard 2. First, Llama Guard 2 itself is an LLM fine-tuned on Llama 3. Thus, its performance (e.g., judgments that need common sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data.

Second, Llama Guard 2 is finetuned for safety classification only (i.e. to generate "safe" or "unsafe"), and is not designed for chat use cases. However, since it is an LLM, it can still be prompted with any text to obtain a completion.

Lastly, as an LLM, Llama Guard 2 may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. However, with the help of external components (e.g., KNN, perplexity filter), recent work (e.g., [3]) demonstrates that Llama Guard is able to detect harmful content reliably.

**Note on Llama Guard 2's policy** Llama Guard 2 supports 11 out of the 13 categories included in the [MLCommons AI Safety](https://mlcommons.org/working-groups/ai-safety/ai-safety/) taxonomy. The Election and Defamation categories are not addressed by Llama Guard 2 as moderating these harm categories requires access to up-to-date, factual information sources and the ability to determine the veracity of a particular output. To support the additional categories, we recommend using other solutions (e.g. Retrieval Augmented Generation) in tandem with Llama Guard 2 to evaluate information correctness.
# Citation

@misc{metallamaguard2,
  author =       {Llama Team},
  title =        {Meta Llama Guard 2},
  howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard2/MODEL_CARD.md}},
  year =         {2024}
}

# References

[1] [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
[2] [Llama Guard Model Card](https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard/MODEL_CARD.md)
[3] [RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content](https://arxiv.org/pdf/2403.13031.pdf)
[4] [MDJudge for Salad-Bench](https://huggingface.co/OpenSafetyLab/MD-Judge-v0.1)
-----------
-# Meta Llama Guard 2

Llama Guard 2 is a model that provides input and output guardrails for LLM deployments, based on the MLCommons policy.

# Download

In order to download the model weights and tokenizer, please visit the [Meta website](https://llama.meta.com/llama-downloads) and accept our License. Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download.

Pre-requisites: Make sure you have wget and md5sum installed. Then to run the script: `./download.sh`. Keep in mind that the links expire after 24 hours and a certain number of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link.

# Quick Start

Since Llama Guard 2 is a fine-tuned Llama3 model (see our [model card](MODEL_CARD.md) for more information), the same quick start steps outlined in our [README file](https://github.com/meta-llama/llama3/blob/main/README.md) for Llama3 apply here. In addition to that, we added examples using Llama Guard 2 in the [Llama recipes repository](https://github.com/facebookresearch/llama-recipes).

# Issues

Please report any software bug, or other problems with the models through one of the following means:
- Reporting issues with the Llama Guard model: [github.com/meta-llama/PurpleLlama](https://github.com/meta-llama/PurpleLlama)
- Reporting issues with Llama in general: [github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](https://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](https://facebook.com/whitehat/info)

# License

Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity, while fostering an environment of discovery and ethical AI advancements. The same license as Llama 3 applies: see the [LICENSE](../LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md).

[Research Paper](https://ai.facebook.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/)
-----------
-Llama Guard is a 7B parameter [Llama 2](https://arxiv.org/abs/2307.09288)-based input-output safeguard model. It can be used for classifying content in both LLM inputs (prompt classification) and in LLM responses (response classification). It acts as an LLM: it generates text in its output that indicates whether a given prompt or response is safe/unsafe, and if unsafe based on a policy, it also lists the violating subcategories.

In order to produce classifier scores, we look at the probability for the first token, and turn that into an “unsafe” class probability. Model users can then make binary decisions by applying a desired threshold to the probability scores.

# Training and Evaluation

We use a mix of prompts that come from the Anthropic [dataset](https://github.com/anthropics/hh-rlhf) and redteaming examples that we have collected in house, in a separate process from our production redteaming. In particular, we took the prompts only from the Anthropic dataset, and generated new responses from our in-house LLaMA models, using jailbreaking techniques to elicit violating responses. We then annotated the Anthropic data (prompts & responses) in house, mapping labels according to the categories identified above. Overall we have ~13K training examples.

## Taxonomy of harms and Risk Guidelines

As automated content risk mitigation relies on classifiers to make decisions about content in real time, a prerequisite to building these systems is to have the following components:

- A **taxonomy** of risks that are of interest – these become the classes of a classifier.
- A **risk guideline** that determines where we put the line between encouraged and discouraged outputs for each risk category in the taxonomy.

Together with this model, we release an open taxonomy inspired by existing open taxonomies such as those employed by Google, Microsoft and OpenAI in the hope that it can be useful to the community. This taxonomy does not necessarily reflect Meta's own internal policies and is meant to demonstrate the value of our method to tune LLMs into classifiers that show high performance and high degrees of adaptability to different policies.

### The Llama Guard Safety Taxonomy & Risk Guidelines

Below, we provide both the harm types themselves under this taxonomy and also examples of the specific kinds of content that would be considered harmful under each category:

- **Violence & Hate** encompasses statements that encourage or could help people plan or engage in violence. Similarly, statements that advocate discrimination, contain slurs, or voice hateful sentiments against people based on their sensitive personal characteristics (ex: race, color, religion, national origin, sexual orientation, gender, gender identity, or disability) would also be considered inappropriate under this category.
- **Sexual Content** encompasses statements encouraging someone (who could be underage) to engage in specific sex acts. Similarly, sexually explicit (i.e., erotic) statements would also be considered inappropriate under this category.
- **Guns & Illegal Weapons** encompasses statements that specifically encourage, condone, or could help people plan or execute specific crimes involving the illegal acquisition, creation, or use of guns or illegal weapons (ex: explosives, biological agents, or chemical weapons), including instructions on how to create or use them.
- **Regulated or Controlled Substances** encompasses statements that specifically encourage or could help people to illegally produce, transfer, or consume regulated or controlled substances like illegal drugs, tobacco, alcohol, or cannabis.
- **Suicide & Self Harm** encompasses statements that encourage, condone, or enable people to harm themselves (ex: by providing instructions or information on methods of self-harm). When people express or imply an intent to harm themselves, any response other than one that provides links to appropriate health resources would be considered inappropriate (even if entirely supportive).
- **Criminal Planning** encompasses miscellaneous statements that encourage, condone, or could help people plan or execute specific criminal activities, like arson, kidnapping, or theft. Items should only be considered harmful under this category when they could not be read as violating any of the other harm types above (ex: statements that encourage violence should be considered violating under Violence & Hate rather than this category).

## Evaluation results

We compare the performance of the model against standard content moderation APIs in the industry, including [OpenAI](https://platform.openai.com/docs/guides/moderation/overview), [Azure Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories), and [PerspectiveAPI](https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US) from Google, on both public and in-house benchmarks. The public benchmarks include [ToxicChat](https://huggingface.co/datasets/lmsys/toxic-chat) and [OpenAI Moderation](https://github.com/openai/moderation-api-release).

Note: comparisons are not exactly apples-to-apples due to mismatches in each taxonomy. The interested reader can find a more detailed discussion about this in our [paper](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/).

|                 | Our Test Set (Prompt) | OpenAI Mod | ToxicChat | Our Test Set (Response) |
| --------------- | --------------------- | ---------- | --------- | ----------------------- |
| Llama Guard     | **0.945**             | 0.847      | **0.626** | **0.953**               |
| OpenAI API      | 0.764                 | **0.856**  | 0.588     | 0.769                   |
| Perspective API | 0.728                 | 0.787      | 0.532     | 0.699                   |
-----------
-Hamel’s Blog - Optimizing latency

Summary

Below is a summary of my findings:

🏁 mlc is the fastest. This is so fast that I’m skeptical and am now motivated to measure quality (if I have time). When checking the outputs manually, they didn’t seem that different than other approaches.

❤️ CTranslate2 is my favorite tool, which is among the fastest but is also the easiest to use. The documentation is the best out of all of the solutions I tried. Furthermore, I think that the ergonomics are excellent for the models that they support. Unlike vLLM, CTranslate doesn’t seem to support distributed inference just yet.

🛠️ vLLM is really fast, but CTranslate can be much faster. On the other hand, vLLM supports distributed inference, which is something you will need for larger models. vLLM might be the sweet spot for serving very large models.

😐 Text Generation Inference is an ok option (but nowhere near as fast as mlc) if you want to deploy HuggingFace LLMs in a standard way. TGI has some nice features like telemetry baked in (via OpenTelemetry) and integration with the HF ecosystem like inference endpoints. One thing to note is that as of 7/28/2023, the license for TGI was changed to be more restrictive, which may interfere with certain commercial uses. I am personally not a fan of the license.

Rough Benchmarks

This study focuses on various approaches to optimizing latency. Specifically, I want to know which tools are the most effective at optimizing latency for open source LLMs. In order to focus on latency, I hold the following variables constant:

- batch size of n = 1 for all prediction requests (holding throughput constant).
- All experiments were conducted on a Nvidia A6000 GPU, unless otherwise noted.
- Max output tokens were always set to 200.
- All numbers are calculated as an average over a fixed set of 9 prompts.
- The model used is meta-llama/Llama-2-7b-hf on the HuggingFace Hub.

In addition to a batch size of 1 and using an A6000 GPU (unless noted otherwise), I also made sure I warmed up the model by sending an initial inference request before measuring latency.

Llama-v2-7b benchmark: batch size = 1, max output tokens = 200 (platform names in the first column are inferred from the surrounding discussion; cells lost in extraction are left blank):

| platform | options | gpu | avg tok/sec | avg time (seconds) | avg output token count |
|----------|---------|-----|-------------|--------------------|------------------------|
| CTranslate2 | float16 quantization | A6000 | 44.8 | 4.5 | 200.0 |
| CTranslate2 | int8 quantization | A6000 | 62.6 | 3.2 | |
| HF Hosted Inference Endpoint | | A10G | 30.4 | 6.6 | 202.0 |
| HuggingFace Transformers (no server) | | A6000 | 24.6 | 7.5 | 181.4 |
| HuggingFace Transformers (no server) | nf4 4bit quantization bitsandbytes | A6000 | 24.3 | 7.6 | |
| text-generation-inference | | A6000 | 21.1 | 9.5 | |
| text-generation-inference | quantized w/ GPTQ | A6000 | 23.6 | 8.8 | |
| text-generation-inference | quantized w/ bitsandbytes | A6000 | 1.9 | 103.0 | |
| mlc | q4f16 | A6000 | 117.1 | 1.3 | 153.9 |
| text-generation-webui | exllama | A6000 | 77.0 | 1.7 | 134.0 |
| vllm | | A100 (on Modal Labs) | 41.5 | 3.4 | 143.1 |
| vllm | | A6000 | 46.4 | | 178.0 |

In some cases I did not use an A6000 b/c the platform didn’t have that particular GPU available. You can ignore these rows if you like, but I still think it is valuable information. I had access to an A6000, so I just used what I had. I noticed that the output of the LLM was quite different (fewer tokens) when using mlc. I am not sure if I did something wrong here, or if it changes the behavior of the LLM.

Furthermore, the goal was not to be super precise on these benchmarks but rather to get a general sense of how things work and how they might compare to each other out of the box. Some of the tools above are inference servers which perform logging, tracing etc. in addition to optimizing models, which affects latency. The idea is to see where there are significant differences between tools.
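For reference, the per-request numbers above boil down to a simple measurement. Here is a minimal sketch of how avg tok/sec and avg time can be computed, with a dummy `generate_fn` standing in for any of the clients benchmarked in this study:

```python
import time

def timed_generation(generate_fn, prompt: str) -> dict:
    """Time one generation call; generate_fn returns (text, completion_token_count)."""
    start = time.perf_counter()
    _text, n_tokens = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    return {"tokens": n_tokens, "seconds": elapsed, "tok_per_sec": n_tokens / elapsed}

if __name__ == "__main__":
    # Dummy stand-in so the sketch runs; replace with a real client call.
    dummy = lambda p: (p.upper(), len(p.split()))
    print(timed_generation(dummy, "What is the capital of California?"))
```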
I discussed this more below.

Background

One capability you need to be successful with open source LLMs is the ability to serve models efficiently. There are two categories of tools for model inference:

- Inference servers: these help with providing a web server that can provide a REST/grpc or other interface to interact with your model as a service. These inference servers usually have parameters to help you make trade-offs between throughput and latency. Additionally, some inference servers come with additional features like telemetry, model versioning and more. You can learn more about this topic in the serving section of these notes. For LLMs, a popular inference server is Text Generation Inference (TGI).
- Model Optimization: these modify your model to make it faster for inference. Examples include quantization, Paged Attention, Exllama, and more.

It is common to use both inference servers and model optimization techniques in conjunction. Some inference servers even help you apply optimization techniques.

Notes On Tools

Other than benchmarking, an important goal of this study was to understand how to use different platforms & tools.

mlc

Start with compiling the model as shown in these docs. After installing MLC, you can compile meta-llama/Llama-2-7b-chat-hf like so:

python3 -m mlc_llm.build \
    --hf-path meta-llama/Llama-2-7b-chat-hf \
    --target cuda --quantization q4f16_1

The arguments for the compilation are documented there. This puts the model in the ./dist/ folder with the name Llama-2-7b-chat-hf-q4f16_1. You can use their python client to interact with the compiled model:

```python
from mlc_chat import ChatModule, ChatConfig

prompt = "What is the capital of California?"
cfg = ChatConfig(max_gen_len=200)  # max output tokens, per the study setup
cm = ChatModule(model="Llama-2-7b-chat-hf-q4f16_1", chat_config=cfg)
output = cm.generate(prompt=prompt)
```

The full benchmarking code is available in the original post.

Warning: I wasn’t able to get meta-llama/Llama-2-7b-hf to run correctly with the supplied python client, so I am using the chat variant (Llama-2-7b-chat-hf) as a proxy. I asked the kind folks who work on the mlc project and they said the python client is currently designed for chat, such that they have this system prompt that is hard coded for llama models:

```
conv.system = ("[INST] <<SYS>>\n\nYou are a helpful, respectful and honest assistant. "
    "Always answer as helpfully as possible, while being safe. "
    "Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, "
    "or illegal content. "
    "Please ensure that your responses are socially unbiased and positive in nature.\n\n"
    "If a question does not make any sense, or is not factually coherent, explain why instead "
    "of answering something not correct. "
    "If you don't know the answer to a question, please don't share false "
    "information.\n<</SYS>>\n\n ");
```

If you want to fix this, you must edit mlc-chat-config.json, changing conv_template to LM. These docs say more about the config.json. The config file is located in ./dist/<model>/params/mlc-chat-config.json. For example:

> cat ./dist/Llama-2-7b-hf-q4f16_1/params/mlc-chat-config.json
{
  "model_lib": "Llama-2-7b-hf-q4f16_1",
  "local_id": "Llama-2-7b-hf-q4f16_1",
  "conv_template": "llama-2",
  "temperature": 0.7,
  "repetition_penalty": 1.0,
  "top_p": 0.95,
  "mean_gen_len": 128,
  "max_gen_len": 512,
  "shift_fill_factor": 0.3,
  "tokenizer_files": ["tokenizer.json", "tokenizer.model"],
  "model_category": "llama",
  "model_name": "Llama-2-7b-hf"
}

CTranslate2

CTranslate2 is an optimization tool that can make models ridiculously fast. h/t to Anton. The documentation for CTranslate2 contains specific instructions for llama models. To optimize llama v2, we first need to quantize the model.
This can be done like so:

ct2-transformers-converter --model meta-llama/Llama-2-7b-hf --quantization int8 --output_dir llama-2-7b-ct2 --force

meta-llama/Llama-2-7b-hf refers to the HuggingFace repo for this model. The benchmarking code is as follows:

```python
import time
import sys
sys.path.append('../common/')
from questions import questions  # the fixed set of 9 benchmark prompts
import pandas as pd
import ctranslate2
import transformers

generator = ctranslate2.Generator("llama-2-7b-ct2", device="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

def predict(prompt: str):
    "Generate text given a prompt"
    start = time.perf_counter()
    tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
    results = generator.generate_batch(
        [tokens],
        sampling_topk=1,
        max_length=200,  # max output tokens, per the study setup
        include_prompt_in_result=False,
    )
    tokens = results[0].sequences_ids[0]
    output = tokenizer.decode(tokens)
    request_time = time.perf_counter() - start
    return {
        'tok_count': len(tokens),
        'time': request_time,
        'question': prompt,
        'answer': output,
        'note': 'CTranslate2 int8 quantization',
    }

if __name__ == '__main__':
    counter = 0
    responses = []
    for q in questions:
        result = predict(q)
        if counter >= 1:  # the first request warms up the model and is not recorded
            responses.append(result)
        counter += 1
    df = pd.DataFrame(responses)
    df.to_csv('bench-ctranslate-int8.csv', index=False)
```

Text Generation Inference (TGI)

License Restrictions: The license for TGI was recently changed away from Apache 2.0 to be more restrictive. Be careful when using TGI in commercial applications.

Text generation inference, which is often referred to as “TGI”, was easy to use without any optimization. You can run it like this:

“start_server.sh”

```bash
#!/bin/bash
if [ -z "$HUGGING_FACE_HUB_TOKEN" ]; then
  echo "HUGGING_FACE_HUB_TOKEN is not set. Please set it before running this script."
  exit 1
fi

# example model used for the GPTQ tests: "TheBloke/Llama-2-7B-GPTQ"
volume=$PWD/data

docker run --gpus all --shm-size 5g -p 8081:80 \
  -e HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
  -e GPTQ_BITS=4 -e GPTQ_GROUPSIZE=128 \
  -v $volume:/data ghcr.io/huggingface/text-generation-inference \
  --max-best-of 1 "$@"   # remaining flags (e.g. --model-id) are passed through
```

We can then run the server with this command:

bash start_server.sh --model-id <model-id>

Help: You can see all the options for the TGI container with the help flag like so:

docker run ghcr.io/huggingface/text-generation-inference --help | less

Quantization was very difficult to get working. There is a `--quantize` flag which accepts `gptq`. The approach makes inference much slower, which others have reported.

To make quantization work for llama v2 models requires a bunch of work; you have to install the text-generation-server, which can take a while and is very brittle to get right. I had to step through the Makefile carefully. After that you have to download the weights with:

text-generation-server download-weights meta-llama/Llama-2-7b-hf

You can run the following command to perform the quantization (the last argument is the destination directory where the weights are stored):

text-generation-server quantize meta-llama/Llama-2-7b-hf data/quantized/

However, this step is not needed for the most popular models, as someone will likely already have quantized and uploaded them to the Hub.

Pre-Quantized Models

Alternatively, you can use a pre-quantized model that has been uploaded to the Hub. TheBloke/Llama-2-7B-GPTQ is a good example of one. To get this to work, you have to be careful to set the GPTQ_BITS and GPTQ_GROUPSIZE environment variables to match the config. For example, this config necessitates setting GPTQ_BITS=4 and GPTQ_GROUPSIZE=128; these are already set in the start_server.sh script shown above. This PR will eventually fix that.

To use the pre-quantized model with TGI, I can use the same bash script with the following arguments:

bash start_server.sh --model-id TheBloke/Llama-2-7B-GPTQ --quantize gptq

Comparison Without TGI Server

When I first drafted this study I got the following response on twitter: Based on your code (https://t.co/hSYaPTsEaK) it seems like you measure the full HTTP request, which is like comparing trees to an apple.
— Philipp Schmid (@_philschmid) July 29, 2023

Philipp certainly has a point! I am indeed testing both! I’m looking for big differences in tools here, and since some inference servers have optimization tools, and some optimization tools do not have an inference server, I cannot do a true apples-to-apples comparison. However, I think it's still useful to try different things as advertised to see what is possible, and also take note of really significant gaps in latency between tools. Therefore, I ran the following tests to perform similar optimizations as TGI, but without the server, to see what happened:

HuggingFace Transformers

I was able to get slightly better performance without the TGI server, as predicted by Philipp, but it did not account for the massive gap between some tools (which is exactly the kind of thing I was looking for). To benchmark quantization with bitsandbytes, I followed this blog post and wrote this benchmarking code. I quantized the model by loading it like this:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model_nf4 = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=nf4_config)
```

Unlike TGI, I was able to get bitsandbytes to work properly here, but just like TGI it didn’t speed anything up for me with respect to inference latency. As reflected in the benchmark table, I got nearly the same results with transformers without any optimizations.

I also quantized the model with GPTQ without an inference server to compare against TGI. The results were so bad (~5 tok/sec) that I decided not to put this in the table, because it seemed quite off to me.

Text Generation WebUI

Aman let me know about text-generation-web-ui, and also these instructions for quickly experimenting with ExLlama and ggml. I wasn’t able to get the ggml variant to work properly, unfortunately. If you are really serious about using exllama, I recommend trying to use it without the text generation UI and look at the repo, specifically at test_benchmark_inference.py. (I didn’t have time for this, but if I was going to use exllama for anything serious I would go this route).

From the root of the repo, you can run the following commands to start an inference server optimized with exllama:

python download-model.py TheBloke/Llama-2-7B-GPTQ
python server.py --listen --extensions openai --loader exllama_hf --model TheBloke_Llama-2-7B-GPTQ

After the server was started, I conducted the benchmark. Overall, I didn’t like this particular piece of software much. It’s a bit bloated because it’s trying to do too many things at once (an inference server, web UIs, and other optimizations). That being said, the documentation is good and it is easy to use. I don’t think there is any particular reason to use this unless you want an end-to-end solution that also comes with a web user-interface (which many people want!).

vLLM

vLLM only works with CUDA 11.8, which I configured using this approach. After configuring CUDA and installing the right version of PyTorch, you need to install the bleeding edge from git:

pip install -U git+https://github.com/vllm-project/vllm.git

A good recipe to use for vLLM can be found in these Modal docs. Surprisingly, I had much lower latency when running on a local A6000 vs. a hosted A100 on Modal Labs. It’s possible that I did something wrong here. Currently, vLLM is the fastest solution for when you need distributed inference (i.e. when your model doesn’t fit on a single GPU).
Text Generation WebUI

Aman let me know about text-generation-web-ui, and also these instructions for quickly experimenting with ExLlama and ggml. I wasn't able to get the ggml variant to work properly, unfortunately. If you are really serious about using ExLlama, I recommend trying to use it without the text generation UI and looking at the repo, specifically at test_benchmark_inference.py. (I didn't have time for this, but if I was going to use ExLlama for anything serious I would go this route.)

From the root of the repo, you can run the following commands to start an inference server optimized with ExLlama:

```bash
python download-model.py TheBloke/Llama-2-7B-GPTQ
python server.py --listen --extensions openai --loader exllama_hf --model TheBloke_Llama-2-7B-GPTQ
```

After the server was started, I used its OpenAI-compatible endpoint to conduct the benchmark. Overall, I didn't like this particular piece of software much. It's a bit bloated because it's trying to do too many things at once (an inference server, Web UIs, and other optimizations). That being said, the documentation is good and it is easy to use. I don't think there is any particular reason to use this unless you want an end-to-end solution that also comes with a web user interface (which many people want!).

vLLM

vLLM only works with CUDA 11.8, which I configured using this approach. After configuring CUDA and installing the right version of PyTorch, you need to install the bleeding edge from git:

```bash
pip install -U git+https://github.com/vllm-project/vllm.git
```

A good recipe to use for vLLM can be found in these Modal docs. Surprisingly, I had much lower latency when running locally vs. on a hosted A100 on Modal Labs. It's possible that I did something wrong here. Currently, vLLM is the fastest solution for when you need distributed inference (i.e. when your model doesn't fit on a single GPU).

vLLM offers a server, but I benchmarked the model locally using their tools instead. The benchmarking code is as follows:

```python
import os
import pandas as pd
from huggingface_hub import snapshot_download
from vllm import SamplingParams, LLM

# from https://modal.com/docs/guide/ex/vllm_inference
questions = [
    # Coding questions
    "Implement a Python function to compute the Fibonacci numbers.",
    "Write a Rust function that performs binary exponentiation.",
    "What are the differences between Javascript and Python?",
    # Literature
    "Write a story in the style of James Joyce about a trip to the Australian outback in 2083, to see robots in the beautiful desert.",
    "Who does Harry turn into a balloon?",
    "Write a tale about a time-traveling historian who's determined to witness the most significant events in human history.",
    # Math
    "What is the product of 9 and 8?",
    "If a train travels 120 kilometers in 2 hours, what is its average speed?",
    "Think through this step by step. If the sequence a_n is defined by a_1 = 3, a_2 = 5, and a_n = a_(n-1) + a_(n-2) for n > 2, find a_6.",
]

MODEL_DIR = "/home/ubuntu/hamel-drive/vllm-models"

def download_model_to_folder():
    snapshot_download("meta-llama/Llama-2-7b-hf",
                      local_dir=MODEL_DIR,
                      token=os.environ["HUGGING_FACE_HUB_TOKEN"])
    return LLM(MODEL_DIR)

def generate(question, llm, note=None):
    response = {'question': question, 'note': note}
    # top_p/max_tokens values were lost in extraction; typical settings shown
    sampling_params = SamplingParams(temperature=1.0, top_p=1, max_tokens=200)
    result = llm.generate(question, sampling_params)
    for output in result:
        response['tok_count'] = len(output.outputs[0].token_ids)
        response['answer'] = output.outputs[0].text
    return response

if __name__ == '__main__':
    llm = download_model_to_folder()
    responses = []
    for q in questions:
        responses.append(generate(question=q, llm=llm, note='vLLM'))
    df = pd.DataFrame(responses)
    df.to_csv('bench-vllm.csv', index=False)
```

HuggingFace Inference Endpoint

I deployed an inference endpoint on HuggingFace for this model, on an Nvidia A10G GPU. I didn't try to turn on any optimizations like quantization and wanted to see what the default performance would be like. The documentation for these interfaces can be found on HuggingFace, and there is also a Python client. Their documentation says they are using TGI under the hood. However, my latency was significantly faster on their hosted inference platform than using TGI locally. This could be due to differences in hardware (an A10G on their platform vs. my local setup). It's worth looking into why this discrepancy exists further.

Footnotes

It is common to explore the latency vs. throughput frontier when conducting inference benchmarks. I did not do this, since I was most interested in latency. Here is an example of how to conduct inference benchmarks that consider both throughput and latency. ↩︎

For Llama v2 models, you must be careful to use the models ending in -hf, as those are the ones that are compatible with the transformers library.

The Modular Inference Engine is another example of an inference server that also applies optimization techniques. At the time of this writing, this is proprietary technology, but it's worth keeping an eye on in the future.
-----------
-Achieve 23x LLM Inference Throughput & Reduce p50 Latency

How continuous batching enables 23x throughput in LLM inference while reducing p50 latency

By Cade Daniel, Chen Shen, Eric Liang and Richard Liaw | June 22, 2023

In this blog, we'll cover the basics of large language model (LLM) inference and highlight inefficiencies in traditional batching policies. We'll introduce continuous batching and discuss benchmark results for existing batching systems such as HuggingFace's text-generation-inference and vLLM. By leveraging vLLM, users can achieve 23x LLM inference throughput while reducing p50 latency.

Due to the large GPU memory footprint and compute cost of LLMs, serving dominates the compute cost for most real-world applications. ML engineers often treat LLMs like "black boxes" that can only be optimized with internal changes such as quantization and custom CUDA kernels. However, this is not entirely the case. Because LLMs iteratively generate their output, and because LLM inference is often memory- and not compute-bound, there are surprising system-level batching optimizations that make 10x or more differences in real-world workloads.

One recently proposed such optimization is continuous batching, also known as dynamic batching, or batching with iteration-level scheduling. We wanted to see how this optimization performs. We will get into details below, including how we simulate a production workload, but to summarize our findings:

- Up to 23x throughput improvement using continuous batching and continuous batching-specific memory optimizations (using vLLM).
- 8x throughput over naive batching by using continuous batching (both on Ray Serve and Hugging Face's text-generation-inference).
- 4x throughput over naive batching by using an optimized model implementation (NVIDIA's FasterTransformer).

You can try out continuous batching today: see this example to run vLLM on Ray Serve.

The remainder of this blog is structured as follows:

- We'll cover the basics of how LLM inference works and highlight inefficiencies in traditional request-based dynamic batching policies.
- We'll introduce continuous batching and how it answers many of the inefficiencies of request-based dynamic batching.
- We'll then discuss our benchmarks and the implications this has on how to serve LLM models cost-effectively.

The basics of LLM inference

There is a lot to know about LLM inference, and we refer users to Efficient Inference on a Single GPU and Optimization story: Bloom inference for more detail. However, at a high level, LLM inference is pretty straightforward. For each request:

1. You start with a sequence of tokens (called the "prefix" or "prompt").
2. The LLM produces a sequence of completion tokens, stopping only after producing a stop token or reaching a maximum sequence length.

This is an iterative process. You get one additional completion token for each new forward pass of the model. For example, suppose you prompt with the sentence "What is the capital of California: "; it would take ten forward pass iterations to get back the full response of ["S", "a", "c", "r", "a", "m", "e", "n", "t", "o"].
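The iterative loop just described can be rendered as a toy program. This is purely illustrative (the `next_token` function below is a stand-in for one forward pass of a real model; it just spells out "Sacramento" one character-token at a time):

```python
# Toy rendering of the iterative decoding loop: one new token per "forward pass".
ANSWER = list("Sacramento") + ["<eos>"]

def next_token(tokens: list, step: int) -> str:
    return ANSWER[step]  # a real model would run a forward pass here

tokens = list("What is the capital of California: ")
step = 0
while True:
    tok = next_token(tokens, step)
    if tok == "<eos>" or len(tokens) >= 64:  # stop token or max sequence length
        break
    tokens.append(tok)
    step += 1

print("".join(tokens[-10:]))  # -> "Sacramento"
```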
This example simplifies things a little bit because in actuality tokens do not map 1:1 to ASCII characters (a popular token encoding technique is Byte-Pair Encoding, which is beyond the scope of this blog post), but the iterative nature of generation is the same regardless of how you tokenize your sequences.

Figure: Simplified LLM inference. This toy example shows a hypothetical model which supports a maximum sequence length of 8 tokens (T1, T2, ..., T8). Starting from the prompt tokens (yellow), the iterative process generates a single token at a time (blue). Once the model generates an end-of-sequence token (red), the generation loop stops. This example shows a batch of only one input sequence, so the batch size is 1.

Now that we understand the simplicity of the iterative process, let's dive deeper with some things you may not know about LLM inference:

- The initial ingestion ("prefill") of the prompt "What is the capital of California: " takes about as much time as the generation of each subsequent token. This is because the prefill phase pre-computes some inputs of the attention mechanism that remain constant over the lifetime of the generation. The prefill phase efficiently uses the GPU's parallel compute because these inputs can be computed independently of each other.
- LLM inference is memory-IO bound, not compute bound. In other words, it currently takes more time to load 1MB of data to the GPU's compute cores than it does for those compute cores to perform LLM computations on 1MB of data. This means that LLM inference throughput is largely determined by how large a batch you can fit into high-bandwidth GPU memory. See this page in the NVIDIA docs for more details.
- The amount of GPU memory consumed scales with the base model size plus the length of the token sequence. In Numbers every LLM developer should know, it's estimated that a 13B-parameter model consumes nearly 1MB of state for each token in a sequence. On a higher-end A100 GPU with 40GB RAM, back-of-the-envelope math suggests that since 14GB are left after storing the 26GB of model parameters, ~14k tokens can be held in memory at once. This may seem high but is actually quite limiting; if we limit our sequence lengths to 512, we can process at most ~28 sequences in a batch. The problem is worse for higher sequence lengths; a sequence length of 2048 means our batch size is limited to 7 sequences. Note that this is an upper bound, since it doesn't leave room for storing intermediate computations. (This arithmetic is spelled out in the short sketch below.)

What this all means is that there is substantial "room on the table", so to speak, if you can optimize memory usage. This is why approaches such as model quantization are potentially so powerful: if you could halve the memory usage by moving from 16-bit to 8-bit representations, you could double the space available for larger batch sizes. However, not all strategies require modifications to the model weights. For example, FlashAttention found significant throughput improvements by reorganizing the attention computation to require less memory-IO. Continuous batching is another memory optimization technique which does not require modification of the model. We next explain how naive batching works (and is inefficient), and how continuous batching increases the memory-efficiency of LLM generation.
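Reproducing the back-of-the-envelope memory math from the bullet list above as code (a sketch; real deployments also need room for activations and intermediate buffers):

```python
gpu_mem_gb = 40          # A100
params_b = 13            # 13B-parameter model
bytes_per_param = 2      # fp16
kv_mb_per_token = 1      # ~1MB of state per token (rule of thumb cited above)

weights_gb = params_b * bytes_per_param          # 26 GB of weights
free_gb = gpu_mem_gb - weights_gb                # 14 GB left for token state
max_tokens = free_gb * 1024 / kv_mb_per_token    # ~14k tokens held at once

for seq_len in (512, 2048):
    print(f"seq_len={seq_len}: max batch ≈ {int(max_tokens // seq_len)}")
# seq_len=512: max batch ≈ 28
# seq_len=2048: max batch ≈ 7
```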
LLM batching explained

GPUs are massively parallel compute architectures, with compute rates (measured in floating-point operations per second, or FLOPS) in the teraflop (A100) or even petaflop (H100) range. Despite these staggering amounts of compute, LLMs struggle to achieve saturation because so much of the chip's memory bandwidth is spent loading model parameters. Batching is one way to improve the situation; instead of loading new model parameters each time you have an input sequence, you can load the model parameters once and then use them to process many input sequences. This more efficiently uses the chip's memory bandwidth, leading to higher compute utilization, higher throughput, and cheaper LLM inference.

Naive batching / static batching

We call this traditional approach to batching static batching, because the size of the batch remains constant until the inference is complete. Here's an illustration of static batching in the context of LLM inference:

Figure: Completing four sequences using static batching. On the first iteration (left), each sequence generates one token (blue) from the prompt tokens (yellow). After several iterations (right), the completed sequences each have different sizes because each emits their end-of-sequence token (red) at different iterations. Even though sequence 3 finished after two iterations, static batching means that the GPU will be underutilized until the last sequence in the batch finishes generation (in this example, sequence 2 after six iterations).

Unlike traditional deep learning models, batching for LLMs can be tricky due to the iterative nature of their inference. Intuitively, this is because requests can "finish" earlier in a batch, but it is tricky to release their resources and add new requests to the batch that may be at different completion states. This means that the GPU is underutilized as the generation lengths of different sequences in a batch differ from the largest generation length of the batch. In the figure on the right above, this is illustrated by the white squares after the end-of-sequence tokens for sequences 1, 3, and 4. (A toy calculation of this waste follows this section.)

How often does static batching under-utilize the GPU? It depends on the generation lengths of sequences in a batch. For example, one could use LLM inference to emit a single token as a classification task (there are better ways to do this, but let's use it as an example). In this case, every output sequence is the same size (1 token). If the input sequences are also the same size (say, 512 tokens), then each static batch will achieve the best possible GPU utilization.

On the other hand, an LLM-powered chatbot service cannot assume fixed-length input sequences, nor fixed-length output sequences. Proprietary models offer maximum context lengths in excess of 8K tokens at the time of writing. With static batching, variance in generation output could cause massive underutilization of GPUs. It's no wonder OpenAI CEO Sam Altman described the compute costs as eye-watering. Without restrictive assumptions on user input and model output, unoptimized production-grade LLM systems simply can't serve traffic without underutilizing GPUs and incurring unnecessarily high costs. We need to optimize how we serve LLMs for their power to be broadly accessible.
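A toy calculation of the static-batching waste described above (the generation lengths here are hypothetical, chosen to loosely mirror the four-sequence figure):

```python
# The batch occupies the GPU until its longest sequence finishes.
gen_lengths = [5, 6, 2, 4]   # tokens generated by sequences 1-4
iters = max(gen_lengths)     # batch runs for 6 iterations
used = sum(gen_lengths)      # generation slots actually used
total = iters * len(gen_lengths)
print(f"generation-slot utilization: {used / total:.0%}")  # ~71%
```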
Continuous batching

The industry recognized the inefficiency and came up with a better approach. Orca: A Distributed Serving System for Transformer-Based Generative Models is a paper presented at OSDI '22 which is, to our knowledge, the first to tackle this problem. Instead of waiting until every sequence in a batch has completed generation, Orca implements iteration-level scheduling where the batch size is determined per iteration. The result is that once a sequence in a batch has completed generation, a new sequence can be inserted in its place, yielding higher GPU utilization than static batching.

Figure: Completing seven sequences using continuous batching. Left shows the batch after a single iteration, right shows the batch after several iterations. Once a sequence emits an end-of-sequence token, we insert a new sequence in its place (i.e. sequences S5, S6, and S7). This achieves higher GPU utilization since the GPU does not wait for all sequences to complete before starting a new one.

Reality is a bit more complicated than this simplified model: since the prefill phase takes compute and has a different computational pattern than generation, it cannot be easily batched with the generation of tokens. Continuous batching frameworks currently manage this via a hyperparameter: waiting_served_ratio, or the ratio of requests waiting for prefill to those waiting for end-of-sequence tokens.

Speaking of frameworks, Hugging Face has productionized continuous batching in their Rust- and Python-based text-generation-inference LLM inference server. We use their implementation to understand the performance characteristics of continuous batching in our benchmarks below.

A note on nomenclature: continuous batching, dynamic batching, and iteration-level scheduling are all close enough in meaning that any one of them can be used to describe the batching algorithm. We chose to use continuous batching. Dynamic batching is fitting but can be confused with request-level batching, where an LLM inference server uses a static batch whose size is chosen when the current batch has completely finished generation. We feel that iteration-level scheduling is descriptive of the scheduling mechanism but not the process as a whole.

PagedAttention and vLLM

For this blog post, we want to showcase the differences between static batching and continuous batching. It turns out that continuous batching can unlock memory optimizations that are not possible with static batching by improving upon Orca's design.

PagedAttention is a new attention mechanism implemented in vLLM. It takes inspiration from traditional OS concepts such as paging and virtual memory. It allows the KV cache (what is computed in the "prefill" phase, discussed above) to be non-contiguous by allocating memory in fixed-size "pages", or blocks. The attention mechanism can then be rewritten to operate on block-aligned inputs, allowing attention to be performed on non-contiguous memory ranges.

This means that buffer allocation can happen just-in-time instead of ahead-of-time: when starting a new generation, the framework does not need to allocate a contiguous buffer of size maximum_context_length. Each iteration, the scheduler can decide if it needs more room for a particular generation, and allocate on the fly without any degradation to PagedAttention's performance. This doesn't guarantee perfect utilization of memory (their blog says the wastage is now limited to under 4%, only in the last block), but it significantly improves upon the wastage from ahead-of-time allocation schemes used widely by the industry today.

Altogether, PagedAttention + vLLM enable massive memory savings, as most sequences will not consume the entire context window. These memory savings translate directly into a higher batch size, which means higher throughput and cheaper serving. We include vLLM in our benchmarks below. (A toy sketch of the paging idea follows.)
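To make the paging idea concrete, here is a toy sketch of a block allocator in the spirit of PagedAttention: KV-cache memory is handed out in fixed-size pages on demand, so a sequence never reserves max_context_length up front. This is illustrative only; vLLM's real implementation is far more involved:

```python
PAGE_SIZE = 16  # tokens per page/block

class PagedKVCache:
    def __init__(self, num_pages: int):
        self.free_pages = list(range(num_pages))
        self.page_table = {}  # seq_id -> list of page ids (may be non-contiguous)

    def append_token(self, seq_id: int, pos: int):
        """Allocate a new page only when a sequence crosses a page boundary."""
        if pos % PAGE_SIZE == 0:
            self.page_table.setdefault(seq_id, []).append(self.free_pages.pop())

    def release(self, seq_id: int):
        """On end-of-sequence, pages are returned and reused by new requests."""
        self.free_pages.extend(self.page_table.pop(seq_id, []))

cache = PagedKVCache(num_pages=8)
for pos in range(40):            # generate 40 tokens for sequence 0
    cache.append_token(0, pos)
print(cache.page_table[0])       # three pages allocated, not necessarily contiguous
```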
Benchmarking setup

We'll discuss our experimental setup and then dive into the results of our benchmarks.

Experiments

Our goal is to see how continuous batching performs versus static batching on a simulated real-world live-inference workload. Fundamentally, we care about cost. We break this down into throughput and latency, since cost is directly downstream of how efficiently you can serve at a given latency.

| Benchmark goal | Measurement |
| --- | --- |
| Measure throughput | Time-to-process a queue of 1000 requests, each with 512 input tokens and generation length sampled from an exponential distribution. |
| Measure latency | Request latencies for 100 requests, with varying input lengths, output lengths, and arrival times at a fixed average rate. |

We'll discuss the datasets and other details of the experiments in their respective results sections.

Hardware/model

We benchmark throughput and latency on a single NVIDIA A100 GPU provided by Anyscale. Our A100 has 40GB of GPU RAM. We selected Meta's OPT-13B model because each framework under test had a readily available integration with this model. We selected the 13B variant because it fits into our GPU without requiring tensor parallelism, yet is still large enough to present memory efficiency challenges. We opt not to use tensor parallelism, where each transformer block is split over multiple GPUs, to keep our experiments simple, although both static batching and continuous batching work with tensor parallelism.

Frameworks

We test two static batching frameworks and three continuous batching frameworks. Our static batching frameworks are:

- Hugging Face's Pipelines. This is the simplest inference solution. It provides static batching with an easy-to-use API that works with any model and supports more tasks than simple text generation. We use this as our baseline.
- NVIDIA's FasterTransformer. This is a library which provides optimized implementations of various transformer models. It currently only provides static batching (the Triton inference server provides request-level dynamic batching, but not continuous batching yet). This gives us an idea of how far an extremely optimized implementation of our model can get us with static batching; it provides a more competitive baseline than the relatively unoptimized OPT-13B implementation available on the Hugging Face Hub.

Our continuous batching frameworks are:

- Hugging Face's text-generation-inference. This is the inference server Hugging Face uses to power their LLM live-inference APIs. It implements continuous batching.
- Continuous batching on Ray Serve. Ray Serve leverages Ray's serverless capabilities to provide seamless autoscaling, high availability, and support for complex DAGs. We wanted to understand how continuous batching works, so we re-implemented text-generation-inference's core continuous batching logic in pure Python on Ray Serve. As you will see in our results, our implementation achieves the same performance as text-generation-inference, which validates our understanding.
- vLLM. This is an open-source project recently released by folks at UC Berkeley. It builds upon Orca's continuous batching design by taking full control of dynamic memory allocations, allowing it to significantly reduce different forms of GPU memory fragmentation. We test this framework because it shows the impact of further optimizations made possible by iteration-level scheduling and continuous batching.

Benchmarking results: Throughput

Based on our understanding of static batching, we expect continuous batching to perform significantly better when there is higher variance in sequence lengths in each batch.
To show this, we run our throughput benchmark four times for each framework, each time on a dataset with higher variance in sequence lengths. To do this, we create a dataset containing 1000 sequences, each with 512 input tokens. We configure our model to always emit a per-sequence generation length by ignoring the end-of-sequence token and configuring max_tokens. We then generate 1000 generation lengths, one for each request, sampled from an exponential distribution with mean=128 tokens. We use an exponential distribution as it is a good approximation of the generation lengths that one may encounter while serving an application like ChatGPT.

To vary the variance of each run, we select only samples from the exponential distribution that are less than or equal to 32, 128, 512, and 1536. The total output sequence length is then, at most, 512+32=544, 512+128=640, 512+512=1024, and 512+1536=2048 (the maximum sequence length of our model). We then use a simple asyncio Python benchmarking script to submit HTTP requests to our model server. The benchmarking script submits all requests in burst fashion, so that the compute is saturated.

The results are as follows:

Figure: Throughput in tokens per second of each framework as variance in sequence length increases.

As expected, the static batchers and naive continuous batchers perform approximately identically for lower-variance generation lengths. However, as the variance increases, naive static batching's performance plummets to 81 tokens/s. FasterTransformer improves upon naive static batching significantly, nearly keeping up with the naive continuous batchers until the generation length limit of 1536. Continuous batching on Ray Serve and text-generation-inference achieve about the same performance, which is what we expect since they use the same batching algorithm.

What is most impressive here is vLLM. For each dataset, vLLM more than doubles performance compared to naive continuous batching. We have not analyzed which optimization contributes the most to vLLM's performance, but we suspect vLLM's ability to reserve space dynamically instead of ahead-of-time allows vLLM to dramatically increase the batch size.

We plot these performance results relative to naive static batching:

Figure: Our throughput benchmark results presented as improvement multiples over naive static batching, log scale.

It's important to note how impressive even FasterTransformer's 4x improvement is; we're very interested in benchmarking FasterTransformer plus continuous batching when NVIDIA implements it. However, continuous batching is clearly a significant improvement over static batching even with an optimized model. The performance gap becomes gigantic when you include the further memory optimizations enabled by continuous batching and iteration-level scheduling, as vLLM does.

Benchmarking results: Latency

Live-inference endpoints often face latency-throughput tradeoffs that must be optimized based on user needs. We benchmark latency on a realistic workload and measure how the cumulative distribution function of latencies changes with each framework.

Similar to the throughput benchmark, we configure the model to always emit a specified amount of tokens per request. We prepare 100 randomly generated prompts by sampling lengths from a uniform distribution between 1 token and 512 tokens. We sample 100 output lengths from a capped exponential distribution with mean=128 and a maximum size of 1536. (A sketch of this sampling procedure follows.)
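The capped-exponential sampling used in both benchmarks can be sketched in a few lines (my own rendering of the procedure described above, not the authors' script):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_generation_lengths(n=1000, mean=128, cap=1536):
    """Draw n generation lengths from an exponential(mean) distribution,
    keeping only samples at or below the cap (32/128/512/1536 in the paper's runs)."""
    lengths = []
    while len(lengths) < n:
        draw = rng.exponential(mean, size=n)
        lengths.extend(int(x) for x in draw if 1 <= x <= cap)
    return lengths[:n]

lens = sample_generation_lengths()
print(min(lens), max(lens))  # total sequence length is then 512 + each value
```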
These numbers were chosen because they are reasonably realistic and allow the generation to use up the full context length of our model (512+1536=2048). Instead of submitting all requests at the same time as done in the throughput benchmark, we delay each request by a predetermined number of seconds. We sample a Poisson distribution to determine how long each request waits after the previously submitted request. The Poisson distribution is parameterized by λ, the expected rate, which in our case is how many queries per second (QPS) hit our model endpoint. We measure latencies at both QPS=1 and QPS=4 to see how the latency distribution changes as load changes. (A sketch of such a load generator follows this section.)

Figure: Median generation request latency for each framework, under average loads of 1 QPS and 4 QPS. Continuous batching systems improve median latency.

We see that while improving throughput, continuous batching systems also improve median latency. This is because continuous batching systems allow new requests to be added to an existing batch if there is room, each iteration. But how about other percentiles? In fact, we find that they improve latency across all percentiles:

Figure: Cumulative distribution function of generation request latencies for each framework with QPS=1. Static batchers and continuous batchers have distinct curve shapes caused by the presence of iteration-level batch scheduling in continuous batchers. All continuous batchers perform approximately equally under this load; FasterTransformer performs noticeably better than static batching on a naive model implementation.

The reason why continuous batching improves latency at all percentiles is the same as why it improves latency at p50: new requests can be added regardless of how far into generation other sequences in the batch are. However, like static batching, continuous batching is still limited by how much space is available on the GPU. As your serving system becomes saturated with requests, meaning a higher on-average batch size, there are fewer opportunities to inject new requests immediately when they are received. We can see this as we increase the average QPS to 4:

Figure: Cumulative distribution function of generation request latencies for each framework with QPS=4. Compared to QPS=1, FasterTransformer's distribution of latencies becomes more similar to static batching on a naive model. Both Ray Serve and text-generation-inference's continuous batching implementations perform similarly, but noticeably worse than vLLM.

We observe that FasterTransformer becomes more similar to naive static batching, and that both text-generation-inference and Ray Serve's implementations of continuous batching are on their way to looking like FasterTransformer's curve at QPS=1. That is, as the systems become saturated there are fewer opportunities to inject new requests immediately, so request latency goes up. This lines up with the vLLM curve, which remains mostly unchanged between QPS=1 and QPS=4. This is because, due to its advanced memory optimizations, it has a higher maximum batch size.

Anecdotally, we observe that vLLM becomes saturated around QPS=8 with a throughput near 1900 tokens/s. Comparing these numbers apples-to-apples to the other serving systems requires more experimentation; however, we have shown that continuous batching significantly improves over static batching by 1) reducing latency by injecting new requests immediately when possible, and 2) enabling advanced memory optimizations (in vLLM's case) that increase the QPS that the serving system can handle before becoming saturated.
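A minimal load generator for the arrival process described above can be sketched as follows. With Poisson arrivals at rate λ = QPS, inter-arrival gaps are exponentially distributed with mean 1/λ; `send_request` is a placeholder for the actual HTTP call to the model server:

```python
import asyncio
import random

async def send_request(i: int):
    ...  # POST the i-th prompt to the server and record its latency

async def generate_load(num_requests: int = 100, qps: float = 4.0):
    tasks = []
    for i in range(num_requests):
        tasks.append(asyncio.create_task(send_request(i)))
        # exponential inter-arrival gap => Poisson arrival process at rate `qps`
        await asyncio.sleep(random.expovariate(qps))
    await asyncio.gather(*tasks)

asyncio.run(generate_load())
```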
Conclusion

LLMs present some amazing capabilities, and we believe their impact is still mostly undiscovered. We have shared how a new serving technique, continuous batching, works and how it outperforms static batching. It improves throughput by wasting fewer opportunities to schedule new requests, and improves latency by being capable of immediately injecting new requests into the compute stream. We are excited to see what people can do with continuous batching, and where the industry goes from here.

Try out continuous batching for yourself

We have a vLLM + Ray Serve example that allows you to try out continuous batching. We are integrating continuous batching systems into Aviary, a webapp that allows you to compare the outputs of different LLMs in parallel, and will release it within the week.

Acknowledgements. We'd like to thank the following people for assisting in benchmarking and/or reviewing our results. Anyscale: Stephanie Wang, Antoni Baum, Edward Oakes, and Amog Kamsetty; UC Berkeley: Zhuohan Li and Woosuk Kwon.

Get involved with Ray

The code used for the experiments in the blog post is here. To connect with the Ray community, join the Ray Slack or ask questions on the Discuss forum. If you are interested in hosting LLMs, check out our managed Ray offering. If you are interested in learning more about Ray, see ray.io and docs.ray.io. See our earlier blog series on solving Generative AI infrastructure and using LangChain with Ray.
-----------
-GitHub - huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) methods

Fine-tuning large pretrained models is often prohibitively costly due to their scale. Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of large pretrained models to various downstream applications by only fine-tuning a small number of (extra) model parameters instead of all the model's parameters. This significantly decreases the computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to fully fine-tuned models.

PEFT is integrated with Transformers for easy model training and inference, Diffusers for conveniently managing different adapters, and Accelerate for distributed training and inference for really big models.

Tip: Visit the PEFT organization to read about the PEFT methods implemented in the library and to see notebooks demonstrating how to apply these methods to a variety of downstream tasks. Click the "Watch repos" button on the organization page to be notified of newly implemented methods and notebooks!

Check the PEFT Adapters API Reference section for a list of supported PEFT methods, and read the Adapters, Soft prompts, and IA3 conceptual guides to learn more about how these methods work.

Quickstart

Install PEFT from pip:

```bash
pip install peft
```

Prepare a model for training with a PEFT method such as LoRA by wrapping the base model and PEFT configuration with get_peft_model. For the bigscience/mt0-large model, you're only training 0.19% of the parameters!

```python
from transformers import AutoModelForSeq2SeqLM
from peft import get_peft_model, LoraConfig, TaskType

model_name_or_path = "bigscience/mt0-large"
tokenizer_name_or_path = "bigscience/mt0-large"

peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False,
    r=8, lora_alpha=32, lora_dropout=0.1,
)

model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
# "trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"
```

To load a PEFT model for inference:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

model.eval()
inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")

outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=50)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])

# "Preheat the oven to 350 degrees and place the cookie dough in the center of the oven.
#  In a large bowl, combine the flour, baking powder, baking soda, salt, and cinnamon.
#  In a separate bowl, combine the egg yolks, sugar, and vanilla."
```
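After training, only the small adapter needs to be persisted. The standard PEFT save/load round trip looks like the sketch below (the "output_dir" path is a placeholder):

```python
# Save only the adapter weights and adapter config, not the full base model.
model.save_pretrained("output_dir")

# Later, reload by pairing the base model with the saved adapter.
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM

config = PeftConfig.from_pretrained("output_dir")
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, "output_dir")
```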
Why you should use PEFT

There are many benefits of using PEFT, but the main one is the huge savings in compute and storage, making PEFT applicable to many different use cases.

High performance on consumer hardware

Consider the memory requirements for training the following models on the ought/raft/twitter_complaints dataset with an A100 80GB GPU with more than 64GB of CPU RAM.

| Model | Full Finetuning | PEFT-LoRA PyTorch | PEFT-LoRA DeepSpeed with CPU Offloading |
| --- | --- | --- | --- |
| bigscience/T0_3B (3B params) | 47.14GB GPU / 2.96GB CPU | 14.4GB GPU / 2.96GB CPU | 9.8GB GPU / 17.8GB CPU |
| bigscience/mt0-xxl (12B params) | OOM GPU | 56GB GPU / 3GB CPU | 22GB GPU / 52GB CPU |
| bigscience/bloomz-7b1 (7B params) | OOM GPU | 32GB GPU / 3.8GB CPU | 18.1GB GPU / 35GB CPU |

With LoRA you can fully finetune a 12B parameter model that would've otherwise run out of memory on the 80GB GPU, and comfortably fit and train a 3B parameter model. When you look at the 3B parameter model's performance, it is comparable to a fully finetuned model at a fraction of the GPU memory.

| Submission Name | Accuracy |
| --- | --- |
| Human baseline (crowdsourced) | 0.897 |
| Flan-T5 | 0.892 |
| lora-t0-3b | 0.863 |

The bigscience/T0_3B model performance isn't optimized in the table above. You can squeeze even more performance out of it by playing around with the input instruction templates, LoRA hyperparameters, and other training-related hyperparameters. The final checkpoint size of this model is just 19MB, compared to the 11GB of the full bigscience/T0_3B model. Learn more about the advantages of finetuning with PEFT in this blog post.

Quantization is another method for reducing the memory requirements of a model by representing the data in a lower precision. It can be combined with PEFT methods to make it even easier to train and load LLMs for inference (a minimal sketch of this combination follows at the end of this section). Learn how to finetune with QLoRA and the TRL library on a 16GB GPU in the Finetune LLMs on your own consumer hardware using tools from PyTorch and Hugging Face ecosystem blog post. Learn how to finetune an openai/whisper-large-v2 model for multilingual automatic speech recognition with LoRA and 8-bit quantization in this notebook (see this notebook instead for an example of streaming a dataset).

Save compute and storage

PEFT can help you save storage by avoiding full finetuning of models on each downstream task or dataset. In many cases, you're only finetuning a very small fraction of a model's parameters and each checkpoint is only a few MBs in size (instead of GBs). These smaller PEFT adapters demonstrate performance comparable to a fully finetuned model. If you have many datasets, you can save a lot of storage with a PEFT model and not have to worry about catastrophic forgetting or overfitting the backbone or base model.
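As referenced above, a minimal QLoRA-style setup combining 4-bit quantization with PEFT might look like this. This is my own sketch, not code from the README; the base model here is an arbitrary small example:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit, then attach LoRA adapters so only they train.
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m",
                                            quantization_config=bnb_config)
base = prepare_model_for_kbit_training(base)

model = get_peft_model(base, LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05,
                                        task_type="CAUSAL_LM"))
model.print_trainable_parameters()
```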
PEFT integrations

PEFT is widely supported across the Hugging Face ecosystem because of the massive efficiency it brings to training and inference.

Diffusers

The iterative diffusion process consumes a lot of memory, which can make it difficult to train. PEFT can help reduce the memory requirements and reduce the storage size of the final model checkpoint. For example, consider the memory required for training a Stable Diffusion model with LoRA on an A100 80GB GPU with more than 64GB of CPU RAM. The final model checkpoint size is only 8.8MB!

| Model | Full Finetuning | PEFT-LoRA | PEFT-LoRA with Gradient Checkpointing |
| --- | --- | --- | --- |
| CompVis/stable-diffusion-v1-4 | 27.5GB GPU / 3.97GB CPU | 15.5GB GPU / 3.84GB CPU | 8.12GB GPU / 3.77GB CPU |

Take a look at the examples/lora_dreambooth/train_dreambooth.py training script to try training your own Stable Diffusion model with LoRA, and play around with the smangrul/peft-lora-sd-dreambooth Space, which is running on a T4 instance. Learn more about the PEFT integration in Diffusers in this tutorial.

Accelerate

Accelerate is a library for distributed training and inference on various training setups and hardware (GPUs, TPUs, Apple Silicon, etc.). PEFT models work with Accelerate out of the box, making it really convenient to train really large models or use them for inference on consumer hardware with limited resources.

TRL

PEFT can also be applied to training LLMs with RLHF components such as the ranker and policy. Get started by reading:

- Fine-tune a Mistral-7b model with Direct Preference Optimization with PEFT and the TRL library to learn more about the Direct Preference Optimization (DPO) method and how to apply it to an LLM.
- Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU with PEFT and the TRL library, and then try out the gpt2-sentiment_peft.ipynb notebook to optimize GPT2 to generate positive movie reviews.
- StackLLaMA: A hands-on guide to train LLaMA with RLHF with PEFT, and then try out the stack_llama/scripts for supervised finetuning, reward modeling, and RL finetuning.

Model support

Use this Space or check out the docs to find which models officially support a PEFT method out of the box. Even if you don't see a model listed, you can manually configure the model config to enable PEFT for a model. Read the New transformers architecture guide to learn how.

Contribute

If you would like to contribute to PEFT, please check out our contribution guide.

Citing 🤗 PEFT

To use 🤗 PEFT in your publication, please cite it by using the following BibTeX entry.

```bibtex
@Misc{peft,
  title =        {PEFT: State-of-the-art Parameter-Efficient Fine-Tuning methods},
  author =       {Sourab Mangrulkar and Sylvain Gugger and Lysandre Debut and Younes Belkada and Sayak Paul and Benjamin Bossan},
  howpublished = {\url{https://github.com/huggingface/peft}},
  year =         {2022}
}
```
-----------
-llama-recipes/docs/LLM_finetuning.md at main · meta-llama/llama-recipes · GitHub
-----------
-llama-recipes/recipes/finetuning/datasets/README.md at main · meta-llama/llama-recipes · GitHub
-----------
-Efficient Fine-Tuning with LoRA: A Guide to Optimal Parameter Selection for Large Language Models | Databricks Blog

With the rapid advancement of neural network-based techniques and Large Language Model (LLM) research, businesses are increasingly interested in AI applications for value generation. They employ various machine learning approaches, both generative and non-generative, to address text-related challenges such as classification, summarization, sequence-to-sequence tasks, and controlled text generation. Organizations can opt for third-party APIs, but fine-tuning models with proprietary data offers domain-specific and pertinent results, enabling cost-effective and independent solutions deployable across different environments in a secure manner.

Ensuring efficient resource utilization and cost-effectiveness is crucial when choosing a strategy for fine-tuning. This blog explores arguably the most popular and effective variant of such parameter-efficient methods, Low Rank Adaptation (LoRA), with a particular emphasis on QLoRA (an even more efficient variant of LoRA). The approach here will be to take an open large language model and fine-tune it to generate fictitious product descriptions when prompted with a product name and a category. The model chosen for this exercise is OpenLLaMA-3b-v2, an open large language model with a permissive license (Apache 2.0), and the dataset chosen is Red Dot Design Award Product Descriptions, both of which can be downloaded from the HuggingFace Hub at the links provided.

Fine-Tuning, LoRA and QLoRA

In the realm of language models, fine-tuning an existing language model to perform a specific task on specific data is a common practice. This involves adding a task-specific head, if necessary, and updating the weights of the neural network through backpropagation during the training process. It is important to note the distinction between this fine-tuning process and training from scratch. In the latter scenario, the model's weights are randomly initialized, while in fine-tuning, the weights are already optimized to a certain extent during the pre-training phase. The decision of which weights to optimize or update, and which ones to keep frozen, depends on the chosen technique.

Full fine-tuning involves optimizing or training all layers of the neural network. While this approach typically yields the best results, it is also the most resource-intensive and time-consuming. Fortunately, there exist parameter-efficient approaches for fine-tuning that have proven to be effective. Although most such approaches have yielded lower performance, Low Rank Adaptation (LoRA) has bucked this trend by even outperforming full fine-tuning in some cases, as a consequence of avoiding catastrophic forgetting (a phenomenon which occurs when the knowledge of the pretrained model is lost during the fine-tuning process).

LoRA is an improved fine-tuning method where, instead of fine-tuning all the weights that constitute the weight matrix of the pre-trained large language model, two smaller matrices that approximate this larger matrix are fine-tuned. These matrices constitute the LoRA adapter. This fine-tuned adapter is then loaded into the pretrained model and used for inference. QLoRA is an even more memory-efficient version of LoRA, where the pretrained model is loaded into GPU memory as quantized 4-bit weights (compared to 8-bits in the case of LoRA), while preserving similar effectiveness to LoRA. The low-rank decomposition at the heart of LoRA can be written down concisely; see the equation below.
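In symbols, the standard LoRA update (the notation below is mine, following the original LoRA paper): the pretrained weight matrix stays frozen and only two small low-rank factors are trained.

```latex
% W_0 is frozen; only B and A receive gradient updates.
W' = W_0 + \Delta W = W_0 + BA,
\qquad W_0 \in \mathbb{R}^{d \times k},\;
B \in \mathbb{R}^{d \times r},\;
A \in \mathbb{R}^{r \times k},\;
r \ll \min(d, k)
```

Because $r$ is small, $B$ and $A$ together contain $r(d + k)$ trainable parameters instead of the $dk$ parameters of the full matrix, which is where the compute and storage savings come from.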
Probing this method, comparing the two methods where necessary, and figuring out the best combination of QLoRA hyperparameters to achieve optimal performance with the quickest training time will be the focus here.

LoRA is implemented in the Hugging Face Parameter Efficient Fine-Tuning (PEFT) library, offering ease of use, and QLoRA can be leveraged by using bitsandbytes and PEFT together. The HuggingFace Transformer Reinforcement Learning (TRL) library offers a convenient trainer for supervised fine-tuning with seamless integration for LoRA. These three libraries will provide the necessary tools to fine-tune the chosen pretrained model to generate coherent and convincing product descriptions once prompted with an instruction indicating the desired attributes.

Prepping the data for supervised fine-tuning

To probe the effectiveness of QLoRA for fine-tuning a model for instruction following, it is essential to transform the data into a format suited for supervised fine-tuning. Supervised fine-tuning, in essence, further trains a pretrained model to generate text conditioned on a provided prompt. It is supervised in that the model is fine-tuned on a dataset that has prompt-response pairs formatted in a consistent manner.

An example observation from our chosen dataset from the Hugging Face hub looks as follows:

- product: "Biamp Rack Products"
- category: "Digital Audio Processors"
- description: "High recognition value, uniform aesthetics and practical scalability – this has been impressively achieved with the Biamp brand language …"
- text: "Product Name: Biamp Rack Products; Product Category: Digital Audio Processors; Product Description: High recognition value, uniform aesthetics and practical scalability – this has been impressively achieved with the Biamp brand language …"

As useful as this dataset is, it is not well formatted for fine-tuning a language model for instruction following in the manner described above. The following code snippet loads the dataset from the Hugging Face hub into memory, transforms the necessary fields into a consistently formatted string representing the prompt, and inserts the response (i.e. the description) immediately afterwards. This format is known as the 'Alpaca format' in large language model research circles, as it was the format used to fine-tune the original LLaMA model from Meta to result in the Alpaca model, one of the first widely distributed instruction-following large language models (although not licensed for commercial use).

```python
import pandas as pd
from datasets import load_dataset, Dataset

# Load the dataset from the HuggingFace Hub
rd_ds = load_dataset("xiyuez/red-dot-design-award-product-description")

# Convert to pandas dataframe for convenient processing
rd_df = pd.DataFrame(rd_ds['train'])

# Combine the two attributes into an instruction string
rd_df['instruction'] = 'Create a detailed description for the following product: ' + rd_df['product'] + ', belonging to category: ' + rd_df['category']

rd_df = rd_df[['instruction', 'description']]

# Get a 5000 sample subset for fine-tuning purposes
rd_df_sample = rd_df.sample(n=5000, random_state=42)

# Define template and format data into the template for supervised fine-tuning
template = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{}

### Response:\n"""

rd_df_sample['prompt'] = rd_df_sample["instruction"].apply(lambda x: template.format(x))
rd_df_sample.rename(columns={'description': 'response'}, inplace=True)
rd_df_sample['response'] = rd_df_sample['response'] + "\n### End"
rd_df_sample = rd_df_sample[['prompt', 'response']]
rd_df_sample['text'] = rd_df_sample["prompt"] + rd_df_sample["response"]
rd_df_sample.drop(columns=['prompt', 'response'], inplace=True)
```

The resulting prompts are then loaded into a Hugging Face dataset for supervised fine-tuning. Each such prompt has the following format:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Create a detailed description for the following product: Beseye Pro, belonging to category: Cloud-Based Home Security Camera

### Response:
Beseye Pro combines intelligent home monitoring with decorative art. The camera, whose form is reminiscent of a water drop, is secured in the mounting with a neodymium magnet and can be rotated by 360 degrees. This allows it to be easily positioned in the desired direction. The camera also houses modern technologies, such as infrared LEDs, cloud-based intelligent video analyses and SSL encryption.

### End
```

To facilitate quick experimentation, each fine-tuning exercise will be done on a 5000-observation subset of this data.

Testing model performance before fine-tuning

Before any fine-tuning, it's a good idea to check how the model performs without any fine-tuning to get a baseline for pre-trained model performance. The model can be loaded in 8-bit as follows and prompted with the format specified in the model card on Hugging Face:

```python
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_3b_v2'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, load_in_8bit=True, device_map='auto',
)

# Pass in a prompt and infer with the model
prompt = 'Q: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=128
)
print(tokenizer.decode(generation_output[0]))
```

The output obtained is not quite what we want:

```
Q: Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse
A: The Corelogic Smooth Mouse is a wireless optical mouse that has a 1000 dpi resolution. It has a 2.4 GHz wireless connection and a 12-month warranty.
Q: What is the price of the Corelogic Smooth Mouse?
A: The Corelogic Smooth Mouse is priced at $29.99.
Q: What is the weight of the Corelogic Smooth Mouse?
A: The Corelogic Smooth Mouse weighs … pounds.
Q: What are the dimensions of the Corelogic Smooth Mouse?
A: The Corelogic Smooth Mouse has a dimension
```

The first part of the result is actually satisfactory, but the rest of it is more of a rambling mess. Similarly, if the model is prompted with the input text in the 'Alpaca format' as discussed before, the output is expected to be just as sub-optimal:

```python
prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

### Response:"""
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
```

And sure enough, it is:
```
Corelogic Smooth Mouse is a mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by people with disabilities. It is a wireless mouse that is designed to be used by
```

The model performs what it was trained to do: it predicts the next most probable token. The point of supervised fine-tuning in this context is to generate the desired text in a controllable manner. Please note that in the subsequent experiments, while QLoRA leverages a model loaded in 4-bit with the weights frozen, the inference process to examine output quality is done once the model has been loaded in 8-bit, as shown above, for consistency.

The Turnable Knobs

When using PEFT to train a model with LoRA or QLoRA (note that, as mentioned before, the primary difference between the two is that in the latter, the pretrained models are frozen in 4-bit during the fine-tuning process), the hyperparameters of the low-rank adaptation process can be defined in a LoRA config as shown below:

```python
from peft import LoraConfig
...
# If only targeting attention blocks of the model
target_modules = ["q_proj", "v_proj"]

# If targeting all linear layers
target_modules = ['q_proj', 'k_proj', 'v_proj', 'o_proj',
                  'gate_proj', 'down_proj', 'up_proj', 'lm_head']

lora_config = LoraConfig(
    r=16,
    target_modules=target_modules,
    lora_alpha=32,  # exact value was lost in extraction; a common choice shown
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```

Two of these hyperparameters, r and target_modules, are empirically shown to affect adaptation quality significantly and will be the focus of the tests that follow. The other hyperparameters are kept constant at the values indicated above for simplicity.

r represents the rank of the low-rank matrices learned during the fine-tuning process. As this value is increased, the number of parameters that need to be updated during the low-rank adaptation increases. Intuitively, a lower r may lead to a quicker, less computationally intensive training process, but may affect the quality of the model thus produced. However, increasing r beyond a certain value may not yield any discernible increase in the quality of model output. How the value of r affects adaptation (fine-tuning) quality will be put to the test shortly.

When fine-tuning with LoRA, it is possible to target specific modules in the model architecture. The adaptation process will target these modules and apply the update matrices to them. Similar to the situation with "r," targeting more modules during LoRA adaptation results in increased training time and greater demand for compute resources. Thus, it is a common practice to only target the attention blocks of the transformer. However, recent work, as shown in the QLoRA paper by Dettmers et al., suggests that targeting all linear layers results in better adaptation quality. This will be explored here as well.

The names of the linear layers of the model can be conveniently appended to a list with the following code snippet:

```python
import re

model_modules = str(model.modules)
pattern = r'\((\w+)\): Linear'
linear_layer_names = re.findall(pattern, model_modules)

names = []
# Collect the names of the Linear layers
for name in linear_layer_names:
    names.append(name)
target_modules = list(set(names))
```

Tuning the finetuning with LoRA

The developer experience of fine-tuning large language models in general has improved dramatically over the past year or so. The latest high-level abstraction from Hugging Face is the SFTTrainer class in the TRL library.
To perform QLoRA, all that is needed is the following:

1. Load the model to GPU memory in 4-bit (bitsandbytes enables this process).
2. Define the LoRA configuration as discussed above.
3. Define the train and test splits of the prepped instruction-following data as Hugging Face Dataset objects.
4. Define the training arguments. These include the number of epochs, batch size and other training hyperparameters, which will be kept constant during this exercise.
5. Pass these arguments into an instance of SFTTrainer.

These steps are clearly indicated in the source file associated with this blog. The actual training logic is abstracted away nicely as follows:

```python
trainer = SFTTrainer(
    model,
    train_dataset=dataset['train'],
    eval_dataset=dataset['test'],
    dataset_text_field="text",
    max_seq_length=256,
    args=training_args,
)

# Initiate the training process
with mlflow.start_run(run_name='run_name_of_choice'):
    trainer.train()
```

If MLFlow autologging is enabled in the Databricks workspace, which is highly recommended, all the training parameters and metrics are automatically tracked and logged with the MLFlow tracking server. This functionality is invaluable in monitoring long-running training tasks. Needless to say, the fine-tuning process is performed using a compute cluster (in this case, a single node with a single A100 GPU) created using the latest Databricks Machine Learning runtime with GPU support.

Hyperparameter Combination #1: QLoRA with r=8 and targeting "q_proj", "v_proj"

The first combination of QLoRA hyperparameters attempted is r=8, targeting only the attention blocks, namely "q_proj" and "v_proj", for adaptation. The following code snippet gives the number of trainable parameters:

```python
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

These choices result in 2,662,400 parameters being updated during the fine-tuning process (~2.6 million), out of the ~3.2 billion parameters the model consists of. This is less than 0.1% of the model parameters. The entire fine-tuning process on a single Nvidia A100 with 80GB of GPU memory for 3 epochs only takes roughly 12 minutes. The GPU utilization metrics can be conveniently viewed in the metrics tab of the cluster configuration.

At the end of the training process, the fine-tuned model is obtained by loading the adapter weights into the pre-trained model as follows:

```python
peft_model = PeftModel.from_pretrained(model, adapter_location)
```

This model can now be used for inference like any other model.

Qualitative Evaluation

A couple of example prompt-response pairs are listed below.

Prompt (passed to the model in the Alpaca format, not shown for conciseness here):

Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response:

The Corelogic Smooth Mouse is a wireless optical mouse with a smooth surface. The mouse is equipped with a … DPI sensor and a … Hz polling rate. The mouse is available in black and white.

Prompt:

Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response:

The Hoover Lightspeed is a cordless vacuum cleaner with a lithium-ion battery. The battery is charged via a USB cable. The vacuum cleaner has a brush and a turbo brush. The brush is suitable for cleaning carpets and hard floors. The turbo brush is for hard floors. The vacuum cleaner has a dust container that can be emptied via a dust container.

The model has clearly been adapted for generating more consistent descriptions.
Hyperparameter Combination #1: QLoRA with r=8 and targeting "q_proj", "v_proj"

The first combination of QLoRA hyperparameters attempted is r=8, targeting only the attention blocks, namely "q_proj" and "v_proj", for adaptation. The following code snippet gives the number of trainable parameters:

```python
from peft import get_peft_model

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

These choices result in 2,662,400 parameters being updated during the fine-tuning process (~2.6 million), out of the ~3.2 billion parameters the model consists of. This is less than 0.1% of the model parameters. The entire fine-tuning process on a single Nvidia A100 with 80 GB of GPU memory takes only roughly 12 minutes for 3 epochs. The GPU utilization metrics can be conveniently viewed in the Metrics tab of the cluster configuration.

At the end of the training process, the fine-tuned model is obtained by loading the adapter weights onto the pretrained model as follows:

```python
from peft import PeftModel

peft_model = PeftModel.from_pretrained(model, adapter_location)
```

This model can now be used for inference like any other model.

Qualitative Evaluation

A couple of example prompt-response pairs are listed below.

Prompt (passed to the model in the Alpaca format, not shown here for conciseness):
Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response:
The Corelogic Smooth Mouse is a wireless optical mouse with a smooth surface. The mouse is equipped with a … DPI sensor and a … Hz polling rate. The mouse is available in black and white.

Prompt:
Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response:
The Hoover Lightspeed is a cordless vacuum cleaner that has a lithium-ion battery. The battery is charged via a USB cable. The vacuum cleaner has a … brush and a turbo brush. The brush is suitable for cleaning carpets and hard floors. The turbo brush … hard floors. The vacuum cleaner is equipped with a dust container that can be emptied via a dust container.

The model has clearly been adapted for generating more consistent descriptions. However, the response to the first prompt about the optical mouse is quite short, and the phrase "The vacuum cleaner is equipped with a dust container that can be emptied via a dust container" is logically flawed.

Hyperparameter Combination #2: QLoRA with r=16 and targeting attention blocks

Surely, things can be improved here. It is worth increasing the rank of the low-rank matrices learned during adaptation, i.e. doubling the value of r to 16 while keeping all else the same. This doubles the number of trainable parameters to 5,324,800 (~5.3 million). The quality of output, however, remains unchanged for the exact same prompts.

Prompt:
Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response:
The Corelogic Smooth Mouse is a high-quality optical mouse with a smooth surface. The mouse has a … Hz polling rate. The mouse …

Prompt:
Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response:
The Hoover Lightspeed is a cordless vacuum cleaner that has a lithium-ion battery. The battery is charged via a USB cable. The vacuum cleaner has a motorized brush. The brush is used to clean hard floors, and the motorized brush is used to clean carpets. The vacuum cleaner has a dust container that can be emptied via a dust container. The vacuum cleaner has a LED display that shows the remaining battery capacity.

The same lack of detail, and the same logical flaws where details are available, persist. If this fine-tuned model is used for product description generation in a real-world scenario, this is not acceptable output.

Hyperparameter Combination #3: QLoRA with r=8 and targeting all linear layers

Given that doubling r does not seem to result in any perceivable increase in output quality, it is worth changing the other important knob, i.e. targeting all linear layers instead of just the attention blocks. Here, the LoRA hyperparameters are r=8 and target_modules of 'q_proj', 'k_proj', 'v_proj', 'o_proj', 'gate_proj', 'down_proj', 'up_proj' and 'lm_head'. This increases the number of parameters updated to 12,994,560 and increases the training time to roughly 15.5 minutes.

Prompting the model with the same prompts yields the following:

Prompt:
Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response:
The Corelogic Smooth Mouse is a wireless optical mouse with a smooth surface. The mouse has a … Hz polling rate. The mouse comes with a USB receiver. The mouse …

Prompt:
Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response:
The Hoover Lightspeed cordless vacuum cleaner is a high-performance device that is easy to use. The device has a lithium-ion battery that provides up to 40 minutes of runtime. The device has a high-performance motor that provides a suction power of up to … watts. The device comes with a dusting brush and a crevice tool. The device has a LED display that shows the remaining battery charge.

Now it is possible to see a somewhat longer, coherent description of the fictitious optical mouse, and there are no logical flaws in the description of the vacuum cleaner. The product descriptions are not only logical, but relevant. Just as a reminder, these relatively high-quality results are obtained by fine-tuning less than 1% of the model's weights with a total dataset of 5,000 such prompt-description pairs formatted in a consistent manner.
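For reproducibility, the qualitative checks above amount to a plain generate call against the adapted model. A minimal sketch follows; the Alpaca-style prompt template and the generation settings here are assumptions rather than the article's exact code:

```python
# Qualitative spot check: generate a product description from the adapted model.
# The Alpaca-style template and generation settings below are assumptions.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nCreate a detailed description for the following product: "
    "Corelogic Smooth Mouse, belonging to category: Optical Mouse\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
output_ids = peft_model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```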
Hyperparameter Combination #4: LoRA with r=8 and targeting all linear transformer layers

It is also worth exploring whether the quality of output from the model improves if the pretrained model is frozen in 8-bit instead of 4-bit. In other words, replicating the exact fine-tuning process using LoRA instead of QLoRA. Here, the LoRA hyperparameters are kept the same as before, in the newfound optimal configuration, i.e. r=8 and targeting all linear transformer layers during the adaptation process. The results for the two prompts used throughout the article are given below:

Prompt:
Create a detailed description for the following product: Corelogic Smooth Mouse, belonging to category: Optical Mouse

Response:
The Corelogic Smooth Mouse is a high-quality optical mouse with a smooth surface. The mouse has a … Hz polling rate. The mouse is available in three colours: black, white and red. The mouse comes with a USB receiver and is compatible with Windows and Mac.

Prompt:
Create a detailed description for the following product: Hoover Lightspeed, belonging to category: Cordless Vacuum Cleaner

Response:
The Hoover Lightspeed cordless vacuum cleaner is a compact, lightweight device that is easy to use. The device has a lithium-ion battery that provides up to … minutes of cleaning time. The vacuum cleaner has a high-performance filter that ensures the air is cleaned of dust and allergens. The device comes with a crevice tool that can be used to clean hard-to-reach areas.

Again, there isn't much of an improvement in the quality of the output text.

Key Observations

Based on the above set of trials, and further evidence detailed in the excellent publication presenting QLoRA, it can be deduced that the value of r (the rank of the matrices updated during adaptation) does not improve adaptation quality beyond a certain point. The biggest improvement is observed when targeting all linear layers in the adaptation process, as opposed to just the attention blocks, as is commonly documented in the technical literature on LoRA and QLoRA. The trials executed above and other empirical evidence suggest that QLoRA does not, in fact, suffer from any discernible reduction in the quality of generated text compared to LoRA.

Further Considerations for using LoRA adapters in deployment

It's important to optimize the usage of adapters and understand the limitations of the technique. The size of the LoRA adapter obtained through fine-tuning is typically just a few megabytes, while the pretrained base model can be several gigabytes in memory and on disk. During inference, both the adapter and the pretrained LLM need to be loaded, so the memory requirement remains similar.

Furthermore, if the weights of the pretrained LLM and the adapter aren't merged, there is a slight increase in inference latency. Fortunately, with the PEFT library, merging the weights with the adapter can be done with a single line of code:

```python
merged_model = peft_model.merge_and_unload()
```

The figure below outlines the process from fine-tuning an adapter to model deployment.

While the adapter pattern offers significant benefits, merging adapters is not a universal solution. One advantage of the adapter pattern is the ability to deploy a single large pretrained model with multiple task-specific adapters. This allows for efficient inference by utilizing the pretrained model as a backbone for different tasks. However, merging weights makes this approach impossible. The decision to merge weights depends on the specific use case and acceptable inference latency.
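To make the unmerged adapter pattern concrete, here is a hedged sketch of serving one shared base model with several task-specific adapters, swapped per request; the adapter paths and names are hypothetical:

```python
from peft import PeftModel

# Attach a first task-specific adapter to the shared base model
peft_model = PeftModel.from_pretrained(
    model, "adapters/product-descriptions",   # hypothetical adapter path
    adapter_name="product_descriptions",
)

# Load a second adapter alongside the first (each only a few MB on disk)
peft_model.load_adapter("adapters/support-faq", adapter_name="support_faq")  # hypothetical

# Route each request to the right task by activating the matching adapter
peft_model.set_adapter("product_descriptions")
# ... run product-description generations ...
peft_model.set_adapter("support_faq")
# ... run support-FAQ generations ...
```

Once merge_and_unload has been called, this per-request switching is no longer possible, which is exactly the trade-off described above.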
Nonetheless, LoRA/QLoRA continues to be a highly effective method for parameter-efficient fine-tuning and is widely used.

Low-rank adaptation is a powerful fine-tuning technique that can yield great results if used with the right configuration. Choosing the correct value of the rank and the layers of the neural network architecture to target during adaptation can decide the quality of the output from the fine-tuned model. QLoRA results in further memory savings while preserving the adaptation quality. Even after the fine-tuning is performed, there are several important engineering considerations to ensure the adapted model is deployed in the correct manner.

In summary, a concise table indicating the different combinations of LoRA parameters attempted, the quality of the output text and the number of parameters updated when fine-tuning OpenLLaMA-3b-v2 for 3 epochs on 5,000 observations on a single A100 is shown below:

| r  | target_modules    | Base model weights | Quality of output | Number of parameters updated (in millions) |
|----|-------------------|--------------------|-------------------|---------------------------------------------|
| 8  | Attention blocks  | 4-bit (QLoRA)      | low               | 2.662                                        |
| 16 | Attention blocks  | 4-bit (QLoRA)      | low               | 5.324                                        |
| 8  | All linear layers | 4-bit (QLoRA)      | high              | 12.995                                       |
| 8  | All linear layers | 8-bit (LoRA)       | high              | 12.995                                       |
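As a quick sanity check of the parameter counts in the table, the updated fraction of the ~3.2 billion total quoted earlier can be computed directly:

```python
# Fraction of OpenLLaMA-3b-v2's ~3.2B weights touched by each configuration
total_params = 3_200_000_000

for config, updated in [
    ("r=8,  attention blocks", 2_662_400),
    ("r=16, attention blocks", 5_324_800),
    ("r=8,  all linear layers", 12_994_560),
]:
    print(f"{config}: {updated / total_params:.3%} of parameters updated")

# r=8,  attention blocks: 0.083% of parameters updated
# r=16, attention blocks: 0.166% of parameters updated
# r=8,  all linear layers: 0.406% of parameters updated
```

This matches the article's claims of "less than 0.1%" for the attention-only configuration and "less than 1%" for the all-linear-layers configuration.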
diff --git a/recipes/use_cases/end2end-recipes/raft/data_urls.xml b/recipes/use_cases/end2end-recipes/raft/data_urls.xml
deleted file mode 100644
index 29460d5cb..000000000
--- a/recipes/use_cases/end2end-recipes/raft/data_urls.xml
+++ /dev/null
@@ -1,164 +0,0 @@
-
-
-http://llama.meta.com/
-
-
-http://llama.meta.com/use-policy/
-
-
-http://llama.meta.com/responsible-use-guide/
-
-
-http://llama.meta.com/llama2/
-
-
-http://llama.meta.com/llama2/license/
-
-
-http://llama.meta.com/llama2/use-policy/
-
-
-http://llama.meta.com/license/
-
-
-http://llama.meta.com/code-llama/
-
-
-http://llama.meta.com/llama3/
-
-
-http://llama.meta.com/llama3/license/
-
-
-http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3
-
-
-http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-2
-
-
-http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama-70b
-
-
-http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-guard-1
-
-
-http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-code-llama
-
-
-http://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-2
-
-
-http://llama.meta.com/docs/getting_the_models
-
-
-http://llama.meta.com/docs/getting-the-models/hugging-face
-
-
-http://llama.meta.com/docs/getting-the-models/kaggle
-
-
-http://llama.meta.com/docs/llama-everywhere
-
-
-http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-linux/
-
-
-http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-windows/
-
-
-http://llama.meta.com/docs/llama-everywhere/running-meta-llama-on-mac/
-
-
-http://llama.meta.com/docs/llama-everywhere/running-meta-llama-in-the-cloud/
-
-
-http://llama.meta.com/docs/how-to-guides/fine-tuning
-
-
-http://llama.meta.com/docs/how-to-guides/quantization
-
-
-http://llama.meta.com/docs/how-to-guides/prompting
-
-
-http://llama.meta.com/docs/how-to-guides/validation
-
-
-http://llama.meta.com/docs/integration-guides/meta-code-llama
-
-
-http://llama.meta.com/docs/integration-guides/langchain
-
-
-http://llama.meta.com/docs/integration-guides/llamaindex
-
-
-http://raw.githubusercontent.com/meta-llama/llama-recipes/main/README.md
-
-
-http://raw.githubusercontent.com/meta-llama/llama/main/MODEL_CARD.md
-
-
-http://raw.githubusercontent.com/meta-llama/llama/main/README.md
-
-
-http://raw.githubusercontent.com/meta-llama/llama3/main/MODEL_CARD.md
-
-
-http://raw.githubusercontent.com/meta-llama/llama3/main/README.md
-
-
-http://raw.githubusercontent.com/meta-llama/codellama/main/MODEL_CARD.md
-
-
-http://raw.githubusercontent.com/meta-llama/codellama/main/README.md
-
-
-http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/README.md
-
-
-http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard2/MODEL_CARD.md
-
-
-http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard2/README.md
-
-
-http://raw.githubusercontent.com/meta-llama/PurpleLlama/main/Llama-Guard/MODEL_CARD.md
-
-
-https://hamel.dev/notes/llm/inference/03_inference.html
-
-
-https://www.anyscale.com/blog/continuous-batching-llm-inference
-
-
-https://github.com/huggingface/peft
-
-https://github.com/facebookresearch/llama-recipes/blob/main/docs/LLM_finetuning.md
-
-
-https://github.com/meta-llama/llama-recipes/blob/main/recipes/finetuning/datasets/README.md
-
-https://www.databricks.com/blog/efficient-fine-tuning-lora-guide-llms
-
-
-https://www.wandb.courses/courses/training-fine-tuning-LLMs
-
-
-https://www.snowflake.com/blog/meta-code-llama-testing/
-
-https://www.phind.com/blog/code-llama-beats-gpt4
-
-https://www.anyscale.com/blog/llama-2-is-about-as-factually-accurate-as-gpt-4-for-summaries-and-is-30x-cheaper
-
-
-https://ragntune.com/blog/gpt3.5-vs-llama2-finetuning
-
-https://deci.ai/blog/fine-tune-llama-2-with-lora-for-question-answering/
-
-
-https://replicate.com/blog/fine-tune-translation-model-axolotl
-
-https://huyenchip.com/2023/04/11/llm-engineering.html
-
-
diff --git a/recipes/use_cases/end2end-recipes/raft/images/Answers_Precision.png b/recipes/use_cases/end2end-recipes/raft/images/Answers_Precision.png
new file mode 100644
index 0000000000000000000000000000000000000000..e5d76e526b0d5ae4ba6e5f16774340fb37391819
GIT binary patch
literal 364427
(binary data omitted)
zgvNez%k0n&l_QkSU(GzHxk^4)IN!t%2IVYDu?^TWQ{G5jH{P8(+#P3e+Ebl zvMqyrgiTsvgc>=(0O3dZL$ULga3;Vzb|paPk?K7*p@FbVz#bB8Gn+c{c&MayyRkraQFVzPzk(6@?tE@JJfqFk%=cjBn3F*$L zKb{6>F=rmuk-R-@2)&nmadISl$w~?NxTn~H>DdIO?-}qlwUuXq9?7Gm-znNM;j$tB zprn*t`!wIj3?y0F$qJhC>*IYe2|k|5QZTjA4l7F0Zl9H-v!FS;_BOtP@Q*R>1cW3W zG+XgCXekeL0G<<>mmv+<5jiMD%74kWb$G zfEe+ltm&y6X3K99IQ*?|X#r$}&iIp(uC&Wvm0Z12>VfPg*cBo(PPDp$YKKbncLvd~ zfup%``Nd1kdM>82oSseUf+?IYWcl_yIW40^b>f4Vfu9Nj?dJWe505_b}hDq>?b2Qz-(mWMt zg>Ei6Pg>Mx3~KLXs}yD?k&zW$X{ite833`Y!eL;l>co-*@;9P!n^7hlheNYmrYv6@feswvW!4VI;pji`^(pk=$w zKk@E?2UZ8jA#pZ;%4BSZ#&#qjjNL!eI4rdjt6JhbT=NtgccY}#(IzrLv&er| z8-oYv6bY_z()5LsrLlfL0!?_%B^Gf5{fuAwP5EXq2RainTaH=G%gu{e&a5nn5RsrI4N*aV!>Yq9Tr|zh&31Ee{ezZcB7`==% zA$o|E*dzCCiJ7`p(PL$};$5b$OsL+@v{bZ5h)>0iiye=aS^v~e5NRJyZRgb-ttuZ0 z`!B9_loFQ_g(@jgO>=t!C;54%tBg-50f$?j#HC%sspZl0-LG{v1A%-@SkVFz{1faz zD~*?UMf=UgLKB#aFx(G4kTCu|tBJ6ce>R6T1x3nQuzIZ?)j-~1d?J~}(>sveqZs~| zJ2s;%z@Qx6_H@)6f7(UorC!q96Uu!4q+W zPeDt*$sCY~8O_y6WTySh-HqFg@JNLcC2*82zErj;+dhz6gd#Y%EVeXOR8EQ#ILPI1 zrS2bp;%#UB9BYHpoZjv#*|Uo9V>ssaN%JQ<0$;ID^|6(BZH2lDlgrCj2?WE?K(50% zp@{$i4ZOXg1-(RZPEM6Q5#FKQ_4U7`&7jZi-DMAa39+i=*k!jSn4=8AZJ2VN37b+!)S-C^h6&Tm+r- zeDk!XVoYo+l0~twtW0oC@WOKU3a8V>sbQG|pT=p|s9C*l&;=NSlbO8{P6tc9LAM{y zCe>qVFNM?;@5RL1H&dj=@_c`%I^jww+sSEg`W2D>Xo7ms* zfOS*;V3>KS9lj)AuHYeC!xMpHNjTB8r0?+mtF8v%ZrKEBg8a{}mq2k6^PELJOPO&} zZmp=f(6KNR%-duof*jigHb?Jg_hL=F7UBfTujo`X5(m{knm1DQAlEwNXoL=afUfm0 z#bnFmf-mP~=YX2ib_v_?1gXe64R((MFf%}jz}?|#gbH~g!!g8|Qz5-f|IV-?D7_n- zPo9EgWP1vMkm^1Dp(cm2TWK&Nf7uGq9e1clmxnDFZM-HBNe!+4Ms+#jQBRx+)>u=> zGa6RAvIb^~RoZwo|CTvxXdrhprCqAvOyFd;a4m*`x9W;0geRkXp6TX~*VK@@ZT%iIP!N$@%Zs?6`7Jf9e^sY-Z(e`DncBr{b@TVE$59hM{T=c{ z`&wI;X%Jv}wxsyqDURUoKRAW%xx*w==Z8cfg4w-0rZ+lzYsk@W_PJ{NkEK`$8w<&9 z+5Qn`+|pKO!LcaEi*#V^&XL-AF{=tp!O84h(!XvnRo5A%^g*RFq;1o*616R}+fAPl z$De_J3pz;mqQz{gzWKVy`{e%Fml~0U-yo#oKW?G*X2?8>uF4J3;emo3lI&N&x?i@JsSNsUwVW*u% zcCDT`>J8}?NG+HvE^YK9F;NCQSG-zm|=CYQ8ojJOo|*T96EK5JLu^uts&WuJ=1<^N5Sqx@#gi zzO+vhq)OVfJgx=P8Q_9f?j`)#ahmBgAChe+f_1FwP*B&Wwk{9J;pu@auPaIs|9Bb~e3nvN$7F?Fk&SP*Uw-T@wbIc1m z(`sy;$5z>;;%3Du#e(M-r`u(-IavthY+xrze`Q)G-*mW1-Y{(-d;f_5;=X-S0;;~Q zertL7_x3e@LdoaiKNpheo;&IBDG%d*VI+PAjCIa0P1NW_V*dwUx{ z71i@~v(J#!k3&-AhIlvZ&Ebx^p~eXI{y&_Gg&!XH!Ix z2Uw1yt!IA3&B)eU;NA+%*mrqc`I(W{PIWU zw5HyXokke^5;NZ=x6;b}u5+pE{%BF%bEg?l)^FC~j;TWAqVrkdA0zjh@j*W(`~%Kp z*2l)jH=aJeu-XKtlglT?#-jivCfI_xNFK|FDe#vmjw^O54*Zxt#*wK*ptW4`b5c;z z9UtQ30OyWKe1a|Pm2}+CeYpaqf^GZa)(L4ln^Tx{=HgSe;DAW<&XtaL|>@(7i-ot?N5IY+!Gc=u*kf>o&) zg==Xd1NP=tgy*qtle|aUSFu?6lZBhb9J16w9n2#wL#KYx%snSfll!aBJ*`{)Oqpj- zbDX1WX{941vOUu8&Xw1-TPv@A3*P>X_)5uTGA zi6|clDkCO5|?; zFyw3=6N-((3D?~U#XW}T2<87WTwYh7YI5FV1AiI5c-=$2^>C7$Y0G3QV=IZ>Dr?_U zWJP6qgh9P%FhWiS|K-NwSj@8%!`(N~&yvcIuE7Ym)HVo@xqPIixHn)3_6`0cuw`U* zdxMCXY(W4x_xr;>Wv11UeZZY>MGoOx#dN-!BA-N6+v9B&=i)t5?y^@hT?bzH<7<5^ zTB>HpzcmXRD+`>9+n>LJ1KoLr*HS1FIEs|LAG@5J)MTbi*k#s$1i<*UThU^*z7@90$%NVUd>hQ z;+JGKcjZL+fHS2hbUzr@AJu0l{!uJd)D7~U%^Vin4Y3WoSZEsbNjhV2<-6J*G+jZo z)6Y_5vW@KL1lbPvobERh+pt_a@eLe_8rhR6@K0Kv#t{Z<7K=`p+QXRLGoYw;rp z|DR_4ZcEDMpiuFIh8srXQXTJa_f_^5mYy8_K zjFJfA>6?}Er9fOuoN6@o?F&TKO(SOtWPqdnZ}%YlzaQTV6cv@eG`%VG@4sp1VFuKHyfqRi6RWfrp<8n!nl5E%O*=|gLVf(a{{BFkVk(cz{Z=`C zWBcr7&_AyFj+&@@H{!vNiw5RI^`02|nFC&>>lqLb<>ma3+y3KYkq=NX`aP-4`Srj5 z%{G(HvU{`r!*cN^ zs_k^h?KL>=!Vz{pF;X5>FTB|#uSJ8q{*OmlBP6`9z901wvD?iQ^Za9rw4<;5TqrT7 zgOno=OI?RJAo#%XUmwitE)a_laL1>CZiErItr+A;mX+%}(#FaJ{&L=IF`wYsnF-j| z5148>7Sla$6G7v+%cQM8*PT2iyA27Oq47$|h-g`wph&+Ec59rEuCG}4nQ~=Hf?b}} zXOINAK6)9`F8r~XZmVgvfbTr-=fbUxFa3U4Y2JuR#mke$r-}>KK?h$oI}3rqz3DeY 
zl9i6YHuwdmG3)u|BZA<$Fw)G8fXBi&n}6tEV~zi-E|=Wx*%vr?@{fZ(cu$i4VvjAo zjJUP9uLGV_jTAP9S)1S1Ijdb z2wgQ`nB#ABDea+Lq3J1J#8EDI@R{bT*UX9F3+8^niRbj(jctydS@ zF;XLnb>%;19J3x{fU!SgNc`Tw_(3!Ag@Z|j0cFq402jZ#K7r?x>3_Wn665DD?;G-m zsTvf?fY`Hag8fhSvfA>{s3*^!kq2^J!v$=&bi%IoVKMWaG1Q*#FQ%Gv-CE@)aB_&1 z?;p5J?npD=xgb3iQ9S7c5y^ec+w?s>*r3D&Be3E zrPsQ*E?WdbWH*qV7#8QeuW89=EPoZ_4%ZGa`MZrUFLq7)J4QKnnaAIo_};klKF;qN9u^1-}2`27(9I~2Nv zL!m=gSB1#ikbtX)sdXpH9Vp}+udlJWIvY1ptST2cboj@{Gcq9}mw)zZEKS>Xj@D@O zt8Y+wQR%jM03Vr`#$S<)TOssiPnXh6Zh1<2r1;xjT^v&@Y#`OlfdXVTU0q$XAA8%n zs(^YP?);rR9>am3BYirR@wHU5LME@tOu(>!jbAU@XjdJGLzL`N18YgZ4{Y$WVTI*% z6U=t-@t51d2^=ux6rq$&Q@_%>Z>EodMAzz{RJtb)z?0o*I(Gd6;x!ThN_Kv{-jn?+ ztB7P^#XH6;&{{;lxZQs|G>d`(qYHnX>6wwHm2bW)P15=8`$Zm|dZ~<98FUA3jV|v%bD51E8QrN`#PlZB~Q%>Ti#Ie|M+aE0@1}&)V0H}^ zyHQBYSkUpWtgb2t4%OZc0HdmOFb^R&nG0?F=A+)A4^z~rSiD&3pONrJGjsB=`L_80hT`V=MxEimDFgs-McBWX#{8}#tzCs@^y^pJ z$*HO5k6h6#PhPyJ`7K8}&}U1z{|I8 zA7K6NDLwNJIe4qza+niXxTiE*_+z@Nd$ruud-3zx^-m?g{Q>nWpjk`Yg(D}?-*!vi zpdyri13!r;jVLvRTqJ<5(G$9M+pg0= zYyhxvCQJmiGIN37LYz3@U5NW`P4gnn{|=E^uljiNr%lKO^6;l+3>iItWQLh_laqHo zEF{*dgu8x|I*v&2G8A3ZHYc3fkP+(b`_>bZ z&a`|@vykIlK=!fi@+IBFlwBgb?naMA%%kVkjMBfn#*i0BQ!%dS3H}f0$!B&aJM&5} zNLTrJdRvZXSht$@huwSFeL*$)K{Wm;8SsO(ui7f%MT?yrm+o{v2`p0l_4W15+n?*4 zhIH)H(+uQ=^0$LXADnbsIOj7aDQq>8ha8Uob21-Y`^#3(*hi^1opWCUjV$h1d2}<* z15ZFf0fA~wRdHquzHq~23I=jvaK5uaoqs3RA_2$#?e!Z<)zafH&o>)7k6Vvh+<%v4 zT?4VC2fD6fnJhaJ*cGH*nwB%?(@hMtOXzB&(=!3zUE1+frARfJYPeHcqT44uIQr^p zO%q10$r0q$1UvojB2@n!F;_7fo!1|&0MR_*e1FnG9h?GU{2OifQ}gA;^?ZIyuiIayM?YeZGvxUuMBggMXW+D4(Au*#SC)p{Io(0 zA%Pf0)R4@qVgj3-Br>XBSFDBMMe{B;)STGk+TGVdM-u{i1&XIZS%GWCG5_$U-6tHg z_;$%w7i7`#zdxjSN4h#d|2eU9hIs;VPcL2Aewo<5F;UkEEGnk5qTo#=qeF`d6t!9q z5MA9cBkwr-cgy*$=KErJk`-%t(?~5#I9pY~wSXd*yBfc|udU7v!_QN9{_>s8SLY&S z_cHn$MKR&00(yw(TL$9*ceDCmMh*BI(dfL77TW8hTmA=d`h9spUoOsaL0dm4Ax2)Q* z_eA35!5UxTh=qJGevu;vF`^{n+Tw}d?TMi`RP^YRIE?{9Xp(_IvvW#A*o9X==U%m2 zkgy}0P=Of7A0VjuPq}I;pMVv3pxeLr?CxBgKmj<*%2QhfED>^v-`PzLQT!V-6H>eP zn?zn-9y6FD)#(wtuSaFmb^DFG2gK#)eHrKF@mIu8w^gdp902mz5gfV4<=r*Z%hq<;H2GxvV?cZd1> z{$ZFi%)8Itd#z_ZYpwl08GLD^ z&MAi{xWb&!dY;#9cd5dUAb`ekwNLrA;8j;@ z!|@nm$Q}hdenhg6-rnAOTz;L#)n7F4k4q4R+~((p3ctGZd&~b4T@mE(?5qpt9tq9B z)M*|_94%E%cVDHoCIWEs4Fj%1E&9TGd^5Iv5!6a?ft6|=ekt`PN8$c+P$CUpj1(=| zLh7IH!Y6I-zY9mf%X2SuT3VZ`K@Yf|cG(lDZoHORRF#mW?QrN-Q@q5h{tp&|@2Yyc zF(~z-D;%twj#ey>lavL)T!E_D^hX5qsY)D@xD3PDG1x4kZ$=NdjvE5E) zqteC$MQW*oS)X&(T(;v5P#Gh>S-50LW28z!*YaNqbTHtm z!Se=ycGZVrR0G%N{CGC_mvis+i5k^Rzrdw5a5d_-R_&Urs;S8>qRvfyp)+T-HjrVt z8HXyYia(=Vl`*kjgbC(w(LdIA){smY92|@ZAouLQ!@Mu)S^ll*I7s8MK7Xjd?%n)z zzznEMntpEV%=r^uMW>szsnM4_DMxnDvKe6pkzGTeV$T^Ujt8LLj2AoEY0bw+6_Z=% zgBi}6_Ez?OEFDkUeW1@#P9FKIs{OwNo+p=xp6JKY6KRE4{f~K;Htcq@m>dFoqBxjx z?BDS@|KY=7byr%#2*ubUOjZ0zy@K z%#<6_9WZFk6z>-#m<$1k!e2e$fo^-ff~}}3docLmrcHSZoXdJzI(f`HP>CtUAF2n? zF^)I(7uP^lYEiNJa10?qApxq(x!Za-Yy{&rKYwxIbxZ-kHs5so^*&nc6#^kf>#9LH zeXpJmpxXYwOjr`_O2j2Uf{~l#PIzUJn?}y$PWwbD44c%K(dkXfCACzG)BTHRutb_b zB|3Ll>iod&e1usFb2LT5%e@xF-xe?@lCz(qA9A3IV+SmDF)h_Z3O&h!w@z(g{38nA zS&pwVVZ(!@9!Up9GMpbGI7BfK3=W;wEH@MJ1;)~usM?sT&R6f8yJ-P+u-H8RZDXRB zT*?{x!SpY*L}`2tRYIMp|6-iw_G^ACO;-?W)em5(h>eO)*w-b_kNSfF$rh%T z4`2up>1>vwAm(*=COSD)E1~1=LGmy&1s} zonCoyx&M8~NT?*y$826!e^C@=#7oc7`f>wxrUNZDcsR0|e^ieYbrn(|Dy(f)eSR8zSy#5e*nkMSd8X} z1DCjli@9nV9`D&WYUm?A<+;~;Z`l4@f!`=Bsee{e+KY(-7Z5e?1ZGA_ zh+xaVm~mVE)#Cs>zfpem6T=O1r;`=I5BJdGPh5V>Btc828Gxwq-1qU9B}nH{eafc& z>|2444cu4(w-i1BW8p6%qEyCty7?ov=z!&b*CtZjR$d!{zSR1jN!^<5wN^wsr6CH|{N-u5F!ZyOS)Pyj;}z**9?Ud=aDMd_5`KgRQUGpOGjgzbd*A!mRd! 
zuundS1fBK1IuMD%7r(>#J1I`O1gg^B?@e7*T@T_{OBdT)x||28G$;+g6`%SoWa?>b@q=HzwxhjOHs4Orx}uhgq|>(d;XtQ~65CanU|I%Pey=4C z#EDzhjo(M^MSnbio}{&mK(QVP8D_T#+%#b~`jnI*6C5=CViJ0Ch`93=NpbT?1W3ZL zoZS4K^?$NM^l*U3^{0bm)lxYE@+%Y_f?SXYMNQ2m_Fk8P;F7aNxlHpCYXR^EW!-E> zwD~a^Lh;*Pw_H5I8z%>2#&PM!U#IHkBl!BTOx92?BGNm60_+@X#{N6!o?}n^@ng%- zr4U8hl|v)9th-t?Z&!p&?3MdtY8LjR}|2y!6O{rCi9vq z)~@Fy4}I?%@r_>G|ExloY+G;nM3ll?E@>CUIgE-R6u&nD?R9pSg8lci=Vw3WYd)OX z)PmETEjQn9kq}56&OINH!VFw~W!SV}UU1~^7x|3cE#h}_BVS)o5kNF=lwLNR1ksC) zap>@%<^#Pgpqtzvv3cHkEjXa@V1m;|Te>6SnoJOSg$3hb?T;A`bxnNll`-`P!mKEK zBd@Vo$0K@5&_))*EddA+8xms{5yimJ5lK%g7eR&UEnZoGjT(4BryM>(=c9a1i~+9_ zjorC&4O_Rd2#jF)i5#6K=${H*zLcXXn#iyeJ=xNL%iIq@YSD(!{KGm=Ua9d z&XEk3v>&#MBlPn6SuX*+}mL zMI~%3A&A>j3=4r9s2jxzEqyXPb2!ZP`I$Q$nUj;l>b%uOh$G@I-P`|n)BZ2#{j2eL zReu}!CYm8QQ(##OjzqMA;1$QK{VSHT5!b&8?IL02v zI2N{?754xK93L=56}moavV&^Q;_+CF+ajn>DO2CE1rXDGaLeR$jxXevj*gDnG*L*; z+dFr#MQaIGpIXJA{+PXkeYdkgik3)i3cwx5UI)J56p>l)n*?63=S={HuC}qJSbg-v z$j8Uh765MlyoU$zt1cGG%Oo}amllA6keOw;Ut~7x+Ujp5Kf?@wlYAd#!&lABO>@5i zd`z6f`08YSw2^=Dy-rVGEi_+@$;6_nAw?q~riKrgh1Hl%&9vGJTy)a*T|fbIx9{85 zujjnv4k*?*06b#h+I{a8Em?ZNy>(r8K4PjT;#c@a-}AR(-asWWv9$qs5D?$+M87pS zj+rIf#8Me|g`SVt!dE%Syuf&AHH4;BYh*ZxjbPL4{sNo30CvB!mooEUqwjA8t-=}!UE zxc4*!lmZI52+^rQ8BcFc%(~j)VBDMyDL?Lie)cK%t+$AczjO|Pcb_htk;UkK0w`f- zanS%Q^dfQKs+n>iSCJQp;ZV=X!nqmxV{!vTsPe3=;AH1VC4jI` zzUoh2|Ba=*eFMx4Hn9iy7(h`6i~6CU-@-2~aP|#BLUzXJ>w6Cs!nOphpuaJkLOLIq zlSt;`EDQG~RD8dbYC`^7u;CO3EYcQj?>Y7`Eo%2im`CJ)BTa~I9s-V7zKcH0Htj0)m_fRmP0otbJ?+wTd z20X=@y?;NQ0_^3m-@DcSHvPZZoQH;}zkckaMT_l(I92{B`{|CQf$W zK47P7DN#NNzZ1l{iBMJ@(rA{-`j8Pg)TLI2-{`;JIT2} zO1t>uWDUGX3haYLbR_)sZ%j{82DJ?jr)`WeQS0ho{@5n&ciiaDF2|E&Ofm4kU1`F# z`JE+U^!)*bLrSomNuyl}2jAECi{XgoZ;gZAGBr#w*BXu+R->k7%DyFd8qPRN6OL(z;@^PKPbZ?(V_v5NDNAQ`w#YsI_9>9 zl~F7nfKg0uWVE^Pc5?BXgQX>bk;Q$i)}Wn6j-B&9oYiN`p5Y=xE@%Dx?8?YZhE*#Z znML8`qh+=|37f+=nP9Iq|Z_#S})_UxU9*+J(#P zE5gF^*7LbuauQyH~RT(!HL!@{WaO&^e{gn3OEG<35$kE;99w?MHSNO%Cb=L zA~-`F%94B~n1IbhNqSJD&K3fjxjV9_cySbzdVG66lzMxuT1}ng4kMYiM|N6vz7`Q*SOb5J_=&T-2T3%ef*W5L z`x!i)`O8~@xXZ>{c{$wI!W0Db8d;Te-V5u$-J&9^h~0VZXlpSbv=U057?W!nKzV;Gq}e+D5=7zbkEYf@jtIC+p=SEH4@TJ38l;joOnE zu6*n`Vefdt8SBOqj__k210`e`vTG+Fii5MZrFRcFL--Anr-?zS6*~1(){0E4D%(b$ zNI8NIf;P@cad)tF%H?Pygj!N$=!mVOi2A+lZI-R-_2?@6d-)1ke2=UXq|JzBNr%s4 zC?|#vZkWtz-2@7Z>`Cdc{4-UjK|1w-BCfqJ@o~7DBGKon4o9_fBl~1p-fWYRFC+bnzhEo93564- zXJrB;YXZQ?rsY;J{{hd^gVBj(^gMsG?9A; zUM_E&R;kI}R_tU)s^}6bPnI(3r8m&?=|!_H4tY>!7vCK-z%Xjfu@VS&Him~8@$g}w z;h__$8`$C!DqG>Iv@Le2mFrLCn7r)`2L12MM2>D65j!esDWTE3_J#4Q239KCyklyn zd)2zJI{{WVL$rB_-c0dV^P4ih7QYh^YIQq!`dtW1D0{755#?(E1Fu|hv&prSH|-xP zyt?DryCt#w4xjqFZuf8Va+%UzqebYePqu8U>%=K6&2`DBzaVvmhAA<4*_THLIa-rW zo*(H4vwoDLsM|cI{N#&Ipfu&?5BuOs8RI?^WM3!lJ0%MNp(3Qbowgulcb!oA1gAg zW~#CkW>YNz?q%dS??Qt!UoJ(u_8z}A=u{kv+g$OHGeL%-qGAUr|BbHnZu#9R+8{}`(qj$Y9-qc?Yj`y5~xzdWV0vFS!xke z>zM&1`VGR4dV}*S4=Xuf zg+3D@9hUo!aehjdB8bwgIu|P4RNueD&0%}FMSvsQR8j1JFp5j@wFrfr>QyFmP@SN? 
zx?ImLR(pi3{nCBk#>kUQ-I})Y0bOnH(~q+2b%*ET{Z)Z?B*DRAJ@+P7@) zsoB&CrY#=i>a^OZz=O!2nngiJEwSad#JH-nT?u44tP35!%Y?Vwb%82sOA8$s>vB<~ zFm9~b&Zqc9*qFV4llwF=x~shLy`30~HF5v_7o-(_CqMgfTZe0_akfQm`{#{Lr6<#( z-cfAyYw}u6Eg&M-=O|NRTXx_5jie ze43ESDTOf4Br^SyBoC8xr}>BVS&P43BFAMG!ujP{lQVkm>4YZfHX& z*p5f%a%Y<23|_`;O@~2-DNEd%RZW2_GA6NfR_oQdbwqFfDKRk~YAj90{Ai*tpojd2 z*k_tEvPwP1wWL~i$8K4qFZ)&=T+e&sGs=L#5)K)=1rItw){5WE9{Lk+g`y>IT>@-c zxe@D{r@sozzP`$+jP&art6RPx>rTFyZvY`fzvlPhA`68x!bp&PuH>AYD)*jL{_nOLCPhP zi?qEzQEi0mEPh;_Y|$Wu2i1Lh5#YR7Em31drX$O9&!zjmT~+S()34!ZiFK+|<%?It z`C_eU+PUi(lZztHVoM{ICTb>QrsVVaz_ZJn{4e&Nx>b5zdZ!=NpL?W73ID%bTS^Fo$-8GAV7k~Ysw z2pV36QndEsGJEG37Y-IY)Gv}{=76!MT-uDw&JO zYHW{I>4S2(Xxi)hqr^I&Ie3vdpSV5Ko!~WHc-FiA#&HIJzKeZHvO&s>4DNm8!_$h0 zXLqRETSt+QVxwIAap~*4p{E}yZhleEhqFTd$W*+5MadhI{%|MB;{gnV|LVs2KQRoZ z%d(h0sWT81=ZfY*Lw3+)CNFBT^zyXIIg*fYX^{a=q8`wY$Pw7# z$ehQ-zgut)^Ra+YixH_$9gdDX?`;i_#WH#SUAIMQ5)Z$yc#JF&wps+&BMl_9z9N$t z9$Lu!(gNG#u2E-Rl1-N0Yl_KeKBorZsA9D@tw;}iTv|l&;{s@ibZ3wKJbkzt1q{Z_ zSz%FBFTLjhFOcGvp&)|0)_w0v>S8{_yH)Y3$8dny3~_#LBZW2D0k?w$-Zl|AYGp2- zpC2RC^Dlk^*Y#1Lq7CF1)>W)_-v9OD7fGH9;0+s^!g;GF@TBEew%(YDzi{x0mx z;|RaV#^Ae`?|%n+pUxm4JV(Tjl=Uh08oejP!?q@GP2^whE)u4nuN$3c?M2pGKlV>D zeQk8Di2lxd?I}M~L{gx!5}8c{b(AQZZFMYfcF`A-gGfJW*&9Ge02S!PP=P+JsfraJ zzarDJ^j)V##QuH4R~+t{$WfEp z`z!WXx_MmBjByiI%3O+Ov-ove4iEXhe|pVJr2d?1T*0i@k`M}-jc;1)JqQSCONnWx z?)s9i*55V=FJk;WYfb23&CB9tQzdtwin!)A$K$!+kmZCLW6j7T>-(Q$;P13=nr%LL z|0ZEE5z4m!x|VvKeNtDW{MiVV{h}IDmFxEYd~=w}6}2V}5|te!u7to{zI~kc`6qXw z#O^VqYWZBsMIk&z8%W?2uFWYIX@OHH8*`bOjpzkqJ!4^;?%q4bYdE~9B-5Il&&Zng zF8!x=<6!*tdOvL;-sR=?$@h!~l62(;Q%%dw>;_4-I*#o&YP;j=Gel>hOt2K(D;aI1 zd=sthF1PSL8+^k$am!@y8@p{*J2z#GS;=mT++CSE*m0fGS$X|=5)Aq>zt%t>1FEmy^-#P}1|!}0s5Yp9rXHLl^!Az0!PhJGCGUIA&Lmb1 zoIk;ru4{j76m1kFi6uh2EREHAiJGzMl1%WOGIl=S3_SW3Wya=!hp`x>SE;a~AGq6x zE!H9$J7Z196HN+dJiZ=_$+N3Ivi_84qrdJkf;H>jl-EGqIhKzeiKBV)}cN|ZX@MF6G>k3UF4KZ zUp)Nj=!_nz^->Z+OzMD!`uy_W{P2|_+Ts4#_cOv9`LHn2*3xbz$%X>ftul8{2e^>#G@f1||RUJ?88g;{1EvBeAbb zEVw~>&$j8pp`P=~WFMu5uq_BXi#IZmS$@dBe}3-+9LV*L4zW^9D+ z-x^azO*gfhdJ+9u&Z`vy`u1v(eMv?}WRBklKJn3hp0JQgsDjRn@^zuHuZ~wDRiTia z_2+BbC!Ud)(6AV}%h0i$@QPMHPBRXeUq++%y1sbnZ`SA6n)$X8FJUJw>D%g?qfBT`s zkE7qN`Wuw4d`3sevno7F1gRN*LT6RQ9-lK2LS@Hb3D3bNe0k`ycb#Ql#eAjB27<}t zbj6PHXq&Vc`Bqq$uFzBVsuYZkIF>Ogsf@1h{b5$jD6^_{)Ik5K{8Cc> zgQ^45g)4Rit7x2fd@VYjU#2SR&TEad0sd8ZjqU#M$y^3DA@!j}G-bhO`_Jtj1~IgP zaxCthw_UMX5CzsVC>%|@5Sy0MP3QKb}~kRYqez;GoqW+>dz8Y zq`ZEc_PfKzZL5Q_xqAk?*^;dfaJ{T6td6h}Q%9~YEGlkMSUW{L65t*7q-`IvSklWRPBr2nhkm!_2iv#6TXI8s4R*Y}7Vu zQ8n!_RyLsFe1!>8Rk!>RF4U$f!8^e5BC3vpF~7Q;N5wAUk-1RD@k3{5d_p4dG-+A6z zSZr@BZP`B`WM&nzvl4z@U&spiu#gc&XpmAEUYy_3O01vE)ZqKBoRu`n5^2x$g#zpK zQ)U^lXr!WOj}~@r6iLi!SMyRK4&#Am(Wg>~ zWjJESHN1=C1=MAtv9vL;?d|JesS*(=o9)Jq0J*>~3)>=M9EhRs4}CA2{uuaVJ2Z{I}&ac?`cnf4>YGzmU$YpRz~;2LNFJZA5VF|hZWXst6C|PC&t{TTwcT56MOjS zp33OzpH9h!a>{W>Q}myAQBKM2Kysl|`eTXNO5rlp?4|^gDcV?f=8vS$kZl;*kqA^4#b%=EHvwLsoJ;!^`G~KA8oZSZem|b*smZm9}I`EkmgTN`h z(K%)-?*qxgM4i@7H^*3H{|WYOC#S<*HWc&^0KdfBXKOpX_M8Zq8Rq2$aCZ5E_~=c% z0}sX2tii-R32@|k_`KJx4tW@r_VdOIev=|8rs7$N!JLFiW?Vk>pQHC3oMXVN0e<-O z%2Y18jPp$i9F0%r!v`r8w8S_vMs=PWHNY^tJ_+w^9}9~e&RvG3eQou}mvAt-Z=*vh zA4nrAHtgGxc39`B*EP%4Gk=YOVV>7iDKDJzx^rM4-^tia3!}tSv&!o5sROO##PN{$ z29tCb?SX}ay}0fg=lvIYcAWdF+#XvgHCmD;#nslelOIx&3jL zG2}ZANY;c#(dbwE?^&D$%v|AZoW^|1j6x+k2Cy6mL~0?^fE^mT$WK@->9xN@t6zUB z*^F>;&0DOAs>ufz&{DQ%7}(&`v#H!`^9_zn_s8UFc-^AQbJoOLyjtv*Ij(d4dKAI; z#7YOJ0vImxiH!X71!1eVJLT63ZO-*@B~#31-|};2XpdN^S}4CYW-Oq_!nSAY4eSzW4+=hjtwDh#xp$t(}jo@Jnm zj{4F-#S&x8=ObG5+CN4m3XChn#(N(<)TN`)ev)~2#^FVY9L3d<_(3n#CZ9}LATm9; 
z#`X5Z=_8$&PwV)tTr(AltDRggnbc2z1AJhtGhWuS_^bJ|^EUYpV#Y?XhGJ6`r9Mvf zM8WJ2%3p>BYZI97@!XyXtLTxwRSBhY_)aZM!_O5lDIqpcKApl~5H|vV9_G*MU96sS zeQ}U)hPPrW(x)@>M|ES@$XtJ+waE|3PxDi>h+jdF&I_2E{A>~@sGKrHC-u8mQlONt1ZcX1iPE?IecSGfU%A zvn)K6wNy<rc!bs3#Ny;+z2*#^wFm1Od8pp0AguX1$yt$0fzGJ#I z0Y?&F3MD+}9N0Bs#KfH!BjyX~U8O}*ZC5DvpaV!1fra)-o`&qq&6P#Pw5sRbr=02( zkIIj0Joe@SgCCqUKRm5XC{7GeLbD!}O39hr4T`$vnt~S*t#jKDbW{aQ=v~ z(_3%Ibc9%mOTS4Lvjsaw{Yk#|0l%jc*X@gF<{@Sruvbc|!@7`5uwAy>$g7aUVZf*` zGS9j}tFf?nf1;I+BCISChMnzerjzs}4TG~Ia&5+T-U#O$Sg`K87d%~jv-8}UbRmrrN< zmiOHc*=g*=gs1)J9Jjr3Y@qZ;Z!+EPq|lkYPV@K!_$`USK3ci^Ua2!htt;l(*KqP6#zd?TBZ?a?pV z^M7&h{|g)V*Ee9q1)gUzFSD3Pyv8M&o>_Udl=$D@<3E49S|ND@`i0-Ls#;(OB-%7v z-+kc?DU!^%*&N2w2^1JNdTJ9q%2&#f2J7@*Iq?#vtG4*KAJ;Jb!Afj{AZod?`tYCLzsAe$RuJuV9LF z9bgf3It9)s-@m9KRx~)+UH(K`+Pwe4S{ny8`SMl8v9&)EmCNWs1*qNmxupFg9hfJN z1%wq>>Te4ym(eQ>7;U5?ffpy*gjRX#>U^rn(NKS@s;>?PX-F`3l|s0Z+pQ>AZ0$%g zZ}Aa24)nx)|3Jd^>uGaXs4cn-31;FZLTyn%ZnOABTO^~)SfLpSwy2=O{zkHk=>j)m zqCg?}8Izl_{A#ZZO~z>i2f^*8EU%*Ome35dguq{sTSVkr|DWvl|1zfs&6PBeF^h7| z$km#=?k9m&5Uqrvv&4vrq3KtoQIE@4U?(Y(xymYQJ+HA-pZQQsxRMjPmHkO3o9_cZ zHK>>)N7LqzOlRg~pcRDu%OC&KD!_}+CW{)L@!jwyi~}dw2*_GGZTZ( zg=KP)uR&LKVJYA`4EvD;)1$63pNPln^?cv{@JFfzFjUattxKQ!y#d)6KgZL)e1S1U zE2Gg%MQ+q9iK&^kA5@`stMFQ;nbpQx{qpr}VhrDIxe2X=-G_S}!8$@584D+pXY{mx zK5HKZ0_KzwcNwj~HktHhF34Sg2fjs=JIEWw`4Neet_i`TS^G;rojjQ3f6yVqt)Nhx z+dk6LuWl-KY|~|izb@WoBDnQOY!`Y7?Cr|j4msL5BUm!pG?E`77cJR`mx9khAw`>3 z9uSU-YJtoNuNa+)tP;0KS#}sbMC&DU1_r5gZy67#gl)U7@UBC}tRMLwR`r|K&R-P3r;!fh~ z;v+L{S_CKMl7;QvJizNnD3-9W98FelU2^$lRL}KL)bg?S&B*2$To6+nWcvQo#Qujr z?Zmze=HGyJmY3WZx?#1XRmnU=L0ej3?jr|-PRG&BbkAo#T2WE3h^)4EP0xIHd$_aZ ze;5H>8rY*&$3=GZs}bP0%X{p##1~`)jD;9#BMtMuus1;w@U5`@sg){C>!_+O1M{u- zeX0bvrW68(uyAb-g%a*%N~iKdGqjhnHR2y4V!nL9ni50pNb^e+?mU5D2>Lb8qhh6) zV5IZ~2t?#sCkc?6wt!i@>UwdmVD_U)M*bSpcyRCMC=}EujMALChXWqGjfptt>cNjU zhS5RxpUC|2Fkfa$U0mWajf{Rtp%IDw}4Z{^4komFgU$ zqc+k#SeMwU`9?`0GYA71#qd;EKu#1sfBEf!G2#rdCE|yDU~Yte&h7UpuJ5&5b5i(r zOL^0;RF~5t(k7CI_H>zSCh7B`rw$N8OV>>~d++qYb}70AA$vt>Hl+s^vQog3K1I#Z z|K+>EV6)C1F41Ap$)nO=VsM-41s0;sK4%o!`9?TxI%Oml-%`9zB3t};5>c4N)ianK z&rINmSILdQj+0Im=CkajGnyKTNyf8S?rd4@hyHq`Nfa1r%T05N15F%N5sV2WyZ|OH zqq`H0*}&*@ZL+#=023?`8OnL1F!`gZ#=(icbfK5dlo?H#n)F%UAvdIw*R3j8SbF)o zhmbJCpAh@+ieqy*Y6G`=>zFQZWPp2?R*M$4pYlrXO*~8;sVI(tcw8$Mr}%17(f{67 z7(1U)@ZmjGB1~b5NsENIoPZk~cnMYC#{Nt?=EhKLQ=m@|!#DFfFp#`Kho=`A$R>tZ zN%;q1(i(fQ27%%)36>gP`|`^9nF-xxr}yY+_{7QtA_HzFB@a}U8=cHtfrLJ+7$W#p z8U%brNDe@GB=PN6Qgaq4gV(7@q5Kyi`%la7G`P(0wj!4&)8;*M@ig&zV8Ov#eSGR| zyY;k%9nHv?kxrgzG&;UdBnPA&rScT_j)dEUDt-w1ipry&(`kl^6es(&F6c@xB{4r- z{ri$Fun~H+nfk#u-FY*J!mWt?t(B^@@g(>Yc{!W)uTqg&)wZt3a_eQ1>d+p0P=;TL zB;;>uYx?==!MZ&u!y4A+L}*$qySggt6vE{JS80dQ6#w4g5CpYw!6Kuw=?tiZk@* zGN<<8{pLTc98VFfpZ{)oMP3GrWV$iCxXnn|1+l7ug-Cl*?!GQ~CcOuzJA^@rVPxhLnJc??X4@+4P#9y}Y&3h1D~#Ccsdlo0Xbk1e5dK{@Ox@$)NKbgzMH zG4Cx}T(E|Wv>2Z5ij~z4MFn7|a;g$-BSClv1h!*Bq`s8h(ib)b2#b>PT0j>zG=-`l zi~XUhK{}n)R(e9_d+N7;hQ|g}EcwRP{GC`x8VoIu)j5BGaY#~NXW2A#OW@jAPsR-f zZB&hnxso$fR=7RRdZe>E64{qr;}B7PH4-~JBPR-%-@M^eSm4p!dkUQL=IjQ4;1BmD zFdANSa0~R>03{gBYw)SzMY9vd;H4P0sBR7l7KV?=xj*&NiE4}<>HFE)QQ~B%-a;en zK%!=3n~pKkVZTw81=G6M#{Bri@h{h+Is_JS^^9+UfLJIUtmBp49oCD`;ycRbCq^U; zOxIVc678ZMRkIDvbVTT=PwNUxTP^R{->Cc~lPVydkq@#P7#@EGcQmGWL$yEtpT2sv z2@K$S9#JcUPQVWE;-#2p=mpkcQr(<(O06+!T2+31Xrudpm{Zq9{+aD|6BnUXJV-fO z5QSq}}HB1q$^hd&3LiORuKNF5Cy1=KJy(OmT(J;vY06t`T@tDR znew5jgKy|#(q@uI{0oN)EJGILB1^)Na*9{|woe>~4x=6*krF|*ZK45{Fm}C+cI0_Jv!=io*^PWy zYfd3MUUtk#GmoWH-hImJ)ldAY9pcA=t^Zu0-%oe(D13kB955n=dM%4fBC4hV;`Zka z0^dDwVBxA%O<`zLro+8{{Wd~JxLYoOMwNYe)phvaMi&RB^Kt*S_NQn-Yk620M8oC& 
zb~?UX6bjnR^5OgrTV==_$;F|@)v(9fwu>6TtDZg3HkN9p^)Lno~g5pBycDq*G?|tzYn?jE(iOBPrtAO~m&4+;5|IU78XmyHbzzaA z5jH647X9OCL8x&ukSW^Bj1*w)LKtq||C8REB03dtC#Ls z#D(+!4e{r{Ry+h4v{lK7`07P%Gfi(97$^*YNK-@qI)ST`zS_`)P}nXS3_^i|2GHm^0*%5dqt>fmhKKX=^SjP`1&dEkJYm#)a^Q8`;Rd(8N2;nBv9Yn<@AV0# z9%*Q-1AT-+pm0gwu{&ev7aiUM&dpB-$C@AkJ0TqQfGxEu3jX@!x!{d}%^UFMyA>Z1^u@Fat#72a-_q?1u(#O=XPDpY zQJrAsvhHX1k7)GC*29V#`dJ;C`Bu@)RatVO{0>T$$QzJ_B;lJGT?h493O9OKJM5F9 zUbg-bE2!d8zOZaee;8g+SQsut2b2PmfaC@?)UZhpe4}R;79m&Y0|Nu?LyUoxjmUP@ z=LH5rEmt8Xw$&Yv72X4tJ_wa|qkDBCm-ai9j0%B~^>n=kgd2IOV{I^Vai{%g_3f>5 zs7;`+5iwfs@x%7Wr72Q1Q?Ba@6L_b3Bm53T6Z!3@ zF%TAK2ur#0_r^uEmMkd}tD{D}4N7XDGE`N-Tg)bywOkC!Y}5VAiyAl%)(wjG>G{So z&@{4VgKYr-3U>d+@Gtq6fQi?K?VKfHxrP8Wfv*EV6ore>1!zLvNKn*P6DB=Jmr6I` zr4AO{-Q9Hps!f}{cHWhJBAZ3b&o(Tqtit_iXOCxoI00S2PyTd}khNO4;pn7LLqe;r z6p%x6^4d^;8zw{V&7d!6w~W$*T5uNoG3EHo(lWGvxtP8wF>>!nM~sWt=ghNlk^WnV zKNCcAJqji%@5Amu@b@+@nQn)646RbF6CCm{RMRo*rAvOs`^@qayQx%ENim)GO3Yo~ z>o6SW`Gx=rt!U$^9m%?`W|~x+HpSVSy)%D>Cs}~pcrlpb4-?n7sUQWlK#b`->M{bL zVU_#Hz@S9C(##UPHh%*Oh56IX49{n3X%=Y_2j2ublkaeEH9TZEU6bbrYG05^UIJ6t zWR2Y@(2sftgjw#QH0nGXfpi^DQXD`RGTy@>6^P7D{&sv4~xH7E?fS%m_-E{ZjEa3BqRH!c;n=X=DPO>DhAv7+7zq0B`x1IeqI7LO zL>OEf7(_lgB)z~UgzC{C!xq+E7IFLzMa7C%y^#vEE^-svZ&khGyv9xUR`3}zc?-FQ zi*^s*a%8sFW@o;{*7+#+mlgo>hJKZ6_@Bg09}rb1^7&I$@*N(Q_b;}GRwaH2pU$42 z8q)Rbq;~*mnj)adFKKFOYA+fT#RZOfVP5y->Yi!zSY!+qv^rdieaA$5-IXdClhYfB zm~4Q{I3kjTZvxSn#;?w$;mLPAixkz>;~Jb7T|OMAqNFentE2SYCMRcdfGW>MJEuE$ z`zR8~G^hdb9*%wjZ$}whDdNrM4J(@VKAn;$jE$Y6!f{4Ro z=5avhQ3db|@_kIoGPM=ghVB_TtL3to8~wJ5R_lpXBN}(b1sno2>r~?zEaCy5d4(=w z>@K~>#w9>_l~4059GfA|i>3QyXt+(TMM?Kg{SIuU<&D*XW^9^VU4v_ejH^`oB2n}h z#zh8%=G!q60smsQKZa{5Ud>ayA#(hFX(dAEGazSUhW4Cy$5Z@4W`8sTwo0@~t-P*Ok329jy z^NbG@^z#I_VvK^>dEwIva1l6qBEt>0##jlGPrjX>mD&Pb#M-XK>Qb-bnf)BWbs&r4 zZU46KmM3&(W!~5f++6`_{Q0Fc@<$_3>{#4$nW-r)#dg})h&a87%lYGGorA_f{`E}2 z7Dm1*ST&+b)^f@eJ0bN-TKBZ5_d@(6<3L$w3z=aSfy?i#lovMi@ZNEsKjbe~{8G{o zl}FqS_lnY*0|KB!Fu|ETOo*$b#KICY!;u1AUlPzGi&~wjG*=OW0QDu?L7?i98;N9t zqzR?;BJvvcQoltYvQZjQ_U}6aut^tqZqcvqB@!?+?k1Fa-aS8hd(L5Z^$4hb=A+cz zzKz}e{{6eXKLV_sJX9C1#BjC~yciUj=Ep&gDgJXUbNAHibZ@mks+DVISXE7JsQi=B z>Am#DvN0j73#?;Jwb_xTmtri>D)xmADL7OlR>@JpEw59Y|26m!G(2XhETMGjT+Q~o z@c|E#i3_UOAEog-|J5P5gcgiC!}@$#=xZ4|#B=AP5LNlpb{yqo0cArVfmi)}UK&O( z3P$6_(Ud!0X>J9C zIB!A5CngFuHlBf-TV}&rDZT9=V~0Ru>vTZ^avS*)0iI8}Jyy;PnO?AAZp2VOX?ETm zX;)@8{5kaMwpkCAzWaLihU@7=XV!T}qy^Q#N`1DdK{1w;VBdO$d>M1MgWx>vuSA(hNDt7sU`~X6Fe~cr zm`&2Rhq7W4%Rt#^fY5vAiZX})69rH&`|D}C$L-CvFC)WphT{mfq+uG&rjD^K<#Mz7HzJY zt}bn9EA1UIr-e*ygS|JSZBnoq+ZIH_3AkD3v;3W(s>@4DYwyBuhK(*;iNwM{@oLTf zSMTj_68V9fa~ZILM(V~V#u^47r;;!;{d^PGp~{(|bU{9XhGI-8K@wEr-e`IF9fRc~8vNEg>s6w5B9fQfxxVgEBr!gid=4WI` zp|lXcgD)sb;O)xTk_;sK*X83D>oC4iW4+OF@YasoIV2B71C z;%^e(%RugK5QHi&!gMEV@AIZekFRwIl5y%&;3c8`>W>}`m*20s^J9CGjqqgy5F<^% z)VNtYhX!<;m1*vHd)9az?_LHyXN&S%7nDwRVkJ#rv^FCRgBBn^{uZ`>G6LFWfQ6w7 zlWGW~V7E;6;J!Os4W7aSy^gTQx)x&U8y0{Wew9b*hEV2tC`$>c%$YJ|o=IjYWLT&~W}!)l%n6y6WF|9VAu`Vj`Cd=&uJnHF z-|zirANH}2y`JZ}@9VyX^E$8dejQL~Q@SxQhamt`puEj*3at6C!NKUH=W3C?OT7k;R0DX`HvHSyj~K?if~Pw6QqJ=+dXh z4jEZEg-hqFJAzJkUUHDCTjSInr?ZO)YBQXPan%2;fXFd873ga0doP-G9P{>JVg2=( zgLd!MK|(k)k`)axHnC}JhHkKJ7!Ski^X=n;x0LOemBEJ2&!d2m0r8!F7CW*9-@8CO zpl8%I-0qlp9||(h%8)ia&I0Yrq7PN_%#3lwVx`46$h)o>85#AjFMQQ3FX?hwckm{+ zTXsV-4@r;JI&|^+cT#8P=RbLcyCo6!)`avxbkI|nvUybx?br;i_Ff~LhD1ZF%aR5K zrOicp8q4oRd=!oiN0=>vXC>_7JdpGY@9~xKfs$^5_k5F+BED4pCMy4kd7E3aL+lXlOLU^Htn|xrn;pEgKm1j6L-VBY*`@(nK#IcWfbVrsvfeyYKxU%>b&SlGi($TJa-M!}5G! 
zZ;7tEtg-_)En5ZZ3j_%^nW&Hx>;k42n|+cbJG``|kz-Vm zLpATiJo$i_VtI}|yfMK(%9sTLgBFKV7#)$AuyZNIvxk9{7nGDd7>h^nLO^J&Cv9 zm#O7yYE|=p5DFhy;4uP5e}J3-_6vky=OnOISP{jIV4%I%Ct+`ZD&IO@G2OMwivH^OglUTxx3) zfx(V#oXmr-v{UH?^YhLb2cPNeV4)d3IB__*T>2GglpZ+Df&XKU6A^sd_GtHWRPD#M zA{2*ZYf&dk{K3$L*_-9M9&58YvwZg-3)!!@ZYN zA)1wSw9gW6HuMQVu$Qi834VO$MlGxA;Cz6vd2gNty`be)RL6YF-EhO60-HF_cBKNIHaMtJwDugpJf_*T8N}U~Qr0Cmt91aM>a_)VejNVN*-%aT`*pGr9XBQ&p zy)E58h!K6*{8q@>ZN8>Y^;i4ozfm~9lv4*x@zbBkyj}hB($XNYKzP(@DbaJrvOztS zNTS5{9$#wd*);xhnVh_~t<_Y@FJE86##CysQx7h@{|~#qVu1!RnQiYk$;m_OgCv^L z#~}^Jke;ENJg`1xu>F$KbBMs<=m-d8d-Aum=pLv&TZ#n*@y zrEC0R#5STS_~JAC(;afbkxO@lZPz8;rqHT!b6?gxK!vb=ZVn_1JFrr#{I^ok5S^qz zef}$`fn(4w5M4)%E`M-Hh=-W^rnZ-t7iz){3Sqi%*M+4mSoF|7S$D~a17#ER6Y~xN z+3ue*D25eX|BXnp_n9x9ZMP5a^QQ$eE{&u-*-TIK^Uw8`7FPW5F4&0m;&j7|i3eMk z)<3U&NE_W{HU4~SwM6+|av(uT((PYO_v2tdLU1YE#bjKJIrDZ1G}3-F8!8UGM>5D} z&+#)l<=*dTxwVoJRmWH`CDA!IGYcKgxU~*KOfCfEu+a2kqwP z3#hg?A@jD8NK7x>=*=-kqYnH`Nh>Sm^gyM4aUQuH!CibbLTZkFV!fx?`)9QmfiXSt zD;oZn^%(N)%L72x%O&kgp@L2Udu{;c&R5-owDS``*4`UU7-3|{P6X(Puykw&%dpda zJ^Uz6BP-ug5l|i69ElYYvU1fcwg;YWV<+Bii8op6(X9=<0jw}lZ+)4AR00xVhrZf& zRq4#&;C)_DM|6NjeD_p$q}Rr(#ka+=@2%n9X&~69$MkYOlKn?}*UZ+OPq{Zl;TJhM z{1idgAXam7a$X*4onjLDY&Y4KzNioe*`-BOyr{$TQMeW1o;*qle;nM&8zI~N=e(Z7 zWF;fPTmqo7#?;jDqZLrLku5K_ddTRV;hKTT?XB`{x^Vl+m1Oz#CsxT-@?3*bmH#$l zvY*|r&XEp!d+J>`@=P_;K1Iz)^J=|L6^zO$7@MVs=v%Kq7%R|w8$}J-7_DV}%%$^W zx0V+*u>-1=Cw@8h3N=W{93C2q+BD-cMa=IdWmdl*7so}ZeGM*{kCpUr4JV!+e5X73 z^=COwDYc$_s(AFi$g6n}O)kT3Q7#XA3@$Nk!M{BsKI-Wi5-3M;XrjB^#|Jer?{V#| zYbM;tmx5e$DYnvaWej)4lhFqdfjpBB_eGW+K!^1!s0hxi&@#DCq_}N^faiI*2>iY$ zIby9IHYX-m&gzvx%1Csl@Yf6fos#m661LHHnp=D{N9!qsXDdxXyTax(Ra1~x0FC%s9kE!Rp<&tZrno@SfG@-dq<#Jhb*$7WK(=L!gEOP5 zp60l<{BA)Y(Fl%C{e#2D*3_V6DAaRP904(M4Q*|0)T?>JsRtyoiqz#o_9hDz1GD_Evj{;JsH!} z(-)>W4FU^8-S}$cxd$&^R##64*>M#+pN&32yFrPuv9Xm~JPDLwE!zHF|6(my3F%|c zytJt^j}vrwmAk;nRdB9Xi`v3RNr7p3qV(1!<03)((EJ6?E8um3(L$@bkEf36IbZl8 z^bM|my0}2tMwmMGi8T>L53X|35Fbb6%%|Xff_A{~oa;|Mt2PP}E*&O!Fb*I_xz6^# z0QJf%kWeJ80Pc&$3kH$lDPrk$S&6JPE0SZm!+}h-^Nj>7vXoWiNq|GTjeY=7Y(S(G zn91c&4L)OExRF4w1wr8!$3CvS{k)}ev)>_*o}L~7X0bC2pfp*c5Jpd9@X?)&eW94iYgx^vuYiXl)AQR$156b&5fPE(pp}95k^kTm zyMrJ{`-nE}Zf|A#cWI*C&5U){GB3Vx3=h|wyk+{MtEO0^L*gxWN6(uFVqOZDaZ32? zbq>q>YoGnI1$>4G<{E?M7qG3Cr<-&Zv&|@idvS0D`yo46<7J0C=Bc=t%I5j7BQ+r|;A3i~fe^jrx#R*vI{qa{TGPtyT?RL*V&7w+ zi2=`tCIxlU$FAY7#i!wgrDHr|fL42uCpq^orECH&a_M0=@$L!WUVq<0yXo$@);DqO zI&WFq%&bPA7c6ipX98xn?ydycA&Lf}AT)9-RcE8g$*x8)1W8~G*TW3S|Fb^(ZK3?} z-}pw+{YetMiUTMt_-v*Y*nMUS{aB|ioi`OM;7REq4kVCoSeRR zsP_6$#@P9VIY2l3Ys{rLAFD)KU;s1kDEf=!t>2u{{noH11RLM>_r6_v?JsN(FG}s8 z6exTHZlPjfdOyO$1fY6&oCu^fM=71wNghnLKg0TkWN~S%ug=Wtf>uF@` zGX`Fe2pUekR(h@q{TiujONMTf<-r9Z>)v98x1Ia_lbWj-n7;n(QRG|q7x7cLy0uhR zofwoFc^`q??IHFh-f7foacWtB_9?E^>)Qhvv_W3?zYvvjm6LgR7+&0Ot+Kl8?&x&^ z`f=&gXEvs^UBaCds}>1vx#8~LCFLnF!<sYJy5W(Ct9(3D=r-B{Q)=$T~ts;XM#@^iX`T~_uw6t1CNAlMe1X_2p8kvio@ z(`>Q-Bjr^w?iQF(L)er#Km!waDOnepf8ci?|7c4Hte(E%`e8q7ggwHe8oco@t zNOmzH9WHD;OwVl8KZL2uj+S^e-TF#Y!p*!crg0K^iywFHs3bfxXU@L<9<$jmbACet zI9VZw?S7pCK4hX`ziOo5ehcse^4}NUxb0dW0MAEPIJPfs#^YG#i;Gq^>S^S(U%~bB ztlp}R6)mv^dV>1HfLKkcT4Rl;7}CrX*1QEy+Qp-Ftx+MrAQ`?OBsx6_{BSwoDSTtf zw|?zkR{A#v(!(|s#?Y8~+3Sjex(8rCtBR_xlq{5Nj{kTyrE{>eBr=KG+|6$k4|A)t z!C|}sp|){0u-3GG`tv9eg#Oj^rT%3=1U!mm`L#CZt~RL68BuT+3-r)t>x)w#RxE6? 
z!t&;rfi=H;p{r4z;feyWn1{w?vf}}<0-RUi8E1p`EKF=c;)|fHUk71nR>fF zMYMipjO^@f21b6LE86n)Q%sgDrXpB4fDRaL49`wKljyj(@fecE?qR>+C5y4TlE7y2 ze;BnDR0sw8x}1tHROwJhcy1N{!(yu`iLA!E%TBb- zdaas9^H9~}Gx$9XvjI2wra|FVx9YbbnRB%BN3pF$p}z_lJ~rf2K46=k5dR`Gk?m>@ zJ?ulW7$I}@wD65#6)w~yL@&3@qd1)z!Al9ym!jRl1@DbC|prwZKeuixPy5Y0pyJh&^)w&3khnPX>QO?ekDxQh@=5 zR}Hx`*ieU*r@d40XOHe^s>mcGlVP*RJ#WisK=iD8iLGhewf!6M3E$@Ai`HJ@RV^~? zL7?6`dg)|vws+RGdTC4PvkSA(j1k51akhN8@<8l{s~j$XSisTw%g8U7S|%BZAhv9t zzA~=JbF#kF4D9}#O#CBO<67ls+7@jl-KsADi`{kZqq&dL8gNm@>y-A!s?koIh3&Mz zE=R+j0<0g0I@bMf)crPoGuSW*$im%dj5;8~i9lWZxK5n=Q||5f%g={IK*!KHnn>8Q zG#EU)m9&&uO`KQ>Msq=4;SeFc_^}oNh8L&%YBScGtJRG9JMjK^oQf3W=sI()o;8#K z?)u?~@Vjxlg8VSs@G6v*!YVKQbg!*43{5zEUVKWcS#iQ-OUxy}H2pwio-7`!hXw@1t{<^Y5 z_7JX682bOrTmPSD$^^1kdecml?lT4Jg`! zPeJ$QjXW5%{UQW?3N9(f^r=U$$HPiZfCc`6c05Lp$+GOO=+jWrhe_0R*z7+&VUp!+ z*rXpTqt+xSCsLE-qdQFiJR3iDDX6)H#FYw!+0|Mh0OsE<)oom8VTngFv$Xxl zLE{q5*;J^2G4iX`^)>bj(^!zfzRJB+eE!1@Qo$a4v;pufT7}2=k)YHOlZ=Xfu=`E{ z$GcwbEia<8QQ~t>C(iYprTOe&UF)qBra=)G9XUGQ2DM4P7kt8y8yyMy4W3l+Bb3+u z5ygb~=qt#hI>!2cnaN)jpvg1eLdUesTUF?n)nTYUo;o#Zi$Fo=X{#I!3bI?ygQd=? ze5$b`oE??-NE`EHun${P>XqciAW~N4>aA~H@I_?VKx%`3V1cB(z$yx0N!H$r;-3hSsQjnY3}HXre_ zJWVSHCJ@1>t^M_ry9TN{N59PcBa-EUnIPqmR|8WM5v(?{6m~NcCAXGpJNQ#z(5mI} znftNUnW$T$m(8^)C_Q%AI7%A%rshO0))rpmo9;$F_&?_sSxksefaT6u)|uB7fDFbl zo9CE3o11A{Li*{MAKhpS%wg^+mkJr$7tD~L3KZM)iiA1r-^5--CJYgD=cwtYB4Nn;`)F$rEkvWDXL=Qkh|p$5;(xvUJUDP{pJ#oVAkA zr*hY+(SGfiytlr>N9MTWqxN&dM-fcjvKJ;<)&_%5`XI97ApJR9ezGhPm^6AaTkf0# zSn8=OI{A|KgCtve0}8~i~gVI#`pU93fsD|28L_Cco~uhlZ7xjmYI7AEq`V0 zDB4^*44DPrm$n$0LaV$lzgtIi9HYu%VXV37@oPE*w#ttO5?VuP=2;3qf5hE#P>Jtu z6f5|dCtsw9rlPx^jvU3(Ly~!8ZfhyAHe+;55wZuBq(m^igY-92Fe9543a|L|*{6Jt zX#Lvt{YPM%pU`1Pj}Mom!l*9pZI}kBc|g4o6tx)X7eGf?|B-ZOJ+KGA2bpIUPhz+kRuKY+ zS3hlNcL6JHt|1n3yRmL95VuO0I(V zafI=-e!4i#KXXVB8vaa9|Cy%#>x2H;w;qlm@I9Xq{kl)9is5Au!9Y~42lhd2Sp1!< zF?0wDf((xdpLnsr4m0(7YhNUPmlZOr$l?v&z$UOBfY?4nhxg~G{`t#qnkEP)2xb*O z7M|0|WyvHLXPxOuQhP%raanE6W$a5`B}0lzTw)72a~RjRbSrxpxBnr@+qvf7O(j@3 zpnI`;Br)E^$iOJIzX9RyS(s`CgXpS&S>B++%hK1WUe)!rv8#OOlE93md5)hoZJibaP}baH zQcF&yx`?;C8=C6yUOm?f6a_tD0JAa~h>HCeS`4*>^cFCgJ3s$=hj~{FArlNmNxT^h`0%`d3VE}8yP1JD4-|#5 z-$Q|RG(yD%oj4Y_A3O4Y_1o_YK3?nEO<_9KJ({@X|1RLUZA1`*PRj{BUP-e$nOJ>Atg*(b;Ej@e zhxH!GSHme`xm2?Kbqm3_NR>Ir1-X;T(X(%*(7)q^Kav9CC1waaxoP2mU-<$W1}_`q zgfp_TR1lgr>Z{_j_7bGE!-ox~;mWh>I(2JLwLb&2m>kYv<3*jEYiKklphv3a?Oldc zZ8AP^d?!?un|6yyzAy!Eof(#AhEBU7^fbRlU$d(ZXr^PYYhl!K`=*bmQSt}ys6N-_ zfC8{8))6xz^R3wLK7hXCzd_%ld})w^qlOmj`is<{B@&KY7W=ZGKm$S1eQTjV>#YaZ zXTQwK>V*6AlA-&2k}3Z(Ot!!nz>A_Hksr2O-&ZcAUoIKFuLJ~4FAk}Kw6xJZnxkBV z-@%kq=gd&M#vU0c%L~uD(DrthJ(Fz6aeo`8)sv{ zFdpTftOjP>0c2{83-$T|l$%R%oo7bkt??sp;jn4RVm+5j>p41$<%!k>x2Qhbx5@|& z0AXiUwoH7c^UnMI&mo!F?Grgbht69C=(1s$hK4#cYB|g3-ej(+l4l7p6 z!@%5#0@#Rae_)#DqH}&r!F<%7 z4K@{XHr#l3$0FcI%beW?5-F}|9Xl3`scs)AT&^5Brxz?*8p+CgGVBzEU~6qg!(lp| zOS{A|COEwolEA_%j{X5PQ9+ho)tY!s*l0-TtZi;eQ!InIUgipV+oUyliPRl z%#vjD_Tl~Vr`ZnhRd{7JHkmX%c52!@t{EhHbC!7PAftUZ9=W@H${r#P8N7X$eDMz@ z%UBT|o|r$j0uGDx%F~N}`g}f%A5JLyR(<~bnNR{6$hV<^O!WBiyscz|o`09RQW4OiaX@9gF<{?eb?Fc~+rxo-WRU*GYd|NNQ9c8QDis_S_! 
zv_e*Us8)~G^x4Qz&6C-#srm}pZm0apWUdu4dQq=b`90*YD!t$q5oSc8%I&T4&)dR} zyf(nwD^_}7GKR)xqC4M;?#H)gpX-aGA4o_^W!2QE#+wpCHfG;aii(OVUbs*RmuQCN z7-OGkX6u`U0|mltv>xtk$ApINcb;q`f}`WZ2OM`^^eE@PJUw4OOhMejkweM<=e3cg zGCyCzgS!&HsHQXdXg+{Uv`kZ-@Eyh+HiO=A#raiWZ}a6V~W(5%Ilg zA`LdL%Nqrp@|?M)c6@>io*h0cSL-df(lf>Qva@q=fbenUSd-gFtU6!4hs%fOfnt~Z z`t<<|3kx?OCLIQ`j2zXB4zmf>?#>CyU?;67hFPhO4(%6N>3!!Z@fm8+iHTWc_; zAqJCxtb(_M_A^NwR+U^i^nGQD`0W5@rmxT#NCYwb#(N>v2z>pTZ>bdG2`Rze{qz6J z>*J~7kvkqbPk^mGc0$9>#mMO%8IkF!hgTK30trQ9%|r#Br!=DUQfCyy3sY5Z6TCRB z&o0#;pyFpeHlH{4Wyg*ZAFq%Jq$`ALQBlm^>nw1?F#+50D#}xLMsQ zR=lmfz418>VqF+eh}A28eNIDu1DQ&w0-t-4AuuV4`RmuOmvnW>qCeh+3si?- zo~IT_dgis6qobokaHTBvHWAs7vsh$&eCX$`*_%Z$Gol(|AqJQ#FPSlC^u z#JIQ+XA5LFqZ)x6Z zKQ#fqb*&!F`j^GE#3Q$w;D}<3vL$saYA}yd zzopV%pJKzPOvlqd9}PH(FZsf-(DK)jiuqZJG=;ua(HIxdDRD~zm;KKg54FJh<}hnA zW;{u~0vx&&x~7#f<&vd58&*EP13!NJm|uxX`s$~-swXnz~_Sr}tUSk*9JvqgVE z;URNXFsR-u4>q3c*s?!=io_BHe;nw(G*KPzqGF=Ng>@x$?GP*)J~DoZAkakU;IZk$ z@s`*i0pb#H<>tNv0*>7?=r;!q7^8l0|uU_YzX`?r-fS zbb?u8{cvtjwhm0;e?AHcNx!~N@nDMvCPDq+D2F49QYA|J;Ar>d>%gY4F)O_g5vq1? ze@@EHN?x4$^%=#7hG?w_a*Dm8n6$#gFBg7W1;!0IHl=q}rM|J1t&fnkS0J3q5je>T zZf->u{qu_gSx+jAXdr04c3d+2{p=mD;fs$Bl6luopMClAun-<5DVME*B?VKN!1P9a zbfr5zOumXf0pDh#tp_uEa72V`R|A~hI&Jrv0*@up{Y{jfz%^T|I!fFcQbjaPFp;|EiUgJ8vh5_;#c1^7R@){TMVBWtsO!>4%xmQzPkfFdP*eX(P&=t z3*jw0JUKZ@LQXDdtmWzHY51n6AuxKto`#6x1-y3P!)Wnmf6Vijjlfz>4B^|Jw}7X* zWHR*KEY2yezDsmk-_$g*J;S{Pd-q5vDbX@Adjz^+kISRr z)yktJ!j*no+|KjfLPh9&=YmTDgOad#&PA!tv9}i^@eAmpAz9j zQrN|U(=Ee9ff-_E2~{YiUv%eATr`ay(;# zBx+Pq{QL1<{?fMu=c!!J258~bQL9z4wf z9LMUpvd?Dgzth=fvpw&p0p7WFADgY*uBKssmaIdG3vH3ka7+AJ-Tej5$9Gj;J-CFv zODJkNwdQ5v=Ph$KNbqvH101ARj4Y_^TwYq{9p60h8hod7cVWN@1;0*Gc#_x&+kZ(v zMoC3X?_5Ygvh2N`NKmtWfBE2)z@7PODLTArl%#{xh6)@>>3Ef(3vKU8UPOE`~ zi`{GN-@ot4NtiwyV~yT6A#O{Hl*eWee(xEY>b~hAwR4^PFsC5WqkRJ1ybpDU!+vhy zjacq9yWd;u$9f@Iqmn-A?LghYY-R0rx=y;H*RpbqeO%XCTDcOHe0kZ5jL;Vzl3O@X zEJvDA*o*~zH93>XnAo?pBelibhdPPmaEl3!N~r#^``d=kvjf{F1(?4VpK^W6`2PKS zdy_WX+Fx1#A}F)C-d>$<>-*nVR|Kb)rResdIN=;nL~*a}?g{Kgh_1EOwI1~gYr7y^ zFl|5Fr%tXFH+MEC&(ZDFcLvR^{0t$xY7oOJXA@UgS(N=uD1Lj>=Jk?hZlcL%{0{F) za>y__HOfv$U7cnh8Ced@(3_Z=8clU(a`5on)xQ`X5>f?p*N`l|x7WWkB_Qdxn9J8Y zRICa|#l+|#iOH1MC!G5KQ!BLO5DL_#z?f#S%B#Vzm13lh52qcCHNzNsyN8R{!D`(i z$mOpv9vrt%_6QYlywiK&pt*nefRokEGEK&h70EBRO6__Jpa7#VF)=ah$u-p~c4UW% zf$NowP>i*#K!Fn@WJ3?BV9@Q``29COdmW!}*j&IGkJd$+z1u5ZbZO`koFfO0tHr9j z?|&6b%Te|^)3&-ugYn8o2ct=yCSsy8(@!Z{Eb4~{If{(33iV^ENf=7b2c>TF8exJC z6gW=btx?_Kr!o6}`miuDv-GsIYC=&Ev$&mI_Pw4^gg&XQrRC_h+OWdh;JXDFa*CA+ z43I>o^||#+yYrkc#tM*rNw}Fv>N+t&gE6q^z}a{phf)82Kjlt$@%v#F!v|Yl2Vk^b zymB#CPb6D9!|S|=6PnHx>j5a~(C?aisNFxA6n zQg`6$)vH4=(l6`kx(e_8uBgm=%e^6npTMqstANyVKcmygHgxcEGiIK9|HX2tAuC|@h@<9myw=U#i24TQcfDJbyTzD! 
z?3b3JfGP^-NozsJWhKXCt2{laLKZgPOjO_;BQA2sii!op*69!4gQ2m3JnO3(8Y77w zv+rSUCL%t5zk$K$t-WFwE?kI+jPwHxr@6njy4vt-$BV_~dGl?KGE@sSDra=qS+Vy{A1S^(#jk&^nXQ6u;*hy~Daa+W;MBh>#iT~}f32gj znHu^l^cX|sM!`D=9I@6ME^=~mH-N($i=rwjWV&Zo;r+f&O+`jW2O=j4#5$-U^_pMp zvor25a?r=rfpNY?j8hnQX2)0W_>&^DqREof@lf&nO8s7I4(F7jexhd`4n=Vqjsl{0 z*C7g@5aBCGqHMbkPz?R(YyO7Tv-fW9Na8VqKb;j0W?Zm|0T^Ot&jU#A_2kKuAsp__ zXBli$f;gg(G9jgrsdM*~UP;UX7s#qC{13;yq2Ftc-b4y#b; ze;3fch*$bXA0y_%Ip{Gb2`7j0F#mPJ+Hht&ImQI>o3{KdDMl2}+V8=j@CXPg)B0eq`fqKlVXN;^^SjL( zK8%WbOvN21WFyy{D9LYHe~68pokd!j`6JwB$PxjSOv;ld_y821?!%x&4}#d%1}c#9 z(vTsz43Wc}0}OV_Sz5j%^}CoTafFJB z%IwqFLbtnatY=MX^IZWe!GO-meV z!eoOdRTAS7F4uOPRRlCeK-C_Faq*H%moHzh%_s#ytaUiA9Lj0U8KThkPQ^?pqH!G7wNbb1Do5Etu zFea7V>@RHDPZG>}I>r2l3gGjg6OaQqEw^?qAr#uO5Efvb>|(NfmvDgC?c28xFfn1k zMI-~@W_I=?;D$l718-SRpU(Y8Ft4@`l?Ja%Oj`Hs%Aa@KaoF8x0G?qyeZlmjD_tiP zj>?Ehbx%1B9GlS;<{q=@vGHUWIvl1^~DkIcOQ*KjC4_$;nYvRmDcI$n$~m zJAdu?AmBdz_NJ7a-zpMcy->w}m8id8+3y`ud|Qc{-jsrNJ_7@LbB#4C~W?Wx?VR@zxXd-y` z1#Uua&|43N3${(Y-l8|bLBm-51jaSDwi;Lc{QOXFMrxundeLy2R~<(6|1z}SHKI37 z%XnD*fbA>jBqZg=YZ(kx7dU>~GEZPo3D>~X-lz#wVnyF(e75X(^gM&8oDIo*er<;R z0hiQ+b~{(J#T|(fT3aCb#0yhkzSY%d5uYF`+S6Z)#j+kdcH4-Y*Y%pqI^qI6J0%Y8 z7V|}ViRiHzqxFhW4RYTAgY$?Z>oXm#pLIX)=*A6S3an$SSPN?t#`#+d6<11IeackP z@5TQ4MZ^%V1L@xC+P$N(D_sb%7*ma#6(+sCNbB$WJDCm@eRBZhusx2ML(;+To4wDE}F-=pU7ovQRBv` zzqA}Mx0m%6E?KjBG}#^$HKdS4NQ_Z^6DumFq=Zfm)m#|Cz=%&`;lj`6Aam%j~mr)}JhRB@v<*_I$yiBXn zmc?Z8yPiEC<1dRzT_?NC8}`Se{{--irI%HnQGsCS z2bf$|HW;eg%FiwK$aH?_{lvgq;gXwkw)B$p=-l=1-yOL>dY2ZL%)p1{4TLuDK=c0m ziOu!Bq62~_^-Y|@j!FYDg(VSh^)%eLzkjjRx$p|7{sf|rWghv7wME(NQxs>N zfbUNG7ce2S0SSB#vufco>DNGgf$Xd-1@JK6pjbZV>MA5;-R-+F)x~<|3^n~cpt3Sx zh+;Qqt!s15E9B8{CTzO0^jcqBdA0>%{$z^~tZeUiP3F#X_6uCC$l*?Y2S8Fk6!n#h zjVh#$A~{~s3{gO0EATcrVsIqoAj+`HVE}bFW?JU+S@Fu-)+BKELKbdk>Ir5khU{L2 z1_6_pBh;dV-@bjTYitzs!@PRM-QJ($GNm!yn@@Hsn(V`e4~pl{%euRZA+i+Ug4Xt6 z>YiOnHiiFRr(l?V*ZdeyDH-4-XoB6cjWD|h)^#PA>m>9K-|!= z?cSG9c({m5Ccd3Ld$yvv*{=E2^0oXZQ-CqX1*f%hlEjCHhoQ;a)2xxH+bDyciZ7zs z)oalb2|PEd&8W++S!8SAmu|mj+C$C}sX;-0OF$3&+4@Fz%gF;1Nuw3>0^cHBDL_Ui z;=e%6OGEQe#XIcjbf+?qM~Ru4g4V^jJT}YvbJ>7^6A103K7CqyhZ=f=EucuZA0=`C zNpIcPX1(^HCB%||A>#+UAL9@A*`eeLvk-4Jqb7GHBqCyAVZnp0ZN>a>RqH#Co7L6T zs6*AM;Znibz>UOPwVsT5u3w-hDSoPC;z_6Ntcy4} z_CzR0Aj4+n>@ zkfQT!{}K0LX(1ung@pyP(F>NAoFFP>R*CAJuOPaV{V78e2c@21VNXxb@dK&k^z;*u zGfYgFn;%t>=R1+;c1$7ku#p^7$*c@5c}FR5qlg!;E)9P5c-}1iV;#3PEsvcwKRo6~ z%v7=YZD~!StX!P8bRHkb7mv1VI$y4U`v|_ruv<3eXMDjBoz6gXTE!Gv#Zb;57^|*Q zx=Ybx4GNpVAow?ccb&Jvfi$~`SNdDeFNr?wUcbKcaJQuVtf^fm)+rgEK_y@_&~g9^ zT48)3yw_fvY%G+QDUWuTKpUr4J75_DK_CNuo7_!+v2}RN9^1ev2sP~7d@GuH81xKf zK<4qtizg~#mNm!Kwrd@@OH>G$aJ6AoP*`AqvxM`F6SFOuJbp1r>fNWNu(@}8kLU5H z*?lf^0~G^)le!4WjqW0c)Nk%?3pI4geCG`Oy{GLY8p?mH@B#{D(dp#ty@BA|D|=Q^#`?wJw*M`cDM>zYL`uK;(7Y$#-K$e6#`a(xQ$98w-Y z=IIHRvA*1{ngf7Y=#evfH4VH3ojq0T?dKEe*0*|Tr>8BmjOKd+=uSHXF()TKc z=CSJ4W-L!%M<)?xMET>?S64<@cdX{2bshw!eBuMEfLrM@5N4HLK2|Pdx1U~s1O&1L zSY`5*D}i8?cTNDaZy{Ga>!4qJJc_z)9h88K=~bOQKiCuw1e|G>aAIa=eEUQ;7J)fH zk5d?DgXnUdaXp7kLcL5cDWM0)Vpdv@W%daLxe8pH5^v4=0&Yz-4HNbaEByn{W~t>U z7J9AM@uhoP9E-pLD53?z<#+vZjPFH3p-hHP`L-rB#q(9}C%l^+pi83nrFepbCeZWj z9doWg#v99Pvf*yT*Tp{g|5vF%iFCl$?4Rnp-!T6G8Dh z@W_!TNmfJhee6Yf+^@|V=}Vyl+OkRw7e60<*b>Q=7^3^*#Vci-vgbb$9#=| zks{I4!*=jg(iFd-^OH;lr|||^fd790^DgdEV1W}vHF|mkgTeupCJSzc)Kg-q;mp(b zfD&X+J`G%!`u}L3GG3Np5P_uiVi0Ra%iT;<+lQKO)vgrU#a5w@%UMOp$BBGrKYu1L zRQnz!AbJ&hZKnUr=$$V@kSHzX1vtO%<41V~g}nkTIiQRVB6F!lMTgy~Bu|~PZC+e7QMAzp`K)xi_-_V|}CUygO@ zE_OOW%cElwd)cfp*0kW=*OwXgVDrLnx86$?+4Kobi2eY5v(rWx$kuV-YlVgrp0beJ zP_{b70j7h9>zqZSaq&(~{5Gs5lv}+}Zt&0f$jJDTZm_Yj#bi^G6ZI^s9_zI*N={8U 
zTFkIB`28wmTHX_VP2lq*XUx9;uJNu5k;q{DoMz7|&I)-&eKA?ukEGZcl7ze~4;>;+?_s|?VBvXZ=-vq@F$|mtSi8GJR#YxeZA7AaA#PhBMeskr5 z#mreJb8g3PccNP3&}2fwm=V)`|fqZ23-|L|8< z*pY5bIQC51j{Zgk71Y#Nyiz~gLIw~7s;jR*?T3Nu0#EeEGgb zR14WUP~;PeBYgG*N?0Z?g4pX&ErXqa24I0mUVZ>z$)~uBv1MB1I?vNFGPe-sC+-6X zhWZ6Exj&yYBlaqP`EnolSP-s)Yy7axaRoGz%^BHcy?0Ja^r*q$rjD?pNFK;7G&@*V@8y#0!{So;KD=66@L5bT&P4$3$C1hwBnGw(fZ}?349|xljB$8J(nzNggGQ< zXZI!B=d<Fx;mTXQw+*fPB5dTIWOmImFHWr{ zZjAm1H?r?QdYHa;=>cNPRb!4R4%~H+B`2o}<>kxI+4bbsSf|oW=NIEZkvzLN#>Y2M zf-_ObY5XFhpZEOv^9oS;HtXdRAQ`r=OClC{^E|u2c@v~y`IvieFMg5UJ_71-(s4cC zE12t>n}t&~o~kC<-WH3#=6H`(&wL`S_#6R!v*a2|YHRrlw?P?-en@2a4z>L!bMaNP}BEtZrA=Ox{yg~bWZeVGJG9~TW*%o_RG3%ZY zg?o1K#JHWgOmV30izurw_jfH$IZ83|aiR)o-yolqm+}O7fqK-CHSig&H1K#R6lJO? z=%2%r#!Fyp+D_hexrUG!?mPV;W8APY%pK*n_@im%L#<<$_&~n29SdXzMpx zKm4=0LOq6j4pp)7RF>jhkp8u6p^&3k_vVp`d7ontUmo$Yuy~=J%|gUj_=FjzW)Ru& zhKVQwkx{j!LpJ3tYR7wZK+)~SP{PeYStw$a>r#s8UUc@hEkJK%z;l9xn%vpk812DR z;zK@SZhn@4j=ma^S>glUBO9AQ((FKvF);)+DJ0D^tFG_4kz`XTW{kxMW~0|qWXZ`> z%{KzQl|7eR(PkS+-CbA?>2RZsV$kLR*8?FplI8u6Ls0l3I~}RID~<+98e!fo1I&VLj>@%;t_Lp^AKP^A; zIo~R}SSYo8$J+|IIwf%pDeRn+7egA(bMcI3kP4<=**?Vu2(J_mV&B@9-2$6lP zlr38%3YiBP9m?ip?>&C+({*=!@9*!rANTh^*X2=<&iQ=aulMWqd_BkM_K!APiSevs z4_x}`nvA=gff@%<(;t~^;$(K>%w@)B`c_`I4PHLFrgpY)1Q&Rf6Jbuh5^_OtMY`*X zrSb~BSuJWGx5Wrvf6`S0?ky;@vE0WTMP=p9^Om*lKnbbcb8x(?7;-zw@#(7IWyEpJ zhI`M0Y||95=#ZUoiVd%!5P1!aSRAaqmC@cX3BQ9i99LTe!+qGIB4CeQo^SRO7?$CA zq6o;TQ9cWa%6$Z$;D@!UE@Yf;aL8-1$MhR5hgQ26_@H|RY8G)nAQG7qlRuaA0nW%< zyMwX{!B3h2WnA+Li?nphm)&7o)M_xGIrdRzKOK;`{_KYr1sKcB1GJote6S^j(*)h7 z8pfnjCjYcJe?Y9iZW+%j80X9zD}L9xoB}J$dI-K3-s|SQDQ_Cbx00#J>vZ8!F5@kR z8JKnBCPeNb%B{e={GOj(im%4FJDQ&Kq!JS_PaVP|#&qd~y+2GTe(^FGBg{$l}8r}^l+`Mx0 zP0w!q6|ZVOI*~ad6vA1Ym6g>YtQnhJio#WId{xK~!$5(~ACpdU?0#2w=`p6sW!&i1 zIf|=SdpFlZ^F!LS1RWGNl#4JsDqc>OvmtKo`X&>9{>O-WNpdu zJ>`*-^x<#!-(Db|+~La!*omSN$&=+rkA06@_Fv%$`_MGzb;iq_6jQy=`}!SNDvpHR zMfU0{P=OL4f!?nHmit`k`+;^4W#fV=&eLK|y*KoV2=6H>Pj+gBpq8fv|A$gAW0r_B zIofJ6ZGCTF7WCvVVv;1F*Q>zwMd%a2^`Aheqqu9rL9?FUs|xR2Syh*>bo0N_^XxSD zlYLT5#LldH(8=VN2eRyX9nSoWt(Z|+-po?D^!OR(6@R@i2wu!MSMrm)yVCG$nmH1z zci^AG$9SpJIQlUhq_4VILBekhoW6;*Xw+$^Yc1lGub_Y+=a=glK{*O-Li3 z3gH5~bC~M<1Q8yOkFY0-A!22-XTPp7k^6*tPcp3TY1+B-QC=Of$zDX}dXsR*wF51TO`bh>XMX@*fm_q^9{|inwZ^}Q6-vEc)9KO6=cwAJeve5X=l#z4<%+^|1pu8B0t(69X#hz$8qK{x-=Luo#rB-QVf&UR*o^gu>ES!%1n~Z+gP+?5CwiiG zkXJMiH_RngUEf$~)K05jO`}uY%M-upuOWLJx-J(?lc(US67$~nK{eKS677q`Cz11v z(VG)1{bQonOt>ezHp4Q9XCyQBNU7n=$r07*b&Swt~dQE%Ha1} zF%Y6x=%5M3#%r&%=|BK~XObC2O);CKNZ~sMf3BtvYr7`aB98`WrN9jD5Eh4R0~Aa7 zPG(LWh)Q_?2>|FuVq^)vo9p&rTZsf$!bDv&*Qr~6P&q?Tc*-3}tLC+XPm*w+(i*HZ zyiOVks}e<%55M*mk39M6|4FzS(78wmHV6v{;!FHjjs zlE4U$N*p)Of3P=IW0TJlgMGipBP9Ov_B!=$*_9qeVx?cf_skC-h8vpEp-{X~m@*3@ zk?S%<#Y{(m-&&pWzO{LdP?8tYc-?AL;#OWq5e{g(Xnu@7{4e#jxtQ(S*tr z&XTALJoSEGCMtuv7L>JH^nSuqDhBrOyO45!?=~zPES$^QL7Ni_F2cS($jF0EG`|%C z3tRYfM_o-V*XAak_8rTnZ!Z485qM_p{o?hKPzzwKUG?A-CdfSDHmP%=(vtkeVgcIY zCCbyu+Scj%AKn>`QwX^)=#|{=z+D0)fNjFiBcDm=}>1iQBv!<(+d4Mq&?tQ!v-u+kX^r z15kejf-#!1I5BOARNszEwG@4i}QHJt6Ufq;XdM@}JAnT803BW`QTl4X<@>n>2Lzg?fd}4!B0N@pMMTUXB z&4Vw|WD`wZB{+{E^Pk(GSnfdgs-PSb7SSVnV9|F(>U9Rj<~Xni7iw2h&WxoID>uyo zOV*64JJW%ObYK(Ll>&JpZ!V4R4Or$kWzMz~&ZaNV8FqGe1#H@{?g9|%oadw!y;Wq( zt#Y4Nf;e|8Qdow8&0GGLH$edST3WsT(Map=y0Av3_>izapcTFSvRvbE4U4W!cTV&_ zVxC(!*IpXIKIl;dP0f6crlpAPwJd)7G~cS6#4kc(;qhleE*miZ;M0!(fZg+;A-7Wq z5%z%C{w1JMj+Ij=i#7j_5wQ0NhdKZUGV=LaJXR>X8vi}a>nC^>jrPyN!(?BnJnalM zQjd(F95~}fcomS<9KFrLq)Q=ApV-3Ng4dWpm|xUIqEIiZr5xLiRFS-*S4QoLbBj|C zV)z*)e>iO*UM0LB<(`@Ky|^oimBGTYGv7PbO@bGN+7&K^=j%9mVCv z9yewaW^>piCRiU9-AkkU9@09ds9*StSrf)H$=1#AW#c6|AzmCxCH339y;ST~JJ?5T 
z6TW^{YE07ueEt(7oB<0vY2^;izgoqa7l2RP(Q973PJ>?3d_$X!qlu}cWMs3z(4O0^ zW#6d5f)}*&G%Pj0n1GxUy%C_Hk?eR1~nvXz|fhOewIPmh97w7$_0f5f^dRxlI#s)DCw}}CM zCnR>S{}MHa0)YaF#*nN3!J1<$@4+*v$E#GRt)dRE|NLs;3U@OUIdgXbNeYlv`H&q? z)bxkvWx1`r>hd~M8&T|_rk+BP+lnz84bPQsbGh4i129N^TsmVT^boy-_T+$8Ta={O z9tatwG3Aw971~H$dQoN6p{UMY0)1g9WM9xN?ChcvuOC9fZMCt=cZiAt7me+BHEH<&4M%@Yk z@^3?o5xOp+GVQ`y{4%i%a&Ub_ogo=7B8l!L_T_A$W=r@v5x&o|M%6wkLRYCnt%;LtA6Powo_B(|#oJkKYy3Y&+BV!?3Nr*CwEV%CC^23-+r~0Idd%SH7n35qwgZm-+*s)(9{W(zzx9 z#q?N>6+ldZ8 zxHu8(>wvN_Jmi~WQIc?G0 zfav6#A;r9nnhT>k;=*aNx)Zj!z7sDYR(&-o&RrAEG%0~R=*6u%CJ`iTzDa^#d=&bE zP}D=X7LCfy+#o?bjxZ-zX%4$eQ*%urlpv5b;K|hDu%JNkF-87tl%!)%+9g-znY3e1 zcWt(iMs|{9JMP0ZDH0A^p6Dd<6^@I9Or)5Nlih&q+w~aij zM^=H(Sm0qYnboi0*34-|xd7Q!ubs=^9F3V`Ak^;YofP*!a6zTT0yCA~8Xic(Of+|S z8<=8pDb|Dp1-C#;Cr`s7T%0>E%G-MMkmLp;Ap|yth#g(x_L`0K@_s z5l#Nj)6>D)Itu0%o_7QB{H#BcqfJFdrt&@|g#uW{Q4mp;BNkKg?~If`NL?EfIRdDy z8jb1&2cs6v(Mpjhuu0-_UVNtFk)>y}Sqs@!%FV$G)dR}Ls}exZNW^&$#^@Uh)VR9a zKz%avf;eH*n>wNUi128wf>@rTPAN3=_zK~qvYFt+^k5Nct8UY)cQUzb)t5|NDVf<- zC+hmpK4;~_h|x?U_6mjiyBoMvSuYP(SBD(p zOC_p*Tp!-4G9~sWUP5tCvNEuXXjqoYMT&YslDg6dS@)`S4%ji0Rd67Cv{;!z@%v3Q zhsFJj`@;?Jd&G1dc{!5{3Awhf%4mkE z)z2Q_Skqsl(w8Ad00Pq5K<1mR85uDN3B3O%^#MYF8l-hALLkMC>xh_rFv-uGda`tZ z$T;BMf!Ml7zUvjBc`@?v;JwN8 zh(1OmIkWLadH}{mY_m#UC{16-n?U*3epH+u|80KYs1QmH_6z2s^71SKlRbr%m52}H zelD(OXz~wLRbCt&wc3yak?SSG`xR(*(<+$pF2M3&ytb=dPT8NsgW*5jM!D;l)C<7M zf%)UN@Hk}Mp!Cf*^864DvqD2v;7uV% zAfVXfue7f_$hwLcpdbnF8D1saGufRt5-!b!UZs2UQhFLU3}PEz=U1RYndko9I@k7D zcz^~%i|}0A#FvRAKF*GL`@u%-)dGa@?a!G5G8uC4$zq= z04T|5i?G@nuY|%TeSj2W8{LVJ-N3(RTo_xdV^S zw$3Tkzr@L08xeIRUSo`5b=HvLJoQvw#5Rl5Sp8#bNJ3GgBG@k9c@L(O5yUonVBAuE zIw>0HvOV{u0kA|g-$!g*0_!26buJ|F@GHehQ$Sw5;%YxcBp`XAsN4w=ij2?$gv7q% zRh8F(4v{fFM&C~xx7;nKoPYIyQN#bPA+tn?TqV~Q(I?#uDuA{8zQ&`k*E6qXIF6dC zd9LxOo5Aq(6iIkH*yIQM5R?f>Y;+GwD`e4NRKVf_`S| zqpYi(erhk|m1}CehQ{LiHld~j=J__ldjbco)@Jvce5W(8<)~T0s+Vs1;tdPt4S;6j znvo|b)BtZWFqlO_A<3MrHh*3nd2g{!AgVpWr z3w`7Fq&yd5B0TmeUGdBzR}_`sp)U5E3+w8}YM!EAkTEbY>`HoGp48^o`@z;5f!Z`X zeTw2g0d!lux#PpaHSjoENwhxdCQ$4dr~u2KGpFA7DGG<=WKIg{!XB9u0MJyz$9Z;j zBKID_1-$0j<8rVsgb`xw_Uoo|%3PL^i6ZJ^b(5e_3e~iS2EBX_e+fJ8`B-tOe9j9J`NmpG}XT0!8rP8ix^ua6xKx?d2I+KF2-`oX{R}4wt9?%wJ0BRRqUPG z7lpc;aqn+wwODSbGkzKSJa$Ud`06+1AjtMNX$b>ahgM6WuI;kt1Q3h;@Fc;+#O%A~ zPgen-Hw4sD*l`-a4JMmbW&?XAd^QZ&nxQxHMbO(^Ts<{`s0KXV|HDS5|E3BIYthh+ zbS=4x>x-KmxvH}>^rqsXo5AP`vW0@7(TJQEkM=8&^>okzJ)1E_r0*S_@~XdjVgnq= zlLwQ2C0@6U+i_c=Mk5gs^zBV+r+VNN*cw;G-txsTVTz+)3?D-g;@yUnM8I$zn@|le zd1CAeVubBbyLtXsnq1qjIIl&)?hSR`vh;SV#Uw#~!@z;uqUD>{IKj%A>vwX=endxH^e@uk6)=q2W`uAaSL2adPABuvhxGP*SbIaHySCka!snt{|ukKMhhf4{Y zIy+OPUxkcmhHvaaRW#|*XU4? 
zmw!%G>#_9u(Zd5VaJfbU=dy9SYSS(M*aTETXxM zZHX+oJhYN|pLwQ9^_W$8{&)e1d_&YcEPf(Vd7zG?s!>bLGIyeqs~}H|9I&YySzdYg z-i=G0#j}d`Xxsk5`S_Tv)unZI#dN}t;Wqd(T$36(17~OzG_pG~Y*W-W_eX3+Zdgz9 z?OBL;4smXb6Y)yKlpVkr?7(cLJgl9V$gQnLdAn*+2=U-^gT5bfAD^<) zGC_%@mCoIA%X|D<<7?tJACtyl^S7P`^Uo3oW{bYQ_3}wXw)*>+4-V2BCy!bCwz(<; znGx&vxpTq0ZWN3iaM~aosPXb#BYB(aO>|s*m~|b71aa8$b}*Fg``FQIdg2FBt3TFs zj~IWK8c(U;Cx{*3pRzxF=J7}=DCW^LZQv7m1<%3G!C?-RLD4cgU~Y*TE6czMK=V(5 zlSmWb1!m}S6Ud%bQeDrRMAQC&K)L4sR`(*zpg@RT_o|Wg^H7L}<3cxg$~M=INm-bL zR96YgzUKuYRR!^*E+DZ$QEke(qycSB{io^rTIB_AbW0?e8xp}`u@k4=ab5C&x%j=v zaQ?QzcmRH1kQOhj^ok@qMkwE_g!XVb4Q=O}f#tl^f(S({3Um8OG6oY{jZ$Uvi{QZ4 zx{$S>tX-*FVK8Z&TS>b&QgxIV5to=HA_|AOw#J!Wa%Gt%*g9v=%j%*#iFjE}iDiIg zhAFZ5(0E?H6>nmj!u@w-ghPQoyvUR74Tb{vyz;77&arU`Tw(IQuw1oaJ7)s2ucNPZ)8i2YQyjH9x!JuGc&eI0mk)Qrtgp+tMF%Z%0G@BRbgBGY5Zxz3iSeK) ziC;Hf=PbBOS9}^iAmZMjZW?+z`I_C8F&;C6;z|c^eW@PId4nS$_ICx8%>CT}C!+~~ zfOL$H7PA3M9#W7AO%*}F;D(d{ey>m_?M3v42CcVHf;h5_jfa-Ynp@;E*YDML)X6|&2jShEL8Mf1WpPcIE@~{EY z-7i2W<56!xyJvE?aAKE`gU6}+5oY|0li3ImyAXe=GHHo0si_e~Q9l-q2<19RYRQaV z2YN6MQuiiiltr&a7IKcFat8brUZDayLxW|1Uu6z1w6BrR%h9f1VUJj95nPZ^3Q+A~9ew>!5$7>EEQXOH z%~04QDnK{&{dtYTA_OQv{UvT2>w=--zo@hNaZn7oV%=Usc&@+`VzvuUOX#6}eyo{5 z61G%!f&SIA1%lj;P*>oD*p=3be@n%i(Q2cY>-WD{@>NI>^T?AB>^sz5scVfXUBa=t zwF>PN62#RLPJzj)&7nUlIBK4gA}{z?92g}v%>?CH_;r|>rR~DL4nGUIR)NJt{E+g% zABS9};a(^?vJ|dKMa_l)1X4~%mtT+=7z}?18<&nJZ#LSBxClXh%N29mf|^EhZ{LYK zGCs#;XrZ}UrhNJAt97YdcyQ&KIn(0KUl5}b4wbR^b9yht z2F?p57H><8bz2WaiTyr0Al4P_>3|P-0r>A7`UUe1f%gDfP64cUlarItCJq3?6rYb3 z((vyB^sqP}tU=u%ryCRm0u`6ZIy|7r`mHDc^nbMaQ&{-h|84XD(aL={7N<%4(Psllt22Ielczm@N&5oOpy5!tkjF_kBeS*tqB;g~W4OLV z)M2}RL&Tgd4r{>=x%XSIiVDXf#E0)ITs;8}I3a$H$%v(>1jvJh{kAAEGg}SeJGz$G zeW&#zZuh&&_Qd;f@os%o!@RcBqJwqQR}f~`5{56lCa^R}TzR8%to94M7OZVGk{8vm zI*BFhQ-$3;tDX~hQ-wd2r~XLm+2v5OeOuui>1)DRoC?)5L0oagb{fkQQ#elTcVTPV3ySVn=BZNJ3UnP6$*#)gZqyD#=Z->O z0X?ioyMT4!uJe8M=&M2l=aHkNd0qP!duL;Tcz1khZX}Hh&6#nZ5s^Q>Ch3uC3WaUv zysYfk9ByM`qHx6-nu*2s-1{IBtM|d`XEP1UEujI)XM^)we;E*=5`L>j%soAOdQ!N7 zzWnj;RR~yOc&{VTGeG)E}ZT*fzTlFD*NV}`#X_~yp3Apo>jg3 zz)aN*a#Puus%n^h2)1X^OjSR;XJ#|9yd?u}f2aa)?reCc=qo>kab(ZNaXnES zui&Ys_g;h`tmBJq?Q7N_R=Lz4JjH2H!Fmw^NU}sJVA>N-gF};DzE3;gILQq(o~jyQ zMMOwSemQTF^+K!*dsZ1@k|rGHFaHuGc6j0Cg7*<98SzY9PCnr_qR#Ck9CDtroVlln zv@VWIU-)SsW^5;36=LB}66|wQ8FJ^PzCetZ;&D+EAMX%3+r_kEyYjmqnfG(~Nd*QA z&j=h?`DS@&WwG39|I5b#b_29Z2fLzulm zR(pOAi)!(B%!H6G6y#ECUV7QNGc=>9QxK-batl6?LcFJAj;DS9um+Qd{l|lFR?o>!#rQIe z)bE2PuB?*eO3GDWbrmq1;C$byqMxIw3sa`Tm8QIRh%o76_HNM;dqjoYBKs+htf;F1e2HMVBluUw$N`V9`yj6 z#GpqD4KgUca8P%?*^-sBdpavRhDJ_G_KTdxCE@5fTwWCniiF zS3Y<8T+lQBbVS<|Bgd2c^Z%k#{yWKX21S7wi#x#<1MB@tuEB5-(Dmf@#nGkkuDx3I z2F9wAR;nea$=32=v1O@i<<@AjIy67Y;-R5tXS??h!+=)Vl~%MQAy*PG1MPq7o(g6v5Y@239x z5IK<%usd$+IP=i_m9VEuh$4-E@HgF8>^z*!rB8>>--cqV&z+cH+;qJ*^mL?6QJoi= zmseqpB?z7M7Yj`rXO3SzqIKOeIb_wTL^?SPt?##t1is0-RnV)v!+&gHMN(nv8^roo zDWx+4Jplgc*w$wb1X@UKw*0y5@0I+&J&h1tw8*KtH|_%+aMnP zJ^M-YbNF|JNGEU_^d1=?B0IVK-n={0v~qejpW|PCVEIbSSIpY4pwdX5wi zc@h^XBK($_L2BYWPdnJB$Jy}b&bA``jtPz^CXxp60X)4)qdIBDZph~{% zI4(g#X@lLw`Hv@U>!igxicy&KyW?qDmvA5Ih<27P*F@u7K6M=k1Ou-8RUsVDd7AV==Y;7i;4I>-WVedJ8zP*o zMkx_&uEo5C({G;K@`Kpwh;XJ4&yGm6Z#ZjOBS^8Y$+0zaR>B<#M|b<*EggHlXly;O zH4*#V6@Ak1QXz>}_{TkE&0mP78p&)kf$z)9Qvdp<(<-k(Ve<*V4a@Y|vw9IRu}RQ# zv`eO_nM!6Id=!hj`+1^`P2JwpS0aIe_MT9WE%y=f< zG+#i<4bnS^X(H`O$fQUf?-#yRNzFydohQoONynFq@n*E96h~Znx8F;Ga$v7^PF$g} z{VrLe_pE8EsWW)C=7Z?rb#u3$h(y_ivz7(Q(|6mRrKQ^~cs(;Us_?^_p|MZ&VX?0x% zk8Qxw=984)i7SlX?m9rD zUrt44b!-fRltG8LAw>=aC-a=t@05QX_$3EusHc6TV9+p@ceJ%=~Ib!UaxV_ z{CR)HuYpg|DHjYjHtW}AkD!6)tJd|ctui2*r49(5%rrIQfR@gbRFU3c7-KO!6f3QuqA*j4;+ 
zF@w~X6&g6JSnh#6a`nbVwDSa+Q4Oqt?Ml_cK?2f-$7%a>t~vj6IWzw+e&pZySYmNd_Fu$3_8wVNC- z;GEQD&2xUq2h!VBUiGHUn{=lEw$Z3+*%Z`Yu5S8(2D|1`E32a{-J4g=qK;v+;Q^T*6T6ygvTx0ID?&{&S)^fJ@n}tTK|zxH|vR zh3YW1Le@?#peMpfSJ+;Gj1$>mmKr!5v7$%1!m+PP`%Y!3{4MP}9h_0lpVz>!=6OBH+h zpV5vp$+#TdLt<&QW7(zaj|Y*xnvVW~&yb?N#mwvVac3V4bKqmi>@IFz?d0R^0k@SC zv*!1?)R*rU{SI)-+`w#PI7`<4u7dYjgcH~L7M``gtJ)z`h1zz@B)7ig;YZ7?;qSDO zcTc4Ibi0dk6&HewNmGwW@unjg6zrp|jjfi+IAx#;`r_{>J{xj&jN=Bd9G}4x>Y-2n zs%2@8zaOkJ8S3bu{-bMPVBkS&@Nud)U|)nIlT{Sc!8=O=?%|PzxEcGuGn)QXu{KpA zgaM2$Qt)8vpyqsYDY@3uzO0MeIASv=u2uUAXV-LQ#f|*2yScnU%ko)BtwW@27Nc>I@`Ba3w>JyPFs`| zm){Dq&Ig;s(#{Gcj+6L`C2{F+d8BZ5#HXMSD4FJrj$-t4x_4Tf`i9XAhJ|^^WWpsz zO;IB6jhNr%inl0bX)s6bgU>1K z%k1+XZlgualQqMAuJwzU{r+_-e|ux3=!cX#%QDl|{kOEtVw&^C9I7U+itPh?gk8Ss z%*`%_!zio&$FvNK^WbAD>$<946nhn`>KcOlXCp(kpt2^iYho&q zng^DDcNA%1=mLvCA4w!|+IN;c8H#rS68yS$b}x&7?JrE<;Hq@sVtJ+EhkGcVv627O z+3AcgH z=}oiEPWn3|;szOlGxe(<oFI^H~jJdbeFf7vK)4bQ7o8nwu2pG^{T`XE?1cKF1= zHQGC;EpaP&ReNh$ndvds^6kGun%lmigC#q5W312e0zoEJJsi%oibC-RKCuEenLtm$ z)D4fK!@2CuI+%A&U)rv%2<$I!T6s7BBW02(EdhLWIPUI`{yD}KVYY;R*Ll0s_o~Kq zDyxo;eg*Yw5Qz|2G2CPQW(X=-xn%FEA^DekupQ!Xi6bfR^wuvQN?&v3GDZ|Z7O<$Z zKsnwpfnsyoO{U6O z=g~J1Ol5c9azVvU1UYr^t*$Lxvbz@__Xoe#Csy42*GpOS=J%DFincCB=j6fMHUpwMfpN~*WRi^Z*`w5W`()=q1W)h zZgh%Ji&HqL54fZV&)C>Yxrnz^GCT@hXQXk-#znX;R#MGRMn6=89x4U-DJ5s*!ugdNBLxlO^y*o}0z2NgFN`;}76{4&S64v) z402bXjvU}Y#ERL2!#OrQmp@bVK%FhA4mjwRyxsQXr@_=m{~lMsm*XFAAwV2jWKuxq zk^LjRF7;WVE4VNv&v$1o;H8{v0s;CM&x<@VYdNZ8j9Av?ribT6tQ_)>Ro??*g(d9h8Fdv{zrT(ibeIaF_H}T{E)l z!Ujot^EPqFFF}1}WD|&QadxU7D1H@JMl$R=Uz6|DjkoIVU9kA`@0_s%5r5*}=cE`g z^+9y3ZS*}~@jt)&`}P-S!lYur%zaC%87Xuy21l391HY8mb+@k($dtr;umXZ<9f+-C z==i%M7MMs}kOkrKVi*}KNzq$6Ffj1rL*YR=xkGg0MSEVIDID)~bV0|h{ME2vukO^# zm3{*;bZBM0T6_wL1`|fo4+Aefv-N!M0KR_1ykcs_R0Yjw?P+TOm?U4taG@y zyleDyIVbur|5pXM1Hc@C-k(oer(UyzPqEbUCP4AeFaO3!grOLh*DmNQj2&0MA}?K4 zeOuI!qU*gTJ+jR7jlU??)ML&lc6&+c?6mAs%hiyDkJwy;TuI9re1mzJ`J1mv*}-$e zi|h(SIGM$GY>(`(2^qW|hDVcDXgD4Q9_iLQpC~bzx09IQy57I&pz$7kvoJ2Bn%~eD zTpZXF!RGdjSFpr7y+m`ncfVJvMMSJzgL6UT3Gbitz>ftoBKz9JQurLb(bK=>rkFL) zpI`p45GS&*sQ9Z_OLx`v|bgPnu}_Ye*qt%;^Di7n2)!vp5T z-=8;iJ4sLJmJUX|?s1oFIsY)0#FRVJ`5^wfDE=as-UqhA)tN~z&Mf)`)FCGgRuDtD zs6i4j%_6iTiaP2<4;aF$7Wmo`VKC?Rt`*6{k_$kjUHwJ3z>YqQYR@c^Qii+6Kcu30u()pk(_^itt z7%RY};(GtfFvv-YkbV7D$^HR{bF=yT0}p*~W_l>7D#;a(ozC_u@RZ#E9dqQ2iybW&?uLQN>ZT zpe~-wr<0aNdaUYX2pUX`7hj2FbLB{e`;o5WKBkrMi&J|&U5-QdWzpR}n#~(xYzZFq zORKE*y}0(D*9ZQd(_!H1ou}kDE+Jj=0cy}S*^7DEH6VMg1TrxYcx+y*_f>nPd-~9% zt9zsuz&&+OX61n`LssDIP3uZ^-o^!!XN!_o+0e=J!=9O7`uU%4^7l{rEQs}eu1Ek? zr0|>yG4)k7vpUlS=D+b^Ij6WN=^-8~_Dk_iEZhD{yUeSJ{E!}In4;IkulN{;K#1;Y zvPTa9f-1+~xOjFYiMCg^d`YaYqX1~H3FWr!NnM~To%NH>EPA%U?t@mhl((KZogyM-NZV;|lKWJ@ z#)EQ_g4!IOCPn)He+4&U&E;|o9|ahHQ5%`nC^kOc6w|liQl6ecF-8XY%Kq|ij1)CG zGJqYHt42hpkkPN`?Bmj4zA2O|mehUH5lq-F<5KpECcS9r7l|11*+JJXt!y@+vgoHqs?*7nS0r`Q)^T1ApWjL>2Y(U=)tm+ zK7J=t9B)n&+>z#;2Gt(=C_wZg-|5NQtGSi&M_vx+vP-gErjQ-)#qFFl2rjC)MvuQS z=#O{S7j{*E!senI1Qjs|MhlZc&g(7t zvI)F97#M|Lq6mMQNXP=*?2-HME8+V_oy%P$9m_x2tpTVq9O_P}0FMo@j2=5~N^XTU z+*YjRl;+vauF(ADxZguc;BTjw6uf?{3T^@Vwk=C9PFDK9?(rg&q+yZ4tqRrG871o5 zwkezTBHI#4!DRpTPy9V&#$sH8+~3m!gaU!7v2p4Tux&qx(hg`JE4QR6a)F$58n^C{ zodVDFP5u~k-oAGQa2}qE_QP!fsL9A+91mkxT1+Z&Sua5PWXG? 
z*R*N}FfGv&iGh)y-DWozZ6{&9X=?2o7kiR6wVuIM9=Bhh-0EoZC2%S$$ z8oG4R?uAW}M3iBI~{G^|XRo7PPFYo*4gV%6qeTlgwt> z0m~6dUH=>~zcj}31*-ovVPh#mp6Ln&8K#K^r}s7K134dqSOq^aNWUSk zHokgnA#lqJQ(e*bP_U@dKtyP%x*XPtko<<8(_Na>huG!&}F$ryx z`n9Tme$?w$WB9qn#YI*j3}6k?SJv9y3_ssdJI0!b?TcsBsK)h;*I0cBaR2!9N^W6c z=Jscc(Wxmhz+up~#C`wEyWCs1_yLoq39sAt6g<~8T#@#3Tc6s1)wHhTA>@3-o(Gjg z*G#t$Qzzh1V;+Wx6HG9M2BCu9%5jxzttJ&G1Np5Vn|&X&1E!NtL%&`c=wBT5Ix7q`ZQ{^K3x6V|nqSxWfy-d_ebhkwc){zL?vO zsCgj#ZP%5?j5;`B&eh=oq!>=cVY~+y)cd_-^rY#4@(N7Q%nu{ZwJQWqO^^OlAPsCU zGr=I9mmtPpFg3@QZ_&&OPNQEL2799p1=CfNC4;-BCK0!enoG3UI8#jrjDz^EQjMSI z&nXva@z6x_X1K1Aej@cE5W6D&TYHOHa0{1(vsfNO+zNRg;f2iWn+RVXcupkI(Q>o| z(vo0$Pw9iXfy6(T7k1>Q0W142Z^#6nl!zoqaU{z(AY26Q0>gFRNXTN){-;37p#pw} zxokU7@bKD_DzOr=2R(c~bcX9YJ`#%%>RVu^@((8~N z57$eo?a|Rm!-&rE5A9y=cE_UQ+&-Xdi;BrY$#k0I( z$0}uxpSyh4QY3%g(Ia5Y!(#5b0(n+5N&AVdF2f_`TYM)sgLd4*T|N zI#4Ltnzm3YTxuc#gIeqN(f^fBGSt?jqY`fLnG%mDQ_Zt}J8%EzI@DxWabA)Bdc(VU zBl6H)I~r)55{kDcL2o5#Rt-V7n9|o{+9{3k#L7u&qV!9&&ChIn2`q6b(dD4(l z)6AdO9)v0Wpl}@R(4FIp^)VP`Org?NOy)*nXoGOPo^I-NzMkbu*FmdHo8jyyCso%% zOK5N6?8Qhv@Sf(TIV-8}qyCqX8~en!Lkx9#$vth!(f!*Y)XP&t1pU`j%}GK-ZXX zz8VPzWQ~z80+f$(1nHh#9eA#ACh{`j4gb5<07^Yu>-Nkb$MXBm^2d-bFt^@p@VEVocQF13@9-LJiOr&uBD9M}c4~Q2nT}`c z26QkTC02{rZyVT;oj(Lg6GF8xZvfUEC)Qd#KL! zx}7&C{^6{f{CM2H_lUetoH38o4NyqK(7cg~OBOpq@+oc3ncyEK3jModpX7bonO{9# zorsoM7HRg16|1xY49HMIU;h;0Wks`lBRY!aSEM zFiXb9L_d}qzjtWzJV+aq~ZW))xmj`xRh66w6(7uCM1DBD|R=2eAAQxEs4ak5AJzQ&cPJ@rgFA-Hwy-?|!*-Md#&p z29+x~{n^;ZyD}*Rq~7S^Ov(J5+)roff?KawyX`;)1g1v?z9=z@Z#sX@UQ1lDjb_K@ z1t(r6Tx8eoyX@0esyaW_nJU;W+IbZ((s4Cv>+PQx=I5ehc(_)|@UZpgfm33LCH_y_ z%yS(~y9JxbxhoBAA2pnQwFiGVZiu~$E! z=8>Uh+tR)}pX?JYdZv4~i+kryzQR!a<37twSy%PoIlc|2yXfjYG9*Z`0K{l$sU{FWokCnpoX~u6BR1}bv3$SA_mDK>C%#K&t3oZ z_&S)l&_(EcefxcaLx1Ewf~9Q3{rI9|oHM2L>TqLe^n-S*wu{%!Z75z#nno^&CAs&N zz5Pg%ih~uZJ=?Y!NaLyc{_G5Aqc_b5m8sjG@pi76ZAyPGtX`LgW1;Tu_PjbKCKnx^ z>phhnO_CA1pkDX1CAvzl?JW&qZ+UHHT`nWG@t^fIUK|%LwsVh06`uWKZEVt5i`8eM zHV4r7m;&=jfR=1LKb0#jge^n5)SZUsOxF3|;VOU~mC*U~f1=FkV^O_dhArZ;uPGSS z_yIS{FC}Hjdj9r(z=c8*q(zfu*;6V@ort3;g=;CVt*vn-%LBP2r)!RJ8?-1*7?wdy zEo(&f)bOk|Vso~|YJEe_K-=khx|~c+O%2Syp}ssz)#%Qd%k|%x-8j?TuzmK`ULgR% z38aC_Hz)=>XaO(3{Qe28~frG7(}1I1QiJGns-xo6t>;kd84}dX0qxFW_eH8 zLY}sdo^?ci>ZAPna`DDXFNG30VM+tjBGxn%Co#rJvQY8xgOv~u$xsOded(u)>jS@! 
zaB8N*8$RpJBg-Thy=s3(vy!qxsbG5V>zz_kz71|C-1V8+du6bxumb3QLyX#c7EDOP z@Vj{P*Dw<5G1B2#>G&CDa$aNbBV)zV2YvjLtR8$MrfVlXm-w*s$td-Y(tRcYhL*M3 zttGo{`5-f;#OaP51D$$uluHEz-LWahlCw_zj_P-B(1leGw;V}{?7u@OP#pGP-zE$} z7LuL!kHN6^taCrpkr($A1HL%(H2Mdx?ra|^^;_bp#Er=YU?PPdO0zd<^H^M}^V&BV z8~v{9x1Q+2B%S7Z9GP{3K?vpkZibO>$v^Legj;XO3{Wo?vx0ZioC&f&5CmE+GRMz32Fce)%T`r-_q9 z2+#7nwZR~+&Y{p>n~Eu~;EnnriuM%sUEd#IyQ6m}yK^cVdDnDPK6YMEMAB`YeMph4 zpswwB@8hAGN2(;+7$}Q8M@JaE7H)&&gh4)jn)z?3a4#ppiDvE2FHMW4j&sAon-`29 zh@Fx|^={CAfM})`*Pg_;oBw#&1a2kAZM>W7`Ls%N{9!pE6ya4U6SQSnBC ztk$0{``d-@DlI=dOTSb1&1EkZ*EL+MM5NQioOvj)wM+NGC%A2^D&xz~b$< zD`AjJXYdzuj&kFcmx|&a`wuRRvpT;UzPqYF;q-nT$RcRJI;7c`^L(nfULc^zUgs@= z22*K1syA#~5j~~!pJ^{P+YAg`(HeDim0;K~_zGZLvmV%K&#LN(XyzZ*tYUz$WSCui z>X8az4Q}G7l@(k*|9kO#6EB42R1xR_r~u=?-XJTgUPTB-eE(o~WwAH&sporMDx=lf zoV>gWGIX9!hKh=6JKg{ec#sBvn*uz|PT94ZIRJE$=_^3!YVFPDdvbCaa1xf4N!pcj zpMQ(oiU;$!3SakyMc!cX+M<8NO(_70qtcQ&?Sf{K?DU1w6N|simbp%Q(|}#3Q}}*>en>} zBhii*&D|Hf>7bRYE4Ck4@BGqI2Txyvv%e3UQE#7_&A0r!`XiW?%bL>&-%F7i1f8}op~6DiF4t0x0kWs%vq3HIVNa-j?w$^ zzibRCSU9)P@Y73iHetpM^AUI?9xXDPFqQwL{AX#Oad0_w^Y?kY`-I|?=(Xe-110@;0(sWs}q*S$`qbM+wC8x*;rjRT zHqk=ST(dN3pg7P|C_7%T^T<=kJRQTSQ-E2hT5R`AFs8Xaf+jc|?jY+%p15qB0|>AA zT5Q25I8c)7w{�-jW1jo+-fxT8OC#soOHHVovRS$Mkg@qVx0K=iQC|HtONG0hm|2 zQ#51hQ_A)S9L;m3rV?s=`Vt+B*_7s~v*;Nh)DnyDPCwwDyu?u&mwpbfbnW$JXORx| zww}wa4w02N8?(baK*@bPfio#tNYQ#mm9f#+Hs(TGskcJ{u7WoWs5t=A>Oad?GP6ZV z#dB0W`-3F`qUJEIyP%^eN2ykdOEm{0gy;O_u_U7c1NJOGQuq1H?zBt&(L%j#!|h{4R-hr07lAL2 z(yU)Sx}iB-s?e0fY=5eVopt-b+O=C}~|g8LroM zilJad6!#Jhs|FV~RpjNoyIkH)`TRv8N+8t1s5t*Y&r0Lq2L24JGF|Tj*{%enuvGBi zPPp3oxF#shk1bNeUcg+FLuj)>b0@w}M*?*gN%Zpfw@XRD(`*QMbvAQ8(5g6ACC*f zHR#9_#71{s%5aT{&Q^i9c%(r>M1Sfs+dJ=YZ#ehW*6ooYI~VA?i>3_lS-g%hb=P72 z`?$eyF{>yJyXLBm2k@h5VSMxUCONp?nYrAT-z#Sc7&=w-sg@E(M75l z8D|4iaO{d^q;^KsR{K!Zf*ay)8Pht-0Z&xjvXn07dFVHi90lG$vR+`UF2Vc_JGmNg zsJs@2+t&KJe;LA)(c=1Nb*S^CMtQ}l3-MC2|`KDCqVfKiF~^M z;BMp=rAdPcA#o1uGt7WY7W5)W0u%_H$+Q8^-+`oJUy2$!oA$H4-Yn3E)V$a zZa?XMZ+s3b9GQf8+!d7v?fadRhSmVXQJf1hMG`U)6czLC$06`k?1maHC6AbbNP#EIc4Et2=u+}UEt8C+p~1VHo897{a9{!|TVK3gTaF&^%ng5FI1Er$5rzbQ9UT z_Aa)7*m5M6tp#)euz-9Y+n~zlsLxTg7x|HUFI)ZnV+xx0_jb?v z9QfzkN!S7?q;u}ZRVOjg@;PR*Oc!uzJ{S*E9{l6?vu!vkA&d7-?-{+Nw@U&yyw5!N z578HO?Ub|h_FR`?wCTk5=a&vnD1h5N{@k@Qj_QVxCZ*o>;pNh@GOha$9)#=_2MAEo zU4369eed2q)tvJtR?Qz-4za*5*p2mg7JtfJ1Z%O)JO5TqGAdWF`YLKxKC5Dsa}|PP!g^JRURM-4|}TDB70+*f>mdL@`1A+-yc4a!Fj>B zDJD+=QcA_o6orA`SR39*gdUbq?H^&a=xBW|Opttsx#wb}q~O(IYVd7My6i-E?ZtH5 z^dZfy7xXK`H0{mZa!Y(MolYg?eJ<8%=s0=;x|0-hzRacp?()^2w8E_1PWCJ4e+kN) zFQKCmwBjgU7{n$3!LeOFqQ%{X-S<-7CxKBp&ztK0xep>UV8KVN9%jagzpiyhrC`N# z!o#E-rDSfEKU*<$)|xoJ(Y1OTaG(D9q4#(xbNxGq8|Csw&>8Kw{ccevlhEEcdOOwO zt=87CE9`D|>lS>4njDGp=n$s*WAZdpnaS1c?!zq#7Kk82iqX$=j`~GO=k(8DXU6EV zD7*sFGP2+Ej=ytndWX2?y?V{IdO^IWa+pC#Sk+LbDDYuqJ?>sl%dRpK)a#OXe*`eL z;mx=v4#ss&arHi0dhIzNK@4rT?tsZjNiO*W$yk-3&%tA~17g!w^blX_FEH{yx{@wc zjLRcnetTs;;H;It`R?C4f7l98aY46f{}?nkHyarmrnVn0r0)S>x3^9|pe`3wp2fw( zm5hvyuK>=8djQ?1KZ${RTpaLnWEL^Gr4sa|9}vh(r?W}tc6Di)Y~jja&(v&9V&zW7 zVR9^EkL0P~npD#Wo2ONG5ZYK*>fYM$c_vM4tgOCftJT9V7;q4AyiSmbh*(CbacFp32@u2S2fQ+`M!9>Ij%($Y}eM?4xAdAL-@j zi|OI{FUWD4x8kFQKsbwp=&=v+yF$TZ9Pcjot$(@2=70zRa=w&9$&zx@X_-uhvFTt4jpo^%sPkPbXGaxF8Cqj%yCOEL;UJND9^UKk#eXiMQi2001Zg$J(RE`#6#i$E zYzY2*@8YflKxaEsW?}?VmtxZIRk7E|9#~SNXOm5F5qLYl4b`N7o() zt$6gr^T)Tp($-roDgDDRp)S#R|CQ^)u}z{rYRxlE;ouGa-0;;uU9l#h+jGPXOB&q- zle5kqtH|6q!S7~*zn5R!UCpu@?owRSuRa%M9kn@^IqXi4gLYzRcAsm-pHC*IH(yvV z(16`96dr2dXnJ?RH*;WwV38phLh@ts_z9+z77|&t##$>`K7yTDzSa9wnVm?z_bA-z zYWgsyzm!6j!3C~Qz17Ua#LxLuwAodP#f6Z@xGj!SI%`Rl#U0zqwOjC&dHWQTN;k+4 
z#Hwao>bziqq^mDbndNUUCqL~>43H4_UXYMTLetD*(yCGa{W62Y{|0F!A$ejYnC6KMPJSAl~`+=bxB@M;2!uk?&cz>8BrDbtlhfNh)1=)I11V1 zB)ABk+=*JXP0kVtS>@N!`t&pU-4TArIKgjPI4gCFc<&9m(9&5-NZa@7o ztCb5qt89ro{p+SjCb8P)U@c^Z_1dPUHSI+3D#*W@{g{)k>oDK`tu*2ehcw5?pI}-k z*^WdKwI)^T6`|gwiTRT$soRa0T0v@Vmo+@!!UQsLI)>qw>swwj!@0iV;QlmsyS_<4 z*m&(zdP?4xCN{YLDNl-d+hNJ9SNrYgbC9$Ev6K72t<(_Z$H6nS>wb9W^Gy7Gt7BUN z0O_``QoPT;uivJ9>9xzdpr(HTw^OA~auGxm%L?uZr}W|L;@bs@z@-;#z6@Mw(ES!<1)vb}Wt&Bde z!7U97066w<3|LRfN-kR`ZQVk{K{Sravxy*#2wS@3D+9+CQ1nhK1@d{;UHb+TQ0?aQ z$6>)~wto%`97zRG(3U<7zF5{JY7NW9fqEKM=$7`G&CI9p7Gmvzv4p5 z;^PFGhCaf#nn19i9#kQNI*&tqxRX2N@uko=y@LhhCv17OJzDn)!UXG`B9anR2a#9y zNJCX4lkc>Mi(VwbDmE#bVYjnUsKXzC%>9y0==h^^y+BrcMPkD9+%+)2MWVe^%E>%_Meu^8&}topS?oELppKNO?6L6=+77gM>+jFS4olcv z=O3kM;wca`kV8GRM#wxvr9Uzvg+0%JbB+ zyNsFmfG>`z(F=Eb&{|(#y*Sv(MgTN?y%o60;3QaPjZ)4bvtyHWZHj&@cgwsT#?S2e zC1Q0_3(JvFlM2cNnb3~^=3)N5?G-ij>w%)*7iF~AiWu<{=0z4_O#tv|~7uDG#O zeDinZceW~)H=M7ljBUHZ4O7)EedJppb-_kVWX$YQ14J=F2U;0!e#YH-0`lnNG||D$ zaU^g#W8cHb2%p-QE;2g)8lkb-1=KPOs>vDm#b2GrDgJ)o9=nmjaudRvT z^#OFo##F!#%~-Kc=3ffRe?Jb#C5F=nR;QCcs0&hZ^9w>iN%8#DJfa*gwCuZ)RQ(ii zPTFVlM&Phi9V|US>(4W2sG_cC25E6fc6q}(vQB6v#W>TzCw_wCiP|&y|8^cTt!bI?v6F6ven65`(2$_$d zI{CV(vJO@&Rb}BHS`H_AIcA1)nRtN$Ih#weV^982UVqAWqV2)pc1cL0ZO*ltPj##m zJ#{FE&kgKpA<+s`hovq=L6u&m9+ZOLxp%!H&*0TfM<$PLTp0U|z$Q0FlCp9nU|yin zok?p!(@N{eb1%~U4@`G$@&rU^is|gX=t~hIif zQMLH`UP^I}fXKSwg#UlK6hPMQ2>zuzVIOgx%)nPR$C0gdD}D=Em_p+UcT zfHU0&yCZrGQqrt;;&`U*UpYC)8PlN8Uqf^+=&J9gSf_@9*SnpiU;18p(zkr4tY>#U zJOEi{;`T`nmkrHyn393)1;^+m&{Kr~YBy-fpGpSqC8}?0MTJjFlHvFAZTSmWi}@Ag zS18s=9RAo+A1Wukt%w7QOx=6bQwtCFJEUIY8pshD4-rWpK^FkQVX{WMxbJ89hub0H zvO|U$X$9BuQdR6IQDek2t?IZOneKdF9NC^Cvu2E?olze!rCdccxA6u@#C-fs`S}?a zp4Tr#%k|G%Zr4HT_M6#n-XPVXf?%xBygeaa;;HCgtgHgS%5=gVR^N6S_jq&Wh1uuuMTVcR&qNC_Ffhmi ztL{^ihUMyVmz8V4Db_wc#oBUp68}BE|0+KYzeymlIuZosxVjB8s1f#N26T3C=Tgur zfi^7M5*+vz?1HQ4jl{Uou^5p(K%Ep?ES+g{-c6}!E~LQjm~4PLs|@pPvaChQr(4Gp z$|`b#B@`~&4MA0uKYpLr$*g4r^XSfL>I@7BkX%V^$jvF3a`v&1{U0e2@_~C-Byz!} zp5hx$VtUyFOSzH(4%zi?Sac$c(J|F(MjG3RKf-rIJ2UTU5evw(OmY|TDXcVT7lPW0 zc~KNg=vtlrGTs?F7(N=Xi6208$?=@ixo_7+wQ!Lb2ePIGHGoYPa8+F+?#-Cd_roFm zH6Fn$iv2ay#_?hZAds}A!!+{vN7|F%Cx^yA3`&q#I7&6nsn8xllVHG_s3L1|)&pEJ zpOR&$#%LhcSAQHBWj|pLcxC}Y?JYlkCX+;=BQvX{}c)%Tc zjQ~X(-@u{h3e};qCcTYx@}2|M80{q?QfUq2ELkJ?F8R-;9noTJpUV{?EDet;m+E5= z-JctKebS799s)rMR|oPH&g8cD(B`t7MBAkmNZP&uw7&h1?la`F7RY3vTf+D5g{%3E z4^u3feXjpR`6451DFRb4{4N2Zf5YSv3%}bHrr^8xFA0705*1dpc)=LbaiU1RIRdL< zgt2a9C8k5MpTU*1BNCN3j*GNaSzj;QMDWC8G`}UwUMLgQeD9IS5HzMCBHjD@Hr!g}5e?>#%ZgH>BgZn+ zD8G8J27rEnHW6Af+9tIbkLuz>dutiq7j9Ev@3@4qetw~}89O=uc0PqhCzmM3vK`lc zQ_{TzBofS#&*YVO&d6Qc3;ebd${KUw`JvC_;CE3T6{kaCigl-l-KADw+qj^Wm!@#e z&e5KbVP|-uTigg5CR9q|9{V*w>jwcAA%S3sDZ=~=YyrCL~|H3U7t&syfGgN#zJ<5VWMIv z*+IG~yb1>r_(4V+iUKc6c0}r8g?^9mmlbwb*??8AQ9@kq9|p~LXD6FM6m4*eVw!|O zrQNSnBEJU%LA;;{qJKq~A)l#QOMtS{)?NregBBCkU9_IHdWbfqqC=^mXj7InbsA4~ z@IaxDHPZukQ66pZF&{@~T-2g-!rVwS<+_q$CV=ZqltbHm4Y5o&jA_MWR3Pa2?*>ft zET2;5=UI^IWyD)KJWE=lH*U^6srL{b~pr;bx0ZDCa@$R1L6zq^A}R|{f|_PuLcr4 zzq%9Z?P0{=Zmqg8uiAKdy_IXeV?+p059(BZbNN)4O?xAu+(K_uPSt1lldI{G^~L)y z%(LD&iE#ogdI-l`Wu}wse1Gp^a=Zi}I;ULrYguOdIad5zNLpH&>&agJlStj%+}sMl zGX==h69DwU2bF;Tz@qhOT2?<0ttEC_)NBHW3;ftUw2<^U3ZZ?zW0+Oq<%bqdQuJlckLaV>0Y8Yob)v2#>@kZd6kegFTGo+zhk&3F6)(1AG{* zD|w$9Km8q5W8c^Q(`{||10hl^+a1R&)BJZeUu6FJ6`6T4;%Kdkb-22&E??a0VSi!R zXK{e+d}nFN19OSm<9(6(DSA5qSjz~^dQLZa?*Y+xhxmB?gsZ-{#bqT<+pT=Z=4b!a z7XJGNq1a>HJ;OwAs%%V#uVC-Fky#Jrl<#@#wyK1W9u;!<7!pU`N2XiMo{?apdF) zl_bp`yZ7xK=*AMFtoy3{F=a)kT3i6 za1Z}H75((_-ZUxAdi&Pop8Cz^Qx5*qtM9lXPQFi8K76Qmb%-EdRUF-V>Zz1Z%Pe9L z6!eUD|EiwKS`Z+mge;LQtttZ~ 
zHMeEY^UY)O6NCb;RTy2ag_TWua1C z?ax7;Gc8n+cN-I;K5pXy9o7ENr6~I76kuBdVtQ`_!FPro$Dg*1Y;0IjQoB_5@7xh< zLM;aFzGi=8W9L*U!;|44E-sz}7%}KLsh5?NRS=2x#dE=97nr!YCH-EU(@Jk1VBvTP ze0dqp3g`Z9+5Ybd0|r)xRI6Q`+Y|PXV5Aqxtb=^>_%$kz`RnxAT+oq zD{sk3*u$9mBv>I(G$n%Bfx40%|b{ zoOW0&rwFh`^|cVdQRF|4VShhJ$+jfM1BMuGOUwJ9K$lqN&jbK<~oPbrDP7P=s%5VC5&OWm-k@M{2;5y1eX{jV?rIAP=+OW;`|+b?Ha zq}?%As+0LH$n5V(Dj8TC4~dz~-07&j^CY zQ4Mp;y-_GL+Cc@PnN@+{=FFc2*k=XRy}=d6&fq24$a)%%R9}1uX=v9X4cq?iPTFgm zrQIH*}6t1Q*1h7=odHJ$I1e_t<&+flW2`adL^LXw=^ZBIu!T_1}3Zls%e*W8No zwSff%|I}LK=K+5|DPp90lF@D`TJizV#V_@uwS?u^N5mH^Q+CA-TjVta9Lk(5)>}J~ z$ocHyKS~;0EnUr3fB$qqh5?KNrQf6N{Y_#m07YzHweYk5s|x@3&bpYeV$Y^AnxfP{ z2!0Y?CkX(}`0M@8sv%#XHzZZAZ2k2DK-P2thvb|{O>(-W<_bX$w_p%=tS8{83{y1H zMO2n;aTf)Ltl+{&8+x@?;6aNVV`1YG$50kh1rIn2%e-LCQEo{~leI2E%WoeC#}1BTi$H>*=iM;;Z9@?5~nxWS6X3B$6);jP_V{P)SZz)+*G$S<*Q)oZ5& zR(G32({~S)&Gz>8zFnAUk9d}sKfhHfC&rtt4@6F#;pF6WH-BK`3Tv+dF0E~E*X;jq zT`af7*Msk->QBB}&4F46LbUYUYy__KrHc7g;-L?z^w zcG=D+6$x%RiPv})L}eU?6%aHBh?XlogYnr~&dlGo!!BA!3Si4$UV!-4!GaLvo&0U) z^90PV5BW?}jH`y*!(U!bYqMusrKZPAUdY8T?>x%|ev1~08P=2eI-lm?euW3t%tG7F z&EenwuU;J%#Q#Vc4`t;Ov~Eb4Oi;Yv=1>Wj)%W#$Iqoul--QMs9%XTQ3q;O`wvtIyF5mJZY8!O`0iaNg19uM2d~^yB@okS(hL5)7&*f|9z-9qQt3$q22k(X>7>e z(rvT#n}H@+RWYoE@&n%-3N&Bo?5v*qe5E zFDgJ{AA~cZ1*7ionQe3uy0*9~=@IWAK=7XP=FBTczRQ3us20gx$|x}KsoD;SA`qCsvUNjjlf`jZpiaUKTp9mg z&l6ZOny`{|6y_OvAA+XLY2b0(pIA#L+rQgS`cxQ+8NEIiHzBu%N;F8RM&;av^wh9` zX7SHJvlt->W;v!5t5l<8D;fgEkrrM3J`}iPz*LyjA9N;gg5l$qI@W2p?N;^Wm4ln4t&uC#Box$+&tLMcK`bev-4?N8-qrb)^|Jz#T zK{P>}u|*viR-*4AvIelKe}PGUKH zOTiQQ1TaResG?!7Gv~-;VvgT*&TDQ~T3hEb{QP8#cbD(#$B|0(;8PRyx(B!Z6QH7& zPrl9i=XLs@2Wt|m$+DwJ@fm-tH**8oQ%yPZA}#o9Y&kXvO6K1II8`|k4a|Jah%flX zBy@BuhUv>U>1Xq#9Aw}ecB@Sb)%wtvcrlsram`npff!af%f%=CFmxNSEu-_u|dWKepVQo*J@J0o(sjUOG$QsfJRYG(8Pf%d9J z1r8^6r_PO~;wB8@H}tsNtT+n060@z!n1wla^eltpTK&BUurlj1Ctj2%QX+Y!YVd~d zUsNtq{zOdP z&Hsn4_l|~h?cT?ug%NcS5p9@3j6@4jB6=CU4WdV{(M5^qy)#6KUSiaQkmy7wB8VD& zjA)4#LJ)q>=~pvr!tES7+ zou00{jT6M5)?3bmzuWGOizO6tL>uT8a$o8fT+G(OGC` zt%X;Uy2gJ+ZniTMtphYEzaqyMW~2TlE7*g3bo<N z|IRS@Gl2w|>}715Eh&>CaCYuD+6<@k{!VEAbM=;2^{?DpdHsdtJ!9@1RT(%|2gI%t zp%q0)bytNr5klWr{^-);8fQ74E8?5HDETn8a%RNf)@6R%@s*@)-p}p%o9b<<&!1|# zlg6Nu7isYEz(x?NA=T%l&NZfQO*%ck6MgIN^|`aVa}e0F%ga$u1+fOfVBtcT!yTc1 z)3-Lt`n%bS?Pq?8SkV~4-Y3qpO-aAjvK7%A^IF zhWTE+W{Ld%x zCO+I2Vgt?pMy+fgK{_?)5-%yrh}NwYhU=+)MPMv4Aa1^JrV!{%Bx(8+7^vQA0pG58 zaw@ya!W+J5q^5FL_8jeZr;V8<`lb7$R6ddTFKLB%$nwKtl~PTQFtjd%Y;WvTm6|~X z_H%Cpmx1&)65hKIFP{GLI^#|sr~9->k$$PR#0~v2uzWK)Js5%qhJ@6S$ZN+G33D4| z35FIbGeVQ^c|EvVE=MDV+!-xQhr7eP?LCyr)S>F zrmOwZ2}5e>D#E-5MY6r?KM&e2b=W+m{yZD^_}}c(g~eCy(TNp}p$`ua2b#UNu_;O{ zDfH$6+80}|E>vndE=~w=HF~N`gkbQu0$jf za16|$u7HyN`{|@s=^Pi~AIynkyK`ld>P9NV-4H12!fkO_WP^T%bxKbmh>3_E=73JN z;AS)`3ymeI*3-6tHqMl5my2FiW~yX!6Po=Q8nvnj&U7La(w3tU)lmipRi~KHpqLhq z2cLYGW$)J*+?nB>UAV7~9Vm-a!`~wIk;v?k(~G?p8aY+bs9lDM99S`}W3tMctY~!H zV`G)tMt=RG8gd*(h5Bwj;E+w6(|zvMAqnGHj*e+KOs9N%S5*W5iHu354n%1 z%NqrB3o*wh$u%Kxx{_ET?PBmj?V-W8u#{3j!`E8_1j_;VSQ?%~mhD#E0WV(%6)Lfi^e zI3|-O@CKO0CgiZmU4`8hNd7ESdl`vFWv2I~Xzh@C;C@q+Fl7&|R7P!tC_A7YXjO)8 zxW9C#4IO;!nJRtVfvLEgN>rTV&9h4DC(bCcE#!grL^*mnRa*o^ICit?KE%aE&IVg`jw>lD4xY0SbIlQhuU^7jjA~^DLGpSS5W~f-6Oi z&#Ezbgl!S7gw3pU_U%ACx`!{`L@WRm5r-RAYmD8bVZQcfeHz9l(^$wy4qLtwR^+i010kL8pRVTPeBH{ z^F9zkH@Zc~3v(->2Hu3%kn$JA@)@Uij8NhNVr7Omivs5HZ7XZSVT)GNbKC1?Kk5GMhh$mBpiN<_;>N%H)36GJ@;#D+j7txC$07O&HxiA` z5xBvczPJ&rhxSquPLM;1p2AX;zg)awaeL0A*RJJf#i=e=seR}%%9yNTZtV(IdaCed zIIjL-go-YDky<3PS>T$snZn=qlYg27Z~%Mp0o%q)srf*PR3r>%{WYNETxHs#@95~* zbIcj^y3#a9R$?6({BFI_?0o>N1TFPDC%OsQr~MiDK8KO0l-@tvgB>r4qyS4gjy}AFXp;lY_I= 
z81lHPOLS@qEZY$K>UugA^$8=J+N14Jak?R$izeYs zG49b?+5s&EIvgRp4Ar{aseK_L&w7~4sXi5PU`$)pBvhZrO>ZHS?Pw ziQa2|Pz<@u#3(5p84_J@u@HNj)~8iXsFh#Ty}Qp=OP~iwYiX~9aWj>IzXtvX3P|He zy#N9a{-@5R^N&jdNkQjUSvfgruYjrf9zeOtSf~HRAk!I(qqVjCJ?9xn(Am2nV1(sC zZ2h};AA){Q2MrR@o4nq2X;s4~RtuKx$Ae<={>K$8?@TdG4}^;SH+`BD1!Vh}TLtPcnHtIL0n z(lhcygr`M^XBzcULxG6!ZqU3`On7dk`q5|<2 z680eenbtcUFB#!*y>jVcEinRbZeDY8Y=U2AfutE>H=P&-&Dgb+K5g?u`}q*Uueb4F zPFU@7cJ62r4-)#Pg@nNpkDkn!OKUIhup4OgqEasSpc!zAIF20ev zJ?g9TkJl5G#<|uj=LZ=b&LCzQ8k&Oy*8n-mDwii}u=#c%QRS3An=BY8-ig!YZ1=D;on#;m^aHGPmh<_n- z{Xa44%@VMo=Z;=+aj_iGHz3~h^NSO%e+^6#b!~oVw%%!)SM}=&u*?;@u&_`U@XO=Z zj!y@VUH-EtV{;zF;9F6Eb$i&moB#8IfcPXDP6)BZ-YrI}Zz^H>uA~yx>y%DAA`fC* zqM2H9UwN}un{iLkQ~JqC=%G~{I<4=9lUM7Ed0$b+mw|Z@ezkhQx~NzjP0vnTiKJLT zebi-E!}lx6`&z6E!2Kt1z1JfYhsqStxhU~;JhEzCWyt2s(Xw_n>&8~IGL6!=Ea`Ca zr%rkjAJMuFE_6PsFB+Y;AC`IW=S#E15h9kOIfEqxlx2pZU#<$~Wv9hVq$Rjl^#i~& zvH_@caCYA?Dl9s)kBC)aE#=K_rT!UNLV6HMkuF++-&{jmtyX?*0`cti)r?$k;VnJc zMMzK!Hy!^m^zWc7>i8!Zks{q`+^+rDMOELa*+XLu3Y4D{JcE=8Vz8$l6MzF z|9ORg&7T)df^G0!z~T!GK@s~YFVv?X+SqJ!b0!xP-kGAC?M6*Hr^fR`I`KYG*r;YJO8)_0Gr-y70qcr#}m) z)Mp;H`qtvckv$)GpLg#!Oi;N+D_9GS&`gR?OjOa-OxpO);=jS>cd`;?1H`Toi63f} zW@FyQ#&g>vU3|*RoxrOT8izSX+uY#MDzC%eE`2^^G5r*tOG{CW9K%w$vz~M>QzMq$ zs743n)t5K6Q=Hc|Dv{b*(EIVbxeq5&#hr5_zomSJy}*y7$5&E_@SUSC(y>YLrN4bI ztJ^e;kiz7ZN0eR#VSQ+{pb}mxU=OgpdyoQFtsFHa_3@1aauwDJU4JXN@u&~k%>wC( zhCo(~h5>%Sn}-|OwMNPTZA*Ni!hn~wlzPtynlKJ_NS@-Kx zz7lC#HOK_GWI=v@i1AzJl*e;#mCeobzdv0|HyGlQ+y^;0IB=KOly;h8IKjRI0H$@7 z?&A4hY;Xs5#SgIN@#2T%P*lKZ%Ds9OB3UKLCdO@z6kjg+M0Yi5uGl5|+C0oY!nSoCu)qRBtZ|nS5CyJHTCCkBx@FHa9l^tZ$u}Y-?C18A2@h) ztn$_;$@=(k*?*#3`s=MmbaIHMZwpUlkE!nV#LJ1vxU#}AN2EmckDg+0INH#iNRC#N zv(hMXr~927=+KEhrH;VuQM z$l{*_@xN!P0xiJr_sWfhFheqA$NOk?2mt6O45!(CoU6&63!ri5_m9oqU!@L~ zaCq8p^#MNkHoz8S0OXc&!_dr3CIA$a%RZxRk5p`IN++hLKLATN<#`M>B!5|pU7W1i zT%7*`+BME1t$vV#MQ`8;F0xpOtN+{$AaNL0l;$cB7|i)5VnmNd3>Io|hzN84QjWi+NQ#Q--9k#3mAE5i zXEOQnexZ?L0sSJb&cD&D`z>C53QKjBH)yf@rJ>r%V zq+)!&KT)EG1^`-&)o|5a_?L9;LQF{X%|lFBe4+TnOQFoFF{YY~ms)M5u1H8B6icq+ z>%U=>E`~e@!UYbL{Pp=`cxeN7B1K0pE&_4uO}~9<_^^enw^WFrs5aIvgo0x-8wG#? 
zZ@AgDlhsHA17jnln6ecYnASbu4cS0f$^UDvxL+PU!Ow0_i7Qees#7HOhHtXP%fN`e z3KeZKVwn5}m_xtwzM3lD2oAS6Aga)6O&{fBDum|8$q~wst5#6# z|LCPNvKEXYbT{On&@UgpXx;k)jg)3#^&RZDN@ZnL*y_vVa0O7BcIh~DHjj6r+)LGk z&>KEh-jy`Ch1?t5B@NKjn*_KDZLXb2PSxeT`iA=r@Q-&e2gJ+T?nSTpwOlv=w3AI@ zZ&^>uFy=K3OW>}ub3`hpuIlFsQn3ouT2XCVOrU$Tn^JSt1)tII!yf_2g7juOT63(y zR54e@yI_1v7rA9m=8&Mm3`aVaT%E4m`yZdMybLZFb$9bSu!p$~0G?PBJK!h#FSHd> z92q=SZvP;b0_I$z#AflR%g_nQIUT3g9*j(^Mn9BEj}7$KlQOM1xC*{EFKics%#B3` z3sVVQ7x#!_r1|hxwEv@~L`=+MX24O}2v;CA>e%aj zJwxlxp463yxF!$(ypzhfks&dz8OIBEw`V7x;+?37UgBEh_|V%D&~T#9$^+m6b&T5z zfpC>Bw+iY$a8Z~z2!;H*11Gj_Ue>E{#32(NjN#2bnB-~U^mn#TAKDpw!0B(lr6?9> z6SDcq?5s@6(iG!!2E&iB64=tcuGjFPNX7IM>Syfr5|4?y?!`=<^BMmjA&O)iQ%X4S z0@b;~3D8e&QUnrG|GKB0)XG=%;J+q<5dmCaq9>0QLTD3tO{R%!t;njmlkW?z-hg{P zvLwWhtn|Orw}~JVB|?3?!|C4GDm-(&?^_hB)l%9OH8<#B_qe3cKEWn)=(EQ`?{?yp z9RLP}kY)f{EyS9@{;b%z&NLUXJab2EfXNW%fo-rQDu3x$t*2b{_7*W)#b~HzNQ?Fh zjDrK6<{7@7+I0CtStUKXl3OL-gxYdFLW#1lrUU??-eN1@W-=Tok&DqDOG^kjd97DQ z)}#1sI|hj?%DSN=jsbT^+hX)f0oMPv|C|ne{#=}YTC$MgW0i$P()6n^*9oEuHt%p) z==YV)20e6vhr${q?>`q7NYWyK(*lc?Q8S7Q3)?jC0N`IRCeI)jF}=+*`q}@BLzyx; zyYx|WwedR@2vssrDg{Nz>i6ql92yxxY3jkUO0xsL1phX|cg%Qd&gjgPtn0$bJt>;x z>`$9)Mb5{S;DD(Ot^`Z2}JMaoRcf+i`X?gp|G!2>HbHFTQLEu_(U z3^99@9SdAfs}gf?ddrPAd@up{SCz!^xV)iugE!%}YUMd0q;zyboNSNL!z(1r5BnqC z{U%^ZUc6!TiR^Xrl5X{+y}M%*vWqJ+bHd{6@VhLcU!Fuj1y{*@hzp^yz zMyl-gq-WfSdnW_vI5RHka-IQ&XHSWY&A=gga;@Z#Oe$*OC7kUhq2b;Z>2FC zirMO4w)ErWx${YNE)={)L{OUAPK_9x^Wp9bQ#zqg z1n>;yV`RQpk)%zTK3eU`>H*7Umv>@gVL9VnN^`p81A6r-G`apDMrEQ*Z5n-BCSAp; zL>sv{+N&dC8%X%WC-;Eu$Mt_KZtprS=_ah=boYAl1gE&+Ns>5-Hf3R9c@C0~8i9FL znKbGAS(pam%7K`gnjS>(1<3gO`^&#FC|?DzP&#nJbYe6jsLl)ENwq=qzC8c7zTz}# z7SxALi$N~&6wXmG)AG7&(}iXn4-Os4Rc!Hvsg$LC-#AbY+nW=8SGF*YxRURb&sru401AEnDrA@oQUKy)BPwfzn+aHk9C5pPA=~#sjVj0Zgq(p63d9e|XySEuDmo zWJo)xR+q8jOhjW+0|E2x&$Ou;MEk4>P-to#d{W$7HZSvr`v$&ws#R755bA~#=B+rLkJa4@WODmhUd;2ZUaF zME(DHXA&v!vujc`4b@d7?@|a@4C9!D!T$UFT4mtv*Wz@8y|u0{2Ua6{3iaT5q$Hq^ z>?u|~3;wx{a7{2s)FVvek?|xUVdSvJz>WKFm22J+%l4Cu5Q%lIqdFPvRD0HSk;$$FDklw2siMi>A&q$4HP_ z5=*HGK6AnEYhNc@&1pzRJikrDszcLSxrEITZwfae&&6uGE&m4`9_Qc z%hTLuJl`&PaQi~x3E5K_wujp#Lh4FsMW_bepMs7iv+rVcp%L}@1dnI;aj@UJ`&&ky+SWR6;B z@##?97?n@LHT@rKn_n8a0F6GaNh>KlHI1?tr>`!J62-pWK*^Biqym(_Gmg?%>|^}P z7XSLmbcfI*n>R!#aWI2ltJqQx{z6{ox5nvJN%FXB$pl`eLog6$MjJJPfG2s#9n@Ef ziDin~hPywt!-&l_sOs=Za19b)gnN;bKoJ6&DU?DXvP9vXS?em;&-p>02OX=c5R$(# zVLNS~IY*8zn!LcR_}oN=$ykUgN#YAlwrT>J#l?uY9v@6OxEb>dG4k*s|F6}Ld0&PV zAFLOTi0!Y0wXIi8SclJ!G(8O9$_@2wU*cRg*A2p3VQ=#;E@63w00%XagNfy7bHlrq z25U$jt`JcD0u2Zol88LNr^8*yBL#VXjgKf_aqn=a?FBp=O-u88|1#Y-aSNdsDESK48-^~*Jk&QA*gTg1 zeRQos9rwN2=yfym?o;t2UUQCiJ*qJMs=9Sk8(0P@lemzB0`Z@Ey66yz_1<0G&-a=+ zpV}N_StiQ9)xznjH34!XH)0_Aa%nmU?zY@v6T@`3Q78haHec5pK}8r9h8`7F+-s>B zq7=}(Tv@L&4&q()xp7we;buBRfXP18j2$t!VLHei9`*Oj&dUcSU!Srt6%M7zASiB(EN z@rWT(BUKDgFjr>z0b!ZQW?FG@X;A()t@+SNU6qB7Bj)K&8WtM#c7j0DxTVx!rbNX0 z>i#JNv22oQG7U!!N7BP61-kCKwkuP&(=2M&liDmrwH@aKof3@vDzHwjU+%3v3=m(b z!S7x4WOs+YVQ%1JG^+;%w7C`@2Wkay<-~g>Z=TXk6l1mS&|LnmP7Q^mZq9Hn_Nsn+ zXjT$EQ_@C?Kf`r&a4l6PLdC@w5r9_P&wuigkT)>HQzO@~t7uMFvwYqoBL*7sgSSqF zB817ZiluQ*lzV%?PLl6JnCrzi3laUl9wp_s!=_V5>Ucy!xxRMDDQzg|HI$0$Fq>NyNIgxg3yVafQfrQ0|>hugiQkJAB$@ zTD?U7-XQ;-NSodKpKMc!0#AKn6q~Sel1av)s8FfvRc)Tc*`LxUU_%E-A--nH3n~fG z$T<>keNhv<^V{9=RMERY=VoXFr;|{)<9Cn8uT?63rvt8LGwz!yF_7?nUivE<{;jTH z5(ct-T3>j_aog@628OTwq$ua1iNemF&RJ{smDbCPKYL1H16FxKWJ>%Z~c}-<6Zb}HD^2j$w&-gN~htvae55T!KtSt-@eo$@9YTk6Z25|whv=aU4R@&F$^un}SnAR%bV z_C?n?qfoC*#sp0g0@L0CYjY1H9BI>HXFa3Mqnzg|@wIPL$0T)RV67bzTOKr`@wh5-P` z9p@i9(f|-O(i0C^Y`w+l1+XisEJ5e)zi>sFxwnr|KzjD;J!K#Sm@s1Tf4jCOO-TN? 
z$og7xZalsiHTF_VM}l$DQ^>UfTaNrJ7IbK5xUzB_G3wKMKQy73S;+yRyI#%vwMi-o zh!1+Buq%q3lH&LaoFsQ+DrR>k;nH9nUEHb6#%f~i5u&+9TH8Aj@dgi?5>M)hksY4)*aLLlhxZ#2obntNJKML`8`Vc~OW&DQsQ z#MdWiufg~M;Nvv{O6kGIP3L>sKRV56OhSCylhp^A`DPLF1-;7EakACXd;lLjOUVg& zCY%5}kNFgPnfBRTLTXg~&F3@Ll!l!0mih}qE|YEv+}M%zJd z*LV-Z0}J@*{Z#BLN_pLcQtA;#347WRD(X?w$oL}Xtj?;1<7tVx?^OpXZ+*|=a(2%0 z8J6=T{ZcFn)EfB2_*BLAhvMM3Z^^@Xya*lip$sOLv-9Btio7?LadqB0Hu|^>t2Iym z#XL4veyXk%}gH%mQ@KnWF*%0&Kt(u;;kh33<NCy2AJetAyb(F6&NoqT^xF?l+_xu5Kag;GSs^H|bF~wir1l zU4TB*NGG+X2;zb3-xSD(us-@;86|Ngg<#?AV|oYPZ^zJ^s&2*G`0_9108k1+dJ?4pC77jrv$$y9Pe&{ zc^wJA*^!z{$MMhppA!&`Gw~pwfK#8ZIsG+Rk9`K!H6DVM4_2g9uL?%mrR4n}1(_aE z&zWyOOq1K{UBB(!9feICZCT_oLsqm}gs3IYHko2Bf2gYd7=<%DYYlnB$N!``2;VXq z#9UkqZnZYr$dpW;>MhAr9Eu`kPO3%?T%NLn)kg$kU?Sq7Jt<|dix)3?_4FQ-9f`T? z>EACJV%!;?!;%g#WIGqdFd8B&?9){=_~a;t638sbjK9RUC+hvY^|n!~3^R60i`=pQ zo>*4i8-BA=S4JfvR;cC*z&=WX{z6weAZeL2GRM}Ji@Ov#rn0{j zZZezvI|wb}s%f;oJ2rk_f6|dkppztZ+r3)SZ;gXS^t>$B;UJ{! znGEQ7chXdl*_h+!3v)z`4`ol73Mj`%V9O|R>4pCOR^AAQdgG{261Mqi_1m5|k7}V_ z0n5d&mX6drMx2*0Id5LkW8~5~;!Yr*_CsCHaTTE_YE`OwL-|zKp3Zub+dSHR9Xszkg7UhjH78s5*9sgaS*~8 z5-xSBtm;1+POSbMA=ymv?(&U$bimG4m{$%#Ou=HRRBF-#7!K$y6J>~}4rHSiZ7uQI z^XnI{3crH8V^`5g8S;c!#_ZuD2Y5Cq>ihTF?+Qc|BvQN`tlsbwNm7gOZvA{r*1IdO z!f+~~9_#Ej;Rip-4y15Ii<>p|=wUpqj&N#L+9VnVv9XUBq9r7!6RW>xqv@sPUrcZV ztre4mAQR?UD{*PuzqBwb2_$L_tW?nEmfP0uLVVG3FJ?&Ru1-;m=XB_9=9qxcQHcd` zX0aGb__+dAg#Vh%qCYOb=f7y_IJ;kI!qCeV(SADlYRFFN%+TTSV>(vxkT?!6GBUD( zIKK8<=D&svZkJmD-Mk0+m%s9XA)O;HfEJt&>M*VAIPQN1Gx85j1Q3%bX%;$4wA!HN zV_E=+Qpgp(dL-NYSw|d}(e;FE;Zv%p%E<%|Cm^=&p$=))cF#c&Z6Ui<*}Upy7Nv=z zjh&rzzIQUUR=GO@HC-vzNeTgo5__TN=bdyk&8fG=jsddX)MoSf5i&l8toNrkf#AM5;j zXHIjuX=fyeIHVWJ_GR}qS#gqrUfdnQ>!-@IB9U~>P}oLlWmz6vJwdmk6xd@qJu|T( z^0dl!d)48wMLOf6aCmQ64@33ZzGE(FB!^ppZEwUnfoo*&gZ8LBHu^v5xEl?l(BYPE3Pq3?)FT-ckS1Ld3CD)2@h6ix?M!c@i{XZ`<;23UG(fZJ|v(;Z3fF{x)T zh(pozCT<}}B$Ad^+{3pRTV9rziqoXr8{a5>)_;GI&HU`kv^5|}AsQI~%tjJ0o3*>E zcm8XSz%LX;cmp@GJ3}hc0*Z*W$$+A~w+i2Jp-$;iN+3)7WYoExS&ztrcQASxcr2ARN$(vn5D6#_oxP`Xou)Wp^qgQ*zGK z!-7+2QpYc^RO1Mcj}?l;#h3n$XN8m(et`ejP>KXx{U9q|J>F?NjgNH#t_p%d&&rO_`GjQv6j z2zv^F8o7;x@Pky{MhOnxa&(Dj^77uR2?ZMAuJ+^!nMJhRGZoW+_4_jtqbv9*m$jyOn>^!&TDO zX8P7;0S?S^>9+y!ah>BN{Lf51U}jNYR2bDK5sBMClVZj^Fs(tqrRd$WrL#pY*3;1_ z>t52r@XoI1^1ru^dg#4XC~LUp*@$9Ffq}`6P8`mKh;;BRD5CTSGlktFXIrlD40TcoXA=Vw#^s z`^_fFhXt_n&h9LLTQ;Lo7hA^?sW0}tsaV|R{m@B%*RJ2^l25obk~JH;el4Gg!aU4V z_G_k?oyuC=(gj!UPT$)rjt)vi^Q(* z-o0bi5--|mg>=K%>0Kal^_R8^pQ>9u4D zVlpZ#0x|lM&jrt{x=7#bKaao|*3o`i@^FXR*aV5OJPB+i00M-oDn8sC-I_8~BufM` zSFX}EDoj;Z5n@()HkAA=>t=GEy_6%;tXgaA_XlbbWal2{l18s@Z}m7KxZf@V+Ws2E z^cby?RYnlQV^+s!J zyITqO37~bwt~XGsC*MzQU!kxh%=t}aOi_#}kh-O=Fxs3-xFrid{ykQqEi@^u^}Vo8 zhK0RzvHn_#tG!@oV6_8y=lMMV1Y7LqdsV0xWEc~r&ep1rf0E?hZ`r-_fg6x=-%8o8 zdH03x{<=(1y&H-6VUDKgdbs_H42K#P6~wlv=Sv|pNuYX_T>;3;IgSpkp5I(CtS1~~ z$iIFWQ#S0MRDS_&+6h#aJT1@*ZRDwtchZyHy-a>1B~Pd>Gw3{bn;Y3$V&+SsOuNc? z%bqTQWhn@AAL5w95j`aRuXCNkU+To5>R4*38v7R1Qr#sc#9 z+*nEnPWVUr?D*22ll}Q3{DHMF=?IYOU-7SmYT&CUr`sV=4)lO39M~Tm|g>*AJV)R1QSX(ZGY_d8k#iZ#6J<#;0xD z6x>r*dwO`ZCUfF(%jnYR032h(R5u}Cv&s3cgwnZ_Mwg^-JjdE@QHtCwf)~vSm{0xg%t8u1_zZ@S5A&z z1wD3PhC9pezszgjL@AP0^u4K=tUicTVSziNXS8*7q1WhnaXgX6i1_wx)%9B*a2L)$ zooH4aulE)I{bUZBPV4BGj`m%O_Y%eF)OhXs2)R&%WVENOi54Ea??o@T^?(3VQ|Xhay9cpxsrM|puJGxXUCxg|rC!qrPcyd| z!X;1Pk(ijEXu7+qEV8l`+~BeN%TYM;0P*H9Hz0Ph?nDz<=7;bfJxvM_WZTS`GR8!z zFpHftfA3lvhBJgq0^o_wuy}$asiRHLKl=ApS?c0RDp9NbP}J@Tjk?#UBa>>?D8a}? 
zEJV4tv`&`#EBCzqWxJH`HTksL?RNEWn`Uv2{3&_zfCN~w#P~hy8^YY(2u$#mG^L2x zmu^_LeUi+q0iB&(?GS>b$viHp_I#jV+ZFlpg>MFLP=f~1D~>-N2xzk;^WeN-(a0ao!{Q$r7E)<|B--W`H zl>e500odpf;UTKp2nAju^0T^D5}|$%`BkeNny)Y(Ov*y_z*(v%Go$`Q`-sXYJZ9aB z&8F~7*ox134c9}GZNKM{n1CS0#8%MBFvDOBoX zI)0~HzYP_J1uy@~FdB-Y?6C*bX#1x&Jze@dvjhw|+^&xEP34H*QTXPm zff7zl4Zz|(&B@LzRHi&VF`?3^O1|7R|sJR+DSx2-s?DaD($&;lzfB7Ey-GzXclQnVZ}#MCI*B$09w zUH=^%8K|{da0l)J34z`phN0Z zX#VT%{?&4s@-om-XPZ^f}0C~EHNoaBtZ8g}^N|93{Ss2Qm2a3mw zaKP9W;dK(c|0u9p$wBrHRIUqCmIsy)**}ie&Bshs97e-lZtH2ivn*#;kfOfE05Hd_ zsT}8M(y^g5pf{uvnH9ASQ9McUSgDL?5qNj;M6i%Rn3dB@;zx;ZGB0)O;R5({v?1}< z-f71XgxFfZA>SYS3H*C3=uw>{58M(7kT20!eRDD&<$G&0z;<9puzdN_0o^1k{KYu0 z2s&^pb^&ABbaU`}!ka~}j#oWPwb?gP0zvq(Q<>87qsl?mskNT$5m@z4pnb^P8%75P zMo6nf$Olt&48z;iracQ{lnEriQJd0glI5gDv~Ax;TcEqemtjj^5)$%*k2`Ssy>3L* zq3S2qgS`73VL#CIRLxt^)P)7^tj8~4zYl&;*+4EHHSTGmWK8JjF!#Uuj`dLO)}<4h zhK}8@v5bsmY})*i#x(xm2w-pp{7S0q$7A0_P6+6U;*3kpJ3 zs!dmJtnE*v$)}7Kf>_pWv$<-)+3Y6>;VJDa$kK_EG9KM3jnbQ&j2dk~nr771Uclj< z=1H-m`Y$5^Ai6LvDI=+TU1-`2fcmCOX}cm;I(g-7~^)8 zuIa!pJ)wp~p!DmO0yHo%z{TL8jnRvo7=Uc%qq^iSo3nma_#?gT;|PN=HPro>IW=G- z@P1j)$3cvg(kEa9htpOpi@vTvUG9Bt)gM*X$pIk+0cdT*lhn8}>67mIR$OoN1Ln)`#USb=3U9PVj zeXY%=Z__+AB69!_L}VE2sLT#>!hywaCBgQ&s%A7ohk{e^Ev#Ed4q>9%;L(w+{`;HI zN@W<_<=dyCO&(t|UOss;5&?aNVm&z3V5N!dGgFBbG(Kv=8{`V)a$XxhGC}4iH2O7@vz(<=^3{TyKt4OoCwdn$O^E! z&#ko_xHyDb4#jnF!XI9u{i&FbCTT8(Dj88U9!1JfUY4Ice}X22#1oFRkHU3h7tGu8 z2cz7KUu(MhQZq^Mn!TpZFt!xAoFN3WhSR5;NqoQdz5B+dI^TLIM^|4v7E$DzV-^J(iSpjo16VqXX%r@naQZ1yuEVp=>Fs0~59R z0G6Ci)Hag%QVP@eKk{%ui;0i(40)xlfj?e&%sqZL`$E?Iq>CO)zeibF9g%_v$|+Yeehrz3Vd3mO4WAC?DE zQYkZKcdb+dolNzDz>v2OMwOwCDa!#>bG3@4q(5ux6Tyk%(@(}gzlYy`EE;5pA>N7H z(7N;9r^B2*|D=jT`XT$^Z6EO?Y_Z_4pbPWCq@)*WRPA3;m`PF^wc%Y;Av}Bs%FU0j z5L>fcY!Bv>);aG0oq9mYAq7NfHUJ^^&JtUHx#`I+ET_vg|M-X7NL;Vad<@3)^~Q28 z*iz2+?0SrB`+)PTH8)KmWdQP=D~~=-u|k+HIrCQjbV|G?EB@+eR)T!7<)w;w5@Qn-yK_ZTkA^CetK%?`ltF7x!O7 zH~uDCL_`0y6JdeD;^wKi0RvuL=l7R>B?esl$bYv*2Utph`)4*TKrN9BT?hRupH8F+ zz7_22Im0;8S!QbC3ehegu%dB9-h-s95Wzg0`zMp{-_|Ldk?SgZcqgPgN*JJieFavW z_2B(hb-xLb+!g*>wev@`&ELrpQ0RH5%@Xo z!m&>FY=ruEryK3>Gm*3~YIRl3BallnflRY_#>0 zVbIM|5iuvxY*9i$E42LZ>~=EMLa1bdQK@|uB8lvZ8aILSJ<9LN23Ru^W7X54k4rBO zRgrCSj?TJ!FMPC6vQe?76HPlW)LVZmTyFsrX)U=Ka^9#k#U^DK8Qs{?omQAn4*E$H zc@1V2lHPRl^m0~Ax(}_XkO5*j`fnjl5(fDRTXJg2R`DReM2%VM_d8`+^Lru-DHocyFQP8A}7DO;_{N} zBM+CBAo>JZ=;Q{43vm$GQJP=)c=ONwkEhKjHtgqacuU+>M=okmkgGM%lsa#gGbZ%r z@}c#fC2cQnUFs`b<7&@0wr9CZ?Zbve`rU3muyuyFh5%w!9H&e@hD(2FJNe|1WLy&!#GzU#c{ z=I-Kk_J2dk!F%{HFU}G+ZPuS%`s2kIIA0^f2b$S!`Tn@Os*j+>M7NJ zF!9ki!m!E-Tv=Oy9Dm`(=%Wqsd+2C?cOw3BA17#tem5JRHhu;UXbO3Laicxi9~S9v zfB4;!l&e5Z2qCGkBt;g6ay3Zjz2tA0t8~(0G;${NYvY?>*lTizpKJV9)y!;6z z3KZs|F@Cb#P=n#AW44P#{tosETY#!%e%rr|)go_FA<8gZ- zwFB2-k3pnh9Py@bs0+mXl9}BT)>{?j7he~KTVD!PSZfh|GV;^%ql-27+Aq#8Zs!#h zJiJryG+ARQcw+S^D|;JL^1>A6)&wD?-n^EbadIC&Uwl%Za@*TNy;FfNU+?9@198L* zFp4pjdvn(;J1EkIXby<^ASj--_Ke%5r%QKZjN~#J{veoxWc2)Lqf!6E1UA7)ku1=x z$^LyUFJS-(gMV)NG4vEn$GHK9)RqVEF_D9mqAMPsx?x@9{8A9n9Hjf}73Xi6;|n(E zMo=bw#u@j!+=?kbF~U}B@{XdTH}uOM+Rm>)*kF(7Z%U#%WAiWcu`IiHIdz1+GApUU zN)Sjfw-YhpOhz{)i&vbIWESFLV@RoKm!F(A+ZmeXN-@Eh2#r23-Kd^ZCwe2eVO+z{ zOB&6!f7Hn@j;mi(T5;v|eoTo^&cEwZ&C&aIYldt6Vs>iR*~7d6TPrJSHGIcxP{k>$ zO^L5AkxJ}I)`*Fkh*+zt4d^oc*%&~1%`q}R=LzMPQyyANeqGs}9I3#ef<$jhZ*KB0 zBR^bjY2UZActCBIjLQZcAajPTA99X{L#*&Pu72-!1!KB0wl~y?-Ua&J3GB^@ntegA z`-%Ha0u20@-tE`iyB$JH3AsZK>jTZv{IK5q>>#Vcpwm0ck;=KI62EG{g+`+|8Agh08)z^2u% zPF>sk*d`#Uh2}t>0hSsxjyn<(^S-q{US#2hKm%f)Q4UaCIr@Dk-=jVunZB}N*Y(Hk ze|H(c7o>l16w*LTz^DGpIW;NqS=SyScxnIO00Z@Sp7W`yjg9g~#hvCIMH%C?le6j7 
z?x;6idb~U5$j3r(nEAi&YctYSE=6*B4#o-d@0u=W1du-6V`_vK#N=~5OnKIVGU%|( ziqW~8>3PFGeuP#RdFgTpjKTafm_V>pdM&1bF~})%D2xAs(>|i&Wp7yY$fF8Iy^lMh z7IrdbSng|6S9WKEMAk3nT=iGh2)U3S-A0ryw|?>25$aQ9*Zf1R@56^$4ku=wuWMp{ zcbV|+El^p;sEszV$cz~z-d}o_hQH@07)3WN8g)%JXmRhV(YPkp-1hJ0-&|4?gd zH7A3du71ZjN~|clh`pL-#IDXYdE6wyvOI30900z$psTanMHW(JXPBJmluUhpjP*|S z3d31~;%)MDiDxaYchp8aR}}Dun||VSGb0D+%k}Qb)LGtI`5yWkg^KpE)yHg zV)=B8<-WRoj9F1le+P`bY?k{S?aop8gMt@7JpQ8t^c)kh{3F~L3Pfo z#@Ho=tHTT4@zeqCXMUmZ_i_mBH3*OKzR;bYG3OIv{{N~ri&4Cob(T!ed}ZroD`qU>3y_hS1NNlD6jwXjXi6CPpk79(t9UM%J4~wSA27KYf?M%Fz*|rG~P@ z-XmAhV)^xcK`?t&`YE1d1NYg04!>7^;t2M8$eSu=iH^s+b?->exylIkFu<6Ie=C3cFsD2pms?U+VY<#`x2cyM1oqmAO zdsV2uBOg+F^@-Em&eYfcr@zhzL;bW7)OgbMx+OXaY7R!uTPv!g`4n#qEQO30#h;C! zyyFMsx6`s|nwA5XHE0KPZ|NzfO56uPtn%0u1uCN+_kR9tqS)OB2U^fW4UNRPsf@6N zLoPb`Ia8?vwtJ15r>|R+C7tgh#{}!w0=ssFZ3u>|G+j(;B#6@8&|l2o%UQK2J{`xi zQJ1Vziv;2TPdvEJtwv6llT1!BO~23jvHI~{wMV_mu1vMCXN@$kVp5r$&6aTn(T;S< zb7KpKYh_tH3=E#}p>B#JZ^B4ce(>5z0 zR<)Z(l|BtQr_g=xy!6YXPS5_1Iog2tY{R@Y88j0&ZOwis<~580iHoXtJ!swgn%&&1}!%j z9Vh-+ox1$wi=5V53x|ESfU>fQ#IiSck`pSP4o;wb{mL~*2Pjqc9t(Pzb3|P-Q^VKp zK+l#~(KfGZ@|&)0a_+UeXgn2cisf8ahK+3d4)yN6cz*Eh#j#(!QhygYprW&@{m%P- zPEwIoB#etM*GB-{o?t^yGF43u5H)Fk7dvvk!!Kq?yc8ocAW0ynv1uq74tSllw9)e% z#hj!bx=M#Bd=I9ogkUt1A-%C*G%gP37MQ(XzUjqXK{Bg5!yCV5(0V2FoooyAa-)+E zG@rV8STx_c(MKT=w6P)wJ&V;%NRsl1KVGW^4Fk1iWw`o3yHmh8Y!Hg5bY%>5*}=k5 z0V&#jEKiFCnbiRHexJO@C z%UgvlSlw_eHy4Fqe(z#1MI!?fYj*=`OeXF2qno!{lH>cbkOOO~w|a>+`KKCgflQy@ zt3GjgttY?ra7t?V+7i1N!SYzZP+&4kHI7Gn?od-(;YXO=L!YgS`g*w|CFzR=-{2oK zOF#8$@*T&rCYkMq%qQ$ZKJ8SQb4Nk1sFXic$&D~8UrWw;rpz!XB(ZmGiwhJrOMx6hvw^aOTk$zduwYS1D1~gCa+UzpfjYI%cKYN%O5x z%lI1X4^%{RtiM>R^O;1pf@ouaR%S2B@sFLoy)@eyd+bH9`6phsFYMgU6 zMY++vkhUewa3IvSIy>Coib$C33 z2EqsGAcUag*5&vQX$49$JU!ox7mTvGyVNo*ZKd?*4jb(j0 zh0_I%rRGeeG+6ukChAlU5v?^`R)ukKRqclgF?625SSI`TU0K5Cp2&bW1ZApVt{}U- zPecG^`W4QK5@#KsO6TN2Lc-=|(`53!8}jO+ELn1PH(b)85Ur>kNi}KZWetvfAk+9p zcvO|@)4e;^cl&h%!aj8G@l4yh5*{8Z4^PKT(h*|THEw*`t53AttM+SG7Sd9!Pk5Gr zBBY^%74pS3zb25;TO5Hto9m;ti`CX?0u-=bi1KYlriq zYJ`B}dTny^>+ivp#b@U$ze;hu6PKPip;LA*z2@tVBjtM@&wEYF3O{-#-ti zktt;mQcu5>g2gxCp==*?7;8MpqrMB@l`1j#K6(#Ju+FnkAk_6f*p6a6+esqgYtHu9 z6yQ9UZw5B`r3?Vy>5#V{&+;`c4w{7R7Nw8_F z*;BAfpK>4NA5|pO@C=UDwyNV z$7*+Z!^9T@x|ZX2@2|&F3^?K6yKOO`J@0oGI{A@hj(P)h0*&1s<&G_#IXs6cjLEQ_ zMcN2qBYy??p#N3`b0aI*;%`eNHzoKx{mMXo6=WU?@BXFWY%J zHRZzXl?_|vj6TPm<9GSx#C!?k=q=?|Kq1A#3e3ObO#3M*ED@0tA3nXm_vRhzR~CNA ztwdR2If4&@fUB*k3N+3zg&=U}ZDA$V>{M&XIvi1)(s1eDlLRaSs3ZY}O0n^^b}Sa#%o9(! 
zD7oM?meFn~zzbMVW8>qzwY4BB*m*Y^II_EJb1o4o*72qIKk&Uf`2M=aj<$ef;6UC% z40pcZLw74VVVgSI?djvY=_cj5wL&4|!K$$XtfH+vj)2wknz#eLT{Bn(k%9pRuyAJp zSDmTtojce(PaslYpsWEwrhwjVh_+`_khsG{eVkGTU&7jx{^i;i?=o;nIU&K~62hQ% zTZ-tn$$I|m#VwFkb1Z#xPge?V#Bjpvv@u9NW2fMQ+*)a#SGtbYL!P zeO0B3ggn3Wi~5YXO}@f`9CZ%5UCJjnaB5Y7;v~Fo8NnrWT*et zF$0#@;|OGjZC;osOg`S;e{iwR$B+L(ikO`h8g)V%7pL6VVqjn}HaEY#y}kWlNbXCY z|Cg-GE|V1u&DkI{&_53iE(c~m|2v{sbkpIC?7|04V%|1e9r=g)NuM~ZCL@uQ$_c zXYq|P2f%{Zb166Sd!01p!oP;w>iF~BT}?rLpo#S}hFX~e}Xp|)_E=KQZ(Y^{Y#EOX;KpTj7yT{& zwrJFAQaoMX&9H}}5&39s(9Aw!sLg}!|Jo%wj`IdYL8zv8DUa9MfiXa6lm{Z-#x_D& zYx(cj*TDJWSWX2>ajy~17ko_5AXQS5)P*`uIVSepbaLuqAxMsV zTlDeLGv&@d^h4sW;}=_5g5HcSqa51E6dM;SGf%*B2B7MmMr6LhB+e)^tPhB?1IfV` zOA}BAcWZYvP!^@Fsi7#>9ozMS*e_#E))e{$OU#KN+7<3s7^XD$_00KWaC#sBntE|~vLJIUwsF}lUYf;VcN zQ%_fve&vB?RpzuA@r&8I3z4rcpZmYzFIfqHgly-j?IJZzLIqA9>Tr6()`s@ft|$8d=Ad0 zF&sb-NZAJe=UR$3>+}K#vdG%P4LWhw>>J1-$R0v<74i!{ku+J`^2_Je0X`n-Q#8yO zh_;qVch8r65VXvb7^aco^vDgXPog1w{Tqi@d<5PU3;00f~K6AX=6W{=8l#w<*qk!3p`Uf$q}882_X+qXBbUbzkY-VYddg?P$bW2H@=4-9M`4s5x4lJz}s z=(==g(Cr@q06=_qQ+65Ro4@lWl_xXnaw@DL8}FtpH#SO&;luQDazGsXCO&!~cbVAA zJT~X^4W=auj@}EJ4r0W)9!?c;J})3q!T&y6I)sdmF`?V6O22j!w#{3QQ|BMN74hAw z>J2#Q0Rp?$dU|?h#>OG7_xQ?Ry?Sr<^0wRLohQxnPkx_qe79a(IsGyGdL=449yY;uO4eR!z4abh zM80>kF$c;}VuH~_$@4YqWznh$kSe(h+8?d`HJBL17pfGGjh3xICbjVWBmBsocLZT%FtFXTe{8(l5oVZQ|o@cBHz_R zwHygC4z(ZfLE=C|EUg+@FQF=1gG&ihR}|roGHy!;Q$>sc3Q7Nbu6%7BzT6Sa&ZIb`YZ+1OoFT4jQl+?>^2b$+EslOk&#k@FX+&EcO4SJ& z^N#5g#0KucV^}GfXCx?p;RhbX1G(}=R{uF_JXR9^L$@YH6`7Dt#E^mGK!K= ze&J-{FaVSH=FM!&ul<*^&lmRiOP4NTeLW5+DJZ6aoDz`zIC*xd)vllRWRXP4>6>5g zwbifJl!qS=(HUZFhg!qr|9LH{S%Z^VGdPdLjVQssXT-VY0cV@|MEnI3Q<#P17cCo% zs=`&@?{-y{cd_`w4_=1B?HOiBF~)-OOm2j+gWq8^2Yqj0reZ%VL^`$z3xg!t@uZ#e zk4KvDOcB*tc?`!XW%5LLlxkBv=-)%N^esg3J0v8}uPt7D7q|Y^uW@s_15W_~!BT7( zTRk`a=%f-4JALRS_sqVMn%UZe=Nf+f)`@YEuBPc^Y%pYy2A|x)J1;QLr=DOV#dosE zHNN(SPFZKf$cT7Eq9$#t`*c&%4)Xv^n*ZmQh&g?UxR06L{^1iyD=JW-)h3XHjTepI za8qHRhuM&+b@z;DGUZ%SJ$l49dyRgz5B)Np_McTbH|GX@>vj@({zcLjAiSf1p-Jf7 zKKHxLKi|36IY~m@zWta_UO752xwPT@vT*~ZRFRw8p?FZ0FwJ0YB|iC)B;c1<1~6r> zqwe=67yo{<-!7!hB*&0gEno$P>wwrgTEOHW6$Sv1dlae-g{SNWgY&cB%_y z4bNvRC(<*-NpjbZ#x=;58W;Bl1;T4-Vck_uQo!aKsI7PKQRFZq~WqKQ40pk#R`(Z&7oSvY_YffZ7>h9^F zqvQijDo_X=k#|bZrFoac&&U2HM-6I_FZ7O*GZch3M)|9!=8A;qD zA`>Gr4jgw&inJLqgM*WW!`| zw@3N$uki%`ZrS5UlRBG-iOuG#65Wy3+CQ-K7Oc=;ue6gH7u?=XQ_bk~HO=baM{8wo znh4{ELnUE~JcBgqWSW&FGy8(rnEr5*k5WpcFB;+ye_bdDOl4co9__ed%mXa~sGy(rG+?Bng0wYzuG@E$;yls^pFHEj} zO=2OwF2vEQ;BHm#RlN9UQOQf3Q%mv;0;WvIT55VpZ+p_vr^Ge%Y+;^JrN}aCB2{)L zv0#U8H8$5JYnMpD!6dS=@E~0lY#h?yb%8w11V)i4lqD7Ec&&r=7K9DGcm_3v&d85x zHyOwolpq~u+f=5ODv8KqB~WO2Lg@`wVdP{=rpt(A=LALAbw8^M`@a}_9?9vZLJvYu z@7K3nY8mre-`7<556)W zD5yro-vQ-}A>Tq}5~{nHmA7|(RSVsyX!?$@)rpj*nYu|xzczLA(|-NGR!WE`Tw?*H zU~`($N~k@q6fZ%}j0B9N-CJyX!4YZ}-usz$(OQhhs!^|C`Fv%C@ymftaco76a}64y zrS1y{f5SM>U9Ml#-eM(W>R?it?NNSWmWm$;O%)6cRdjz}gz@RD6|69B|0Omq=#)5@ z`c6uqyL+LYMnGkg__>T#RtPhKVY0-|@#B?lAC2zJI!c`RALp{|F=yKW8XIk6Ubv@{ zxqM7NcxG6aJ0RKGW<;~P*y-DmNIF3VpO1H^@rW#Xur~I_K z``NHqY|)5Cuv#ER2#@Gfb|_BauW;$lpotpA3@mFGAD=z{t#YrU;*U&bmCXw@QzqxD zWMb#j`tK@n;MKW{wYB@AX|{6PIhoa1NuJ3p14(LfSn3hWtLWQo1fK`uSS?9HAohfL zOinY(vv5u;8%DPdZkZWbv9dK~GWCl*ZAMZn&FjHLS{K}y7hWV~1WViF>#@9;#-ZdC zU5kGyx-V$?V*ClVS&Kz+CbEb7T;5@a-(6>=nK(NjpQ=0XXBCg^5QpBr=XpWS@#I9XDh8JAQVPi zWrTT&zyHAMs2Juk<`q&dQN+m$AU8&NO`*3 z>$YAIG7T`XT#67an6!)lRH-l-%ZO`DNj&cxQJCJkN)iSbsQ_~4&9z=MmYi7584A0V zBf&i5BwSM4*x@XsvI!50f-(>)$W{*TXhL$X{I6GuYWvEGQI5oCf3p&pe=-$&Ze6iE z`y>BY*P}G$2TXb-PBX!X?j9@>t7=bg$zWc**f`3#^^kLLH2oG+jtwdTMUF#WFQd8F zhXedS)l|e&AyccDv?xTl4W()Bx9Q1;qVv~22-@bQ<Xrf@l#;Ix@l+XNyqSyHvj 
zp$Aucmm2-pzAf$El7Fu(`OFBboIop_jVws92!=9Ht|9Tq3+xX4Zv{Iugvkoy7JEch zCeh$(Inc&Goebm^kJ*6_G6gFjru!FetN7R1(FbbdEh4H(kJ z$X6U1fB%>@6`ge+F{`Xqk!ryJE>dcYhjlQ5qjpb>MQ+(3FUi(+wVpYLa#p4i$38er zDPXGFZCE6%8rrUd$+ierDHuRy?!Yflu+1<^M) zJB={|;b)fXTN=uQt0E}gJwejP{29tjwzJBe@Zq)K+TJ&=I%O1kY)&gox$uH{Vw&$5 zXTC}F5*d{{MuUfnC$93cW1TDgDuYF!4aGCr4L2INmv*xY(iv;mPnEYWFiA4%`Hyw{ z^`Vj#+*MfR-usWQtydfngH<~q_E9MOCoj2wIdSc^$jl0v>9Z`UpYj)zC|ph11(t)?YNL zg~^Te=NAwkU&kap8!z-IlrLxegW`g+7hRKeR;nxtc2n(`^u+#;k z+mAB|k*cjQz{2^W9O9K#RN(3O_Awfj8EY)5Ws3ItcoR?0^Ho$PY5wk`nulc{<(CyE zST@7p1@A4f2KTzJ2QL!el<9f8BE1?4OZB9Bvg6xupX>Db@D6b(E4sTD?gN#AQD{h2 zm{U@|E4@$sv9pUGQ6 zgUpUEY>KjcXY>=kQ2p4AY{-bE6flo+MOz>HWi5=IzblY6Vr%peABGwxjNRw7WvDO z{GIr#dBde)YB;76f)*PoFw8LyD`fg=>*qwE$JYKJ&X^o#;>UbK{>VaWBW0R7qze-I7^Yc7~*}ws$Q(g4RrT@FU=<> z6@p&t`;Oh+jL3YRc2pYrx^RGP)eEk8)io0YWvA*-u91?h98#|}*4hT~ECWsdn{X(a z)eytkmc&e|lh>o9AhCf$BQGAMDjUuy z8fzw=I5AC6Pk7eYh#fWpF6kj@QOR+fVp-#nqL!6A(jy%=LxV??affi~d6FuG2lcU( z+?E$ph)^!*80Gi|C(w*^<8Xc@!Um!@X6M&QpK!mgo6|0%du5Xc zabu0uVFA|M7Kdu6svmMDgMFw^60&B${I_fTGd-xB{PQ&5enPiF+9U?jjwZ37#@c^Q zb2e`Sfsr zOK1j&x1xtCBuu-(N8lCk@8-wA^-I6(f|Hs$as3 z-+IB3)Aoq^+w=|O;Weu|4|@9CTX`KJQc_GljJXm-FosL6v%&Iwi3x{1mNNJ>@nxz+ zD5^4b(fUcP%fk5FV!uf%!+D}V7SHoEL)`Z)z!!4syfwl2tvY_`eym;>){uF36+QYX z18tm_%=BY+r>^1tIF>#vuR<3<9UCbT9hRj+Wz`_=h>iFz!V+)iuyu7Ui3A-J^kWra z(fdF7*_InVnIaH1lBug`Id*;ohO8W{xzM2D+vVexu}6d?#4udnsNkklL2m7xfR4bz=%tJ79y{qS-U3qmS$KZ3|o zPz;S1 z)vFTOP}O=2K0VCRS0hwk{08y!Ab7{x32_42tg`xn1D~#^7!}1G+?0V+!QFjzF||D1 z)VJbUpUZGV?xv__Nys5ET3C1i8zL3vWKu%Ch+B;_$~Fjxs%HP+V;fIrv4e`2`*pm& z9vdt;f|Wu2V{Qvv%Bj#3IIh$K~&FArbddP_Rz5hb^&V98%0=Po|!0aN(R%HS5#tGzvgKC<^$(G(DfyI6jbdFs8n zD5P!cM47c9<6wF{g3qv$NvU&~zzWyyR`X5=pQA(BBea}lmRJ&2Y$JdqCK36dc?$v0 zTs{f(`>j&ckw)phok79i{t?108D>qqC)ln!gv8k)VJbKi`~t%wq~Y_B_UXg7B8FBW zuenrNo2cB*owt|)so3LoGxQQ6J#57jxJ0b+(yvO2IEn`8f=4w}T=BOz;fWyS|5c@; zlRsK0NtR`LTg=snPyzr_LqdZ?hB)OMW7Hxs0o#Ysfx#`Cc-k6o`azQod6EB4B!T0| z2B!1=)u@5GwLUe#g7lko1CsxWAH;egoNXxg4Y3$(%4M-3swYa(}ZF8#Mh^ zue_A)QE=5)y=DFAr6|D_cj2(puvq=Oh{fauQ-m!%b8OtTy+?!5 z`L-R$n(|77^!azUfDkraBWR8N*-GyS_RyK!-4*RKRfsj4g@g|jVi{*8#v>f!Z$H;s zx{CKJeKXAnw>T;OJMv>EUY;u7B3Jyg5X4Q(nNPObq+j5rA~;nwP0xylFIWCgtgvtn zj4iZw$+!R12d4D%7j*OVuOC{};1Ax3K&v^59}BCpvPvLQj{?GE8pQ%e$HlGJ2^puK zihokT)b-gLMe4E9)VYJ*HoH)c_{IH7i-K|LtznzdeZ~-dzAz#-(Z*8ribl}r;Z|os zu@W=uRbm;iA_uuPc}26vBZa)h`qAAI@7VH2iVQP5m|+a8nx;xOWqobP3`08#fl%F= z2}Y+KVwBhM)@jaKEmDJQg!m~tQHETHQ)rUB{l%zFez22XXl2^=cQ%slmG*hIgir>Y z3C`H&2mFeq%-y5uoi1EA9S;*TdcfiVTRRo8qSq|bM+`?en$;)VblS+R))0r(czeCR zeFELlgnR8yzdYq6aMz~hq0|2!IXs#^U_WZ#2x&4_K&F_nas`v!^4CSZrH6mSD5}Q1 zPG|~)xF`ceCbf!@P)_=;QP#?IoH1WGkuOKYIQ?FRWAtr2I`;()i#)QC#fDkNW&oE@6@+;DC5GC>L&><;l1SYa75{{8 zu3v7u(Bt<2ADWuvss+#m@-^f!2GPG}5>Q5`5-%$00o+?uL_9qMOs?9-gXaqU!4BNB zek2BC2%MSh%eDcURa{(Jo_`+xaSAL<4rf#xcDG=vlT}VJ?7yoJ=XVqlC`7~)os4aE zeIWyP)aqNI;vatlJQtrT`$qFK4-?8paG>CyRdlGX2X`M{9*rA8?ew@lr>T@3@};P8 zt(Px2@-Q#1*XL{GrTgk9ZeWzKzf7&1*!dbcurDA8AiA3}H>-(`>EkEL(>B6D6(N4U zZZ|NZT}-pGu}4)8gy)$c0>#nf`Bz2C;u{s4__BJOG*gb}ABUUh1GxXXZ7SXA7G>Ou5zO<+qc}`pCtL<=gqPlo&7)$b zgqaeL&WOi&WfmJNM5Phbcz%+}4YVb-lu3I;pvehMMR3f>RByB4N%~meAieMdH^l(o zt8~gvJPItYkx!Jo1)ZjXMftAUwzgixVpIr2BAQcq7@kyMJ9|cE#Pt=1MP2>PmZC4|Sf1(9eXEjO(4>0oGq>lr@w2XaF$Qi5RqFC+N#14U+%h$_r=Xx}bxG0F_VtB$0aVhl@1w;FSrahUc_~j<< zR=tV@?Ei5T%7hPiFv^nDl;*Nk)@^c+w?21 zJE@fB^p(!MZvv)+l#I-`I>&Zu_|2Qo=e>p%d=KXz&W2JcIsq~7TG! 
z%~bVrxSV`zZ6#6go$Ln(?Tyw(&|mJ$I*4Xx6@B#HNzbnfOsL0djwS~yYtpJvxqob{ zrdFg|`u*8=*#>B$meBxQ50Ox#Oqv4IAbGwNP*$>{SUtdClE@>2IaxKjNBH7>RYrll zrO5O|@@Szem~@$(<(39dl*u*bgte@;H64_OSKzB&ZQ&_8AyU=64aTcpc-EsX=|Pc+vDo z=@(dqT44xx$vne;I|K z*<9aSM}=XN0b$~U?|@*D1uw1tZ4(kTu5-1UvsIbaT|>yX*e~sMRKzBJ9O2>N3t}DM zUpA#)`lHz&^l@>}PJj3@^~)FP?P{QQ^7jRKfz*1bCwaswuGVfw2A);u zso|H!!3jzZrT2_o_yP6f!i~c38;vRnV>e={*PM3DDBV|;uSLrER5qw_#b-u+^-?nE z9jz0WwO(&o&|M*3wY#)%`7E$Ldm|-7=E08aB+kP@r499F#d!%IGW=Yz2w|COT$mtG!(ebFR!VqgiLc<%O5uvM)5W| zwqI40$%lJUS<2$fm|2Qc`La!myPAu6@U{nBmswqx1XKw$eNtvppzaH}Xfq4bnz;d* zIZ$MQlkN$21CN~-k`}K8d2(^-o6jGy@!xgIKUd`n6u8dY^n(G3+W4+L)|ln=*3*LX zBHh88k7xI9`sG$s4DItGpBvs#Wrz9*6A%zAxO#%g9@9WRcnc5QglS@~RsKI5-T+;* zZ7}(CKaG-DhKbE)C*<(0p(go91mJ_f7=N7tT7J!2(KmxYP;Udqlub1ZPTTq5Z{PQN zGCu>2XgYsjxFeRRh>=I9(^Ksy%fmwohgOd|Sr@E(*&n`8THt+tD~n-m%Je~F)?Y!< zpa%ZwfCxTww8~>98S#zG=zyjN$R^F=-Nrn0I@)!sCiU=y__NDv&phi{q{t%`pYJ zY{>Au_J&F0wKI8QS-;sE?ni8wAYW9{L#T-(Tui>dx_VQVX%QD9O;pZ`-C)_WRnA+s zpc@@5%e?1shO348uJB)?md zHyNQ}!dD2x5dzLcBRJi7S^*^NyH0sU<;(_|OIb^lS1pM=(;_B7i!lMQ32l`f_9|1ihcCWkrv<-RR05RMt%QqD}~2qI@m z)f5T0sp_p%Zl#&*r8&SrL& zr>63%P{swp*IBdoAIfV6mA1((?7OR_|yVsxq>`1mbS3O{3?V88{@@AZTr_^QrEWt3Hh=@*2Xt*cx0@r1dd6BN^k0O8C6 zoi$mnY|tI|=vo!*7&IIT^3+sDof})4rD%2@W^u(nIl1*4YRbP;V@1cr?N4M@Mj9 zq~abImACMX>aJShE1zzGKY$EdH2V+AqI)5BqaUCwjg?}pM7DG@40Rc3w{ZOe950qbb(7s)@dlPb?SzK)C7R_FIp= zxoSeIPzgz=VmzENMn~K0>)Wj&;^V6u^c$u)!%Q}(Je|wbzigsxrCCES9yPHEi>i3! zu{F}ifvT$9f;q&i(v8Q0&<8;iFQ~dn7I8;ZnT-5?>0*y-@TLXutv^+RRpLNYx@jB$ zKMr(w52x~mDU6hms{qVS{6*#~SgOq$l4&_Z&yR;?Ad;hi8k!0%6&28Y4~OR%lLkVc z%ObwVloNt%Voa-wMBzKu5U@M`a!&u;%9Y<}LFW81&~5ee)tjwB&87N5rz@9k>gdE~ z`7K8u9K3Fw2aYtq_1c}FO*^2!eDL9A(1%b4*CD^jHw*TTSw3m(KI2AD9-iu&i*c%U zzdaw=OXajvbv}5BRgGc1e5>B*@cWc)fUp*d>*=rT8zXHsA z#q3nk&|&-DcdZJcUN$=`KovZbM8j|EoaCkSyO`rq@5$7(J(0XGTvO?#sTJwVuvD=qDfW=YUBatv2m8ME%uo_+1)UV0VyXZyS0bf zpZBpL{^_?Tb@$`=dv9SSp0RmlA=p?_xd=gV<{sI1*@VRpk3Z=P}S*$K3Vm>tb2 z>m`aWtZHm8$i&if-SVJ%Y!oAflJp;rFo*)~*1?$XP{y^=+`p?H|0ExvRRd(F4QI#i z8XQ{#_MNG;3=2B=<6k##yZIik7d_hSMqcjpjeYLkG95|{=4Rd5-g-5But>Ns{AwQX zrYWiJobE&_c?Z62_>%4N4bXtg***%tKs-v;%{``nBAc+p8aGF^k}zX$Z@++R_aJ4J z!5KGvYdL9JUkM=-tps7T|4flaIC(NA%uu-#G|!*HsFFJ#l8de61zeeEkxovg(bzB+ZambAwy+GyyZi5h`lP8>HO;X8dOPxMaj zr(dqDveiWfQV2|0hm>IZF|hy=qkGTwK+fXUCYJKCYy_G_K6?vd?P)~8cCvV)X`l_T zaScAPB*fHCmlkNgtx_S_6=N5r9cRYGrEDC z3qFHOdZpxH^ZGhQ3c{&}$UF~F7>=81{|5vHgg!Y?gc)o|cfGH)1QdfOTTVY;`6H14 z3En}!LC{#d;<1y9b4XrbjGKqY2#5bCnv-w8f4QG*yFYQ!;?S?jwF{J@6c;H zR@|K>VO7OFzX1lHN17;pPfD=x@!79=-T#>c>itPoyZ0|I-`1iYj3!%g`ncAm8VIm= zFd;IgfgydiDYcV-=j!PkZKy0Gn*6F8UX36j+0~5{0T$WZSa9^d2;ofelL90~tZA}U zK*isW&1~7RQS&{ubeSVsE+ zdckPJg!u7l&^c)jO*wOc9!9XDCf(=n3fX4!jOAGqXAOkfFhr19p^b^`5qupit{*eT zi&9ju#u3(lpeZMNF)T0bV%ImP|3}>V;~cX8v0Ctg^>RmATteWMEk}s;5ef)X=)z#I z!s~buV0_>7-H%5-GFv1{M<^05+^zqQuCEM>D&G2~8HP@Qp%G~X=`H~Ufgu!-8brEN zx=Ua{lu(fpfdPr3yCnq#MHsrfyW#Hf#C_j;&izC@qrmL{UVE)yt?Y|e)syAC(n42g*E-o%>KyrEZMXzd=aGH>K%p8VwTF(iDnopWk6izfp zy4C0`8YnEV}Z2gvrjz)+XqK@puE&-%Q`P=Rzrgip8x?Ocx~p<)6>+)MK&&^S@0WBAZn z1X3l-1(#mJ6f3sICq+z1R)I6p%@5wTHhY!DgT&I)UW@@D%*6 ztVLeEr2pN3NXcA$dO*VM#ZehDnk>&mb8;|L8!`&-9fyd6_h;IyO+=l-3kx*bo(8D# zt0?D2$xuz87I#51e~B|q7TgFmtN{ZHC?bwSK-_|eQ$-pvEIDQSK8N@+ecyBw!t4>z z=b#B^jg#ilQzZY9$uJ-NOfeZKaVl%N^s$=)V;5Y#ylw-14ophj|9sy6yH^G7It}z) zH!$WnC~J?$GIQE|Y6uMS_Nq6VEjK?VCBSnF8FpzOJ|F=b3F%XTk+!*ReAzur=aUZ` zXXz%V`w2l)z}zTF3M}Su&pb?t_m*}7@Pe9apx@koOYlH-uX|@@#Yze&q+5We3aLaQ z@m)=AE_v2zb7!X-_+d?6m)kW)fJz))SyhOiayQQZc62y~Nr6Nj=+%Dq`1Rx@eb>J` zO)alM*id#W`hti1CzB7&>YeG&r?{Yu0yeny*} zAYfhT7O^Mw0-TI{7CFNGPm1+lmz3`@a6S2YQUi9=kARK$4$eWBlW0UtQ+of=<#0ma 
zD-jWqlvMnnODaehDUqT7JfYd1C?TGl>`^x9_%nSIkgI^4I|nW zg9)RdpF8*kmz3O>BNdIu*^fH-R@yv) z&P|S0+1D7i6%`Gh@|tw?I$utm+}lHD$~o6mJWsIQ+3?Zl9om;>sQ*{X<(GC$I?dLlbMBssuj_F50#P8r5hn^sNOuBRAVb(o>EQ%=2w{b9?RBg<`mX6} zG81>QbO=(dC&j&dqb}wF`hEz|(*nTpFC}c7(blYomT!e-?kI)*90(-3-@T`?LrcX& zM;By3>4T`4Rf&&RR&f5GBT~f zJy3pqXl}5S5OkH(w9M};SnY|t4qKfa30MAj2f0nV27A2nCHB5A<|!iW z-<4%a1*5Rq<5#F?q>r$WQZ5B8z~kRk*0|Do@T?~i8z7>raJzT>2@U9DW!B|oUow&R zYpa`oJ+vG6VzU^m(iuYn+z<*NZ#~Niaj(Fri}egPc@1W~4hW{d*HK#lARFj^eC%OV zUV#nJ^o-jd>%RR~VzTwGo=)b6|I9iQaImek{kl(=EHo+d>s-5$Q|;k4WGFWr@w7l0 zf)BLAB`@Y3tNx*Cn429FrWbxUgt+S%T1-z%Y4aybF&9upy?Du3O`T&&i|s!4xHL-4 zDlEQ@SkxvwTiPN`#ndZ4OX8d43-e=}1)EUW+yYRTkczo@3=PSZ>DYhF5E5i_6ZhcQ zSHJ42&(RXi^IQ5|UF$O-&9iIfYDLTswh$*Yh_o+l8U+yV0fnoDk9HQVY!PxPx`}Sf z$@&)$@`7$wgQ_0{?#JpfHvit7xfreS4GwLK+1RlAR;n7YzR+3nTw)=by{Ni+^ck04 zBeE*YHwispq-16~0}Jf3#oT~h(7$40V~cZgI_}pkQCMV!Nz#Z0A+s&&59cTseFTQu z4y~1Xc}@4hEsV$aAH13r9NJGV_3t!Ar?R|l7;*eJ+6Z_-SX9NQorjn3n9mEW`Q_^V zwD;cspu1N*P36h+d3a);hq_X1>yi-AI6Q)FqKq6~>;NL*n>Ou!F%e`{g&H zR6`BZh(C||0!rGK6|W#n#LIr(VVDWh;fDtT_^xRMw>%GhNvAC%YO|;hCfCtM~f6C)Scu?Q0 zCf;f5dA)??TuH%t*N>Mb9}`!W0}1mMC8S+2wjt!|cxwFQi1Ifu5f!)wPKF$3=ri#= zoLSC{7h0(u*z0diSX+m1Rb1}wuz#13y_!PJKkQ&^Bfnwjh6{cddaq_3!x`bMhxB28 z27Oh~&ZhqID;Qjskk}MImcKXtJE$ul{t8Qd5kmJ*dCWtcUuXL7i5d2n5S?TtaU6@X zk?$sVwMps_r;a~p_jK5XdRT}i zc^3T-01o|!>fz8X88rLXK|pyIbtP$LXd>6FbX7J;RIfz9Kw-J>31|8%H0jX9A7=Ny zKopO!tE)@8bst!i-qt~hp2TZBpL$*mmNcO4K0^u-KhR@Bso(VGR#j<0@$9Nzjo(^! z)YeLxbM?p<$^ zr5qsS&C~GYuq}XomYIj2m>5?6nV8{Y;?cc$u=C>M)p_^45pKV=^sELzkn$lGr*W8) zA{i%%dnPYrcG&?1Yv>ZVx!%zr`Vx}!vtF<<5&IDK5{3FDn8qKiGE4?g-d`LbHlxva zhsA8ABZY?#H5jVQD_ye^b0|{T#WRW_|0$r-BAfGF`l1{=GE~Mnw-a_+Am1%}_N9L{ z|9kLvshNF}&GmKHWe_&8Ugpy~;jnMYd^GNzeE@awY^=h64x4!o*6AS8beO(-r1Ext z-ux2q7TMnJk>0|9mBt;jKl4f6SJu=_2Aob?Ghd>*8_O*I6h1ovrprl$HGrBP90FR` zPszJMx0hZgkQhw05cfIv>IMcAxT!M&xN}QdnnOG+7S;%b~^oTVke)nNU_j`Pn-~Mb%J~rU<)D{vK`J z%`x1*tWky5Fb&3aN1>B4muS-TkAj={e~8hi5f(`CxIDt*<%pz8eX+=>Eq@2&XlNb( zjHOIYfatqNn|M{+XRp)4iWwOk_If-C=Vq4xX~iF0>QShky@s+UV7apf)Wgq>ptce7 zSU*UYd=6DiMUTj>J}BL$tj7GO$Pg=p<&$Y$b)Vtn6&h*13WA>&2)$D3L}Qqf<(9Gd z?9Io;&f~vw)r)C^P#Ja>X?Vx?&}+@M(8;$ohK;K-yRO@Pf7>8Zz+MWpL9}6~r@#W? zKg_S|&Ac3jy#eK-bZr?xl^NB!Ne&{1ebg5fd*w>|!|~-*~L_rdvWJ_*!9W z(OW;iU$hntz{%NRu0gS&7r3$N0_;O!5(aT~l_R0KmNb04H5hoSFz%CmA8|5%sLdj7 z(_DQSP%HiKzI~eSAT41YwkQapWiAg9jN6)$6SZk#0Ja=nqnMG+!wQg$Xg_auAoZ-@ z@NSZ|vQGqH<|`}rk9#brQt%Ac!uBGvuAfIxD_-a7Gn|nq;J=7!$UV>y=6d#a8Uf>$FBXA_=1tJvn?L;e!gAn~NcGE3 zu*GGkU^jzBKtMoAz&UUQcefeQY}fXb`_JR9go2&eF&Uu$?OH5w1X9S#(bzrO`=;r) z^LzSdJTP#Uv6Gwp)7Tuc4jRhE>csmts=N46g07>JN!V4|Y8|ym>#*ca(FZT-7yLc_ zr+RZ=0&Dhr6-y-@ksA(sZFEY++C#n}=2rk-x1v-AZJ}jN=yR<3HRyB?c#a-%uNTPz zez+;soX9ANq3T2j=L#y7^01sg?}|Yi^7-!o3z+pN#PpN}pI*#36Yvwh5cX%bqat<4 z@?_T=gJnnFBODcb0SeP#2w}q!`>Od-#B~~d%id7J+yOpS58<7LHz9&*7X-w=0;+&N zw)*>o;}2+>BjA?Zbjgnr_r7uvErcr4nqQIRt3?Qf(d8zx0gB*iVC!jh?N?0qCjRz* zuXz7T53X84NT1~M|Cf4x1ba;m;N-6pN3NG1%>m=xYBr z9xJehX&d0MPpycmjXt}y6P%>fG0H;IBow6rL|2h-O=mvh65ov(2YbUkW&H3u&uqt!YwUVs! 
zZ3sRS5{SYcM(-R$0bi_Pz6D}&r=J_1x6`}^P*)}^lYhyR)(Hf95^00gmJP#2Gel$M zh?Dz#gP}x8vTr?6& zPDRgQaHdL$&of@NO^0P`qpfq@l(Mm2P36hWFbzd`yauoY;$su~3o0d8sylrJYfqxe zT!)dk)nz4F4=hau2?l5pokBr71hmcg+2R1hl@H6i-Hro^iaK1vP$KuIlw29xzlYHN z0ycq6P~76|%-Q#7e32JX4MhX0`JVAd^4Y(3+v3ZXYT=5x4^wDs1jB_m-+^`pRI)>7 zTRLs#5rDI*41t$3N)Hx9nnj!ti+VuID1hEJnw@^|6p63&4>jxmBP)P8qji+dP_x#A zb6|JPcxlkJ{blEaK=Uv}jV!Q+K>BmrXly|>urE7DaY?M@9KK%-ykv9Rv(zs=9f0Bf zjx6ZhjB&u6zzf_}fh(u(!^(HYZS&Jt_!)JlU8d_jy~a#lkEb>taroB>nWGuz0D#dt zQAsrC15csJZ@@tLtBO}Tf$%}&(C+KP%LPyAl(6i009GGQ2mqYz_PL1ZK-M%MBp^J) z>90|lO`w#3<#VFuQG;fLT5_Bw$2T3iX{_Wvt4g7{gGUbp1R6t9qS7557C@|@$qTv` zq_H66mHf3jxxFGhA!`)jQlDHI^?2J9`6lf5t*ynauZ0SAY1^Ya`9z2Uz$-@rr-Dl9 z3x!l`IyyGfJzLf*RKmO|FnQ~fv~RkowdXjF;3O`EYI|t5VgTTl8Ocr3P16afYsaE> zSOr9=Z6_rE#888@Sl*}*;|bxRC3luKFKNx;+M7{oogEPNYo<9^bXXXWy-CGW0d~sL z3Nt5)6hk#C$bbPk@$&f)f~8TLu~Nvl!~xVu&gU%L z{2hmS3d-K}kiG%Pbo38$aOnTJ2-vj?fL3|-@DPRyhqIIWU%or-cK_J;fJtkBFpN){ zbLJ;78C&_C|1gtv%(^>9sh7pBuOq4Xx8lG8DYMjYlQ(Te@58d->@uS zJ0XPLFCw?ELn8uM4By;|YpuTh|~TE$U)5^oF%t4ZJPswN7=v&6uzh!8+B&v}Au!iSp4 z?Sk(E*$`{GitquOMTAmxjLFn8`1zdriCOk{fV$P zSL)Ibn|@_tVp8qB|C02z08Sb2r+2LWQtwY+qp^i+5n*8z5PUM%cZUaxN@`=l7kQv1 zcU>7J=hS>|W$)~KD$otIQ<_poLrNyp9WDev|2~VqmDKksA_h$X^e-LIF$6O4>X%f; z#DH5K;E5wX%m>Jvpe$}^(aS=&f2lvHdNJbmzU_ZgT_NWR7_n!1c>+b`X87>F`u?qL z$TSFd&ibrQyWkzYGKxJ)`4&c1nB+!CRnm%@Eh6Jd4z;vdM*K9^f`ca|#dRA(ow4lw zLU4ZBJZ=NLn7&!+>quc)XAZwCjrrEq4v)A}p%;>M z`xwM(fL`<&_egFy*~tLFdTE(MR57un&b6JV_qyV_W*~Y6dqST_xTPrmSfZJ^4~ZC) zqV+1ij-IEwOwFiLqK$`BF&c~`MbfC_@Z$C9U>8-w8^aGbr@k09;JH%)h#24@EO<~< zeZdyjPXDbVP$CqfJdec-QWZQHMuh;{=V}+6<4eD;u_yTd>rMp@5#WS{puMSLrctIn~P0&F&z-f{Sc2QUi#qix5B~$FTxE2rl2cVqbA=fM-;N{)2B~c z-zl{!(G71G^7{!Z2#q|y=xd17%purf$;1K?5~mhv(`7q;r^X^nJYV!*tOO}Aa!a^hXXtLGgM$hAPU(Uw($(V|cOl~yCZ&dzcX z##{TyQF)wVRXIZsDLvjW!?bB=|E*z^Wn_58k*VK|Dwws3Yv~0RTuso*PLC&BD_b(R zKC4}X=d;g8R+4JZ|Js_@Jw+o{R~rNKtH2iE8pyMI=)s2gkn|Mf>*BJ=+;XPM#G@`w zJ0pHN1oMHf0Mt5!1tOpEvQ z4CQa^B7?{ae1 zi8I~fo&5{1#uk&hBKhtT=x@h=KqD(8m{*ruw<_%4+pORt;wlWWmkH+bxeqHR@UaC7pcrN$YTp@q0UAFytNH{uL z_mBhRs8)+Pkp$3e8yih+vLLzZ+Yi-3BwbNBUn-8T7Ovi8uobc_IQ~S1F89N$}2Zl*L{-HYWq8YXi>ze zKKUoSh?LP#Q&5JcyTD}GIhFEIuv z9BQLk3|fcqI%pDM8DfuxssBvyrX}n{m3+Q_PiC%@Jvr$njwAJfz7z^&XE2JAlK#;C zJeD##XPqmCJ`|F?o!UyCmQ#$Nfgz{s4{2RdH%xxn@XGMhbdWmnG)_1PASgL5)*-Rb{0& zFc8xdc?blT0o%%*M{I0tkp5R?LG^G}@(g@z9*6ZgH1W|5%gQM#>hIugY(YZ|Msldp zvIbE=O?BtVbizN*i76)#fpT|3>|T9E2ywV*SDAotJ+(!SAwzcj@Bs8%C2Tm8nB=2Z zscohhKv7*mGW4A=BSj7h{c#Smb$ZQ+lKXVRB~r}InJ>-xd~vNx@-k1#XckCOjUgEJ zzEq`#{%$P6*q&-J7fB&EKsFYT%OP=3gr`h_)UBVMuH{esPbgDQ@Iwv*gXpo+AwZ!_ zmHJALoBCOax+}czt0e9`@ksN)1*<1ES`?=YyKbeqTnM{`ziqAHN&ul)|L_(QDTF|) zJuT`hWGXK(q=L>10vU`R(oWS{{4m=JGPNE7~kjt zLYPjecJV{K()LK_*%!ra9^M}h_PaB8_VXHUm@3VrCz@!o-+0--@$`m<#@xZWe_`{C&m6a;T7sQbh#4;0A<0Sips)2s(IIb^vJi zgS22KxA!h@cWzT0<&j}+AzF-w_oZktBd>-IjaY7PQhu5Ur7%R?V&+M@-j<1K8c$|zpm=(A6mUC#+FgW; z&CLO={MK!0vhkcPZtqda zWV+Grq5HK_w}sbDVE#lW?@tA29xCmz@9^M$MU|M}UIc6EPliJ#sq&RpM}CgQ0)o}^ z_xXXFKj(`$fTk}}0rFo&M8KbX2EmvhtG?VUdW(LDil9{~Rr*hV>i?}3tps0(`b0F7 z`|ibA52KJw<|O`7uNxV`$)@2_7vxKGw2CPfFRg4aD<8-wQbgZZSnPVpclF@(1TJ66 z{kxyMq}4bC_tp&uJ8M#2rlFbvbe94-p5+*}P2U8>3Ag}}^BYO5jb1jZdnqJby5*N8 zx(cy$%}(hZVbZr`&U0McRZtErdc8bodgwLFq-ftzb*L8!GwAhMna!iak1u4y|LMBZ z!$D!~Qynp8UT(f_l(2YW`#y@01Ysa*q11hHI!3!=kVIbhE_nrzL$5~LbWAe6#RZx#xH+5 zbPg;cd}li-ugU7X7f?i3DhfM7r4ECNNxuv%l~xD3#0ISGsSj_tMklXYPbg(53w4`K zFbPIl^emNQo7q=IJe%=&Xe29X`d^AS5UK&#{uaL#y1#$|1B?OmL5=2Lpm2o+lW)yk zTaNB9kJx!02DNCE%F)^v=&k^HN{0!k;>XkVZ!?5J3&LZ}pC~}XScy0E*s2UrW}gV$ zsdiEcUOyV@YD0Dt{n@XRS(BClmnMAE?NSj6i6a-b&oW`}bm?*-f^%cjnSiUfiQJyz 
literal 0
HcmV?d00001

diff --git a/recipes/use_cases/end2end-recipes/raft/images/LLM_score_comparison.png b/recipes/use_cases/end2end-recipes/raft/images/LLM_score_comparison.png
new file mode 100644
index 0000000000000000000000000000000000000000..84027b0daf0421f71af883333d582261e69cda5a
GIT binary patch
literal 290356

    wfwhyy>8|sJ*{Wfe%%m8 zd?}iTpGU99%``p5Xq&dbt8dfR=3DRWX@OtU9Xam0FN<28*R!&kBO>u^&pg1sS7Nhu z1z_lJyn(IDAR`e<0vm*UilBD3-20ABT_i;KCudBT0u_DqO2O8)8&oRixANPoPy{xo zvlb1?#p(S_Z}6MaOtS;}C(nLz2Y$(Eg8^x!lEJ<#;6`IT@$caU)8)`mk(M06A*2bb z!&XGBxtEmQ25n8YZ+?%lt@N3{9It;Oz9n`StmwC*E9m7i>BqnK|BX-ZNC;&AS(w!0 zVS{!qt(pR{X*{DL*vLM%@awBr2p((=+Bu{QTUb~-mw;tQ?cax8ad3udu#&z&YO-q3 z{7J6(8B)Nq!Jx$|YltLxeth?Yw5-^XUFZ4DHk}}P;S>HQ{YZ1tfdaQ<_G3wLCkHOV zf`#JH$icR8yIx$ha~kgRdX~=A!O20&S-0=P*!nTsqdHnDu589sk8>+7$xH4JZ*hQ+ z-YH=}tq_a)(3mNrA+D0(nagg|KHB;0jiu|N4PZf1m5?VlK3WzlJv2v_>{fed{=jg+ zkBrmp?Sn|h+j2X<_xxwgdkf71@pN&C2;QS1ED zlA$HIqg^DFp7ioqV3*S_jnn1jNbfv5o%_y0%XTjN?#VVJd;~)+{3#m4c*VCTBbc4@ z^Zdh{jI(OeLq4uiCvoHB3$g+}wQp@q%SF+Sm1ko7V1pkaG54D*={C|5di5K`iixnL zMiUr;hs76ZzWKZ%Pc%h9ycGoGEILLg^~_yhS{XnL6l^XrasxRRZnsG5T_j=NcdHth!^;;Up;bk zCH0GG(Qpf`dh52ENv1mYb;KH@0W(vW9^;!H7e4{Nv7D=y$tyou1=VMzRUrxMwysSk z{uuJ(PQybB+)@G!R?j;`d(vmM=d$c*_|ZmBj27`Wy0X5mP-FKGWC{^7o6h)=bvJlk z(M;iE6axqOzJKcsOcko4Mf&GvQeO)pn%@rh&mUmqGF(O7M$Rc_yb7)MwitINsO~ZT z76#dN5T&7k>a> zx{fzNH#an4Fnm^x!H1#KHPkSyl2~8QXLcf3oM442dN;9`!7^^$-q__?1A<>%d7txH zjho@E{YS3%dfG$$u{vk$hrG_~#I;7n6+W?*=|*}zH(Q(Q*Y|#xL%hW>`#)@5WmHsc z*On0J5G9mGx&@>|S|kLd8>G8)2t_~|LAqOd=mtsY?vxt3XOI~94*KZ(zR&vBnjdFz zm@_ly?0a83u4~`BRcbjyw@fGc_n|hERjnuZPXr`@UC3?lE6gg(6(3<-5fP4!;*Tx3 zT37G(rf6ZP^zc`ofjlaHpr%@WE+d_qr|X%b|Hx(1{XUihcdUx6IhS}>OOfGJ@?+8c z&nXXZUJG>8e;u~6LoJ%BjLPdXqFOgYx4Elna@W7=s3z6r$0U>$s|&gi9y$^I9AH? zUX=?&Ia_mPC`BV}*Sb_2kkkU+7O3cV-(rsRS|} zs*+lsiyt^_&Q-)@xr4w}keqX$+au>+B$S*4_W2gZ_r=GfhRi6WQkt)#xwu9A(Z|A; zoqw?H_RG{TVgB;SQLUBz%acgGw2mB45x-JQ{3=)cHB$=QN5~;{efJ~2gd~{i)OqoA zI}Ch){5@TYR!2|@bihf&)&2xq|CAkZk1!CR_ZgO$wsG@aZ#zv+UG~=~{TXu@sxSJm zPbGBs{0lc~rqwCj$@CrFbmF!W@3?Cy4E~}*Ke5LK{Z0Pq&$VSxQ?$p47_}zrLhpOf zoiF#bgzFB|+P9U>3{PXM`_#|bFIcqtlY7nR8{X13f{O_GM*O-Q2S{L&dl<9%m5oh143E`)kCB6Lc)wo6 zx^O)F>0Ufo8oaCd>_M=7VciVPUaLtwhQrMu>s1=6b^BkBard=R{Re57zKA6${TSuq zfLPCUw||ymrWTHLjFF}nuZ??xKlUh7_RD0|+rkE+og&Mr+nHpNm>;7gM0+4aPsH-* zCf{TZA%~vl1a8%HXPu(P)RVsASlvk<4Hd@NY5EHD@VyB!FFqjyc`4wQR?z&DHWe;6 z8fnNrna@E8zwy`ww={z*LkKD~^sIAEh5Yw8@v%(4aoPcW`ry1^q*9SLw;KT{>EJlp zk~x*4M*}f$J(xu;-^y?@NWm-NV68R8pD6KoxQ5i6M&=p~TFA{U)uCy++NO;vadeW? 
z4r%!s$WS`MB9TbBzECbx09QLAM3xfTU?EZY>+tMmq z-_(5})0%E>FDsXiTq-V|Z&D%5GT{|y^Zc~}Ek0K2+}t%Dt~Uo>jaD)NhWJ_4%fqKq zbY2??j8U+eI^`oeyl9JNu;{4!#gv|zvw1vd#qaND3cl{1u%7aBEca`E3L)2pl>-R*d|6Pj$ zsB@R^A01A<-EW>?<-GC~j+Zp&x~zD?ccFClYK+_R*sYRH^X-KN7|kb%a2%l5A>TvT z=j=hs;Enx@80;%(!5QnfL>e3KA}$_WX-F39Y+rH0$SbV$p+z!ny51gyx`RvIH{7m@ za*n#+S&cqZWiU{MEO8l9-{`_N_F7GEqZY|&snQjnCTIwUhO}+Ty4Igmepr;PbiA#Y zrcu8EB?H#E!vxx=o`|eSPjQ+{U+K{=7iqWU#!x`}>%57Ikw0gR(+dg5cR4(yXJk&2 zSYTxo3~U#dRBtH*8mlmyHl|3_NAWf}Q$wa9pMu8+IE8$p4SxknT&FV9K+S-E(piVt z_)sqFRkeuAg~qgFpGo86a4Z|{1`8u9LBhq6RAdA)JoCfPWkYk_gcufX4O^c=WgSN^ zO~zNj)8mQagub%|-zwLMpv(DCSO-w)D&L;8Sr@WI@X7hr{X>w?NPc|iQ+Yz9%G2K* zfH<8r?^Te~5Z-NK<*O$j?0ODyg}dBZS>x(B_@mFVE{XG8dDM(C%BTLTbYAl+Zj`Om?MOh$e%{d#@`%b>2yGcv-oy_vB8xK{#PdzzeHUw!ZI z?RWs&YLfO z?cp^v&8Zk+_)ab$A=(-Gs|)!1Vk>Nj7qqlwrNM*iE%CF2j79^_pN?`Hfs~VT^C_?`6vF$3hqwA;|01S~;G<%pWdjNd3*{k_K6$d2iN_v~ zNPm#}ql&O`)8`%TZD;HQY@*o?){$ahe=RP6bDag$n+A!zdH^;dET5IUHqK|IVzHis zUoh^SzLKVmRc_&Yb_=YOV8;MY3eRP-Vnznl!V-)Y3oEs?g5DZq1?TUe>1z$rQptf7 z!vJF>k_?(}9jm8RvlHE$g346=rpwNtU7JZiCN-sOFenFjY9@&e(L2N8ecwWczAz8` z+*BBrIDInTU@`s!B}qW`N^Cvl9kdk^LGz-fXjng4DM1uq>X8N^!%u?&ni{u`Pu6rz zL@Alpg=UGX-%}w;X4H%7%E~X)J)W=jM$!+5&o$^BCBZl0_p90_%6Sj->5?oQTQ0B0 zMhn$0_X~|XlUi)&8-PJoKznKM_^ZRF0#A}!M$72=ZhA@pnwcrfC$nAVFfq65u zt@VzFlk)Rwc6-4qh|pow*oaCIti(W=U?bnscbEqeoSO}FhrtWf(I(WV6evLZ$N)9r zs;863r0B~eNCnCCuh8eWw?uu8*u~Tr+RfI>x>`ZV8>hGA7%TNm!NI!S>95aU@{l+EzlC8$n_CYUM%nPkA8QO*6NxPtDb>C$wgF;{4& zm|BWlj;!#Q!uuUUN@_0$|-5w3x&s%@>{Q`Z)5+4)%(FE#9eh!b^e< zd-3n}3GQq#SBY4~!g5J4Daz?Z=<~uZ2Ie%fZD&+#2FE&r^ABFIA!t?7WU76aq~X7>m9g1&j%W+8-C)a8$p# z&N!oliO8ykGi(%3Bkr6Ih`WGfxDA9Dq*y(6vcCjp-tcjOORMdoT6DX=lsT=eF!lVA zeR9)PH|khkp~{d$++*Y+Rnk)Zn%MJOC)Z_8PPD8QN<_kD`I&hLDu!yQ7u?^o|Na9W zlH&Z`A(AWxXZrh(^LpN(zSe|l{lusjSx&E5zJ;%$(^yfb(rFB+(&R9R1w^*0?||t| zG*dlA6hDQtE0?mE${0PP=`K)IoO3VuW-r_WdioZ%o&!Yx*>eCgJY`w-(`LTLTiy4w&%dy_Xo~ilqHAU9;)4e=X zKCQ|i1@6>05s7K7GP7){DyN%F#xQlCb|ZXfACZ??F+y}-gW6$;$Tk`j3WLq`cdb!S z-~y~=Z!xE~k}7bCT4kO=@oY;1J?ZFJV2!00J=t$Mn%u*!t)0yLPzBsw|NG&p+o%>& zf!eorXJ`almFUsy%9?Kjj#a3z!SwlkMw)NW{6o<#bX} zqiOP-(6Za0wLmA}Y=5TuLD0vi*j~cLOQENf|BA-i?oqzEBi~IbQ{-)jpwIntpMiz* zxI}u9JS_U}GXCdaMH_J^=2^4)YpQ&*-R3jUbF-5oLy#8YZdamC#5Yb-Ay)6W9^$6t zws`1;BdfG!H*2voiT@qkA_e*#7fGQEdWJC?mo>OR zf&1j4Y`kK_2isJ*N?-7>3pG;U&(Cd}pTG2NqmOqhX{B5mLmmOzK9*c^dWijA*c9Gq z%nSf#J(O`Gk*F#dt6NZQJ=}+vUqs)z8~Efdxu5CEvjPG_5XUHgQ==D!p0U|S>AP>w z{(2=_9O_)r(y_AI|%dqq2KpqtJCP;{PwSKG zZ+U(92_0D<-sQki57~Vtzf;@~_>*)3QT;Q2-qIZwm^YiMdcVh{dcTkGarQNP&ueaX z>XP8+ODo}>NX+>Qn(MveWT@%!lE_d>5%!iih%-d!MQTs#VUQ;W%|EFH<`YDvAK*JB z%UDx%Ja!WIgDbJxiti?hWm*~Lo;PBHCYv;o@4iit4Bh}Tc}C%SrOM^{>_%yp7tfB5 zqkD%sye}KF1x89U&$mw=EWS%!oWg*(BZBjC-!C!xqydCAt30&sr2WwCNj@?_k~1TI zjQT53H25rP=2q5=hk70h&_$Cx08fza-grTMHQ!k1@*3R?VesitUzWZ;XCG|=F>&Yn zqz;z=uA-!~R)H8H7qICCg$ z`f9M^J_R1YPq{*FUnCFtFHf>Ui0G)g^m=c+iPsAqf<4+R5`ASi{Ryq}z^uMNapoM^ z$N}L^jJL3lKfK{yK^HjSc9Zb1!gp_GYXg!NY{>d!PR+AkIOm^))Bok`l_ZPhOd!g| zENgAd4=qgUD*&W$lhtWdc=XnXmc}o`N`3eoKL?J7y_dXW1`;Fu@B!L8OSUT9`0~b- zgXiZhTNS>RONjCNux>zH_dTpY-^#2KRIq6bFi%X=ACvZSII4))SZ$k697qiNQIE{c z<}~I_Ug>9sOD~OvpK@iydwy-W$gxejLy&@+Qi2ei(5lwGb0WgTnKCS_-*@wyBn8cj zl#0((uL4%H+Qk`?x}IfUC?Qj%vWU?1)iV_6)w+GXzZ>3r4XR(~aa9{#zjg;iDzJX3 zaI@Y4O=NDi#Lo)kWCi)$SC(Ik74%7zsZ;fnRE#F_!`||EDOQ0?b8p?0%}~JvA*B2e z?5SyaX8Uan{7-Xn5w^0yYWvidl-gdYXrq>+ef@c+BTb`IO9Pau(gBmQl=4^nCEl1* z#!B%K8O~<$U{JSg+;^b)jeCT=w1z&69>plK#@~mSLlFC~1PM0)`7X;3*Z-{BnagI+ z+}gB3A1=e?lNPzA@t=ag&lEoZ;kOxp_;m$>_fzEG1p(w|2zap5o{yec9~|(mwDoeH zCp!(F8~GF|FAsboGi9q~%nTKDbJS+n33ei%zEE_vOZ$GV&-(Lc;j1M(fEVchQdZQZ 
z`-(#H{eO8EUjGMmEuqai**YFYZuj$3Mj!2L@Ha&ZN5r%YutL2~UzhS&p1Uf{ruiDT zP61uWX#GV8?==s&$GW1)3w#a2Q-wSId-Tj}+ z=k?L>Z^>dwhM9Pe$Z2>g#%lmaax2@nHvs^}|#c1pxViHHRKbsi5znaXMvVIt@pmj8+601Ohmb4S0M zWIfOB((_e)aU*onQ8I)A!uhoM65>~u;iw<<^&KpINnh^Rqaf&<=YKi5q|nmZ2-As+a&o^U(CUA zPv;$_iGm$fRECqi2C49OT6Tx=QY$3bSD(OdBD0dP!iKnTv(clOy4L$=Hpg#>MW)K9 z!PAwmTj~3A%JVCS-=+5pL(Z&x?B{8Zzn{)-^x(`ukk>5@FUAhP2Ag6)%I*3ouLYAA z$3Y)eW~}*dZ`EAw;Jn8d%=$O_+NvAH+~V{;n}q$Jb(+J_^4|6e#DF3GBg?QhqG~XzR%{ z39mzHT}S69$FGV@Wv{YIaN%4o&+&_rEDmk(9Bfj_Z zJ`MdyQK&snMHDHi%P*FsyNFQAzBd)uP60e>9JEKLKg0Jbg$CBPk8Bl=2o+-t1pnP11uV|p4%oRnLX|C*SPn5<+^ z+H3{aeBWG|Am&j_>E5}t8T%A-iWdK*ST)*coqIp|75*mjT^y2)ocH+Ih_QNj##J0v z?tNoriY$vqDZOUXqveoyh)#j?|P#b)va4+{=vdy=V4?#6kNy{>gc+q`3 zrqFW~>eB)v2^kd={v_|h%qB>ECyvg)E) zoP;;&-gp(sd}RH9{E)cdSCL_9T%k__%z8h$Qx0E-I$csN>U(_FBv3qiI#fHenzE!o zN0{F-m}dAx0-W+XK-P(6z~aT^lS(99;G3|Qqn*t#@=GyQ1D3a+8ZM4SFIC>LKzD?VliKHlbXBzXU`AJ$ieALAw@U&kFHFKs3V^(kG^QZ#!)3?gyv`V;#26$bWTXV(7YZMhoOF3U2E^mKSfX zFF5V7=>ohOT79npZIf*CpH|vxt8exdfZa4#D_l4MWpVaXY6MbUo4&dNd-EnHg!^yI zRGktll46ONZl#J+`t0v4CUhLXkVS@AJlaqxkdFfQ*F$LIK^7AawoOTuJZB!tFMdXn z4|>@Q^ykQYPEnh|*&+PUi630(A>o+WWocg|MH&p~bGv)tDTqBr)KFhU+g)d9Os$Jm zFeZdl+c4U2$EWL!^L2H}rRZyBxn^f=KRH0960*-14B1+Tri87e5EYQVA{-=i`uy2jONf zpZ&R;f0(#GzqEPzq^52$q}_TEOxkloE(E;T(-w$(Tv~@)q83#!){uP-TIa_le4dCT zL3I1|48C1w%~I)@X-DgBGGZu-|ZE@!= z7f41m?CTe+Myp67ubn3`aO*q~|H(+P^bB3UeXIR35`XhSn(&Fb3)m9Q%CiSc*3gkKAw(|I^o|B zRw4KUAQF2oYc+=zi5Adiz(?S+i*B(hM2dQaZ+XxE9t{Ogj`gvMjmU;aBv6Ev!}ym= z5k)iq~c=tzP^&`5f`KoH@qplbHKV*^K@>w{wYg zWU?#FtJA$3A#g0lQQJe0V)MxB;KfB3%r$V)?4bdU>C>Jr-*%t+2cZeW#N$yFNNII!}|Vy8VWzcMFoxi%tRqBoZ*pEOr4Sn8olgt zeYdREpv4)fCv7n(%wJ^x_4yv4&N>^#vud4^23iCTX*)NImQbP^|n6F}`S>U&XE1ke3n zvH)n$*l{ELpv?tTYXtb6-dnrn5d zKt9huQU2jq>!ABTtphqls;q5U85IexYUH9+<&{4rlfN1fWORQQo!~7!OsLb#>Emli zf!B%8W# z$gVo}vOtENw2o1G_|rsau$KeN-s}(ZfS&@)sak5@p==|JJo(4Ofo?vD|6o%V4ZyaV zi!c`&c8UT$Yg~*tj!Cb$CHIPK9(@p*WqtYpCLljVV`kW^T9``PgVlYcr&qP0yG}^J z(h_pYY`olFs@Az9Vo1cJ+jaBymi4u^f=AR~446zVH1nVG1-m`kGpT$a&rxL()N>P^ z3~#wdSzdjHr1hLWhv(;?SpWw`opjC(#q|7C5#46#U5KvaW9tj`tj0Ou{Mw)z(nF`_ zkEX>%5ufKkrAb#+gQPwjHi9SE&=X&^S)lo<;;xbNe6dBgJ~NfW%DhMk#gukk37Ew) zS>u@pZE;=*=kiMM4hZrHZ38B&HR-#Zf)!d$>S~DZLr`L)OaZui*S{0_IT+P~H0YVB zH;j3{Cdy{@t1icdLQ)QInnx1(+TEHp$iUZNr%fDsVkoy&QCym;cK9{d<-txtZpVz` zL}eh2n9O*m!bIQuoI@Wi<5RFE_}M%}&XkCl%#!}K&Re2IFhx8WsIjwMaW*jTfN;Xv zfc!Ww%GBK)l}5_S7Z&qgdoFyx2Tl+a=ttSs(DH(Ec?)&_T;@S2T#d^aS#I##8<`BU zP#v@v0w)vrbIxI1fPwkHd?m^{7In(zvywg8pZc|>JpzYB7%{@)?^C`zFet~P*HhBt z@_FToSLWrvHYX{k67Vgs^AZ1xHNHQn=!bL{o{wSS$v=>BNB>_Bipz|=o>fdZb1QHU zT>Yo+IL~xeN4Rxky)l0eKY<3maDT}S06b8 z@^0a9-Luyc0nQmaRlS#;BF4TSKuC=`?(~HFy&l|IA1NVc1E#IPAQZ2R1f8?9b5RDA zUn71Z{a}eZH$`Zi{}5&FRBr;vH767oh&vc~@?RBO-u4oT4J;8$7C_vMsz7Ib7W+6xdY?^BSkLnLQmQ>(h|9Df@$Nwi(I2ie zQj+L9jaYwZJ%K3~`%x$^9%+C_yr|1-KGMR=w|u>4*B`kUjnRQgOIF>ag4Llr+LGyu zLIS;FowCCM&X4VpdEM#sU9PiXMHX^jXURh9Ut#subA-iuL)~N2vvF>@E+^vo%7d zN84ku%QTyc8$G}A@cYPJ^tHKvCLN{Qq-RtzW`lOWHz%r&#oTEzNc^mi@g zzZzoxlA>qyT8ITX8l@YSFp&y7BR1oBa%RPfP}3f#J$)1M0;;5@DmFQViI!A24NP~B zs+fUULdTi(%5Eke>~$ERr9ehOyP9=g+1krWNZR``lq!!QiRch4@|{eEDoF>sc|ea* z@fUe9S#)bfPgw15KEf`FnzG`e)4sH_1|`|NXc+{EEO;Y(2Iuj;@gvg<3D_@W*T{T0TD=A^jK6*e2H!V2ao6#k@sfgr;oY@&6N#x*#VBoQdpgKEpmWZ?6o} zm3iH>xTS=R`9jLTsaMuyWu};z3s9BzCD`-`zk%(#z$$KTd!*I3K8{MRwJLz4(Hb12 z5LuJ#nYy;u=X6elU0_7Fma3!lzVUD8fua~gdlZq)t>E|x!HBBL!OpTjK^gAPt?%=F zO+O5youO5S^v2$c#9`bR*^2$bW0KXtU^DuFg@EM(XXUFX#OXu z2Ig?Sw$_s13RRC4;w|`LaQv_G`8TMq36xvi zl)r%2d96=2z1N+@j{INwY5-TGS5+74yDeaRSP140E1pf0Wnb-j#Nv+VnUQ?D5QjkGC2p#*3E9l-DY@;wpW_3@Ovll2dp#wUUJ7 
zY}@i}nwy)?e*CDdLee>_a($sBAG)xoI4^^SvJ%ZDJDL#s#@BeH7JCb zPNM%>e&2CT$_5y2#+M=C`vGj4dA0otL^+uG)!5=k0aG|p{A<11puRp-cNZCN0|>f?334D>_&4&UgAIeQpt*=MHxCAU**EvQNdJ#ppbS>yf(*f6z#=!@Z?Jp|}czei8{A7H$P{64%5U#A?__3m5E zlL-u<^M1q_Up{|pl7R?-tLWW`644p-g1?%$Hhje8)o=5Huh{*%ZP%*{%7*q^XU=Zq z%?%N=Xq{QEdW18M?b13@nHGnSUfN`No}f0O`s^g?pj|z>VuK=%{&g-ZWd1vw&o|l< z#fP&w631gXhEl0;-H!w8L?y$Ho{ z07>~+NhNvCPx|Vpm?DWx%FHTYP$6@nTV>IKu#L8?y0S~9>mp2bf>Mk;eS-Y7@uwOfm}G*hcf#Oa2IK zvY+t)=^n)j5@IIO3fqtS7NfGt$r>4Z5~4$Q)eYABs^O$>r+0=FA$?U_XGs1ZzU;u6S%lx)8LR}g zqx_7$;U(xLX9$H&&PMpRYJ*aI0B*FWsSR;+#r~r&lV9(vKKQxWb{Ft2P*b=eB`d4f;fgIrkjYNT>zY#=5#zx@dP#wDe{;p%|7v6TS&Kp(xZ$f5(i~p0b3|{}T z$p@0lLE-Ms5_)UW33-wv%<>|%Md((&%KS_ z0~k@oT|K?6P+>putG8;Qf+UZFj2~$sHs!{sE^ic_!cLQjF#-fFx2yMD08XS>*ny3(p($i(JJkT&FYkFNn*oBZc+g zj%mZi%EP?mQ}iNz@P(jLL&m=qP{O;0Lv;6A9o-0>NrqJ1ZP@ocwkk+A{fph^dG`x`P2T{|3lV7yfXfEV z#+D71HOw6N!3E6B3?E9dv~mio6j5A(&xGBHZd>5_HRCsZqyt7F2afS!jsCa$bpe26 z{Y9^rLWKUwl6+`?R&i%0SED^KO^@l;=F4BcK;v#|+~M&OpPP|$f|JUdW`$2!W*qX+ z){oMK>!uCmqBP~e=)drRuu0~!BJE}zE9|QBj$oo)k2Pg;Xqr~OqDIaGyWIrqsVDd((iIz@thO^ItaeI|c*hV^LC2 z8Q{m6vizzo=v3&+ILXLmIuYvqQ4ILeX0T;x!dCoH{JQ>SshBSi_nQpr{dPgF`}jL3 zPahW(A`jrr4E~5re)`@VuyTKedtymQAPi&BpDdjn?fYadPvtb$z=lUvA$m-9j9fR) z*%8FUU*x!h%B_wgpRbFChqmnxZwU1TqGS8Tow}&6REW=Bf+%8YFB9}QeI$W${IYoT zLQ=!LPTu}*yj?*1M>&9N?FU8+m?yjm!->zM--GSh8^bkd&9^9NB(ZZ~g)kWR$@s>|1lJ6#)+>qJJHDieZve#;lt zV>$n|rane_l`5tGcGbnll56lunhc3s3F`ktGU7mtnLyFu@?i4gD< zuKD4P$?=uilU&@mR6b_^K<|gI9%GSC$%QcM8({nk)DasXKsd8Lu-wQkRk@RWn`6 z-g8+bK5QT^`5SyACAOXD4lBDU)@IU#;Odx)k%bSZnc%ZNP34=63N{tj4$r(sJxOVc zdw4OQ&o{4i>0gh4k`0Wgr~fjs$4O+PJEPq-Jl>ezyyDOiy-s8u@KZ1hk zsh9%-Pg}o@2homPkM?q9v-<~{NyJp$r7i#OD_BA<<~9%>#Y6zK?wtz^gNj)Hu#?;T zsP5qnB2@E%iT*yl-v_72H_1WP>>`-dq6Q=Qb)jh9FSZ!&0ahd`0*Jr&f{?NPK0rW6 z7_fU~=r_GJd@_J;QCPMPg~*~zFF58pCic>Z%Zxu`5BTkp00x-k5L1cJ?E1o^sCFQE>5Zj_owpYspFLZr zNZ~`~lMFy*4ALsOHp-PK(Ve$sIw%yXX@s9^c+E?*iPaX(FWRy->Q={>uowZ@LV%TJ zdhh-h3fy*!hp1@lehcS(x95T|4cAGrIWs$X3 z0dKM@^%a?8`^(+N_?&jPTYWD&M$XGTS|k-Ohe3}kz5v&4>U2hG4w#BI%v#RZr>86z zI#;jLY}hYcp30DL<0<9imXe69l_MZ|L^+AmQ+<@w_m&@uO%o-bW$7G9f5mW9ZKn>LLvdd~%)WEKf&iP6_T&t0NXnox6_L9Ic71-U%qmKGv@Z9hIX^h=mZ&5`1`*S?CC~o8YKE(t~T;?0u?FMlf1tC9?e{6A+zb;Tk zIsVVr*~impIVjq`UAl#AuS6rUBpXcM*lfovU&7yNKY~BL0H+7C;2T} zZu6#3HmBcWvo?VyA=FjdGMwTQlaTHGqVe07wdrx5bR8}AE6=B-yiIJN>!+<>(q_SV zORTE%8~4)fiZ5!tk5p3X2R*4(HYvz2(S~j!z_7fV6 zjcw;N;a4Z;E!GO2CLauwq6rgATQ3LB*6b44Br4YvNy6-V+;kmGFMD}dWqe)X{bTcN z0xa^q8uDZ!@Mes$Vv86in6P(F0@?fw=`EkH6|(12t*v@jb%aix`%>6h|8xu2V47?D z_pc3hboWiL8R4TvL&dX@p~C~2WH-{nS%nWv4|z!pOY}oWrnGu+S~tVamWm)#mTKqF zYZLDczREG*5Ye7nm^dWg)>nUP<4}@LflTK1BI*8iRia9tut@$;!ozP#rz+SOF8^&W zzYQnnEsGrI)mseWHR)e}@KmA*l7O7o${LP}ad{nUR-68|u3sm+Dat7?^$4L0MBO7_ zL|ak4XYn&JU%YOqH9>eSaBe4G+iArT6tr3X&6VT|dtbp<`DbosS-2Iex0$yM z7uiY7bhXQ2p7Uw5Z%WJ69(-TJbiR9UeOj_Ht;3O*&l&p7>_9eD4_12Cjj-1pndw{%(;SE^ zxh#%QWxjRtv7pDYO5@mjJ~}nsc13C4g&;f4T!T!!rxKH~b>0 zZl3$ox;I+a6N^u^t6A2Z6Wij+=aQC4GOH`|)o%;N=2yMW`yuJEwwF_Rc}Ql%LN;2S z@cWfPEd+|JiEOQG3pYPutT7%Ae9hO*TPKyKYFM~KjROcB@m=T98;Vodhajacq5m?* z-`=hB@pr={48mciPP5dwFY_CDAVuudGy(X6V|*(0FQ+9$9ko0zwKUzwq7`h)_ZafW zJ^`n>h2-C39BCJN>G&3klLkWwWO-OXQO-5gDpDvU5~g9OoR5aP#p1t{u5eh=tike+ zqxA!}P$w`yql40^J<(tjahg)NTO7J<3*BmX_6YT@OpeAf2P%w%Os8Ir21IYBDix=EQ!kqSnd(`S5`cKYX!P zBxNYc$*|#M@9;HS`c)E_9gI(PTH|1~MciwQ`spDtx{ogLkFpX}dS)KQ?Rs%(>>OXv z5!<-i6wB*nx6#Nv_0H+;PcivlUVL@G?X|zzjH&IOmZqHYT&Z&{O4p7?VY|-cx(N8% zqtlNN=aIe!az5Fm@>Tr)Bd$#6NXVoX-Zrk!=G8JAa|idgu#x2)lVy18HWbp#qAPeF z;ym<~Fuyk@kBj@Ak4@=3%-z%b6b_EG%NcB(=HXVTR%ou;ZMc>AvbC9ETq-kn?$N@g`kg4jhVbDhl45l1AFK9M-|`p%UIlx3EtQwR&^Y 
zh3$hMPs{3W5@-%*s$dB{WX`Q^o4P?`aY8|37MzT&gq_nmmpEm~@vMm=)Iu#5cHW1+ zH*h!&p2EdW{E!^suz&?U*Y(Pe>}?S^+<5zF^x)E49{Y*cl}Wxj=($2T{Px^UH_BGG zo(6sj8vcT1Cdl(1v-S3~N|4w_UdkIGdhpIW{GY353_Xo0!a#7)c0NFkK6wqp7Z}$q za?8V-$H{@Ly72IeI!;yLk0rAv#gsOlO~j_hXaPiw3!51S_C@dE&9j^7;iBrZSaG!5 z{`1r?t`ai)JnDxsIK|5`HHSivTzE0@gOU~_z>bFJ7E@|uT$O3q0M2tr5%<2tzRTf)|uI@8c_DMscIy|C>RpPmaRp4jAy2bxMaD7aYGiTyX-}Zk7vWyz;q6$S?U*4 z1tGq6C9`~Dy0ny&UFM+8{Sdz<-K$!h`x_hOP5CuKagODBxzk#Wgz3|%9isViNlz#> z)pJkRc8xbb<>zRQZ{}z^B!8%`IRvrVg!_%Lc)iF20&fAEIq2I-6QPFfrL+0kYYn2# z=^nK4!0s7ecm25rGZp0SgMztQd_n=Z%Q4ULbu(%)A86}pcmkDh@Q3t+h`q=+xsb6T zxb}R-28qJOmmXWP!s~@Jx69mP(BWh8;q5SdGn{6!Q!6J+L6IwJX3pAW(&520B!SiH zdd{>%d2xiK;eC%hmR%+u^Rf~?-^doX#pyuC)>`J2+Xm;pS59m7!YrwR`&B=m@SD)1 z^0l(P8!qwU$?tjLOJ}oGUAh3%=Z!C0KI(!j>z`bGLyMu!M>>YaEGN%FJv|Ih92zPw zm*h=Mg|+CN#uyNhlOEN1O-7s{#jE2a4T!gHrf7<*S|ojqq01dNRZvKFb)7b|ZfW$; z_hfHh)ALY+w)V?KJgX_IuM^T!)OHF->{RvTle zl*N$mhVB>K@kNh&S{H70Jl#8%)G$rLSP9%I#5m%+t@cY?T5MS(DAsij>pgAxpXhrv zu6b#frg#YoAX+;G+uStjHcQ|5%nnMk`Lgv5OA7UTIu@Gb0N;20f>w|odo5&cB(m%J zJC6HkFpr@Mu4psaE#_OH6K@6=HTS!{|YY~p^ny@N-8GTiTny3~jIU*ZJV-{@31 zZU>6x?hfJqU6&N18H&hiox-hjJ5vRrm+yvaUS-6!rAvO$4=cQ;Jk+ADQ}7dAMg_Jf zPR^f``n@jw$?(35j0y@c-sV`|Y;TSAK`h~q<|aqGN0Sl5Q7mAatR+dmvX-!zww;aS z@h^u3WWt;ZudQaQ_=5LZptU%Y8V`h=*V|5id@?3AuLE{Nj;=C<#IQWs?=qa@J_zM> z-dx%p23bvhax7Q`Fd^bRxnn92N)}HY~vrJ`K;btaD1l=KE>4Sku$5vmK$5 zdeiVdGqJvV?fb9|HplQdw2t+n2uK|-zW`R!@5euAY=QLDZ{^E|F=aeMi&g4ad& zsicZ<(+h~0N!`o>ZU)l46t)@88+BG-J+=9@Ehkl|0J)NVZWQ%HDS_ULG|bzhLHX?t z5y9+Ip-sQK3%YdPx8cRWr0Ndw%;fO1M01I|kDWK%JLTsq(PzDO*5*~( zJ#)TiMZ4o=y60og-x2}Qn_1cx=ef14zPrI|X$aTsPqc;WeR&JLfa{ja+{!0fojE_} zl1ULt>yw1ANmy@ROD2()O~UUSbhCfnh0ye!Rm7r=>8RG#vxD7i-2Rm-awFm_HW-e! zcVWbJvVog27lNt%-|@+*WSH_%{)&(?o+86YmeO7vfSzy6GfGp6aoL^mB{7Pz{#Wui zVuz^M9at7}SybRA&4+qcBs#S4^EOL(vg{sA-cNK5anQG+>A3#vuQ9nR8R#uEyT5G~ zBx~ieRF2KQMTS9_PM{1_^MpLGyu&xj2h-Bix8Fbn>s%Th6WuOCizW z=Q+(mO@T#J^)kOqH`qS&sh2@w95Czv`NbY@f~_W)2EJ0HKtA3;hlL5u4|9h2#(_Mq z{LelKq`eYx8hroOPDZnt==ek3b{=Zgd<(?>vYsbh)vgw8))u-6w^^P}Ly{gN_qMQ^ zYbv@K8LDBkoIq|g4ky1^joBt#g&q~6h+|7*sZMv06rA(uFGrzA{vT^!9TwHv^?e*c zKok)qBm_i2X%5|^luAo?DP2Q1jwnc@NT;AMN{MujNT_s6_s}3M{q11@0nd3JpX+=7 z+t+W zI?MwIKrQikcnMcN%TxHPE`$HD zf-whNQ;S3xUk)z0J+u_zX2oYRo&H|3XyRR#a@qWKWLRvM0H1c{4pVDyxf2yC6Br7{ z(V-Ar-6U`f5E_SE!gSj$H)s<HiDnG4^Us_6fy{2Za0@Zs8Af{q%uyP3jf8h^sQtxI|Ht{R*{+wQvv z%uDl#E{xM3h5D!qR&`1re?hw$?=M%= zc7b#u5#g8d?6qz~i-8j=3gV*)Im@B4Q0fU=+5xmhC@uT$b3+2Eh~u*QQ3}_oYIrNJ zgLVh7scTh2!dL7cy~1q}a&EN<1v8-27TB;F9H|!>@Xz;bgCwi$p9|>7Y&*^#0tOm% zK3%(2B$^iubqL0k4wQ&!pu_MNkH$sGIFUC|gFh)^9(Pmq>)} zV6MJ3a(3NCe5m<_{^pm6&Xfd=m6vwoML99*jyNNIDiR?YTi4OeShmc9EhX2*zK;fL zLd0F##B5vNK%c)g9pO=DOvw`Z$h6ZY7&Fsji?8q|h%e9jqi~>Z3%8b?%_g)?Miv_p z`@Oz1c*d6J$<2yFeaUaotc|=L28qU|Z67Pk`#R1}8ZPT$Mk#4cF<~PZ`5GU$=Pu{3 zcmLel03vy-DYHkV_O-6NFX&P=I@vzPs~0@0%}*Vdoxg8YF`H{QN0XXjqM*LFh3K^C zPniEXzqXS1b0ES?pHz@!p=I(yZT>>Xf*24X$2rUi#~MnfZOiAnxr^o6u0((#xoOsMpgjMt^>dZmf)N6S!}<4x-pvtaM*?t6&$ScAsSJ%j1-+Eoebi-a`1Rr1CSg zJ&0MKzo3m+in}W|6P>!W2jYZ_;`7|%TEUhVCS7gy&F9^sItYkYo`APIr%C;*p`Tjw zEPFrjkMlh$XKi>uv|cMaR38Q8_cET3I?($nc7yERI2D?b5+QwWVhNgZ1XgRkFmAI} zT5hz`vlwh5O}|-JlHKmrwi)%+ zmb=)cR{iss7y49Hwra0i+ph0Yn;>#WJ&jtSpC#JPvMKWiTU-YJRB^PtE2&~By-+k= zzw$n!@x5zt+!ju%m0&jeIJgm+O0-#ZAlyxwpwuQczP0lVTLRlsyXX~xZXQ{1a+5$4n7sBf50WW>N~Oa*2s~Z-lTuKzkhLOg8lNmtK`#0k3j|*->&WeyWuFh zw_oWvhz6o{tjyy3Bd+-^=6{*{A{fw5al78w} zr_Gi95MkS%Ac-9*HQsyZ<<8gP_CRW77d9B|axayZd{m$$$#Wxt)zV|g(pkIY5&7zU zM8%{hY@nvaQlY){T#woP{j_mYW~g-r!QwA{Dcn-P$5?wKXjxLWNLIHG&ErRya2#RT zti=1qJ)?rX`!4!*B1*l5fX6=r>ZkrddxxVA-9*T4=G-1_m!NDLHiIu^!=q#`4%r~w 
z2FXnKL2Qe@DQW1G0`0|TJ7qUOd)}svz9!RSP&>*yrE6q7(OhCYL&X*t-gmCsKGvX? zXta-^7#(l^`Ze@i>+oHhKF7%~a&j6tE3;#eo`A1Yk8EX0-nSXa_;O0$XE)F~yI5i2 zdg*P3D~<&D{a~w}7-gX{Ix%L$Y6DYq^V_p#o9*{n_}>a*1gDUFk0^`g<(soJeX12t za?Y8;IcB41faFthZ*q-Gy|A;X%SLIax3*+XrQ_PlR}tt45!S_U+A`rJnhB3P>tE441E_N?KO`$c`sb4mbNF0Zu zk0`6!$dPl8li!7fI(XS^r(R9Lr`!htuG)@nYO>(h9;LQL^xlF-Rm^KUAXX+S5F@K$ zI#wI@4*G6$<5?yBi<>P^M^0zJGMagIOtlNg)qYZ2Qh?|s|5k_^sUw_o@yxa|j{le1k}69;ih zSW4)uqhd-$yf2@kN-hcNY19f?pb` zNtV#AS~T9Kk=i5~ll6x+bj|wEErZgE;Nn~KMvr)qci=(4GF`2vJP4=GfOrG`-h0M` z5sK=!AgL+_Q|=MY=tFcr*xl;onw2TCNlfvAJDrbZW#!dcZ7opLs%;qmQCK~7qzGv>ZduRK}?sW@*g!c zt~9{1uVEM%$Xa#_cj{?R4w|HrMMtHP*Lq{Vujd?)7&COJHeECq{5x&vIRxDF7 zFK(3{x-3lk`bANX`&i;Ptu2kcvfR#MTUH3pUyU6XCFl}*?{Ce8FNlS8a@z=1V0ccv z+jiY1L%RbXd{%k^So(o)LSBIDPyy?O;x{kA4bofjWa<*-acs^q(Y%iun@6`jH#~$U z_8sglAu-lhGOUJr66y((SCvn@a!QJ^Zntd#1n{?89vxd)l|Oi zYGA1XVMr1cxojZp#`)1q!lMfZouMksKjS%;t3uOc+!hpc(p>t*kmhre8OpgHzV|Hh zvv7Tte!a;6vw)_R5;(+A!};QVB^nKL z1q!#%u8m3FrXLV;`e^aufMbHcdM@7zEWXFtGJHT~t0%o}zZD|#n7L@p-9EoF;D`-iPwaFC{G=!`8*6vZy6=J~ zJ(h6=*d);~K0}ppfVERigDjn^b9_>E-;y)1UWt#2i11u=UuSOnc7g>&AXkHk^-C%` z=HF43KKOG$^p~*;4qyXxNwo2`b~#N=N43n>h&LLrHwsv0eZLiKaC==zeor1zU46BK zq)_~zJ~>GT-5xHL%!<)@E_f+?o*zRi?xU_a-=p37*0y>4$>(qrc0_7K+~WK9wi*4K zsxA)eZO8o0Hn!O}X2YKtUEj2AKk}n}L(c+g%3N~+(Sqx-^Qv!@eu;fjA4(?b**6?< zo=jrol`!1OB2@189(aNhmiLjJ@DWn zF$N9I#Gx%mj|s&PFbkF&BcbasF<~j)=B{-6MACeKDJ=(kG4BzCOo5K}e6>%dRludE zgRI4KK#7T3$`&!%b`9S+2L1+$-`X*usbRlOEpaJzt8-90FVG+P>}GNhuc)oPVpOu+ z=2_E6nfx7Mpz$YNj^^qy4&n_!<%cMv{lU>oFzq71IEljZKP1tEsHa}zE9hrr1-clt z)5uRq#F24Qn?6uX4OVX4c%iy&W$$v94fvCK?_6P1mG(CI4WhIGd?moP*>&7LwqR)f z`G8&Sx@Z0q7i)ADlj4QZ%!BS00=^17hj(=K4>l0f(e?9YQG) z0@X1ot&&4!YbqtrOTsS|ZuEP2&17$;Cez_>BIg@EuMQXPc)=>3 z^O21oUlF|UJf9e1}%)JkcJS?OnS5rUoshg~~@p9|clt1)QSJFi%( zKUiw+pC{m<2@g-Gx&stEz{C>WMxY}*KbU=nhi!6HVGQ7&;dhm5l?Vb_BqDILwTNH19ToQvR$h#o8(}n zU=?hsz>b|QTIHXlEfb5hn^SiV(b0SkN|qAkmHF*roL6?Zr=E0?WaDMyC^*o-X-kd? zPsNBYXYViAU3>MJGof$uIc`HZ7Aexh%w2{NCS7GG+@B_LHRLi)MN=lQWPWqrQ$~|Q z?=~vq+$U~_IYfk`Zl~Lca#to7Y4<(Wr=UA~;Ph-<8qplrg!Eh4xx#*%gYrUyy=6_I zVSl>&X~KGi zvagS8U>vv(O6;-3IkG~n;hO?LML+Og?Q+hb|GM(mKl_(6mO_SnwbR`G>refq#FqSBZ<{gBEZLSb zwI4I=8wm%fx?Xe|_1)!s*>qf3_nJ{9+P*3R6)ABLN7^%E98uMJkpfHAb}fd2Xl1Mm z7p_<_a)+}v*wFu)N0WL8B}>|9wn(6_sQ&5N5?v`R`Ax?zUlI*b#kYIQ)FmRl_HPpW zpZ&%jS_WFV2Qz7ZAdej?%DXwj@kS*#Pa+W{ znCqTE0TzhBe(;0mDxW{1N=zCzD69YC9H?|1&P4|4XtrpD>Z`B5jVpB1~ zwgC9{BBl#E1&Nz`_AE&nT_9!4l3#pW2^M35%4>6C|h^t zkk1@mx?hoPwk!VCr0*eP;0Cv`UIj#xR4Q{M+se;Zg$=>v2f}j1eC7QxQ@gw?=^$Pcu*g$(Ag=GclE?6VKA|+w4ejnGr>bWy~c&@$f7F)a#Ex@Xr^ti9im5 z_47>+-hr^Vpc4u&Nl1k-OdLkOP({}z5!&hhZMzZ&-gW&> zUTcCl>Bicu6b^`lVzJ0EfNe36NomLBzlDd{XdBOY+KiZ2%Xp&?5EWEhwvWU-0l)t! 
z)R32y(pjJb-39M}NN21z@6^elnUo6{)=@<->qK^tgEB4?Ki3n&T`iHH=pbI2f$R_^ z&1bbVKHC3Tvn_~M&UScYV)>hY^~l9QOWlAo+JQ2u2Qf`E2$^#yA;)6WSzl4nhJbO} zO09ngKGTN`dF$Wz8qcSd%Zc@~J(58`y`0KpLv6RG#t7j{jx$2(Gd$3H5JZ1%oPAOk zdSk#vM!({WE^hsuqhI{Di$nszj%d6#zHp#^I%r&+*)0o!LcCJTpJ+Eb+r~$XUx{!` zbhN3Gf-aulR?pq@;@c`^>**)8cW)Q&=$2qdt&|-E*bWwl;SuL&uV^oF;lrY~_)$oQ zW)j{MeT)bMag=QGlB`!}(D!LR7rW9(;kNKnk>;TwhG`I)ki`_GD|C#9Q zo_mY_=adPAv?l$YX^57AvV6T$J9|}2;_;j$)0IOEWd$D*N*;zZ|IOOW;Q^kCS#9*7 z$Ad4gD~2BvP2w&@U~k_JELLUaBM@m&N6;m&D=tSD@<&n}t~MU;=2*x1rrCEB>1Ko_$hZfyEnpAyp!9WX zl6WP@@|@adz34lf3?{t$r4FQkfU=@^CHu1ax$9t|J$Dlxsq+3hb`LZ3f##m!fKD+` zd@r-h-;B@xS@#aj$uCLQsDMKdKR09IP(J`>6n*rI(vq2h0}dl3wrJ!lx^*k=H>6fQ z+ap{}lV4nx!~t1c{s|6nF~p~?xV~A+KD{ix(tN-X@_+a|`VYIZms_3P+nLC=v% zC-7h`I=L?lB*espf^Uc{E(hHg;gkhZ<7k4gU^maT zA3Lw{caOOfY&08V#FrO)jK#@gi(fYG7c5?C25~S9uQYCKTjdA&-Wn>3i)>2M`qNMZ zj1NFL2HbYQr%5xYr*HqB4L+#in)`h4W-9GwxC5{7H8;NL>DJJBd}A-Tk^wP$g%G+) z&>1-uB#I}cp{;Tat>lcF0#+YZvJWye@C4e^M-Gj}2}Y(jj~ghO7OWHo2peb`a+;)~2~@Y?)Hbd z=aFMsDTP}T%TsZEykUg^n9qFmCSjKE9x>06!9dbx2k_bm!TZd}L4CjtO`RE6r}0=K z_1};~iNX1+z6C_UE*yJO?ZKR#8Rj$FXK{=!b;Z=>8*h)jpR}=jB|O7vnOf@)GQfiL zuaB^zoa+hpqyu%BhoV$IriiU4oD!maM;&os@52qA93EEdtv7bLtYm8gLGJ?Y{i*HU zUqj+3s4|RC#_|rz+2}sO+*Y48U^TUjm@{%tpRka36fY~S>onvoqxnD+?s_{^QmE&l zaQ_MIK)w)PJCQ>%$1aBmr`+h5o+3p?LVz3I4U1g+Z!9}Ob{Rc@Sd|3cnS-36MnhMN zai|#9n=Er}q549Tr&(V)gQ+7jzi`nQ>Skl~WOT1(&B#3c66gQ{u1kAKQ(V{|hw&#k z4>$rf=)JhOM)QLwL$PRnfM*fp@>|8aDxTb>gjCU5KORbj&;-z2uw`_o!%VO^!rA}9 z1(+YGrsj)uM50~~7RKtk8nd$tqpLM(Gcy=)6yI@be|-}Js*EKd;0?V-BBG;psN|UhVX`LFO*vpt?bUz{> zhU*X`vjivLyvt$-cBbE8^8Z5=FrW5>gOt0?Yu^yq<$!TMn60P|8v79Dd**-2#u3|M zy0|u_xU9w2A}dxVIGxEQ>!K0h>D+E5K9|AT#ZJ%|EELz(Kb@=*-({&LQC3bmW5Lsx z*|K&KCm~VXipm#83%CEVRjcb6(nGTk9%62&>B5ZHXg)pu zZ`xEWIrLFgMsUx26YR*;YkExZ#FvSQi$=pyi~#b9Hhu1_AU--%Lq!ia9(o_6`RD2s ztJdu=V3s^GL1JU8%+L95(PwdIS*%-I1oV!}^IGS01*v5Zb&0t*S&wh!0R19~QT+j1 zkNElL#kZ!*VdBB&&Ps|nstq&5&g8ScTV~1gzCENJnTK+2+}J1dHYEaBJuMt%XJi}>tgev+q2ZK5Wy2>uCk1|vuC+m-E9MlH+ z9H$XmaHu;$(8y-CzA@Z3QB7a*^%upIk4l*^F6{II?f4kVi zs=`gKVZgTcT^jo%A#g~LI9^UCnSeIBCE&bm0R!jcQnkVxCEtMBJvyZejJw4l5#2d0& z^7|ufRS|;{?u^p|B%$N1%@qEdVK9HwT>Gfwn*U$G^d}_zx+l>a?pLTiYduc+jomao zt#5n^vQ!~X?l-*>z3f&Xu7%q%3AtnUa|1?pt!K(!r*Bd}kFNPK{G)O2yRH@8voo!) 
zP^1}aCiZM?>W+S%ItIA|H4OnpRA)|a6uiKo9hj-o6;Z(+QBSS^tbM zHtH;h9Up$ls)`E^5HKFLz-e*Q!LDV$@*fs|>MJ=BU7Nre=HEQUf28OOs7cHnCgZ2p zGB1&o=^IU&Cur*2E-aQ2Oz$`r(wOkPxVW#HEta3m-IMFRYAvxsa>FHYEmC{s_IMaZ zvW?SZvix{ZR?Syg2`!!Zq+eB<{{b7PmT??v$G)n?PJgbz%lJ`iHKtZpZA}X2RiCCF z`PS(6f`qgmy<@4MOuoxb0wcG4l&$|r-!#l-nZkG3lfeFgV$tY0WXV{us62zu5mMwE zN8NlK`iM;?O*<{G=Bjy1^hFVrfL{^8*2;xc%l&gs6AvxNFJyGkoEhX;TE*Cusqy}E zfF4@`@~O?Yp$_Lvt_8W*XBe9k(@`Axn;Twu$MAh%#Dv=UC$_ zBA-{YZI~M+RJ{CSG^?YfqVs1*Ca_=nuLV0vwD7!*RgKM_Yvy15E*>K@Gf-u zw*?5{17#m0*}}P_Oh!rodcUH|mcNyH?z z&)iPkWT=5tDb?nHR`OmqWQQV&w#ztzrGjfZm?3w|?b))-|A0*9DGhQb}jG;@* z&?)slD#!_s31w{l;zQv3ztGe}WqpVLoW*G5#&U#F9%S2+Y_o;LpN~58Ql~Nd=29%r zVt5PPC?uoko83F7_noR|Tb3p0*O)!2DzrUe^>1rOG_&iS{}&Po0qbl|mGb(3Sd}-r zHlg6PpyKz#@uO{e>QY-#F#aSN@&uZIy7Z>b$kma;f%=IDEVtM#kH|C7X?0s$=o}MQI{C=^|cNDD1tZ;fg#$bi6mw}deJiK_c6f=`j7&# zGKT3=7|;y4qbN+~V`jcoMbgEf)%fX6apry_nBqRG|AEVsx$yPsr^0Net@tC02@*TOFsOW2&GK-X!BR;wGjq;!vNvk;Zc!d?ee!u8d=8G4|7eqcO|31^Ba8*@I0g zf37p9?E1fAW2GFT4R`q@K_hng^!ny2@O`f7?Z>|OE z7h6BS7TEu})}M&=0?%*XvAcOuF>PWuWztqD8K1LT2!r0^`ulkMK+9z>>BjE&mUB3* zn%A&1xEWW-Th_v1{qo&Yy6xP_9$cpP72l3#jaC|WWeW`u8^kM`RLW22Py`F(=W)_E zx$@AL>?$kb)TP^Pi)#|k@R_X=y-~Y2xS@>6I0H9x8silD?uSl#piQuwa&zwPAM{y> zenaRE4N$nNh_@>Ft?K$m3Ps6E3&luYzZv#7i@f<+>5qGg`4 zp|qC5O5ck;NmlSotcqv}wR~VH^R&g*Fl}d?&Lv*eyiPV@lG*3)9hn;P@!zP;L|MBt zg6jBwkX&<~yA<^3_2xmy!MQJ|&e@DJr;(K?3waexa$9%+}QIFEQ8b)iS09L#= zo%M6WlY6!X+F~1L8Uxw}gG>#zsNI(~5n&YN)JFW5-xLWt*m#gSgns6t@FmGMU!8kR zymiq+b^Q%7I|eM@rEI(g|4p|yU2_NvAcf``o{D^o{RZbgU5Vt#ssctY+iSMp;y=Ng zfD|wiI3$0KT*%Rf2$%kvIs4s~{q~97ouM*=*s-URdlT||?54T_24N1KY667}^ce0z z<6mYIMp%?0Hr9e;U-zDJLxQC1#!NTV+T(~f{XUR3qqpn(HXQ0=9@6vC=KrXTw+0!i< zG5LV`$GXotjNa%+j`*uanLH~1Y7;rE8Z5{LU&+RK(7Klg*{gzOpK}Yp&>iMTRP%n` zfAcHM)6C{sR*=UYuKm5xotUY7O1^&bReyO;0;K@c6&`NBZLRhgk~!Fk;XOX45D|7@ z&l>4{L|`p-uy!H4P38WvDJ$>5X!Mt^RjfiunaeD9zf4u${f^jgJMvB))bdUNW)#qT z+tRUyKkogaZAuQUGWE^BAfDNTl`#=74-=10B=6><8;nz>h-ne5u5Rl-p)ji2EyoS> z;_zgdj)%DfEho45Bze%kC(whf)~od(-Z=g6g(W;!$6Fl-z6GR-GgRO4?MH_Tq7${> z#IFGGaBPrCOh0jH)?!QUB<8autAaeSZ7E z!_l5oOaC3lz+M$hMzW%|C%v+@#Fuh)V_#71GS81$SJg?#1T3!kO%7hV9ZfAIjO%DK zJ~5_~RDK!wPr6PRnF}8svfd?L#xxxbgTfYtYzk*G+ooHyJ-w%VUlWVvft7I! zF?8-vnf%j2ezSbE@n8loj=c}n0@cI|)biW~#(mi895PuKL%r^55Qz7DElZ#X-I=zw zx6!seo|+_@K*{2xTHjzcGHNR@rW7uqxHw8jsc5@B+wc$ZWugwU?K;D5HTXw_A)QfT z4#+6axY(Yp@hlTbnq8#E$R{WaUs9IKxkAT_BWA4i#12rjX@W0jqN_*X7n^*H5QEq) zKiLeHVqeR!rwVw+P`nzjaueM@9t-III#Bd@^bsx<3Hk&Jduo0L=5y|F5;1GD+C3U? zH3|h-V?&{70wyM_B810Wu6*w^;{w^=3y+dD1DDW6Vm$`Q=q2-}<;e+uIfJ~X`8n9b>MJK+W|7HYs7Swr zrC9(c@#|=>MGprNG}D2bayj1NbtfN&@+Ixakb{{v@V8iG|Eam~k14Lx-&sK1HR9Mi zAJw)F;-+@ zKFSK^rGPfKZE6BPhgdl|RvTGS2A89Uj{kbW4XK>ymuCM+E;T^zH5!W}0*+ram%??I zwN__=!*xj1t|&b}Ht%|XAli@c*Mt`HZwLbwEF<_-hG{n^Po=xwXoSr1+^UQ? 
zeYkoCUU_$I9(UK@vVHkigZhKaea{6}Y6)xm^lCb~w>l!6pPk%$gnjnt|bVyTmC zo}_>RJG&WuaU0$zPy=Uw@aE2bRKW1Dd}h4XH1k;ofOYd6Xk%yUNp}{Bizf-z_w-!E zFt}km{+|OI@Mlmc>HO=nCw1nRDmLRlx1mE~^3NPDSW2BnJ22C1{+u(m|A~ERIl6%& z$%uUzGl!FQh@qEm^zp(`;vhPb*TiR{xkX{$bvgaXWB^b{hwKZ#`64J;goAs|^<(SD zb1Y?+9uVXpT$p5n+Jkp5{?XOo5;L)ytY94*Oh3s?C#vH049}RFI&6?PRP*)cESWAN@ze$KE{MrH+k*r`^mV% zML29DQ&1U`6dH7UKp8l=agFDulj>D8w}f5 z4o@67+TO~~LN0ErJhTbej+jLgWPt1Mmfe+u}Z-awYbRrU6i zLpoBfNfC3>O0si17chCl^yf;x?0wn$;`nZe13_9 z+2+x+cZYi9o=_%BcGYEm%RW^if5k%q+$8yF2dx0@@m>LsDu_~D=Ex_Lj z(CkS!zi-9h-vNq)8f2~hjlY&qKVgBkdEdaJDZoQ10fT818Ulz*)lNFTd}Y26BGH82 z;-4x|)zI14~|LH(ve(Y&gu!Om^4juAJM?9VZy#>-V7FE1p49TXzzNRzLz|J3z{ZT0(O&eovsFNX= z+Q{#tk~%TI`-`Ye4ImQs6y9_AP5LP0mC;a2|qZ%#_h+ z9M^HRMD^#0zNrc23)so18s4T-;{8a(5U_>2DIvl3&?+6L0i0++fg|=W$j_d7wT+Z%j*d#51b=NmQrRW^?4!wlpD1yemN)~qQ%^|w;l zj(hnGLRH|bUfpa};GnERUQzalgTfV$Hl>e=GS4~NyY|n_{;~bjECCALGS13J9eQzM z5&H&=_9ZY+t9eB!@@~w>ld%gBaa*D7ayV?}$g{*z@_VqZEvjgJjHn2wV{K5L3rK`=Uvc;>B?_!NwI6S zW(V;f?bWwbkU40J57qFf+_W74PR%<>Be)6a>bg@KEC1QmJqF{YOo<)S&wb~z@*cjc z`jbU$r$W0Q{I%F0BJA_z8Z$3!K2%!2CN3X8anajn_jetUdP!-{A&|mKkfGAYjIPC=umq&yPnmJK5e=BSy(ITAqNkaEK&_kz+=tZcuirW*b{TM%s&AbRbJM0Qvryxuy1W?s z(rP;Wno=$aD{Qi$@x6$FfhQfmg3C!+C~(M;=5xdFLfy5rP5dfU`~oLNv%$2-0sf`* z!oG*zZ^*$Ihn7>qEHsg}910v8zx6nDj&TOVW1&I%4n6cQEJ3@fW(_baz8q%tO(?G>T54j{=$&F)Vi#!e z+qB26#X0RPo4IOtb|2aMZk}^(3W6@T4J%|vd=#{veJ!B>QFy~?PdSwMm<@zKL=O_$ zwZS%PP!)r7pdOnT2Y0E45fM3dy4#YR0X3lYS@qOQp7!~Psi5;FYb%wigulDr{ZEXk zh7dJ`#@@$tbzf-|h+uc1bad~3j;80@<}}m)wb^ow<)@ZfuZU}@kYumwkTkD*PmNaW zF(;8|1yT~Vg+L%#3FB$cQ|?5&m#BB{H~DjDKWMWxB$VwunknZE{0nLbn&-AayyZSc zQ?tmKXB@6h@{GPm>TvSH{5HHHPy^0>FxYbh)yQbL+26~8_!F`g=*hC%pX~%`AZwE% zDV@t-qc%(wV*0Br(1l(>9i{6ZK)Ul?IS)pxf#1V5vC@@9bGTs75t>?tG5espg+-RT z^?{tgEca7OEd{UYXrTxira9w~M_Zu#@%8T8Nf+kC5W^;xknv+$ciieg2>~#eLk&;w z{b99-5*eabMF0HMuJUmgR356}C{wDx!(7I{?5ftn?pi>Ptu8zCnHN%0)zvvO#B+>Q zJ6Cu%f7UUj=lw0l(e(g=3OsZ13wvI3tJ)5Yj5M}&3Jg=80mWby4%)F(9Y4$#$ua1_ z`I~RLdnFC@PFt9Rcj!t2B=_MZd^+_#{1uvZyt60U;@evd6)m|s^3Z^Y6(Nf7MhHSl;w?v}nz0N=-|wi^Fqge9&W5a4&uCiPTbLl?9$o zOx07w5xM|}rvXh4uIQ^&zuI+cB~MwU7WH`OW%TK-ncF3odehbcwwL*-FLZnbXz9ea zu9$N9JLs2zLp`gEK<;i%K}1dEDe4aS9&`Rej#V{B!BjQsjKgCT4AFrS>T)#bu7&fg z(@D_njqyhtx^x|U`9R63!s?=o{3-Lni4No!^7kcT59zFra96SvK% ze(VJB(k)h3IWyIe9xKFwo6z6Ob&ZS+_zO4hI4l zTL&E$!*RL&7_4w+J%)>*?R4`vvJk)+J{|Wz7ny;xmTs_00V+cJV(N4my}k&Z`msXL zH}ky7?%+XMuwlB`wQ~>>ILQol@#Bj#y-&F zg95-W?PV2*>e)Ctf1c*CiY24=YCrY!zo6NeZtjd~G{<}?Kxh$>s-~uZ#XPr(*ZvO@ zYe5)uLgoz}%)6}+Q$E;5{aY-rY|ap zX9$*u!+)TX0rzw>^rHch5|msEEy(SC0A_2jaPGD^HB_>q1Ey-&)uCoMRYE#qK^>rR z@=~^HpQ-?te;zM7~2CJK8dFA2oi9ZIL0X9pc?ds9K{{wvp zkObZj`TFfx{B~lAHPoz9>eU>z1`NlGFGtiJzaYM+3xO==zTlfaXdwkC4JG#bWJK*D zeOJ13vCYrmtoN?8&E9$NU$+8&$bCS&FnF2t*)*% zSm&DrR!uYHp+JFgYAx`3Y#@(Xj%YsjFT94I`#P_5cou*xnzu6oQIlZpbjocppsP9)LZ*30w^q_EcXZpbp$!@K`tI@xF0+m6>=_xbRo4K7DdRJVx9L>8s zXiIuxCc5Y1N$7r829RW#R%em3Ps9BoeP?&Wk?^#jto5aA-bIO-xIXb(tB^7wW{7!` zpHXI4c{G00{q1`r7&G=Z`$xGT?!edEiwXm{4uZP+C^+jD#C9}ROYxwD%Y=4L(jc5d z;vSpkbnZ6KH)8iAc@+~m-F`5Tv6cI`r2=#(f2!Of=8&zA;i^a_ozdR(&-Gf@%CYVe z+jKg3M}SVEpNYr%ww3n7q~u^>n;%fmy^$Z+G0FhFQqzs3o9EW*{Bhcn*S`XM8oPZF z<9?BzkFZc|DNtQBC&fcG>HJ;o6>bxIL)X02hb09hdUX-^6MSgPW1U7YckK65x^99W zg7ywsN4q1JnON)B`ucbVyS~rwuLg{DAzXhv#$iUa*P(~xtPk&6&D+t{(qAzfaTZ+Y zUd4h_YRJLX_5<6fe*S`Ta9WZUkb)oJ8+w(1*sy<|JFdmwsQV=5b@>6v7sX|jm+ki^ z+xV3no~75AuiuGQG5Ps=Ip^Zm%>F@%Idns=)3aLa3!`^mut!ThuzhJk_ z!m8RcVv}7r-ZeVl+>j&CwbaKs!77i=HIw~|=6^Hag81J#zdZ5A0y6PcbZ_2qkG56! 
z*b`9_uo#z{a_FB9Y7Hm#3&}8D4|MNt_5Qx^O8gAI=d@@-4~|{=uBaPwZFR>HG6mUi zg^(vWkUl(-Nojcbut`mSyefO&ra^OH|{PIV3<9RVvo zv2(tc;^eQ7R{K5MfEbONd-E>plN@T+FApO86_|+{W1n@X$T#>^zg=Y+lRp-YpIDzQ zwKdX!D_bEF`_DUMa2=R6a{=)Q%_QfRMjygGvgz9OlCRf?Cj8>#n(m zRdwL0NNe7LSdSzUK)~^Ptc(3sj>-ZaG)=RfMk@yHf+uh=$CDpz3^&UDkREf3+Ldwnb;X=Z`yg)4A#$9J@Rq?a8hb$BI<+JX0> zAN|}J9wzzV<>qtUqGEFcU1Gv=#Y~)NwHuE8v4_FMOpFmjs zv=SeLW~iyF(&<@Nm4RPyzwd9zGf|Z}|9dOtRT|+V!plF)4pKHl`Jq-dJNJro*g_n) zl- z|Bf9%*5ExfZ?%_O(p!#N=?4c=9$`Y2s`*iNEP;t25M#qd(U`;4<|BDHf#S+*Uq~Kx z=+B>TVZ4p9r3{Pjs~GV2;+j!LApt$;%gD4VLJ*Jp>hhg5_^Ncoi0snOhN>uE`uA$p zoWh}~8&O6tG9J#VP*MbO*)>GzEe-2kMftBTeU=$(Wtwt1Yvpo#El0NVUoPOv=r{xG zAINq1)|1qbQBK+9 zTA9h;+;pygAV43y7vlsOiSoTfgVN__A}IdH5R4lhRM}0h2<=r7WxWsD@q|L0G)J6$ zt4}1|W9Z&dtqp;pj@+pDocL*?-`&np2dnW9ADPnKhL4UrJF@$~U%=Y|wSjI)r+Qpv zJdHe6N!>1W401ph!LsO<(e`CRY*Kv;Y#G<1qg=8a#>qCksxa`4=scL@tGN!&yJv!u z8jk3VjQjS|r6`HAD=c&bHNI%WU%0nDqHRQeBym7pAAmkfDROf=bTr;ix~=kc3}xbM zX<5H`OuN2ah;Y^U-ev>S*SocpcucttqrqcOLQ%aUsvBuvIG9UE^!e{o(t{=dHMI=| z>?UBmI}4tdlN)hk;6FT5=@UMFp{AjW@`#qN0UeiF ziLYgdUY3TROXe{uSe>DIDR#D_WBm>Lp0HO^m#SBgf$gsG47MU$gWpH>yZdp+zfyY; zTI}Ew8(syJx;F2_`BO@*dk+lZrZ^9r%Bwcyr`E@gPjR;|uL)br$U>!Q5y{X#^Ee@# z{UR1L6AkBX(q7C@;}LCNwN14>qMU(E54Az=tk@G8kPc^UD=LC~L>MBiD|C*E^$Qmf zz7p93QZ3z(%t!el$M61)!H3b%CKYS_-#`&C*i4V$$U31ez-Wf=3WKo4AH&x!^9X3{ z)K%5ut;@`YMvgn|Fz#CT3S#Lv8g1NH=LZbeCWEzpxH3A&9wS7~Goontn*(B25T8P- zPw#WQ%(1=eWPQdH;{6q;l+K;_2^8e9w-C<2FPy31>Dk2I}6c7;v=^(u$ zDn&{}daqKX8G4Tqfr}uZD1?qkQ+n@3DWP`=9i{gcLP&t*J1BU)-tYGhPacJnv$M0a z@4Pen?jKghQ0TNoeg1p?#s^oXM?`|um5R6iCl&FVyVO$=1zz3o>*u<8?O2`p*Vya~ zfvh=)whdU7?gXsNgCoHlZqbGAF@B;PSkz=r+l!;OG0`Hr9vvX0AcI+Yt^atp3c`j zYqRD3SLg1@e;@r`kFp7D9Vi`GrvImOPynDqdjE&=022U}?r4Fi(2a{VtX}Z(iSjBK z(an8-G_x{fM9T|%bs&lCAlsFvisF%riotb@>>q$h`8D)VPF`h@uy9yM+RuQurv`?z zr-+*Hxk1@8D|mNxm4BI#R}os*VX+Z6`*Qw&W~^eDX;=UlBmUvtk-GM8-*d^DDZi@o z#t}c;^P#@B73Ci6?GAI!(O8)K7ENxf^NNRC(vK-zb6|Tq_@1_e5np#++%QkCYn3v& z*1Ee}f{xVD(}t-CP&LSk;K=6UF=S*}Eyz!5t0E2VWdz?{y&kY%<)%&3oFa%M7m=W- zVFs_#uw;LY$DIJSy&3jN9!mH*bBEjS0`s>4SVP^vL0!n`3;FPWO*O*kI|tjM1f1+=bKex zT#f<{)SjKUr8Wy@F1edSs?E9h+Ou}sH9_yNGK_RtTFE~Dz+$PuFS-mn{Fz(b(^JcG zB2ov|w#xF0qFa>x<`_AvAB9Dmha;9R!S{JXBLA-?`j1<5=^(YOwWpBOicHJRbveH23HRb88*T=SQW# z6X~;rfiZ2so{kjyp_1JZSbw8E`(~K0O#`Pzcc0m^XPl4$pY>P30mp$8a zw2y!~>bm5wWzrE=k1iTu<>5lFO5ko3O!e84{pMG{e(mT(z1_JBMF!}&t|7r9y?-r% z8@xz@Doa0NA>fxd&#%TkNP4pf#4+K>6*PIYs>YXGYL-&H`}}M}b_|`pbqeFR)gB1q z@2DY#?d<`2)cK~XyH6cGgs+jPwGTv_E&cXQ|F%f@ zhT*6Ehm}uVrCXA-EVlpkpMb|AS(78?4-D1CZ}O55fLd-k_I{D2j+_BNr#0qXV@0rH zY50>uX=MW`U`dnRlCvicA61k7VcAR2qf}x+OW!w7jAd)MQB+?==<6Oy{{jO!La7@3 z5*O!MVJ4<=+4i@#F=#8`qG@rotQ!tLR{3JI%tSn7^j(xdM>VR(W1-KyCKu+W-$KCt z)Xd?%q`CmLQp(V%!b_*fNZCFytV9}#_=RYMn z4EZAk=i1ziiAyjdXk=jvv1@*hJ2SkK0ukbivkGC z2|!rYoj(ZOLvmQN3Z~)G0D5vR6XEMXqHB=jYGnp8>$8tIM@@ms{1xnW^uHMwRu7_e z*nsXeTgyM2DNkaKi3ZeEY)F#Xu*7D(tbcHH?#~pv`8F`mG~hHA8516;?ScX&0dUR5 z-7`}>VCCrqeMN!50%{f@zKt%Nrl6&4&wgR{Gi#!GkX*Yl;r1?UWe1}dmK5P3Gd=E# zRi1w4o8uG`*t=H-Yk2nCdK*1=fxn~k_1bFa(QXADp@i*p?ln9bM324l zGUBtfXC#@787{iw3Db$&LB$R?E1#T{fH6VgIR(+!hPEl)AGPp5{q?7x*RKNWe6|vu zmNoAek<0w&Xq}mX!O|Hj16R2_L;VYDqnR&Geif^WE~tCwA!i0A@objd35xmlDcJT_ zVpX$+X3y%OEuH(%JT99`gq=7sA8X=|07C!8U~M59{`FjjE<*Q*@kIf_xv&Dz_*aj6 zzprL%lyEM|rTSZqe`GxzD2$sk4&zvtt>?S1vT6f~CY zt_;fBorgCJ75{{d^Y;N{C0Y?tde`I(A&$|ux{GQElM`BFIPI$6!ngFnxb0_hyq)I?hY>ghd=&ak{UM9iEa=XKo7q1k_xwHjQczs3DbxhK*bICb)4W^ z7&R1bm;Z)eSY%;fi11>^CRl!A>Y2s#XfXD490ca!In5EQ&FpvUd+pqf|33p6mLq6b z9Xw1QTYNG0$9;9M)aSH)ET8PT3C6UO&l@IG2CP9u&547{>=@w!LA9ch43Q!?!JitP zj|qZdu(==_x`iaHY8~DP^7(<3*GJQcQZ0Vxsp8V|MgD#yLV4z<8tO>nq 
ze?VYm_3HKdDRl+s1BA&n{?#}|2-_AQn-`raUH$1-2srVVAKN{Ui_lnRxkCQOoCAyr zA~bJm8@RH&PG*A8dz2~)GZK1;Y1}W3_{eiF>&Bv}uWqR8(IaIxi2STbI1b`MeYdHa zK~EQ_tWsDVObflO>}r;beq*dmynx(xZN7O46Zb=arF=Z-4MKGkJLVUHNbcU62 zvL6CJ*;qr)1N#46{01)`ut4u&Iueqwc6{&85vn|b_{z$_5)W33`@6pM$}npc?jzUg zf!fvmT&_^zpM6Wf;2o{Li|cda$oSX4Mww^utp>&Hbdz^K6~#C{iI)*e*L}K zJAdXjLYIe2T0Jv>u$oT^t^9yl=Ax+PpSeEalx;dgnYlq9tw@(3!m`C5ti!GR5o%*|!@xuIBt$eaDSjCX& zvqA_k6HxHkRpmbUQe_{9=0m;dgQywOvC8^Wir_Jz#VA@y<9o-Ph(!0wnfbxBuXKBI z(^mhz1Ae8gHxjiswvvL^Mn9W; z?`BnXM`hY`=SOh2^M%W^G8t3B@gw|tvF=C`-;gXioz;v&F1Z#R}Oyc zU=3Md0wFOu0Px}&m(nj){|y5D@|d%Bwop?yV|37vZ^|Db<)6q7BLxZ#Rx@I{x1hRL zGH;qlk2!rO>n>;C%uvxvK0h_#$>tCVXHOqV)%>2po~aGap*+nHwV-+7Da^xgx+Iv4 zwK(UT#+!~RW>wVz)HReh_QR5XQXD0m_4>Qam95prrQmMX`5MjM5^-i#9j|H+f6@)* zDJ8BZ*Qn-%cW^k1RBYypSRSQnNwuaY86!_~!j&a}3Xd$p7XO~2?!5=vi+)JNsKPO> z9F$T!seQMqbJTC?SyNhO?Hqo#QJ>t3*(g3!_`bEc+j!osuhLJ(IU)Fi=iHh1(_TI8pDC;eU=*;OD+7hz)*?Z)uBKavj-;K<#Wr#5g)1)>GoH?-o zh>SlG78GEdxHknL;!osD_{6BQly0}lrIFFJoKA0)21u4rQV2_fRg{W-oN zO((YbxNV)cfxyagEhHFApELx5@>uG2JCqk!=DPCum--SeJMS@2xH7Z{m3Eh|{n_OC zGro@LnZPCxHG+l{Rq8-lmG?g|Y)mHkRiX{BB58u^lZutGVWHkn+Y15(sfVTS;dr-d z8?DGepmDa|;=e5e?S?OQjEJ+~#MFevAQD@&=ca;Q^yJgO$MGooux)kRrzEE>`P;HG zcO*CAqvqM|{$T1`+;K3tRujf3X3uFF6Rb|I^2$1WKX5Hpbie05W#EBqid5OIigMP3 z2!%9zf!gIxy)hI;cNQyCv44NYd3wK_zygb;K)I&cXXTwLQ`I=t2)eb)ysKol;D)lp z332_^)d|~@&DXkfwFQgWYZwk;nB*p?s^55|DRY+b2J zj_UDLkPt_T?usP)R&gUG#Yt(!q>5yq^R@q4mkJ3*=61t?9!cSv% z^6oek%;_}5o{N9C&Y-Gs_cME{aI0I1^6qLQOt80mAt+9UWvi6yYGwYS*5F%8b zHTRAzKHc#1^Z6u}WwAqp{x8l0c;Uk61g>4=##JPSY6dGbtEt;V#6_*y#8dSYh;DCJ zrMd;LrDty>L3*Bblvhm&QwVs@Al7#r_lr>F=$q2V`+`Xy&@mfdT#f+(41f$sMy74d zhvyS#v&Tn=h^XGCNK`raGaD?5ee(IaQ`8R-;%uJs$9eEW9n%{6g;$E3F~ZqkJAV*g7Ffe9c5P7^*VY(EQ5HWh>m`MDEv&7ygd~Vq z6WB^}IuFDt8^x7x6VB5juI#gJHIOra#(8>Et#vEi`*PoNYJS_C3s!rj>2}8kuK4-3 z_1gD<;CAzhso4J=bVeqeydtsXmnu|VBFOe-JUUh?$7N2!immD@Fq!n zzp=yASLs`bu$f=9<8z?9x4fc9$>K07HSF^IM6?_#O7+ZnhR)bPAHYQQR|r*U8{)m+ zI-6ebEOTz#=0xP`8{+RG6z7f+Q~J%#gAVM{rw+TmPC}dzeb-2K|Fwm>cjg<$GaiVb z+K89k$^$jGTpeEykwLNdvVAf1j?(wD|No{Wpv>Mu6~6K;4k(~pQfL47PST?dY!CZh zPqIVmSI@6Tr%e^iZLc&hA@#!q?VO=X>Fr2zN2Q${>nD_9S+Z}!RjsKh%OOU!!r+bv zWT4r>wL9&^jYiT9Y+~GlPag~~-6j4L{yDR2disCdHbsk>E)4TMuPv4qZ ziAjrW?XdXICp#Y6O{UY#5?w`q79GZKQ;n{ty`I>PpYhni@LwFH7oQxsx%TBMvYo5> zwYkFK1-GWa;JO$bs#)?k&gX~UE^3*47;}+sD`myg%TMxgy3Rk2vctO&Lp zXOec@n+-0y!_{%uDX2t*k%uU%c?o9{b9s8~p34F5@j^zJm_l6viGdjZxYcSA>=(WH z?+;iO-i{Z`lDdY1O#)SK-~Dk(%Z>)%t%*i7?Kgq#p{#Rs>`G1+ZAKDb!~XhyA5S!q z*7r|UtZus=1>qf|IJSZ+xNZDw-+Dr8VpJwUN&iz#pViPtN099)uK60>=JvU&0&)D5X0Y20`A z-N@}4a9yc{&EF7RQ8|gky!!;*T~w1|`(v;y`T^~AaaA3gP?&EsQT$v=nefLd!Ejj{AlDI&-0~o9Utlq_12d>ucAw$ z%t?V08rMV*D;w2l-B&tCraB&mDQFaY+1Rzzj5KZujwy5)ni9o)LdO=IcSPuMM|$iS zS?#-ibJOioq98`u>PqW&)_U%~xKDXIM>uq+U=Th|L7{mzpKJ|X^tU5xrJuSt!)~b; zsU~5mO4dzpTrE1k2kp)$6lLj3dI&7V>2NAR*^??KfGueXMh#C3hXM@`V)WoXoJmr9FI zS-{3$P%O_iyOMMTAnD5w8jrh1flgzy!b7=f$zUN!eSUwFLg~x_%|H#YzKvtsw>JLV zeFOZRin=m0D2i{VxX}&kpTto*Rh5D0e8s+eo#4-falDU}M%VL>Npgd=t3-c~&<^o{+K+(gEV0KO+rJPshHH_elMgbBd zmG;#=WrL16tOo#3n!A1nOpC5e}GZi9)hdYE>G! 
z-^CJ$mEbD{xYrq@-{kT1htexKuCObP1`pM{IZjR=)zriJcP0TqqYw;(3qQ4Ryy z^L2|}#dkGwmBOrYz8klHDe-gP%cjfQH@|3L8(~gCUG8hCq2N5d%JToR#4O zZV$#vIa#Ef=g=~c0{1ui0VOA{$XU~1xXT;6Gfo=4Jyy&Y7p-$p%`Yppl9YXV(d`dMNk()^2uJh{d^)aJ7!>jw*jieM)AmyU> z6VlA8Bhxo)qqj7l@2mtfk)vF{-wEc{qf|J^xu4^FeGE4;fYP&Dy~owu@iij6)L{0t zUf5V0fHCxYZG}r~Eo*tV#7TpuZXV{n&D;oybU^X<8|o`Bt9WtsT=;{Rr#*W?NCW}M z2HKyw+8-l>L2SrYKDBavZyn-!=BlyUT5)3{gQU8SbMTj+ISq~Cw^pT^&pOjxZ{!vS z(qtywx1eFj07^|EGg5DU*{QDjozKZWn#A|Jq6@>aemBUN4X3o@+b}Nc@LDl4HX?EyZIxh$@P_y4WSUj=?B5lx4INV z`<_lxP@t8^*$W1RnO7CV5X_U&FtirYm)xl@EGsI6AcYGQ3%9h&D&?dj_nI30DS~cU zmT2;|EiiLWI-S-GT~iCLQ}Hoxq+3`dZ6PhJ_toco z%-9){qFL}wQ8-CDk_x}&)7^EiUp5xS-#S4NuBu#lD9ci<(V)1p@-*PLs^yQdy8tYw zfabgWEUmh_o^+mP22nkOw71gh;!3ph55W52?F{ZDw(hRSnWGNl=BxzXU^h0~`UV5-QBKuXkrJ-2Co8X@=AQ>Ke@>p~tOchINhuWDIJP8XeLj7K*^fo@=bx|8 zRO`f+*F8zG$&##Q-bzv-R;|F-J;BMd;A|sZHNc(h&vn(B#PICBrJFXsiSh3;^VE<- zqkd!w2-(=ARw4(aWu`9Mg;k{0O41jUAhIFYR#@2;y^|!z*cnNiTU04s)<#^ z>u;zJsg%svp&YYGX5wW4{`VTmUbUUFQ0b`#i`*6P=&B5+xrhzaP`Qt!non@JxJ^?FF9|O%u_x=I+w6uC zV=uxThHRR<5Daq(_p_$pLmQ2bQ(r8q-hN!?!#pIeNc}DP{)R=NBUCh&82GKOWoz1_ zgHhsbtbg=W-{2*JR<<5BSF|daj_-JZW1v2(MvAj<$2z$8px9@(>8oh2~L9qBO zT-Ns&IJQHGP{I3@5B9Hm1r6pkDvfjIvKgL==590qc~e|>_dV}gz9i`hqvVBR1^0x| z3bhch{?$Ur83cNfW$mkrpI_vu6IIAxF^Bd%=j^Ht?JmvXEUzK8+(zyKA>p#uYD~R^ zZ(NF0diO(4piFkW>6?&3u{AR}K*N=f*K6cOSB_2C<$lTAhYr!hjxTXmVLes@+?qF> z-+9A~qWqHR1wZ*wU(TFzmD;!|U%ET}Er(S2kJCXX5O-d|1Xe?b`0lq{8*(=Yllmv+ z3{>R3zanG8{(@pC%Uw#{9)+yc+%l)q59POTgc^!!5T%V81hwr>1C0HYipxZo!q>7} zpKlE0=(9i+nk8@Xg*!gYd|`yPS=#mw-R}gt1MGddM6V}iLiX^{_oqU7*E+7P%+$P! z6tpMne9kIFFCdF|?qAR*=TR7UeTn0!8i3cUP;3}@<_C3WU7Fuu_A3u4@#Qw(i1)Ca zzQ(nE;m-WujnR6cYsiQOJ#Ggh5*sSGJO#pj(Mblo5jWfhw{BpQUtdRBS$uvYF2y6s zkdT=KjV<<0b?*P#udj#~cxef|3p>u++e7=EJo0zh zgWXhP1za0dQe9Ipo#C*vyUIV@UFfn-2gd4FVqE4F z<)PN}0=YPao;$&Y(q{+*;{@Xp5<*VQ5~f(qR-m@>Il^*+0+srjMGw4wAc3!H{HR5T zba8Uo!zaS26Y5B2}YVF-#V&-_OS3sfGMbP?r_J3*|<=&Xh(30yzob9SX9 z`raVYv|lqUNVh?RFdOeOr# zT9-eVejl9W7U5{X=EoQ8*mD!paL1DFfk1Th*$sIPVgMymZ3>XStIv>eDO)P$yCiHFeY;3J)MH3_brd z=?GOfVQeL)SIN)#DA5-xpWH627-|(T(i%Ia*hOzuPbr7ptAtiY{Ta|FYP4nsKq<|%riPnw=i>oeg zm*5XXS85|oG^5SpY%F}!*}I1BvQxL|B3yll~Y#@arUEAmXD%=xkYS#k*M&TP7BuO zVOSk8r)01nHHth1vKGsHfZwp{@Fhpdg|j&3iwYhxyv^*6lqzv9f*FIV%V!?=-o)T; z_Rhy1I5o4SR+z&V#A;c5l@O&0)=B4Wum@N1@`KKI9sp1!x*LX_+l`qqo(#iE8B6aA z*r343w=Ac3eszZ(?>$0SGE-g{mH_Ret^8ELnJ%ph^s#1?Qi zY*AHfZIS-u`K1RJXk*}j zLD5{u+`)UN-06j={q?dbKk>&F$Sk&O%n+S4e}JHI^w(GhBUX6%MaSI@(PG>!{pv)UBGkwbx)m)qZyY#yYMvi`a&?Vb=TLW6C zE=Mo$!B4wg!rfW~QPzhqqYWE*h%VwUKc67P_DtWgxx#yTgTmJaC0JQ2r?E8#F3wJO#Wb{=h*Y<*1wZw~<ex1iP z-_uf*sZP!|}jwx@oIIy!4y!$3IlVDS{%_XH{bp>&Xpd(g)mZ4!U@5pSUYEbnf(lN>9SP zAWP>ob&&7WRWa6y_!S~*gUxDwAov~@v5HW0NO4q&aP!57ObiqLdnChW2xf4^@%A#- zA%tf+&l!A3yH6Ptd%wzprNTGmI_m@#|93nKj$f_m9K1L7%8I@yo(XC87-2G3?bsCp zQ67iPeYu<@edBN*P;`)#A4e}cpqgJeYcs6wxz;ewASg7y)|Rdur!W^LzTI{KtQd@p6dz*Q}!w^(=Xg=`}8VI$VOA6tTqu+)0&)!mdtL# zVt3BuQngRHv~W`Q?O=+Q=GLDJu_Cjkx2u{ace6Ph;6cCy$VgHvoNB)twWrVaR2R~-lq{k8 zq$vg+8?*Nvh@5^NLpzfakY+?34%Kgru0+`H!ZXtgCq)x?vRBv28=r-tt!;mIGk`U4 zAOsF-#wpn_5vFrLd)7{=obvY!RCg?n9H>RUHYCl0WakQJN`A&<@+Ztr&U^^DwNRkS9R>-}CnoR-D2DMMSW2kzqt@*523{YyRK-ci>aZu{Z;- zgiuj$f|a;|E9Y(2xg@g!1blI;zFctM*^!uUQlzXE9@r_-t?9s-SeLvFU3a@XH+<`W z+?)QSK6NrZJ+Eab$yQUiT;ALd3qCi1T3#Ln+4WMutVdFPPR^gnL=%7>7OJf~?+xx$ zjb5Xe5`qhHq+-7Be3DM~!U#4yx-G^mJwk^S+k;x`Z6?hJ0T%n;T`4DiUymiIQkBBY z8Y^zK$una&YgqR~%#hkxvRD)OWSJqKst?uAiE_!8T*Z%Iw*n)N2IeTL^bnbNY8(32M)r-_vWf z5}s;P#QMn5vS*hn(<~OX|K-bM2HnY~T}-}bDR(&WHxomFY8^m=Jrs9-r<#QwIJ_CI z@0z{TPW?pY@UF!F;8QJ)O%^_7Bzko>6065=-}2>5sd%-QpN%vHU`CSi-A>j-kLlok 
zzo~)GbDn>&+yJ|p3$PviC}DA^C%H&|lVw*stn@))tTGy@A_hv{_$Zz@W5l(s*;`pw zOoKp0z?n_s2!ez+ycpHn+=Hl)IQQo2+@|uJi=TuE$ECKKa#%*E_l&JYYHPa=d>i-- z-SW)dl?1Zbk3znzcNYy?3Ftl}_B9^53GXhZ=|?aIS(=LLb2?0t*!X1xD3aQ!zbl^8 z@Fqn9^mgn0ivzUyq<*t>tdvji(Wd3pmPxO@3_)KdZu-eC)>A8?Rb-sVS;9)p>Nk!4SVC(v7D|4%XEBoL7+Ft)`dI<>JmcD_{w8Dz;Tjk&O~?RmipWJdN?ydcu5-~ zxxA=?&345S+0w^`6-zTA)27}oFPWje#%b(ynL|K)$(2n5J%9eHmWND?&wdC1p>SElRle-c)losI7ct^@IP zqAiTeuGvJW{yV-Ov8+@;*K-(8Zq8*`z6>zI_9Pa96MB?!tLj;KHQFoIcQv2oEg`m@ zAU5(y_c!x{TyHhN16idshz!P(AdX_49$Wp;C8rke$ki-E7Q2X*%$*uLf81cnQrzup z=?tH|`)|JK=S;;|F|B3pt1>)&v;FNTXv{{`a@)17jAqOoXwrd?DhL7t5V3`f?VaqT zS~|>#q}=*Q_KablZ=K~mHx0f=f3E07Ig<}@0L>Kt)ICK(T>t7Nb z#cA=?qHd{;?&3!F=dVt*0P120-l2jL=2M<N~U-^_^R%JBH zciTbpI|Xa_tO0nx+taZ_q}5!BUsoG7lT@86))3rSt5TUTo{D@t<;9frCw_2Yden5P zY;@O75V}=!i3aXUXK6(V&-pvSvsRZAJr!9?*>Imh&$l-IJ1uz&|!4A+$wtD^8Ww2NJ*WF=E50j@ws>EiG@JM2T}Ua z%9LIm4^}bWsPSr~-|&Fb(mS;<&cxm``WN^GV7^QqksI2}l}a8swsEeQt;54wNXIy6 zl+jvETlJCr#_jia$;X9wa0VmH}>pzp8AEhr8 zUnGUbXMKl(UGN?asVKQIc7cPrz!}RLZ&AbCFD&z){+=znw(9)^;dw`=B5!WN9^zj$ z0kkzD+;9iFIFC>8XqLn%>&Mn*4%b8kfh|{i?XjZ3GVNj1f{$(DK=-{ru|~ctG$1D< z!u05Lxl+@O-cnM4DmB)k#VyvY;ff^OFyJRq0cq5RSD=P5?gI<{X<-d&#EZw3@Q&*G zDGTDf(XZTHt;2YF(}Uh6=Y`AIK?eF7+3BLVogCR%vf)%v0uJbOm0xtv+40nvUTusM zUXt#>kTE|6eJpiiUqIh_DmkSjV8fb@bQHdt%yZ)IzaxFZ(yIE2b+@vt^{S0p9?+>y zpu1JP$S&x**CHdF0df%LUW-f+mqL94b*@Tr{*ZAviQPx>rAIFL{P2$=nd~=Pu1LB> zu<=t-w*gaQx{{etRF;{yuw%izdkN4&&*<1ns6t{>Imw+70EhqjBp8kSvk#L4W&hslPB)>Abc&$rN8GlVX zJkpv(DRN-OgpI{eqi9}|18i`jW-xF3?cT^VMA;;cteeg?y#IC%PG`bdXi204Lqbs{ zw5YwnQ1b=zJ_C1~haGX6B95{z8w^r8Fq9lg^%YydGlS)(_v3aZ7G2`@LxkEE=>A{3 z&+B~yGN_6tLVJZ*daHK~99;(I-8Lp`W~j=_o5K9bq&(onQjZ`M5C3V;Jzk1-WYE_R zEaTS++!T*!V`KZ6#kX8=!K-eTkzM=)=xe&R>U9hj9YNmZ@bF@t{`Wx8?vJ1?)lseq zHmxdU$GS}f#^DVba5Kxe8mi!|hGvq#WXpn3BFf4iN%@y#Q-%tT4mQ0JYT?vWRzDe_3*$45w*r_6UCO@kY+p(G z$tuN+=-C5e;U)Eqytn1>$8}Y+TTV*d!d@N@st&tw`lNSKh=&6V{bLU=cV0mJ?v}@v z0wObTFh;$<4^29j1JK?{D@N!_@i5VO7=4u+8)HFVlk*?(2GE)~;AlB8(idmqwSnf9 z#BXjMVQp_;vF;h#+RuyX{aOJukYSe|LM)`XiLS`VI{*9!A$tZ}F_SPXLrjgpVZ|JY zlKIndfsdVWH!sG0v=v^~ilrg|fK0Ge@>GECOqqVRGl1<%xuQNt%+E6@BY`B-tIV+t z3d)rd`MNSE-154|wShajb0ldhA3n*7=L%XTh5Q*@dH_*>9B0!*G4jk8?`%s#3%R&?iq@Bd+-AMKO z&iGDj*FGl8V2)k)JIEg>q~48cljp`5W_e5;>OfRGdS`<9yv>kG zTba^0Ctn!G2v$wZ!ZN`&g z5bX?36s*^>~LQsRxOwzTdWp4*YEW0Zg2vJ>cF853chbtJPphE?Ja)63!vZc z6f_bfg~7$lJF@T&b|`}zFUzlre1AzXxxcJkhyA{g;66!aW%+$Z*LuR7j|TMg$64r; zr_+7z$5;%5;;qV`ki^{oMQL#0&&Fcs5nOOTbuIC6cP z3;{ZW0Au69XQBTOsklZMW-FZS{*kIMWiC_LJ2Z>*?Ot|cb@}Ud#bdl+Vh<%=cy{<< zBz>*)YQ8RGXHkE~E5r3Nxo!5Bq7S{T#ReI+3`fFRLMSM@^*x`xUSQ}hfPltLmkbWW zuLYgnGt!9dz+pfVZtk8;g#?skfRf?Nazu=i~mkUD;QS@o`I0r%==!Q8C_3XlXi zN!OrS@IQ!Lhyut`en`pEXnwQ(=w>juv7lYE>*NBMj;K!13mc3+Pz||CX3LjOc5j*V zw2FS5>_j@r`OFn&MW3smD=rxn|+Vpb3$|%gfLC6|SArW+;n8v#-bR@5- ze2&rev|N82PhyqC@Ns9?uU!KWXH1Ude0vE-4;<<$!Kpm$riu2^b0TYo89)8GVVK|u z*ORzPFib_a_!L5&j8|d?JvH}{nt2FsOhmkhNkSxicJK93CcSC9J>3wjPJx0WPZ*|u z;V*!0NUm>XP4f8&XqZ`Xq*RD{h5EUo*C)cKsxz+t83&*v6f|>z4I^XkDJgF7l0x%J z!!OqIx%;RM9Mi*T-=G*Ify9y^0I- zkCW|BwPbatHHS|XyQEQh?<;Fv9MEkG{zWWzGyhlZpz_ycXnUoe#n z%V%igjipK|BjzEqjbqUuhfN)Hl7xvr(skKgSkyR!8W8#!*tqpmedX75oO}Qbb;Rj3 z<{DNRB@(<@IqXe6muqR6?xirU8XGgHcPvqO^6S9ne;+!*wA+7_QO2F~VWRI+65x8J zD-4>t$){}HNjQ3iYSo#sQz+o3Rmt$T51rhoaIZ2pfl;&A$%{6~t;w;@(ER-H&>s0{ z3^^cyj=5$EgUkAqyzUI3NOKQp#j-GqCj^sH~Ye~3!Z7mEo$jZo9!{XugqbUevWRlRLDu8 znW-YhF!x!X30s$z$92+UsrcL(xi7H5Qv;iR~!S&z)RQRVo*y-(6kJvSI8Lw^BY- z?kg@|YrWcEELpbn*WsTyz5u}SzGo;d(bUy{?Ym+~-A~8tIG20U+%ZBeBBh_XXr3h- z3mUdCMZw*tV3|`Jq=2T6kgcW*5-qgy4oR&v&I%M>v0;64%S#%zHRHDvP$!Jzza1RV zE6erlw8bfQO*$Q3@5*M4nEpt-0Wbu3udnA=$7dZ%<{|F5L3V&TsD`0oq 
z!DGI|>r{Uc16k@P#wk0#-fWz?@5HmIV){v!`gl~HS+A!)zBsp!sKSpf0%41tTkg!7u%ZUwxsuA!l->Bl!uj zK(Fwv^{WkcR*-hQfz^*sSSYRY`5#IgLkKfy%(9#P;-($@=wEbRFaBh8x1&MCBf#{$y%YlIiu!dCnWJM}30 ze3VkoYdQ>FR1ONV8?tG_aLi$Nay%yHZRRrfmvVbaq{v1@ksKL#_xnZIe5 zeO}*j{=iDh2tVvC(?GwPge!YkUHiR!ZL$W4^in{UR@_8%=yhbWzVx%7Wn(+@#!2{R zbvuJpd>YLx4_u~af=O7Us?P1-+#KYbP#>J^2aVFV@z~?Gv&*d{q@lekTm3b8AyC4 zwQR5sbCmuAy5-#j{uimx%@>^%?gHxkBuXS=ISm1%Z*6~o{Y`;`#a2 zlf?I^YRmz$8~=y}*K*Y=ek*5?M4_PV&~0thwPdG8>$+nsvP^M2mAFa9eeG!dMxZ8c zui(BcyL$g8*1>Kaox@vts>=b0YQ{(3suffdz9Vc;*78??K-{MvrF!AmH>h+pkjC9- z?3}kY?ubALGFTEqKb@-<<`)K9J2=c8qCIc}S`&wjB}wds6UBgXgNB{=4ql;DS2}63 z@QWK-aY~&WbPcSGvhTw&CDaXAJg+@=0jMI$INr0Y^8PF9S{7m3$%E~xixs%j`1xg4 zi8-{C5q>x0SFNcd5r~3st2onjD*}}Y-c25#dXq_3hgRb4Od~@>MDbxN*nuVNA@ZL5 zVp*k9&4-G~cZQBH9QwGUyBrk^N?LFD{t;u3^E&`gU-7AaylzE@BIC$Ouvn3oj=R#W z^1Y*9-DlhXW2!#;V_kf+!PZCVahDA7i6wGk2A@sEk?uU!*lNU+cR&kLWCcX1Nq`~o z6hvzK#-EzvrwAmkWPTMt+8`^hh-5t}H0~<5EupJ23@QPvg)j@%%wJ9pq~89zGqsZE zBG|!~v%XLa+v@{5Zu=vI7&d~sD4y5YkwAfZY5K&6&gARiq5yY)6fDa~10@&mMeYpG z9&0Y(;oTj5-V~G?j~>|#Kf1DVe$_0v;O1=|^1~SGE1Onk0xaOpZe}@(k8feHbhA^s ztE8u%6qM044XfQ{dYOK+X0;2yzY5Wgelp+Hk>H)-*R_HY;YTMTn5nht{3;59zT)$TpWIyrQb^n?nv!}@+!Ob^qg+}6tw`-k_W{6-X66N>FyoFlHE z4Cu$Ge*8GA3E+Vb+i&ThZ?A0XEA;zaKg?uqI&d5XnSL=m08uZu__bYEbUrf%$u$Bl z>inedVqDSmW|p|}lTxKzYY~-dIFAo!$;WWuZn_)M1b!qoVLd4!b7-$IIG=Ns#{Rw~CChUVaV6vZf6NHkmXge_2OiV8+G_IA5q`qkZ zuX(xDdCzKCF$Bu~G~W#dqPEjI8Y+K@DnMnVNHlD~cd7b0wWUa|ojm7K;B(*1cGFYL z4j<^GyKZ#HT#H*Kt{4rPl}Z~H^vBS=I+~>a8u#ptp8F;PnE+w?-A3i&la=-8&yJoS zURkYwD6dIRw^Z+;$V#5)X!A|kzSwJ!N0cC~E9Lvmt7<4mAITWonk*9jpT~M~J>n#C z-@X>p#Q4aJK&$a-Nw;Lu2XKu~7*~^N6L6FvIh^%bMK~YkP9um>g+rrP&{!zK-*pe; z1wy^PAUb?e{pKa@`VErAT}wI+to8-_zlBWq70Y~+Px+l+PZuXUBvG%&k#4D1J1L(Z zlN1b)QmGoxBld%6JH!U)ws17Pgv{_hl8IQ`lx1${EjH zKK24A!KKW;YqkUitd5!IE=6h_|9yTyeLh!MD#Ro^a7P6)j9?O5oE$Nhe&~}?rP}#; zHL;w;>gVeC$C-Y8>ZANy?jt_qMfe*R_>(^42V4xgWeyY1+0@ey)I68(w7zfC=|sMr z{;PAJ2{i?4gZ;Jd<38K+Mw}29e)9FdO_Pik(e2b2O$U~H$;^bS;l<8A>&5SzRW?OS zuLNP(I7M!NJ{%miJ7jn(KGF3{6mK76dO!PM6dtaNzgc3U-^8)`sdTZC>UZDW}kGC8@tmnqFQJajl4^>QsL@BaSalj(xR#R(t1Xzkgr9wf+1_{?#bXlB` zy_iNgwFP)Rid_raz}W@=|16eFaHa3UyGRSbwQun9oetixv0J#%9NpK`?PFJJm59Xx z?KA&J*H?!{)oyPKf|Ls43`mDa4W)FKDAHX+qjX3&hzjVCN;lF9Lw5_(-QC^NF~qmW z_eDMD{Jwv@#*2x)pH=s|SM0YT7b_j1Lt3q36aiz?EzyQVg`+Lca>_N`DHa~Q6=6Y7 z+~++dB356vx2(Nglw^CgVD$KL&}mhO?QI5TAI)*=%Bv$S(dBmn(a&7t1Dwr2-4!C* zL2J2I3LkCkz@#UeZwUHKgM{)t<3J;}+wa{WHZldQQ~$i@=R<{D(XLdfmg;%Tgr2ma zmrtS#FhR-nswlaq?KkW{VPCr|Q^x!U5-$T32Ehi@*?eY--p>_|5~z8E;-sF6H9;ins} z51}}{{y9nZKsw;iHtf;)?tjIE#sEwkQgXKW-arjP9dKMZY{oE~c2c^s7$8;tHl?>) zv3t`q7{`88Na(Zhj**Pe9oLQ)L%)*;%1q)yBhQ4S?Ah-fmdPiiQYlEHhIz0IV_#}M z?h>;W9+`(CM&AGvDWfese&LZK@KSgG5atkBw&u&A0`BSb0ov>$_&dMr3t9}Smr#ZT ze<-%xO$&J1&m@UgVR~LHb?Fzy-9v*T3WOSLwPbBbS@aamW)P+<1^0BFQcF#W_KF!0 zp(;g!DsQRE6lqs5nf_MdUQ{Q~jZZ>h;~TE(C!{1cqrSFxodyf&K!_IJ7OH=P;D4X% z4WHX+ass}F#)STO{dZ?|k;~B@8<(Xa=w$0gGLP@L_O%$_I(yKFhi1p{GoMy(+a5)DpqxOc zW!Y`CTdRIdh;WxMY2ksI8!B?42}rqmCtKXXqlyD-N2W-4>bE;oKyQ}K=E-0pcGV6W zCAl21u6(`sd?8a6P1S<|*hllr#uY+Qm$V#>3_3N!5Kh&P9ijl~XyRw}3vi-zz4En2 z-o!6)0e8~qK`W@caCE@SHO*EvZQepl$G2L0VZ}@#+*{uEq$1#2tp635Kp7$%!|=M) z4G`47#tqyzsi3azZ0P9ds0LHZs7ykZ^B^TgCADQrWMKw%JFj>W=dOSB&o_$5p#h%r zz7wqk;5jqQqV}JRuIX`dP0hoIWYqJ;^^OYmY)&8CUk+6i`V76-btu%lG=L}U%i(W& zGyUAAEJ-Z3+JGvlc3o4N5mk-;Bv*y6dClm`IuX3$LPc6Q&%X-;(?YAj5ngPXblneo zDfJ0)(-2i*o^Q=wk$!jZ_AaQn0@LC8|8@!Sn1Ok@4~T3(VQACzbA_QlrS3}#$s^xg zkg;PUKpJpz7J7S0)v_=9Rx~;QzxKo_vl>>H#)Fan`R7|Owd!rCx{_{H<5lUdy!DIU zazK1&5b_B5Z65!1j+A(B-@r_R?^0P+hp)&b@Hwhg->(olLgFPwD3o7pBmY?5cgl%O z(Cy&TKwix2f3=;`6s4;C*Z04tQpiplF%B&cYo1OujSvl1dT>@4BDqtzxRT?{3!R 
zr8iltW5jJd+tk~CKR++VVItc2X8VE5%OayT^68kIn=8Ln(-JE%n|zmy@T1(YM263K zojCA70U~a=p-1U(kH9bSqf;s5s&WzZ79$8wrcwF?Ei9b%WZBMuF zqe@FsBlW^Z3`9SYVglK`dbd@oEh>J)rva~=G5J*_B;;=e)i;gB+PX>KXzFyQodo7E za=%%t_2VC-{Et^4_!}X550%k4<-XQqZmV$)mQB~Ut-`K*i!tr{Rg`wFOQ8rX-4$N{ z|M=7|=`5y-a8M97#R67iik2hUcSy9oY#6j6d-Cy~-oV%KADQt0dKL4b9A}vmj#UmAy%c}6>OWsmTKOpVlYd&i zp(F2V;OKZb&f&k5gVlQ4XL{NuwpJrA>dTi86;kY%oq*R&(f*HDA@k@p!EgPYDk3=Kg=Rzok z2UB}uRj;rje5{ghWA`DVgC457l;kKrSLUoLG?Fkj%MC4Hw-{c(NI zEK_jB3;Uo}{n~8y2OYir1OPfcYT*bh*vSj)C-pXoy;uvr)&MjXHZQHEl24`wQ9g9m zpYv-)OnzJ#O%~5(a`ATR{L<5TF#X6deX?eC4=X=J(C1tK#1sB!|K^~m_zz;6wY+1W z=~e!dwO;H0U*TghrrsZyUT^s1mwY+ZIOGbK5>R*}f^XJ_0na9f)3o!ya5?Qa<1=T_O_!kEiWLuaw9ylrFA!0UAp zhSAgsJK2uw!m@zn{wsI8hYCw63z1hZ22F$emIVpy;iRu8DX|sdul4=|UbKrC344De`4sS^35Tmzi z^l4kr|9%>c8Gxh75^aEM*hMp5ogLbhOLMg>K7;ngYHyqv;XDl>u(km>gXx|G{*AFXoGKrnA_;Q&F zIzpK8Z5cS~e(Jt)_FKV3vBK3O${y62ERiuquGT(#O~u=Kz7fo81BmwaJQ2ed=}}3vH6r&`eCTqr1CCQh zOyZwZ)x#&idVR(%Nf0VOfF6|DvU6{{PcoruQ=YEpjDAwH^_SZSmGEKg;)E*Z0x7D< zF7}|Q1wl+Hgk84Y<@%&3gGCC(ViD$@1-8w$^x7zhH}a6=)7S=G>8?es8+VQ^kEl{s z7&gUSw#M0WQSecJBgZxqjGpAmTI+zt}=z=sySl&nwV-k$|@vj{=_F@G->8 z8lc?_LR={ww7(XwGCd!FJqJfYrd1ZdjrH{1t%$8|tM(qc=m2@KJi9Du9z$5-158=9kZQ`x@to{b6g;Ys4PPL28Kn4?COf2tuHz>-N=*U6*zvsT$+c0 z&$_+vzv!7&_fWLX_(`gDEx}_#$GfV_bs#^*wHZ4<2Sq zA1W#e5Te5FdY}#+L#S-YI#nQ^;jO_^z$)SD-}GvlV!C| z)xcNOJ1v3pB4AR4avTz+2fp4BVaM>|#A*R^Q<)`Bmn&ompsajq}Z^{E( zi6=4dqCwXN6JQ8X0>1MC(q8^EBy(VI7POGt0yldKRLd)=#Qiv&mXPTZz)NQ-raGzI zYjHA=&81Gj9UdG+JG@-ZkBs5S3(R=56)uXwJ2o@*I~~2gBS67KF`wcXMlj){put|u z5?L+7Zg9U8Zs}I2&MeooO=}Qx6v`>h#MgtLJhe?t38B^-kI?!sBRoG5fHQ;*SzjsZ zP4{cIf;%2ufz;v)+$YJ~a4$7=az^zZbGdhWMSgE?Hs5>)lhLCrYzQ8q&lxq7m#8h9 zD6U@;BCIHC7R;tU<`oJv@Z>yEhh5QcqGuO@BVZQb=M1KpKklEXV6X-`H?$NqaxC9z z+5x@u43u}26n4x}8m;R&+EI?79&qx(_eeCuZ+-|cw}spOnyY)=wDz`ttW0M%<*UEd z5SJDglu(kK^z3C9?FZg6AvF^IQ$Jp-4GT(Z-0FE zUkj^7^LxZ|dY`Ne7eB+7RpASRuJU3dhCKK7(bFkc%>w8GiuJXGe|^HogODBqu?Otq zHUD;b>IPt979%VJW$zXd!J2xtn5&i4J?S+3JG{|hNE5$-+GjdAeN3pO?9-YlCY{*y zBY=R}O?Lo}VkGJ+L=P`^xZOFY`*E_V9uW2 zDNrga0);T-(9fD+*TI8sY3=&haQ_JQy8p^xQkb)42=?vvG9T0^M8QMe!0)jHZP(^y zM(d*3k}L65qYQ!K4I}u6wGc4kRD0twu?g#Zpz^5i+*;`Z=WH2F%*|&30za<8hET6( zTIGorfpH<7mt+ze@NE3JT}^DnrIf>x511z&h%tEbvjk+M@cg_lGEj_1Co_I08fD7h zWBSYV)-=byS!M9i4uUF%2F8BG&F7>Szd$NRdEY0U_&jU=z07Bt2Nrd8)uFSb{?~`= zQ;%LN>}a}UTHM5>vr%g?+ZOgHut{x1VqGrUWt|BD#s3wJlJn8O6`wq8 z*L=l4Qnkc9*c=-OBI6&vL(I|2$*OTO47I5MGT(AwppuP5+TiX#zp$s2oST~)0yFa~ zGrCP_h42% zPv$@~9ADPNhOoV~`2nMUYe^Q-DP%U?Fh1$0ku&iK)(owx)9zF1(=^aH{}z^GU0X_hRCg=)7{MZ6~&6 zPk2r@%?p&klAVu9^iLR4T#}n70w~lJ_D5hZn;no<~3V&35p8Wn&YA3JycP&OXRj2rkRGHhtFR7YK5ep&tc zJrPPupV%9F;TWOsK-Ugc#2&yk{gJRBSvlObot#{UM}yLS>Qx-f}xDHP{XiXPCv%w;_TlAX{IIWYJkCVA7ZcIAz{4gGN8* zv&9Y{F01kKy*<0Ki=&Au>#52L7xZei9E~|$?+eG{^ZiTKPdr!W`%`aFc3IJ9j6wt_ z%WW4XA4O@;cLw{3Fkp}NL;&48jgC9kHqchjqMasOp_6*SgI;mDrsIAs1prh{hEj=7 z*8AMl(bPQit}Uzp80?+lTpfTegsu$dXf6O_TI}53mmXyf`qKr~|v`oqO2!KHvY>=l!MnnBtTXDRKs#(sD@Oh;}Kzn>+6$ z0hQ3`f*EQpN*AdY&)}Lpjzq{bS-@mu5sWSJ#Pg0*wQ!9=n(@_3Y+tal#cfD@MG|$E z6CD4vuAMSV^(9@;{n%!f`R;?JfG_O^yA(|VO)Yl{)XF?T{E##5$zz0jy*lxgP1ARn zb{T~kidxo_upL`>iL-b=w#-cMkw^76jbXf`jZ(F~;<0jTZ1$=2IVXxK@|665{Q)bo z0Wy)R&-e3pEZmKH9V9}9El8-gEXJr61J<6>^tIsyPk1bV?~Av}N}&;>laC~%!{EfY zXb$csBCe9Z>3$b%Z4SrLZAHR_uK`rdC77K&oxPnd=Rx>wu6t3e&G<+$F@Q6S| z5ET=RNg>r^iD8&pXWM!_o^hemvfo&-awx0}FV7Rqpi*E$tt5C- zELF*P_f)rlXH3Xacn{k`TChP%=KsQbrYLpj#cy>nP60Z~0ey35;S`1eo?w*vqAA+W zruU_AJ}rvD`ZnGG?55VsqN%*L+|K3qu52TxTJLHK*E2DV_LpojZxSqZ$-g{tsc5E1 z=h6Ix>&vm_ZdTmD;$@TBTC9mOe(z2nM-=z~p;Yk9K;q?A7ZLE3$fAaL4z$l$Osf75 z!MF^YK@?+_Uur6eP426eA89vP*>>Av`cR^C(T`1GL&k|!+!a6OM05T=b4*qJ$-#Ef 
[GIT binary patch data for the PNG image files omitted; the beginning of the following hunk was lost with it]
         r += 2
     return s[l:min(r, len(s))]
 
 def clean_documents(raw_text):
-    unwanted= ["Technology",
-            "Getting Started",
-            "Trust & Safety",
-            "Community",
-            "Resources",
-            "Skip to main content",
-            "How-to guides"]
     all_lines = []
     for line in raw_text.split("\n"):
         line = line.strip()
-        if line in unwanted or len(line.split()) == 0:
+        if len(line.split()) == 0:
             continue
         else:
             all_lines.append(line)
@@ -73,7 +64,7 @@ def read_file_content(xml_path: str, data_folder: str) -> str:
sitemap_loader = SitemapLoader(web_path=xml_path,is_local=True,parsing_function=clean_text)
     sitemap_loader.requests_kwargs = {"verify": False}
     docs = sitemap_loader.load()
-    return "\n".join([doc.page_content for doc in docs])
+    return docs
 elif len(data_folder) != 0:
     if not os.path.exists(data_folder):
         logging.info(f"Error: {data_folder} does not exist")
@@ -81,30 +72,35 @@ def read_file_content(xml_path: str, data_folder: str) -> str:
     # Use langchain to load the documents from data folder
     loader = DirectoryLoader(data_folder)
     docs = loader.load()
-    text = "\n".join([clean_documents(doc.page_content) for doc in docs])
-    return text
+    return docs
 
 def get_chunks(
-    text: str,
-    chunk_size: int = 512,
+    docs: list,
+    chunk_size: int = 1000,
     api_config: dict = None,
 ) -> list[str]:
     """
-    Takes in a `file_path` and `doctype`, retrieves the document, breaks it down into chunks of size
+    Takes in a list of documents, breaks them down into chunks of size
     `chunk_size`, and returns the chunks.
     """
     chunks = []
-    if len(text) == 0:
+    if len(docs) == 0:
         raise TypeError("Cannot get chunks from empty text")
     else:
-        num_chunks = ceil(len(text) / chunk_size)
-        logging.info(f"Splitting text into {num_chunks} chunks")
-        text_splitter = RecursiveCharacterTextSplitter(chunk_size=api_config["chunk_size"], chunk_overlap=int(api_config["chunk_size"]/10))
-        chunks = text_splitter.create_documents([text])
-        chunks = [chunk.page_content for chunk in chunks]
-
+        text_splitter = RecursiveCharacterTextSplitter(chunk_size=api_config["chunk_size"],chunk_overlap=int(api_config["chunk_size"] / 10),separators= ["----------","\n\n", "\n", " "],strip_whitespace=True)
+        docs_processed = text_splitter.split_documents(docs)
+        logging.info(f"Total number of docs_processed: {len(docs_processed)}")
+        # Remove duplicates
+        unique_texts = {}
+        docs_processed_unique = []
+        for doc in docs_processed:
+            if doc.page_content not in unique_texts and len(doc.page_content) > 100 :
+                unique_texts[doc.page_content] = True
+                docs_processed_unique.append(doc)
+        chunks = [chunk.page_content for chunk in docs_processed_unique]
+        logging.info(f"Total number of docs_processed_unique: {len(docs_processed_unique)}")
     return chunks
 # read all the files in the data folder, then split them into chunks
 # generate questions for each chunk and return zip of chunk and related questions list
@@ -112,10 +108,10 @@ def generate_questions(api_config):
     # get documents from the data folder or xml file
     api_url = api_config["endpoint_url"]
     key = api_config["api_key"]
-    document_text = read_file_content(api_config["xml_path"],api_config["data_dir"])
-    if len(document_text) == 0:
-        logging.info(f"Error reading files, document_text is {len(document_text)}")
-    document_batches = get_chunks(document_text,api_config["chunk_size"],api_config)
+    documents = read_file_content(api_config["xml_path"],api_config["data_dir"])
+    if len(documents) == 0:
+        logging.info(f"Error reading files, number of documents loaded: {len(documents)}")
+    document_batches = get_chunks(documents,api_config["chunk_size"],api_config)
     # use OpenAI API protocol to handle the chat request, including local VLLM openai compatible server
     llm = ChatOpenAI(
         openai_api_key=key,
@@ -146,11 +142,16 @@ def generate_questions(api_config):
 def generate_COT(chunk_questions_zip,api_config) -> dict:
     all_tasks = []
     chunk_questions = []
+    question_asked = set()
     for document_content,questions in chunk_questions_zip:
         for question in questions:
-            prompt = api_config['COT_prompt_template'].format(question=question,context=str(document_content))
-            all_tasks.append(prompt)
-            chunk_questions.append((document_content,question))
+            question = question.strip()
+            # avoid asking the same question twice
+            if question not in question_asked:
+                question_asked.add(question)
+                prompt = api_config['COT_prompt_template'].format(question=question,context=str(document_content))
+                all_tasks.append(prompt)
+                chunk_questions.append((document_content,question))
     # use OpenAI API protocol to handle the chat request, including local VLLM openai compatible server
     llm = ChatOpenAI(
         openai_api_key=api_config["api_key"],
@@ -170,17 +171,20 @@ def generate_COT(chunk_questions_zip,api_config) -> dict:
 def add_chunk_to_dataset(
     chunk_questions_zip: list,
     api_config: dict,
-    ds,
 ) -> None:
     """
     Given a chunk and related questions lists, create {Q, A, D} triplets and add them to the dataset.
     """
     num_distract = api_config["num_distract_docs"]
-    p = api_config["oracle_p"]
+    p = api_config["refusal_probability"]
     chunks = [chunk for chunk, _ in chunk_questions_zip]
     COT_results = generate_COT(chunk_questions_zip,api_config)
+    logging.info(f"COT generation completed, total num of COT results: {len(COT_results)}")
+    completed,refusal= 0,0
+    data_list = []
     for chunk, q , cot in COT_results:
         # The COT answer will be used as the label in the fine-tuning stage
+
         datapt = {
             "id": None,
             "type": "general",
@@ -190,8 +194,7 @@ def add_chunk_to_dataset(
             "cot_answer": cot
         }
         i = chunks.index(chunk)
-        datapt["id"] = f"seed_task_{0 if not ds else ds.num_rows}"
-
+        datapt["id"] = f"seed_task_{len(data_list)}"
         # add num_distract distractor docs
         docs = [chunk]
         indices = list(range(0, len(chunks)))
@@ -219,29 +222,24 @@ def add_chunk_to_dataset(
         datapt["instruction"] = context
         datapt_copy = copy.deepcopy(datapt)
         # add to dataset
-        if not ds:
-            # init ds
-            datapt["id"] = [datapt["id"]]
-            datapt["type"] = [datapt["type"]]
-            datapt["question"] = [datapt["question"]]
-            datapt["context"] = [datapt["context"]]
-            datapt["oracle_context"] = [datapt["oracle_context"]]
-            datapt["cot_answer"] = [datapt["cot_answer"]]
-            datapt["instruction"] = [datapt["instruction"]]
-            ds = Dataset.from_dict(datapt)
-        else:
-            ds = ds.add_item(datapt)
+        data_list.append(datapt)
         # decide whether to add a refusal example where the related documents are not provided
-        oracle = random.uniform(0, 1) < p
-        if not oracle:
+        add_refusal = random.uniform(0, 1) <= p
+        if add_refusal:
             doc_copy[0] = chunks[random.sample(indices, 1)[0]]
             random.shuffle(doc_copy)
-            context = ""
+            refusal_context = ""
             for doc in doc_copy:
-                context += "<DOCUMENT>" + str(doc) + "</DOCUMENT>\n"
-            context += q
+                refusal_context += "<DOCUMENT>" + str(doc) + "</DOCUMENT>\n"
+            refusal_context += q
             # This instruction will be used in the fine-tuning stage
-            datapt_copy["instruction"] = context
+            datapt_copy["id"] = f"refusal_task_{len(data_list)}"
+            datapt_copy["instruction"] = refusal_context
             datapt_copy["cot_answer"] = "Sorry, I don't know the answer to this question because related documents are not found. Please try again."
-            ds.add_item(datapt_copy)
+            data_list.append(datapt_copy)
+            refusal += 1
+        completed += 1
+        if completed % 100 == 0:
+            logging.info(f"refusal examples added: {refusal}, total examples added: {completed}, total examples to be added: {len(COT_results)- completed}")
+    ds = Dataset.from_list(data_list)
     return ds

From afcc874590a80d49b371e718452479c8d52503ce Mon Sep 17 00:00:00 2001
From: Kai Wu
Date: Thu, 27 Jun 2024 16:03:31 -0700
Subject: [PATCH 30/35] modified readme and requirement.txt

---
 .github/scripts/spellcheck_conf/wordlist.txt |  13 +-
 .../use_cases/end2end-recipes/raft/README.md | 196 +++++++++---------
 .../use_cases/end2end-recipes/raft/config.py |   1 -
 .../end2end-recipes/raft/raft_eval.py        |  42 +---
 requirements.txt                             |   9 -
 5 files changed, 117 insertions(+), 144 deletions(-)

diff --git a/.github/scripts/spellcheck_conf/wordlist.txt b/.github/scripts/spellcheck_conf/wordlist.txt
index 0f8a57a60..c790fa21d 100644
--- a/.github/scripts/spellcheck_conf/wordlist.txt
+++ b/.github/scripts/spellcheck_conf/wordlist.txt
@@ -1350,4 +1350,15 @@ SalesBot
 Weaviate
 MediaGen
 SDXL
-SVD
\ No newline at end of file
+SVD
+LLMScore
+RecursiveCharacterTextSplitter
+TPD
+TPM
+Tianjun
+Zhang
+distractor
+distractors
+frac
+numRefusal
+totalQA
diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md
index b5a601980..4131a80ec 100644
--- a/recipes/use_cases/end2end-recipes/raft/README.md
+++ b/recipes/use_cases/end2end-recipes/raft/README.md
@@ -1,91 +1,84 @@
 ## Introduction:
-As our Meta Llama models become more popular, we noticed that there is a great demand to apply our Meta Llama models toward a custom domain to better serve the customers in that domain. For example, a common scenario can be that a company already has all the related documents in plain text for its custom domain and want to build chatbot that can help answer questions for its clients.
+As the popularity of our Meta Llama 3 models grows, we've seen a surge in demand to adapt them to specific domains, enabling businesses to better serve their customers. For instance, a company might have a vast collection of plain text documents related to their custom domain and want to create a chatbot that can answer client questions.

-Inspired by this demand, we want to explore the possibility of building a Llama chatbot for our Llama users using Meta Llama models, as a demo in this tutorial. Even though our Meta Llama 3 70B Instruct model can be a great candidate, as it already has a excellent reasoning and knowledge, it is relatively costly to host in production. Therefore, we want to produce a Meta Llama 8B Instruct model based chatbot that can achieve the similar level of accuracy of Meta Llama 70B-Instruct model based chatbot to save the inference cost.
+In response to this demand, we're exploring the possibility of building a Llama chatbot that can answer Llama related questions using our Meta Llama 3 models. In this tutorial, we'll demonstrate how to do just that. While our Meta Llama 3 70B Instruct model is an excellent candidate, as it already has excellent reasoning capabilities and knowledge, its production costs are relatively high. To reduce these costs, we'll focus on creating a Llama chatbot based on the Meta Llama 3 8B Instruct model, aiming to achieve similar accuracy to the Meta Llama 3 70B Instruct model while minimizing inference costs.

-## Data Collections
-To build a Llama bot, we need to first collect the text data. Even though ideally we should included as many Llama related web documents as possible, in this tutorial we will only include the official documents for demo purposes. For example, we can use all the raw text from offical web pages listed in [Getting started with Meta Llama](https://llama.meta.com/get-started/) but we do not want to include our FAQ page as some of the eval questions will come from there.
+
+## Collecting Text Data for the Llama Bot
+
+To build a Llama bot, we need to collect relevant text data. Ideally, we would include a vast range of Llama-related web documents, but for demo purposes, we'll focus on official documents. For example, we can use the raw text from official web pages listed in [Getting started with Meta Llama](https://llama.meta.com/get-started/), excluding the FAQ page since some evaluation questions will come from there.

-We can either use local folder or web crawl to get the text data. For local folder option, we can download all the desired docs in PDF, Text or Markdown format to "data" folder, specified in the [raft.yaml](./raft.yaml).
+We have two options to obtain the text data: using a local folder or web crawling. For the local folder option, we can download the desired documents in PDF, Text, or Markdown format to the "data" folder specified in the [raft.yaml](./raft.yaml) file.
+
-Alternatively, we can create a sitemap xml, similar to the the following example, and use Langchain SitemapLoader to get all the text in the web pages.
+Alternatively, we can create a sitemap XML file, similar to the example below, and put the file path in the [raft.yaml](./raft.yaml) file, so that a Langchain SitemapLoader can retrieve all the text from the web pages.

 ```xml
-<url>
-<loc>http://llama.meta.com/responsible-use-guide/</loc>
-</url>
-<url>
-<loc>http://llama.meta.com/Llama2/</loc>
-</url>
-<url>
-<loc>http://llama.meta.com/Llama2/license/</loc>
-</url>
-......
-<url>
-<loc>http://llama.meta.com/Llama2/use-policy/</loc>
-</url>
-<url>
-<loc>http://llama.meta.com/code-Llama/</loc>
-</url>
-<url>
-<loc>http://llama.meta.com/Llama3/</loc>
-</url>
+<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
+  <url>
+    <loc>http://llama.meta.com/responsible-use-guide/</loc>
+  </url>
+</urlset>
 ```
-## Retrieval Augmented Fine Tuning (RAFT) concepts
-In this tutorial, we want to introduce Retrieval Augmented Fine Tuning (RAFT) that combines finetuning with RAG to better utilize the custom domain text data.
+## Retrieval Augmented Fine Tuning (RAFT) Concepts

-RAFT is a general recipe to finetune a pretrained LLM to a domain-specific RAG settings. In RAFT, we prepare the training data such that each data point contains a question ( Q ), a set of documents (Dk), and a corresponding Chain-of-though style answer (A*) generated from one of the document (D*). We differentiate between two types of documents: oracle documents (D*) i.e. the documents from which the answer to the question can be deduced, and `distractor' documents (Di) that do not contain answer-relevant information, illustrated in the follwing graph:
-![RAFT images](images/RAFT.png)
+In this tutorial, we'll introduce Retrieval Augmented Fine Tuning (RAFT), a technique that combines fine-tuning with RAG to better utilize custom domain text data.

-For more RAFT details, please check their [blog](https://gorilla.cs.berkeley.edu/blogs/9_raft.html)
+RAFT is a general recipe for fine-tuning a pre-trained Large Language Model (LLM) to a domain-specific RAG setting. 
The process involves preparing training data with each data point containing: -## Create RAFT dataset +* A question (Q) +* A set of documents (D) +* A corresponding Chain-of-thought style answer (A*) generated from one of the documents (D*) -To use Meta Llama 3 70B model for the RAFT datasets creation from the prepared documents, we can either use Meta Llama 3 70B APIs from LLM cloud providers or host local LLM server. +RAFT tries to teach the models to differentiate between two types of documents: -We can use on prem solutions such as the [TGI](../../../../inference/model_servers/hf_text_generation_inference/README.md) or [VLLM](../../../../inference/model_servers/Llama-on-prem.md). +* Oracle documents (D*): documents from which the answer to the question can be deduced +* Distractor documents (Di): documents that do not contain answer-relevant information -In this example, we will show how to create a vllm openai compatible server that host Meta Llama 3 70B instruct locally, and generate the RAFT dataset. +The following graph illustrates the RAFT main concepts: +![RAFT images](images/RAFT.png) -```bash -# Make sure VLLM has been installed -CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-Llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001 -``` +For more information on RAFT, please refer to their [blog post](https://gorilla.cs.berkeley.edu/blogs/9_raft.html). + +## Create RAFT Dataset -**NOTE** Please make sure the port has not been used. Since Meta Llama3 70B instruct model requires at least 135GB GPU memory, we need to use multiple GPUs to host it in a tensor parallel way. +To create a RAFT dataset from the prepared documents, we can use the Meta Llama 3 70B Instruct model either through APIs from LLM cloud providers or by hosting a local VLLM server. -Once the server is ready, we can query the server given the port number 8001 in another terminal. Here, "-u" sets the endpoint url to query and "-t" sets the number of questions we ask the Meta Llama3 70B Instruct model to generate per chunk. To use cloud API , please change the endpoint url to the cloud provider and set the api key using "-k". Here since we want to query our local hosted VLLM server, we can use following command: +For this example, we'll demonstrate how to create a VLLM OpenAI-compatible server that hosts Meta Llama 3 70B Instruct locally and generates the RAFT dataset. +**Local Server Setup** + +First, ensure VLLM is installed. Then, run the following command to start the VLLM server: ```bash -python raft.py -u "http://localhost:8001/v1" -k "EMPTY" -t 4 +CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --model meta-Llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001 ``` +**Note**: Make sure the port is available, and the server requires at least 135GB GPU memory, so we need to use multiple GPUs in a tensor parallel way. -For cloud API key, we can also set it using system environment variables, such as +**Querying the Server** +Once the server is ready, query it using the following command in another terminal: ```bash -export API_KEY="THE_API_KEY_HERE" -python raft.py -u "CLOUD_API_URL" -t 4 +python raft.py -u "http://localhost:8001/v1" -k "EMPTY" -t 4 ``` +If you prefer to use a cloud API, replace the endpoint URL with the cloud provider's URL and set the API key using the `-k` flag or environment variables. 
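As a concrete illustration, the generation script speaks the OpenAI-compatible protocol, so pointing it at a cloud provider only changes the endpoint URL, the model name, and the key. Below is a minimal client-setup sketch; the URL and model name are placeholders, not any specific provider's values:

```python
import os

# Depending on your LangChain version, this import may instead be
# `from langchain.chat_models import ChatOpenAI`.
from langchain_openai import ChatOpenAI

# Placeholder endpoint and model name -- substitute your provider's values.
llm = ChatOpenAI(
    openai_api_key=os.environ["API_KEY"],  # set beforehand, e.g. export API_KEY="..."
    openai_api_base="https://api.your-cloud-provider.com/v1",
    model_name="meta-llama/Meta-Llama-3-70B-Instruct",
    temperature=0.0,
)

print(llm.invoke("Write one question about Llama 3.").content)
```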
-**NOTE** When using cloud API, you need to be aware of your RPM (requests per minute), TPM (tokens per minute) and TPD (tokens per day), limit on your account in case using any of model API providers. This is experimental and totally depends on your documents, wealth of information in them and how you prefer to handle question, short or longer answers etc. +**RAFT Dataset Generation** -This [raft.py](./raft.py) will read all the documents either from local or web depending on the settings, and split the data into text chunks of 1000 characters (defined by "chunk_size") using RecursiveCharacterTextSplitter. +The [raft.py](raft.py) script reads all documents from local or web sources, depending on the settings, and splits the data into text chunks of 1000 characters using RecursiveCharacterTextSplitter. -Then we apply the question_prompt_template, defined in [raft.yaml](./raft.yaml), to each chunk, to get question list out of the text chunk. +Then, it applies the `question_prompt_template` defined in [raft.yaml](raft.yaml) to each chunk to generate queries to Meta Llama 3 70B model, and the model will generate a question list (By default 4 questions in that list) for each text chunk. For each question and corresponding text chunk, we generate a Chain-of-Thought (COT) style answer using Meta Llama 3 70B Instruct APIs. -We now have a related context as text chunk and a corresponding question list. For each question in the question list, we want to generate a Chain-of-Thought (COT) style answer using Meta Llama 3 70B Instruct as well. +Once we have the COT answers, we can create a dataset where each sample contains an "instruction" section. This section includes some unrelated chunks called distractors (by default, we add 4 distractors). In the original RAFT method, there is an oracle probability P (by default, 80%) that a related document will be included. This means that there is a 1-P (by default, 20%) chance that no related documents are provided, and the RAFT model should still try to predict the COT answer label, as stated in the blog, "By removing the oracle documents in some instances of the training data, we are compelling the model to memorize domain-knowledge." -Once we have the COT answers, we can start to make a dataset where each sample contains "instruction" section that includes some unrelated chunks called distractor (by default we add 4 distractors). In the original RAFT method, there is a oracle probility P (by default 80%) that a related document will be included. This means that there is 1-P (by defualt 20%) chances that no related documents are provided, and the RAFT model should still try to predict COT_answer label, as the blog stated that "By removing the oracle documents in some instances of the training data, we are compelling the model to memorize domain-knowledge.". +**Modification to Add Refusal Examples** -In this tutorial we made a important modification by adding some additional refusal examples (by default this refusal probability is 5%) that when the related documents are not presented, we make the COT_answer label to be "Sorry, I don't know the answer to this question because related documents are not found. Please try again.". Our hyposis is that this will increase answer precision and reduce chatbot hallucination. In real world production scenario, we prefer that the chatbot refuse to answer when no enough context are provided, so that we can detect this refusal signal and mitigate the risk of producing wrong or misleading answer, eg. 
we can ask for human agent to take over the conversation to better serve customers.
+In this tutorial, we made an important modification by adding additional refusal examples (by default, this refusal probability is 5%). When the related documents are not presented, we set the COT answer label to "Sorry, I don't know the answer to this question because related documents are not found. Please try again." Our hypothesis is that this will increase answer precision and reduce chatbot hallucination. In real-world production scenarios, we prefer that the chatbot refuses to answer when not enough context is provided, so that we can detect this refusal signal and mitigate the risk of producing wrong or misleading answers (e.g., we can ask a human agent to take over the conversation to better serve customers).

-Here is a RAFT format json example from our saved raft.jsonl file. We have a "question" section for the generated question, "cot_answer" section for generated COT answers, where the final answer will be added after "<ANSWER>" token, and we also created a "instruction" section
-that has all the documents included (each document splitted by <\/DOCUMENT> tag) and finally the generated question appended in the very end. This "instruction" section will be the input during the fine-tuning, and the "cot_answer" will be the output label that the loss will be calculated on.
+**RAFT Format JSON Example**
+
+Here is a RAFT format JSON example from our saved `raft.jsonl` file:
-```python
+```json
 {
 "id":"seed_task_228",
 "type":"general",
@@ -115,92 +108,101 @@ that has all the documents included (each document splitted by <\/DOC
 "instruction":"<DOCUMENT> DISTRACT_DOCS 1 <\/DOCUMENT>...<DOCUMENT> DISTRACT_DOCS 4 <\/DOCUMENT>\nWhat is the context length supported by Llama 3 models?"
 }
 ```
-To create a eval set, ideally we should use human-annotation to create the question and answer pairs to make sure the the questions are related and answers are fully correct.
+As shown in the above example, we have a "question" section for the generated question, a "cot_answer" section for the generated COT answers (where the final answer will be added after the "<ANSWER>" token), and an "instruction" section that has all the documents included (each document split by `<DOCUMENT>` and `</DOCUMENT>` tags) and finally the generated question appended at the end. This "instruction" section will be the input during fine-tuning, and the "cot_answer" will be the output label that the loss will be calculated on.

-However, this humman-annotation is costly and time-consuming. For demo purpose, we will use a subset of training json and our FAQ web page as the eval set. We can shuffle and random select 100 examples out of Llama RAFT dataset. For evaluation purpose, we only need to keep the "question" section, and the final answer section, marked by <ANSWER> tag in "cot_answer".
+## Creating an Evaluation Set
+To create a reliable evaluation set, it's ideal to use human-annotated question and answer pairs. This ensures that the questions are relevant and the answers are accurate. However, human annotation is time-consuming and costly. For demonstration purposes, we'll use a subset of the validation set, which will never be used in the fine-tuning. We only need to keep the "question" section and the final answer section, marked by the `<ANSWER>` tag in "cot_answer". We'll manually check each example and select only the good ones. We want to ensure that the questions are general enough to be used for web search engine queries and are related to Llama. We'll also use some QA pairs from our FAQ page, with modifications. This will result in 72 question and answer pairs as our evaluation set, saved as `eval_llama.json`.
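To make the target format concrete, each record we keep then reduces to a plain question/answer pair. A hypothetical entry is sketched below; the exact field names in `eval_llama.json` may differ:

```json
{
  "question": "What is the context length supported by Llama 3 models?",
  "answer": "Llama 3 models support a context length of 8K tokens."
}
```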
-Then we can manually check each example and only pick the good examples. We want to make sure the questions are general enough that can be used to query the web search engine and are related Llama. Moreover, we also used some QA pairs, with some modification, from our FAQ page. Together, we created 72 question and answer pairs as the the eval set called eval_llama.json.

-## Fune-tuning steps
-
-Once the RAFT dataset is ready in a json format, we can start the fine-tuning steps. Unfortunately we found out that the LORA method did not produce a good result so we have to use the full fine-tuning method. We can use the following commands as an example in the Llama-recipes main folder:
+## Fine-Tuning Steps
+Once the RAFT dataset is ready in JSON format, we can start fine-tuning. Unfortunately, the LoRA method didn't produce good results, so we'll use the full fine-tuning method. We can use the following commands as an example in the llama-recipes main folder:

 ```bash
-export PATH_TO_ROOT_FOLDER = ./raft-8b
-export PATH_TO_RAFT_JSON = recipes/use_cases/end2end-recipes/raft/output/raft.jsonl
+export PATH_TO_ROOT_FOLDER=./raft-8b
+export PATH_TO_RAFT_JSON=recipes/use_cases/end2end-recipes/raft/output/raft.jsonl
 torchrun --nnodes 1 --nproc_per_node 4 recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 1 --model_name meta-Llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder $PATH_TO_ROOT_FOLDER --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path $PATH_TO_RAFT_JSON
 ```

-For more details about multi-GPU finetuning, please check the [multigpu_finetuning.md](../../../finetuning/multigpu_finetuning.md) in the finetuning recipe.
+For more details on multi-GPU fine-tuning, please refer to the [multigpu_finetuning.md](../../../finetuning/multigpu_finetuning.md) in the finetuning recipe.

-Then we need to convert the FSDP checkpoint to HuggingFace checkpoint using the following command:
+Next, we need to convert the FSDP checkpoint to a HuggingFace checkpoint using the following command:

 ```bash
 python src/Llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path "$PATH_TO_ROOT_FOLDER/fine-tuned-meta-Llama/Meta-Llama-3-8B-Instruct" --consolidated_model_path "$PATH_TO_ROOT_FOLDER"
 ```

-For more details about FSDP to HuggingFace checkpoint conversion, please check the [readme](../../../inference/local_inference/README.md) in the inference/local_inference recipe.
+For more details on FSDP to HuggingFace checkpoint conversion, please refer to the [readme](../../../inference/local_inference/README.md) in the inference/local_inference recipe.

-## Evaluation steps
-
-Once we have the RAFT model, we now need to evaluate it to understand its performance. In this tutorial, we not only use traditional eval method, eg. 
calculate exact match rate or rouge score but also use LLM to act like a judge to score model generated. +We'll launch a VLLM server to host our converted model from `PATH_TO_ROOT_FOLDER`. To make things easier, we can rename the model folder to `raft-8b`. -We need to launch a VLLM server to host our converted model from PATH_TO_ROOT_FOLDER. To make things easier, we can rename the model folder raft-8b. ```bash CUDA_VISIBLE_DEVICES=1 python -m vllm.entrypoints.openai.api_server --model raft-8b --port 8000 --disable-log-requests ``` -Similarly if we want to get 8B instruct baseline, we can launch a 8B model VLLM server instead: +Similarly, if we want to get the 8B instruct baseline, we can launch a 8B model VLLM server instead: ```bash CUDA_VISIBLE_DEVICES=1 python -m vllm.entrypoints.openai.api_server --model meta-Llama/Meta-Llama-3-8B-Instruct --port 8000 --disable-log-requests ``` -On another terminal, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answer from the RAFT 8B model with the ground truth and get a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with command, just make sure the port is not been used: - +On another terminal, we can use another Meta Llama 3 70B Instruct model as a judge to compare the answers from the RAFT 8B model with the ground truth and get a score. To do this, we need to host another Meta Llama 3 70B Instruct VLLM server locally with the command, making sure the port is not in use: ```bash CUDA_VISIBLE_DEVICES=2,3 python -m vllm.entrypoints.openai.api_server --model meta-Llama/Meta-Llama-3-70B-Instruct --tensor-parallel-size 2 --disable-log-requests --port 8001 ``` -Then we can pass the ports to the eval script to eval our raft model once our raft-8b vllm server is running: - +Then, we can pass the ports to the eval script to evaluate our RAFT model once our `raft-8b` VLLM server is running: ```bash CUDA_VISIBLE_DEVICES=4 python raft_eval.py -m raft-8b -u "http://localhost:8000/v1" -j "http://localhost:8001/v1" -r 5 ``` -To eval the 8B baseline we can use once our 8B vllm server is running: - +To evaluate the 8B baseline, we can use the following command once our 8B VLLM server is running: ```bash CUDA_VISIBLE_DEVICES=4 python raft_eval.py -m meta-Llama/Meta-Llama-3-8B-Instruct -u "http://localhost:8000/v1" -j "http://localhost:8001/v1" -r 5 ``` -**NOTE** Please make sure the folder name in --model matches the "model_name" section in raft_eval_config.yaml. Otherwise VLLM will raise model not found error. By default, the RAFT model is called "raft-8b". Here "-u" specify the raft model endpoint url, "-j" specify the judge model endpoint url, "-r" defines how many top_k documents the RAG should retrieve. +**NOTE**: Please ensure that the `--model` in VLLM server creation matches the `--m` in raft_eval.py. Otherwise, VLLM will raise a `model not found` error. By default, the RAFT model is called "raft-8b". Here, `-u` specifies the RAFT model endpoint URL, `-j` specifies the judge model endpoint URL, and `-r` defines how many top-k documents the RAG should retrieve. + +This [raft_eval.py](./raft_eval.py) script will load questions from the evaluation set, generate answers from models and models+RAG, and compare the generated answers with the ground truth to get the evaluation metrics, such as ROUGE score or LLM-as-judge score. It will then save those metrics and evaluation details to eval logs. 
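To make the LLM-as-judge step concrete, here is a rough sketch of the kind of scoring call the evaluation performs against the judge endpoint. The prompt wording and YES/NO parsing below are illustrative assumptions, not the script's exact implementation:

```python
from openai import OpenAI

# The judge is the Meta Llama 3 70B Instruct VLLM server started above on port 8001.
judge = OpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")

def judge_answer(question: str, ground_truth: str, model_answer: str) -> bool:
    """Ask the judge model whether the generated answer matches the ground truth."""
    prompt = (
        f"You are grading a chatbot answer.\nQuestion: {question}\n"
        f"Ground truth: {ground_truth}\nModel answer: {model_answer}\n"
        "Reply YES if the model answer matches the ground truth, otherwise reply NO."
    )
    response = judge.chat.completions.create(
        model="meta-Llama/Meta-Llama-3-70B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")
```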
+ +## Experiment Results -This [raft_eval.py](./raft_eval.py) will load questions from eval set and generated answers from models and models+RAG. It will compare the generated answers with the ground truth to get the eval metrics, such as Rouge score or LLM_as_judge score, then save those metrics and eval details to logs. +**Overview** -## Experiment results +During our experiments, we encountered issues with using only the Llama website data, which consisted 1980+ RAFT examples generated from 327K characters text. We believed that this initial data was insufficient, so we created an additional PyTorch RAFT dataset using text from official [Pytorch blogs](https://pytorch.org/blog/) and [Pytorch tutorials](https://pytorch.org/tutorials/). This new dataset contains 20K+ RAFT examples generated from 4.7 million characters. We combined both datasets to create an `all_data` dataset. We then fine-tuned the 8B model on each dataset separately for 1 epoch with a learning rate of 1e-5, resulting in three RAFT models: `llama_only`, `pytorch_only`, and `all_data`. -During our experiments, we did not get a good result from just using Llama website. We believe that our initial data from Llama website is not enough as it only has 327K characters and generates 1980+ RAFT examples. To increase our RAFT examples, we created another pytorch RAFT dataset with the text from offical web pages under [Pytorch blogs](https://pytorch.org/blog/) and [Pytorch tutorials](https://pytorch.org/tutorials/). This pytorch RAFT dataset has 20K RAFT examples generated from 4.7 million characters. Together, we have an all_data dataset that combines both Llama raft dataset and pytorch dataset. Then we fine-tuned the 8B model on those datasets separately for 1 epoch with learning rate of 1e-5 to get 3 RAFT models, namely Llama_only model, pytorch_only model and all_data model. We used Llama website raw text as our RAG knowledge base and the document chunks_size is the same as the raft chunk_size 1000 characters. +**Evaluation on non-RAG baseline** -We tested 5 models + RAG: all_data RAFT model, Llama_only RAFT model, pytorch_only RAFT model, 8B baseline, 70B baseline with the RAG document topk retrieve parameters of 3, 5 and 7. We used a Meta Llama 70B Instruct model as the judge to score our model generated answer with the ground truth in our eval set. +First we run a non-RAG baseline, just using Meta Llama 3 8B Instruct and Meta Llama 3 70B Instruct model to see if our model can already answers some questions without any fine-tuning and external knowledge base. The LLM score, the percentage of correctness marked by LLM_as_judge, for 8B is 47.9% and 70B is 59.2%. Clearly, there are some information that has been pretrained into our Meta Llama 3 models. + +**Evaluation on RAG baseline** + +Then we tested these 3 RAFT models with Langchain RAG, along with the Meta Llama 3 8B Instruct and Meta Llama 3 70B Instruct RAG baselines, using the RAG document top-k retrieve parameters of 3, 5, and 7. We deployed a Meta Llama 70B Instruct model as the judge to score our model-generated answers against the ground truth in our evaluation set. 
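For clarity, the top-k retrieve parameter controls how many document chunks the RAG pipeline hands to the model per question. The sketch below builds such a retriever using FAISS and default HuggingFace embeddings as stand-ins; the actual vector store and embedding model used by raft_eval.py may differ:

```python
from langchain_core.documents import Document
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Stand-in corpus; in our runs this is the raw text of the Llama website pages.
docs = [Document(page_content="Llama 3 models support a context length of 8K tokens. ...")]

# Use the same 1000-character chunk size as the RAFT data generation step.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

store = FAISS.from_documents(chunks, HuggingFaceEmbeddings())
retriever = store.as_retriever(search_kwargs={"k": 5})  # k was 3, 5 or 7 in our experiments

top_docs = retriever.get_relevant_documents("What is the context length supported by Llama 3 models?")
```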
The LLM scores are shown below: -Here are the LLM_as_judge results: ![RAFT LLM_score comparison](images/LLM_score_comparison.png) -From the result, we noticed that RAFT models are performing very similarly to 8B baseline, noticeably worse than 70B baseline when context documents are limited (top_k <=5), but then RAFT models performs much better when top_k = 7, specially all_data 8B model already outperform 70B baseline (76.06% vs 74.65%). +Our results showed that RAFT models performed similarly to the 8B RAG baseline, but noticeably worse than the 70B RAG baseline when context documents were limited (top_k <= 5). However, when top_k = 7, the RAFT models' performance increased sharply, with the `all_data` 8B model achieving a score of 76.06%, beating the 70B baseline's 74.65%. + +**Refusal Examples** -Taking closer look at the number of refusal examples (when model saying “I do not know”). The all_data model is more cautious and tends to refuse to answer, where Llama_only_RAFT did not learn to refuse at all, because the Llama_only dataset only has 1980+ examples. +We also analyzed the number of refusal examples, where the model responded with "Sorry, I do not know." The `all_data` model was more cautious and tended to refuse to answer, whereas the `llama_only` RAFT model did not learn to refuse at all, likely due to the limited dataset size. ![Num of refusal comparison](images/Num_of_refusal_comparison.png) -We created a graph that shows the precision of our model answer, eg. when our RAFT model decides to answer, what is the likelihood of producing correct answers. Calculated by $\frac{LLMScore}{1-\frac{numRefusal}{totalQA}}$ +**Precision Analysis** -Note that during our tests, the 8B and 70B baseline never refused to answer, so the precision of those models is the same as the LLM_score. We noticed that our RAFT models tend to refuse to answer when the provided documents are limited (top_k < 5), but if it decided to generate an answer, the likelyhood of being correct is higher. Specifically, when top_k =7, the all_data raft model has 82.97% likelihood of producing a correct answer when it decides to answer, far better than the 70B baseline of 74.65%. +We calculated the precision of our model answers, which represents the likelihood of producing correct answers when the model decides to respond. The formula used was $\frac{LLMScore}{1-\frac{numRefusal}{totalQA}}$. For example, a model with an LLM score of 60% that refused 20% of the questions would have a precision of $\frac{0.60}{1-0.20} = 75\%$. ![Answers Precision](images/Answers_Precision.png) -Here are some examples where our all_data RAFT can correctly answer while 70B failed: +Note that the 8B and 70B RAG baselines never refused to answer, so their precision was equivalent to their LLM_score. Our `all_data` and `pytorch_only` models tended to refuse to answer when provided documents were limited (top_k < 5), but when they did generate an answer, the likelihood of it being correct was higher. Specifically, when top_k = 7, the `all_data` RAFT model had an 82.97% likelihood of producing a correct answer when it decided to respond, outperforming the 70B baseline. + +**Example Comparisons** + +Here are some examples where our `all_data` RAFT model correctly answered questions that the 70B baseline failed to answer: + ``` Comparing interested question: What tokenizer is used as the basis for the special tokens in Meta Llama ground_truth: tiktoken @@ -211,23 +213,29 @@ False 70B_RAG_answers: : The tokenizer used as the basis for the special ``` Comparing interested question: What is the license under which the Llama Guard model and its weights are released?
ground_truth: The license is the same as Llama 3, which can be found in the LICENSE file and is accompanied by the Acceptable Use Policy. -True raft-8b_RAG_answers: : The license under which the Llama Guard model and its weights are released is the same as Llama 3, and the [LICENSE](../LICENSE) file contains more information about the license. +True all_data_RAG_answers: : The license under which the Llama Guard model and its weights are released is the same as Llama 3, and the [LICENSE](../LICENSE) file contains more information about the license. False 70B_RAG_answers: : The Llama Guard model and its weights are licensed under the Llama 2 Community license. ``` -Some learnings from these experiments: -1.Few thousands of RAFT examples did not yield a great result. From our experiments, above 10K RAFT examples is needed. -2.The LLM_as_judge is not always reliable, we noticed that some answers have been scored incorrectly. -3.The chunk_size for RAFT documents chunk and RAG document chunk should be the same. -4.RAFT method seems to help the LLM to differentiate the related documents from distractors rather than force the LLM to memorize the training data as we used Pytorch data as additional data to help our Llama chatbot to answer Llama questions. More research experiments will be needed to understand more about this. +**Key Takeaways** +From our experiments, we learned: -## Local inference steps +1. A few thousand RAFT examples are insufficient; at least 10K examples are recommended. +2. The LLM_as_judge is not always reliable; we noticed that some answers were scored incorrectly. +3. The chunk_size for RAFT documents and RAG documents should be the same. +4. The RAFT method appears to help the LLM differentiate related documents from distractors rather than forcing it to memorize the training data, since the additional Pytorch data still helped our Llama chatbot answer Llama questions. More experiments will be needed to understand this better. -Once we believe our RAFT model has passed our evaluation and we can deploy it locally to play with it by manually asking questions. We can do this by +## Local Inference Steps + +Once we have evaluated and refined our RAFT model, we can deploy it locally to interact with it by asking questions manually. To do this, run the following command: ```bash python recipes/inference/local_inference/inference.py --model_name raft-8b ``` -Lastly, special thanks to the first author of RAFT paper Tianjun Zhang to work together with us on this tutorial and provide many guidance during our experiments. +For more details, please check the [local_inference recipe](../../../inference/local_inference/README.md) + +## Acknowledgements + +Finally, we would like to extend special thanks to Tianjun Zhang, the first author of the [RAFT paper](https://arxiv.org/pdf/2403.10131), for collaborating with us on this tutorial and providing valuable guidance throughout our experiments. Our code is also partially inspired by the [RAFT section in Gorilla github](https://github.com/ShishirPatil/gorilla/tree/main/raft). diff --git a/recipes/use_cases/end2end-recipes/raft/config.py b/recipes/use_cases/end2end-recipes/raft/config.py index 91a01535a..8b9115f7d 100644 --- a/recipes/use_cases/end2end-recipes/raft/config.py +++ b/recipes/use_cases/end2end-recipes/raft/config.py @@ -2,7 +2,6 @@ # This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
import yaml -import os def load_config(config_path: str = "./config.yaml"): # Read the YAML configuration file diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval.py b/recipes/use_cases/end2end-recipes/raft/raft_eval.py index a3a5adf28..59dd649a6 100644 --- a/recipes/use_cases/end2end-recipes/raft/raft_eval.py +++ b/recipes/use_cases/end2end-recipes/raft/raft_eval.py @@ -15,7 +15,6 @@ import re import string import pandas as pd -from langchain.retrievers.document_compressors import FlashrankRerank def generate_answers_model_only(model_name,question_list,api_url="http://localhost:8000/v1",key="EMPTY"): @@ -73,7 +72,6 @@ def generate_answers_with_RAG(model_name, question_list,api_config,retriever,api if api_url_overwrite: api_url = api_url_overwrite key = api_config['api_key'] - rerank_topk = api_config["rerank_topk"] # Load the RAFT model llm = ChatOpenAI( openai_api_key=key, @@ -86,11 +84,7 @@ def generate_answers_with_RAG(model_name, question_list,api_config,retriever,api for q in question_list: # retrieve the top K documents retrieved_docs = retriever.invoke(q) - if rerank_topk: - ranker = FlashrankRerank(top_n=rerank_topk) - documents = ranker.compress_documents(retrieved_docs,q) # format the documents into a string - documents = format_docs_raft(retrieved_docs) # create a prompt text = api_config["RAG_prompt_template"].format(context=documents,question=q) @@ -149,17 +143,6 @@ def exact_match_score(prediction, ground_truth): if (normalize_answer(pred) == normalize_answer(gold)): num_match += 1 return num_match/len(ground_truth) -def compute_bert_score(generated : list, reference: list): - bertscore = evaluate.load("bertscore") - score = bertscore.compute( - predictions=generated, - references=reference, - lang="en" - ) - f1 = score["f1"] - precision = score["precision"] - recall = score["recall"] - return sum(precision)/len(precision), sum(recall)/len(recall), sum(f1)/len(f1) def compute_judge_score(questions: list, generated : list, reference: list, api_config,api_url="http://localhost:8001/v1",key="EMPTY"): correct_num = 0 model_name = "meta-llama/Meta-Llama-3-70B-Instruct" @@ -177,13 +160,10 @@ def compute_judge_score(questions: list, generated : list, reference: list, api_ judge_responses = ["YES" in item.content for item in judge_responses] correct_num = sum(judge_responses) return correct_num/len(questions),judge_responses -def score_single(api_config,generated,reference,questions, run_exact_match=True,run_rouge=True, run_bert=False, run_llm_as_judge=True): +def score_single(api_config,generated,reference,questions, run_exact_match=True,run_rouge=True, run_llm_as_judge=True): # set metric to default -1, meaning no metric is computed metric = { "Rouge_score": -1, - "BERTScore_Precision": -1, - "BERTScore_Recall": -1, - "BERTScore_F1": -1, "LLM_judge_score": -1, "Exact_match": -1 } @@ -191,12 +171,6 @@ def score_single(api_config,generated,reference,questions, run_exact_match=True, rouge_score = compute_rouge_score(generated,reference) metric["Rouge_score"] = rouge_score print("Rouge_score:",rouge_score) - if run_bert: - P, R, F1 = compute_bert_score(generated,reference) - print(f"BERTScore Precision: {P:.4f}, Recall: {R:.4f}, F1: {F1:.4f}") - metric["BERTScore_Precision"] = P - metric["BERTScore_Recall"] = R - metric["BERTScore_F1"] = F1 if api_config["judge_endpoint_url"] and run_llm_as_judge: api_url = api_config["judge_endpoint_url"] LLM_judge_score,judge_responses = compute_judge_score(questions, generated, reference, api_config,api_url=api_url) @@ -235,8 +209,8 @@ def
main(api_config): print("Finished generating answers for ", model_name) large_model_name = "meta-llama/Meta-Llama-3-70B-Instruct" large_api_url = api_config["judge_endpoint_url"] - #generated_answers["70B_Base"] = generate_answers_model_only(large_model_name,questions,large_api_url) - #generated_answers["70B_RAG"] = generate_answers_with_RAG(large_model_name, questions,api_config,retriever,large_api_url) + generated_answers["70B_Base"] = generate_answers_model_only(large_model_name,questions,large_api_url) + generated_answers["70B_RAG"] = generate_answers_with_RAG(large_model_name, questions,api_config,retriever,large_api_url) print("Finished generating answers for ", large_model_name) logging.info(f"Successfully generated {len(generated_answers[model_name+'_RAG'])} answers for all models.") # for generated answers from each model, compute the score metrics @@ -252,7 +226,6 @@ def main(api_config): with open(output_file,"a") as fp: fp.write(f"Eval_result for {model_name} \n") fp.write(f"Rouge_score: {metric['Rouge_score']} \n") - fp.write(f"BERTScore Precision: {metric['BERTScore_Precision']:.4f}, Recall: {metric['BERTScore_Recall']:.4f}, F1: {metric['BERTScore_F1']:.4f} \n") fp.write(f"Exact_match_percentage: {metric['Exact_match']} \n") judge_responses = ["None"] * len(questions) if api_config["judge_endpoint_url"]: @@ -341,12 +314,6 @@ def parse_arguments(): type=int, help="set the number of top k documents the RAG needs to retrieve." ) - parser.add_argument( - "--rerank_topk", - default=0, - type=int, - help="set the number of top k documents the reranker needs to retrive." - ) parser.add_argument("--chunk_size", type=int, default=1000, help="The character size of each chunk used in RAG") return parser.parse_args() @@ -364,9 +331,6 @@ def parse_arguments(): api_config["api_key"] = args.api_key api_config["chunk_size"] = args.chunk_size api_config["rag_topk"] = args.rag_topk - api_config["rerank_topk"] = args.rerank_topk - if api_config["rag_topk"] < api_config["rerank_topk"]: - logging.error("The rerank_topk should be smaller than rag_topk.") if api_config["judge_endpoint_url"]: logging.info(f"The judge model url is: '{args.judge_endpoint_url}'.") main(api_config) diff --git a/requirements.txt b/requirements.txt index 3c0cdf47c..6bd310b8a 100644 --- a/requirements.txt +++ b/requirements.txt @@ -19,19 +19,10 @@ chardet openai typing-extensions==4.8.0 tabulate -octoai -python-magic -PyPDF2 aiofiles evaluate rouge_score -bert_score -mdc -langchain_experimental -python-dotenv==1.0.1 pyyaml==6.0.1 -coloredlogs==15.0.1 -sentence_transformers faiss-gpu unstructured[pdf] langchain_openai From 7439b9df2c78dc409eadf581a6cee2e02f20769e Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Fri, 28 Jun 2024 14:26:05 -0700 Subject: [PATCH 31/35] fixed requirement.txt --- recipes/use_cases/end2end-recipes/raft/README.md | 2 +- requirements.txt | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md index 4131a80ec..a06ba9ab4 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -8,7 +8,7 @@ In response to this demand, we're exploring the possibility of building a Llama To build a Llama bot, we need to collect relevant text data. Ideally, we would include a vast range of Llama-related web documents, but for demo purposes, we'll focus on official documents.
For example, we can use the raw text from official web pages listed in [Getting started with Meta Llama](https://llama.meta.com/get-started/), excluding the FAQ page since some evaluation questions will come from there. -We have two options to obtain the text data: using a local folder or web crawling. For the local folder option, we can download the desired documents in PDF, Text, or Markdown format to the "data" folder specified in the [raft.yaml](./raft.yaml) file. +We have two options to obtain the text data: using a local folder or web crawling. For the local folder option, we can download the desired documents in PDF, Text, or Markdown format to the "data" folder specified in the [raft.yaml](./raft.yaml) file. Langchain DirectoryLoader will load files in that folder, but it may also ask us to install more package dependencies if the file formats are not supported natively. Alternatively, we can create a sitemap XML file, similar to the example below, and put the file path in the [raft.yaml](./raft.yaml) file, so eventually a Langchain SitemapLoader can retrieve all the text from the web pages. diff --git a/requirements.txt b/requirements.txt index 6bd310b8a..8b23b2213 100644 --- a/requirements.txt +++ b/requirements.txt @@ -19,10 +19,12 @@ chardet openai typing-extensions==4.8.0 tabulate -aiofiles evaluate rouge_score pyyaml==6.0.1 faiss-gpu unstructured[pdf] langchain_openai +langchain +langchain_community +sentence_transformers From 5739231b14de9b5b4af36b3970d09ccb7235cb12 Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Mon, 8 Jul 2024 15:45:24 -0700 Subject: [PATCH 32/35] rebased to main, and changed readme --- .../finetuning/datasets/raft_dataset.py | 1 - .../use_cases/end2end-recipes/raft/README.md | 56 ++++++++++--------- .../use_cases/end2end-recipes/raft/format.py | 1 + 3 files changed, 30 insertions(+), 28 deletions(-) diff --git a/recipes/quickstart/finetuning/datasets/raft_dataset.py b/recipes/quickstart/finetuning/datasets/raft_dataset.py index 1de3c1ed8..9341dd317 100644 --- a/recipes/quickstart/finetuning/datasets/raft_dataset.py +++ b/recipes/quickstart/finetuning/datasets/raft_dataset.py @@ -6,7 +6,6 @@ from datasets import load_dataset import itertools -B_INST, E_INST = "[INST]", "[/INST]" # check system prompt token seq or user prompt token seq is in the current token list def check_header(targets,seq): for i in range(len(seq)-3): diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/raft/README.md index a06ba9ab4..4869a6cf9 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/raft/README.md @@ -1,31 +1,16 @@ -## Introduction: -As the popularity of our Meta Llama 3 models grows, we've seen a surge in demand to adapt them to specific domains, enabling businesses to better serve their customers. For instance, a company might have a vast collection of plain text documents related to their custom domain and want to create a chatbot that can answer client questions. +## Chatbot Recipe: +As the popularity of our Meta Llama 3 models grows, we've seen a surge in demand to adapt them to specific domains, enabling businesses to better serve their customers. For example, a company might have a vast collection of plain text documents related to their custom domain and want to create a chatbot that can answer client questions. -In response to this demand, we're exploring the possibility of building a Llama chatbot that can answer Llama related questions using our Meta Llama 3 models.
In this tutorial, we'll demonstrate how to do just that. While our Meta Llama 3 70B Instruct model is an excellent candidate, as it already has a excellent reasoning capabilities and knowledge, its production costs are relatively high. To reduce these costs, we'll focus on creating a Llama chatbot based on the Meta Llama 8B Instruct model, aiming to achieve similar accuracy to the Meta Llama 3 70B Instruct model while minimizing inference costs. +In response to this demand, we're exploring the possibility of building a Llama chatbot that can answer Llama-related questions using our Meta Llama 3 models. In this tutorial, we'll demonstrate how to do just that. While our Meta Llama 3 70B Instruct model is an excellent candidate, its production costs are relatively high. To reduce these costs, we'll focus on creating a Llama chatbot based on the Meta Llama 8B Instruct model, aiming to achieve similar accuracy while minimizing inference costs. -## Collecting Text Data for the Llama Bot +One common ML approach to produce a model based on new domain data is **fine-tuning**. The idea is to start from a pre-trained model that already has some knowledge of language from its pre-training and adapt it to a new domain. However, a [recent paper](https://arxiv.org/pdf/2405.05904) highlights the risk of using supervised fine-tuning to update LLMs' knowledge, as it presents empirical evidence that acquiring new knowledge through fine-tuning is correlated with hallucinations w.r.t. preexisting knowledge. Fine-tuning can also be costly if the domain knowledge has to be updated frequently. -To build a Llama bot, we need to collect relevant text data. Ideally, we would include a vast range of Llama-related web documents, but for demo purposes, we'll focus on official documents. For example, we can use the raw text from official web pages listed in [Getting started with Meta Llama](https://llama.meta.com/get-started/), excluding the FAQ page since some evaluation questions will come from there. - -We have two options to obtain the text data: using a local folder or web crawling. For the local folder option, we can download the desired documents in PDF, Text, or Markdown format to the "data" folder specified in the [raft.yaml](./raft.yaml) file. Langchain DirectoryLoader will load files in that folder, but it may also ask us to install more package dependency if the files formats are not supported natively. - -Alternatively, we can create a sitemap XML file, similar to the example below, and put the file path in the [raft.yaml](./raft.yaml) file, so eventually a Langchain SitemapLoader can retrieve all the text from the web pages. - -```xml -<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> - <url> - <loc>http://llama.meta.com/responsible-use-guide/</loc> - </url> -</urlset> - -``` +Another solution is to use **RAG (Retrieval-Augmented Generation)**, which combines the strengths of traditional information retrieval systems (such as databases) with the capabilities of generative large language models (LLMs). RAG operates by first retrieving relevant information from a database using a query generated by the LLM. This retrieved information is then integrated into the LLM's query input, enabling it to generate more accurate and contextually relevant text. This helps to reduce LLM hallucination, as the related documents are provided to the LLM, and it has a lower cost for updating the domain knowledge.
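For intuition, a bare-bones RAG loop looks roughly like the sketch below, using LangChain with a FAISS index; the embedding model, the prompt, and the `document_chunks` input are assumptions for illustration, not this recipe's exact configuration:

```python
# Bare-bones RAG sketch: index document chunks, retrieve the top-k chunks for a
# question, and answer with the retrieved context. Illustrative only.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI

document_chunks = ["...chunked domain text...", "...more chunks..."]  # hypothetical input
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
retriever = FAISS.from_texts(document_chunks, embeddings).as_retriever(search_kwargs={"k": 5})

# Any OpenAI-compatible endpoint works here, e.g. a local VLLM server.
llm = ChatOpenAI(openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1",
                 model_name="meta-Llama/Meta-Llama-3-8B-Instruct", temperature=0.0)

question = "What is the license of the Llama Guard model?"
context = "\n".join(doc.page_content for doc in retriever.invoke(question))
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```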
-## Retrieval Augmented Fine Tuning (RAFT) Concepts +In this tutorial, we'll use **Retrieval Augmented Fine Tuning (RAFT)**, a technique that combines fine-tuning with RAG to better utilize custom domain text data. RAFT is a general recipe for fine-tuning a pre-trained Large Language Model (LLM) to a domain-specific RAG setting. It helps the LLM learn to ignore documents that don't help in answering the question. This approach can create a more factual model and reduce LLM hallucinations during inference. -In this tutorial, we'll introduce Retrieval Augmented Fine Tuning (RAFT), a technique that combines fine-tuning with RAG to better utilize custom domain text data. - -RAFT is a general recipe for fine-tuning a pre-trained Large Language Model (LLM) to a domain-specific RAG setting. The process involves preparing training data with each data point containing: +The process involves preparing training data with each data point containing: * A question (Q) * A set of documents (D) @@ -41,6 +26,23 @@ The following graph illustrates the RAFT main concepts: +## Fine-tuning Llama + +To build a Llama bot, we need to collect relevant text data. Ideally, we would include a vast range of Llama-related web documents, but for demo purposes, we'll focus on official documents. For example, we can use the raw text from official web pages listed in [Getting started with Meta Llama](https://llama.meta.com/get-started/), excluding the FAQ page since some evaluation questions will come from there. + +We have two options to obtain the text data: using a local folder or web crawling. For the local folder option, we can download the desired documents in PDF, Text, or Markdown format to the "data" folder specified in the [raft.yaml](./raft.yaml) file. Langchain DirectoryLoader will load files in that folder, but it may also ask us to install more package dependencies if the file formats are not supported natively. + +Alternatively, we can create a sitemap XML file, similar to the example below, and put the file path in the [raft.yaml](./raft.yaml) file, so eventually a Langchain SitemapLoader can retrieve all the text from the web pages. + +```xml +<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> + <url> + <loc>http://llama.meta.com/responsible-use-guide/</loc> + </url> +</urlset> + +``` + ## Create RAFT Dataset To create a RAFT dataset from the prepared documents, we can use the Meta Llama 3 70B Instruct model either through APIs from LLM cloud providers or by hosting a local VLLM server. @@ -119,18 +121,18 @@ Once the RAFT dataset is ready in JSON format, we can start fine-tuning.
Unfortu ```bash export PATH_TO_ROOT_FOLDER=./raft-8b export PATH_TO_RAFT_JSON=recipes/use_cases/end2end-recipes/raft/output/raft.jsonl -torchrun --nnodes 1 --nproc_per_node 4 recipes/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 1 --model_name meta-Llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder $PATH_TO_ROOT_FOLDER --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path $PATH_TO_RAFT_JSON +torchrun --nnodes 1 --nproc_per_node 4 recipes/quickstart/finetuning/finetuning.py --enable_fsdp --lr 1e-5 --context_length 8192 --num_epochs 1 --batch_size_training 1 --model_name meta-Llama/Meta-Llama-3-8B-Instruct --dist_checkpoint_root_folder $PATH_TO_ROOT_FOLDER --dist_checkpoint_folder fine-tuned --use_fast_kernels --dataset "custom_dataset" --custom_dataset.test_split "test" --custom_dataset.file "recipes/quickstart/finetuning/datasets/raft_dataset.py" --use-wandb --run_validation True --custom_dataset.data_path $PATH_TO_RAFT_JSON ``` -For more details on multi-GPU fine-tuning, please refer to the [multigpu_finetuning.md](../../../finetuning/multigpu_finetuning.md) in the finetuning recipe. +For more details on multi-GPU fine-tuning, please refer to the [multigpu_finetuning.md](../../../quickstart/finetuning/multigpu_finetuning.md) in the finetuning recipe. Next, we need to convert the FSDP checkpoint to a HuggingFace checkpoint using the following command: ```bash -python src/Llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path "$PATH_TO_ROOT_FOLDER/fine-tuned-meta-Llama/Meta-Llama-3-8B-Instruct" --consolidated_model_path "$PATH_TO_ROOT_FOLDER" +python src/llama_recipes/inference/checkpoint_converter_fsdp_hf.py --fsdp_checkpoint_path "$PATH_TO_ROOT_FOLDER/fine-tuned-meta-Llama/Meta-Llama-3-8B-Instruct" --consolidated_model_path "$PATH_TO_ROOT_FOLDER" ``` -For more details on FSDP to HuggingFace checkpoint conversion, please refer to the [readme](../../../inference/local_inference/README.md) in the inference/local_inference recipe. +For more details on FSDP to HuggingFace checkpoint conversion, please refer to the [readme](../../../quickstart/inference/local_inference/README.md) in the inference/local_inference recipe. ## Evaluation Steps Once we have the RAFT model, we need to evaluate its performance. In this tutorial, we'll not only use traditional evaluation methods (e.g., calculating exact match rate or ROUGE score) but also use an LLM as a judge to score model-generated answers.
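As a rough sketch of the traditional metrics, the snippet below computes a ROUGE score with the `evaluate` package plus a naive normalized exact match; the full implementations live in raft_eval.py, and the example strings are made up:

```python
# Sketch of the traditional evaluation metrics; raft_eval.py has the full versions.
import evaluate

rouge = evaluate.load("rouge")
generated = ["The special tokens are based on the tiktoken tokenizer."]
reference = ["tiktoken"]

scores = rouge.compute(predictions=generated, references=reference)
print(scores["rougeL"])  # ROUGE-L F-measure in [0, 1]

# A simple normalized exact-match rate over the whole eval set:
exact = [g.strip().lower() == r.strip().lower() for g, r in zip(generated, reference)]
print(sum(exact) / len(exact))
```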
@@ -234,7 +236,7 @@ Once we evaluated and refined our RAFT model, we can deploy it locally to intera ```bash python recipes/inference/local_inference/inference.py --model_name raft-8b ``` -For more details, please check the [local_inference recipe](../../../inference/local_inference/README.md) +For more details, please check the [local_inference recipe](../../../quickstart/inference/local_inference/README.md) ## Acknowledgements Finally, we would like to extend special thanks to Tianjun Zhang, the first author of the [RAFT paper](https://arxiv.org/pdf/2403.10131), for collaborating with us on this tutorial and providing valuable guidance throughout our experiments. Our code is also partially inspired by the [RAFT section in Gorilla github](https://github.com/ShishirPatil/gorilla/tree/main/raft). diff --git a/recipes/use_cases/end2end-recipes/raft/format.py b/recipes/use_cases/end2end-recipes/raft/format.py index 7dcb6b861..c1bbfb458 100644 --- a/recipes/use_cases/end2end-recipes/raft/format.py +++ b/recipes/use_cases/end2end-recipes/raft/format.py @@ -1,3 +1,4 @@ +# file copied from https://github.com/ShishirPatil/gorilla/blob/main/raft/format.py from abc import ABC, abstractmethod import argparse from datasets import Dataset, load_dataset From be19e394422d366afd303df07df5a6d604978843 Mon Sep 17 00:00:00 2001 From: Cyrus Nikolaidis Date: Wed, 24 Jul 2024 14:43:05 -0400 Subject: [PATCH 33/35] Fill in one sentence in the prompt guard tutorial. --- recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb b/recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb index fa0013dd1..dc070a295 100644 --- a/recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb +++ b/recipes/responsible_ai/prompt_guard/prompt_guard_tutorial.ipynb @@ -789,7 +789,7 @@ "metadata": {}, "source": [ "\n", - "One good way to quickly obtain labeled training data for a use case is to use the original, non-fine tuned model itself to highlight risky examples to label, while drawing random negatives from below a score threshold. This helps address the class imbalance (attacks and risky prompts can be a very small percentage of all prompts) and includes false positive examples (which tend to be very valuable to train on) in the dataset. The use of synthetic data for specific " + "One good way to quickly obtain labeled training data for a use case is to use the original, non-fine tuned model itself to highlight risky examples to label, while drawing random negatives from below a score threshold. This helps address the class imbalance (attacks and risky prompts can be a very small percentage of all prompts) and includes false positive examples (which tend to be very valuable to train on) in the dataset. Generating synthetic fine-tuning data for specific use cases can also be an effective strategy."
] } ], From a3a5deec97bcc12f8d1bc561d76eff0b02dc5f8a Mon Sep 17 00:00:00 2001 From: Kai Wu Date: Thu, 25 Jul 2024 11:13:08 -0700 Subject: [PATCH 34/35] changed folder name and wordlist.txt --- .github/scripts/spellcheck_conf/wordlist.txt | 2 + .../{raft => RAFT-Chatbot}/README.md | 2 +- .../{raft => RAFT-Chatbot}/config.py | 0 .../{raft => RAFT-Chatbot}/eval_llama.json | 0 .../{raft => RAFT-Chatbot}/format.py | 0 .../images/Answers_Precision.png | Bin .../images/LLM_score_comparison.png | Bin .../images/Num_of_refusal_comparison.png | Bin .../{raft => RAFT-Chatbot}/images/RAFT.png | Bin .../{raft => RAFT-Chatbot}/raft.py | 0 .../{raft => RAFT-Chatbot}/raft.yaml | 0 .../{raft => RAFT-Chatbot}/raft_eval.py | 0 .../raft_eval_config.yaml | 0 .../{raft => RAFT-Chatbot}/raft_utils.py | 0 .../use_cases/end2end-recipes/raft/chatbot.md | 207 ------------------ 15 files changed, 3 insertions(+), 208 deletions(-) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/README.md (99%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/config.py (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/eval_llama.json (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/format.py (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/images/Answers_Precision.png (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/images/LLM_score_comparison.png (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/images/Num_of_refusal_comparison.png (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/images/RAFT.png (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/raft.py (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/raft.yaml (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/raft_eval.py (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/raft_eval_config.yaml (100%) rename recipes/use_cases/end2end-recipes/{raft => RAFT-Chatbot}/raft_utils.py (100%) delete mode 100644 recipes/use_cases/end2end-recipes/raft/chatbot.md diff --git a/.github/scripts/spellcheck_conf/wordlist.txt b/.github/scripts/spellcheck_conf/wordlist.txt index 59ab735df..a59707fe0 100644 --- a/.github/scripts/spellcheck_conf/wordlist.txt +++ b/.github/scripts/spellcheck_conf/wordlist.txt @@ -1405,3 +1405,5 @@ distractors frac numRefusal totalQA +DirectoryLoader +SitemapLoader diff --git a/recipes/use_cases/end2end-recipes/raft/README.md b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md similarity index 99% rename from recipes/use_cases/end2end-recipes/raft/README.md rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md index 4869a6cf9..0c938a6f5 100644 --- a/recipes/use_cases/end2end-recipes/raft/README.md +++ b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md @@ -238,6 +238,6 @@ python recipes/inference/local_inference/inference.py --model_name raft-8b For more details, please check the [local_inference recipe](../../../quickstart/inference/local_inference/README.md) -## Acknowledgements +## Acknowledgement Finally, we would like to extend special thanks to Tianjun Zhang, the first author of the [RAFT paper](https://arxiv.org/pdf/2403.10131), for collaborating with us on this tutorial and providing valuable guidance throughout our experiments. Our code is also partially inspired by the [RAFT section in Gorilla github](https://github.com/ShishirPatil/gorilla/tree/main/raft).
diff --git a/recipes/use_cases/end2end-recipes/raft/config.py b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/config.py similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/config.py rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/config.py diff --git a/recipes/use_cases/end2end-recipes/raft/eval_llama.json b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/eval_llama.json similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/eval_llama.json rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/eval_llama.json diff --git a/recipes/use_cases/end2end-recipes/raft/format.py b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/format.py similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/format.py rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/format.py diff --git a/recipes/use_cases/end2end-recipes/raft/images/Answers_Precision.png b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/Answers_Precision.png similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/images/Answers_Precision.png rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/Answers_Precision.png diff --git a/recipes/use_cases/end2end-recipes/raft/images/LLM_score_comparison.png b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/LLM_score_comparison.png similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/images/LLM_score_comparison.png rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/LLM_score_comparison.png diff --git a/recipes/use_cases/end2end-recipes/raft/images/Num_of_refusal_comparison.png b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/Num_of_refusal_comparison.png similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/images/Num_of_refusal_comparison.png rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/Num_of_refusal_comparison.png diff --git a/recipes/use_cases/end2end-recipes/raft/images/RAFT.png b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/RAFT.png similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/images/RAFT.png rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/images/RAFT.png diff --git a/recipes/use_cases/end2end-recipes/raft/raft.py b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft.py similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/raft.py rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft.py diff --git a/recipes/use_cases/end2end-recipes/raft/raft.yaml b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft.yaml similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/raft.yaml rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft.yaml diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval.py b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft_eval.py similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/raft_eval.py rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft_eval.py diff --git a/recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft_eval_config.yaml similarity index 100% rename from recipes/use_cases/end2end-recipes/raft/raft_eval_config.yaml rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft_eval_config.yaml diff --git a/recipes/use_cases/end2end-recipes/raft/raft_utils.py b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft_utils.py similarity index 100% rename from 
recipes/use_cases/end2end-recipes/raft/raft_utils.py rename to recipes/use_cases/end2end-recipes/RAFT-Chatbot/raft_utils.py diff --git a/recipes/use_cases/end2end-recipes/raft/chatbot.md b/recipes/use_cases/end2end-recipes/raft/chatbot.md deleted file mode 100644 index dd763416c..000000000 --- a/recipes/use_cases/end2end-recipes/raft/chatbot.md +++ /dev/null @@ -1,207 +0,0 @@ -## Introduction - -Large language models (LLMs) have emerged as groundbreaking tools, capable of understanding and generating human-like text. These models power many of today's advanced chatbots, providing more natural and engaging user experiences. But how do we create these intelligent systems? - -Here, we aim to make an FAQ model for Llama that be able to answer questions about Llama by fine-tune Meta Llama 3 8B instruct model using existing official Llama documents. - - -### Fine-tuning Process - -Fine-tuning Meta Llama 3 8B instruct model involves several key steps: Data Collection, Preprocessing, Fine-tuning, Evaluation. - - -### LLM Generated datasets - -As Chatbots are usually domain specifics and based on public or proprietary data, one common way inspired by [self-instruct paper](https://arxiv.org/abs/2212.10560) is to use LLMs to assist building the dataset from our data. For example to build an FAQ model, we can use a powerful Meta Llama 3 70B model to process our documents and help us build question and answer pair (We will showcase this here). Just keep it in mind that usually most of the proprietary LLMs has this clause in their license that you are not allowed to use the output generated from the model to train another LLM. In this case we will fine-tune another Llama model with the help of Meta Llama 3 70B. - - -Similarly, we will use the same LLM to evaluate the quality of generated datasets and finally evaluate the outputs from the model. - - -Given this context, here we want to highlight some of best practices that need to be in place for data collection and preprocessing in general. - -### **Data Collection & Preprocessing:** - -Gathering a diverse and comprehensive dataset is crucial. This dataset should include a wide range of topics and conversational styles to ensure the model can handle various subjects. A recent [research](https://arxiv.org/pdf/2305.11206.pdf) shows that quality of data has far more importance than quantity. Here are some high level thoughts on data collection and preprocessing along with best practices: - -**NOTE** data collection and processing is very use-case specific and here we can only share best practices but it would be very nuanced for each use-case. - -- Source Identification: Identify the sources where your FAQs are coming from. This could include websites, customer service transcripts, emails, forums, and product manuals. Prioritize sources that reflect the real questions your users are asking. - -- Diversity and Coverage: Ensure your data covers a wide range of topics relevant to your domain. It's crucial to include variations in how questions are phrased to make your model robust to different wording. - -- Volume: The amount of data needed depends on the complexity of the task and the variability of the language in your domain. Generally, more data leads to a better-performing model, but aim for high-quality, relevant data. - -Here, we are going to use [self-instruct](https://arxiv.org/abs/2212.10560) idea and use Llama model to build our dataset, for details please check this [doc](./data_pipelines/REAME.md). 
- - -**Things to keep in mind** - -- **Pretraining Data as the Foundation**: Pretraining data is crucial for developing foundational models, influencing both their strengths and potential weaknesses. Fine-tuning data refines specific model capabilities and, through instruction fine-tuning or alignment training, enhances general usability and safety. - -- **Quality Over Quantity**: More data doesn't necessarily mean better results. It's vital to select data carefully and perform manual inspections to ensure it aligns with your project's aims. - -- **Considerations for Dataset Selection**: Selecting a dataset requires considering various factors, including language and dialect coverage, topics, tasks, diversity, quality, and representation. - -- **Impact of Implicit Dataset Modifications**: Most datasets undergo implicit changes during selection, filtering, and formatting. These preprocessing steps can significantly affect model performance, so they should not be overlooked. - -- **Finetuning Data's Dual-Edged Sword**: Finetuning can improve or impair model capabilities. Make sure you know the nature of your data to make an informed selections. - -- **Navigating Dataset Limitations**: The perfect dataset for a specific task may not exist. Be mindful of the limitations when choosing from available resources, and understand the potential impact on your project. - -#### **Best Practices for FineTuning Data Preparation** - -- **Enhancing Understanding with Analysis Tools**: Utilizing tools for searching and analyzing data is crucial for developers to gain a deeper insight into their datasets. This understanding is key to predicting model behavior, a critical yet often overlooked phase in model development. - -- **The Impact of Data Cleaning and Filtering**: Data cleaning and filtering significantly influence model characteristics, yet there's no universal solution that fits every scenario. Our guidance includes filtering recommendations tailored to the specific applications and communities your model aims to serve. - -- **Data Mixing from Multiple Sources**: When training models with data from various sources or domains, the proportion of data from each domain (data mixing) can greatly affect downstream performance. It's a common strategy to prioritize "high-quality" data domains—those with content written by humans and subjected to an editing process, like Wikipedia and books. However, data mixing is an evolving field of research, with best practices still under development. - -- **Benefits of Removing Duplicate Data**: Eliminating duplicated data from your dataset can lessen unwanted memorization and enhance training efficiency. - -- **The Importance of Dataset Decontamination**: It's crucial to meticulously decontaminate training datasets by excluding data from evaluation benchmarks. This ensures the model's capabilities are accurately assessed. - - -**Data Exploration and Analysis** - -- Gaining Insights through Dataset Exploration: Leveraging search and analysis tools to explore training datasets enables us to cultivate a refined understanding of the data's contents, which in turn influences the models. Direct interaction with the data often reveals complexities that are challenging to convey or so might not be present in the documents. - -- Understanding Data Complexity: Data, especially text, encompasses a wide array of characteristics such as length distribution, topics, tones, formats, licensing, and diction. 
These elements are crucial for understanding the dataset but are not easily summarized without thorough examination. - -- Utilizing Available Tools: We encourage to take advantage of the numerous tools at your disposal for searching and analyzing your training datasets, facilitating a deeper comprehension and more informed model development. - -**Tools** - -- [wimbd](https://github.com/allenai/wimbd) for data analysis. -- TBD - - - -**Data Cleaning** - -Purpose of Filtering and Cleaning: The process of filtering and cleaning is essential for eliminating unnecessary data from your dataset. This not only boosts the efficiency of model training but also ensures the data exhibits preferred characteristics such as high informational value, coverage of target languages, low levels of toxicity, and minimal presence of personally identifiable information. - -Considering Trade-offs: We recommend to carefully weigh the potential trade-offs associated with using certain filters, it may impact the diversity of your data, [removing minority individuals](https://arxiv.org/abs/2104.08758). - -**Tools** -- [OpenRefine](https://github.com/OpenRefine/OpenRefine?tab=readme-ov-file),(formerly Google Refine): A standalone open-source desktop application for data cleanup and transformation to other formats. It's particularly good for working with messy data, including data format transformations and cleaning. - -- [FUN-Langid](https://github.com/google-research/url-nlp/tree/main/fun-langid), simple, character 4-gram LangID classifier recognizing up to 1633 languages. - -- Dask: Similar to Pandas, Dask is designed for parallel computing and works efficiently with large datasets. It can be used for data cleaning, transformations, and more, leveraging multiple CPUs or distributed systems. - - - - -**Data Deduplication** - -- **Data Deduplication importance**: Data deduplication is a important preprocessing step to eliminate duplicate documents or segments within a document from the dataset. This process helps in minimizing the model's chance of memorizing unwanted information, including generic text, copyrighted content, and personally identifiable details. - -- **Benefits of Removing Duplicates**: Aside from mitigating the risk of undesirable memorization, deduplication enhances training efficiency by decreasing the overall size of the dataset. This streamlined dataset contributes to a more effective and resource-efficient model training process. - -- **Assessing the Impact of Duplicates**: You need to carefully evaluate the influence of duplicated data on their specific model use case. Memorization may be beneficial for models designed for closed-book question answering, or similarly chatbots. - -**Tools** - -- [thefuz](https://github.com/seatgeek/thefuzz): It uses Levenshtein Distance to calculate the differences between sequences in a simple-to-use package. -- [recordlinkage](https://github.com/J535D165/recordlinkage): It is modular record linkage toolkit to link records in or between data sources. - -**Data Decontamination** - -The process involves eliminating evaluation data from the training dataset. This crucial preprocessing step maintains the accuracy of model evaluation, guaranteeing that performance metrics are trustworthy and not skewed. - -**Tools** -- TBD - - - - -### **LLama FAQ Use-Case** - - -1. **Data Collection** -Here, we are going to use self-instruct idea and use Llama model to build our dataset, for details please check this [doc](./data_pipelines/REAME.md). - -2. 
**Data Formatting** - -For a FAQ model, you need to format your data in a way that's conducive to learning question-answer relationships. A common format is the question-answer (QA) pair: - -Question-Answer Pairing: Organize your data into pairs where each question is directly followed by its answer. This simple structure is highly effective for training models to understand and generate responses. For example: - -```python -"question": "What is Llama 3?", -"answer": "Llama 3 is a collection of pretrained and fine-tuned large language models ranging from 8 billion to 70 billion parameters, optimized for dialogue use cases." -``` - - -3. **Preprocessing:** This step involves cleaning the data and preparing it for training. It might include removing irrelevant information, correcting errors, and splitting the data into training and evaluation sets. - - -4. **Fine-Tuning:** Given that we have a selected pretrained model, in this case we use LLama 2 chat 7B, fine-tunning with more specific data can improve its performance on particular tasks, such as answering questions about Llama in this case. -#### Building Dataset - -During the self-instruct process of generation Q&A pairs from documents, we realized that with out system prompt being -```python -You are a language model skilled in creating quiz questions. -You will be provided with a document, -read it and generate question and answer pairs -that are most likely be asked by a use of llama that just want to start, -please make sure you follow those rules, -1. Generate only {total_questions} question answer pairs. -2. Generate in {language}. -3. The questions can be answered based *solely* on the given passage. -4. Avoid asking questions with similar meaning. -5. Make the answer as concise as possible, it should be at most 60 words. -6. Provide relevant links from the document to support the answer. -7. Never use any abbreviation. -8. Return the result in json format with the template: - [ - {{ - "question": "your question A.", - "answer": "your answer to question A." - }}, - {{ - "question": "your question B.", - "answer": "your answer to question B." - }} - ] - -``` - -Model tends to ignore providing the bigger picture in the questions, for example below is the result of Q&A pair from reading Code Llama paper. Partially, its because due to context window size of the model we have to divide the document into smaller chunks, so model use `described in the passage` or `according to the passage?` in the question instead of linking it back to Code Llama. - - -```python -{ - "question": "What is the purpose of the transformation described in the passage?", - "answer": "The transformation is used to create documents with a prefix, middle part, and suffix for infilling training." - }, -{ - "question": "What is the focus of research in transformer-based language modeling, according to the passage?", - "answer": "The focus of research is on effective handling of long sequences, specifically extrapolation and reducing the quadratic complexity of attention passes." -}, -``` - - -#### Data Insights - -We generated a dataset of almost 3600 Q&A pairs from some of the open source documents about Llama models, including getting started guide from Llama website, its FAQ, Llama 3, Purple Llama, Code Llama papers and Llama-Recipes documentations. - -We have run some fine-tuning experiments with single GPU using quantization with different LORA configs (all linear layer versus query and key projections only) and different number of epochs. 
Although train and eval loss shows decrease specially with using all linear layers in LORA configs and training with 6 epochs, still the result is far from acceptable in real tests. - - -Here is how losses between three runs looks like. - -

-[figures: Eval Loss and Train Loss curves for the three runs]

    - -##### Low Quality Dataset - -Below are some examples of real test on the fine-tuned model with very poor results. It seems fine-tuned model does not show any promising results with this dataset. Looking at the dataset, we could observe that the amount of data (Q&A pair) for each concept such as PyTorch FSDP and Llama-Recipe is very limited and almost one pair per concept. This shows lack of relevant training data. The recent research showed that from each taxonomy having 2-3 examples can yield promising results. - -

-[figures: Poor Test Results examples]

    From 4756ffb63a8cb83c418990d4e288534cdb35737f Mon Sep 17 00:00:00 2001 From: Hamid Shojanazeri Date: Thu, 25 Jul 2024 11:27:27 -0700 Subject: [PATCH 35/35] Update recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md --- recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md index 0c938a6f5..50356d509 100644 --- a/recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md +++ b/recipes/use_cases/end2end-recipes/RAFT-Chatbot/README.md @@ -4,7 +4,7 @@ As the popularity of our Meta Llama 3 models grows, we've seen a surge in demand In response to this demand, we're exploring the possibility of building a Llama chatbot that can answer Llama-related questions using our Meta Llama 3 models. In this tutorial, we'll demonstrate how to do just that. While our Meta Llama 3 70B Instruct model is an excellent candidate, its production costs are relatively high. To reduce these costs, we'll focus on creating a Llama chatbot based on the Meta Llama 8B Instruct model, aiming to achieve similar accuracy while minimizing inference costs. -One common ML approach to produce a model based on new domain data is **fine-tuning**. The idea is to start from a pre-trained model that already has some knowledge of language from its pre-training and adapt it to a new domain. However, [recent paper](https://arxiv.org/pdf/2405.05904) highlights the risk of using supervised fine-tuning to update LLMs' knowledge, as it presents empirical evidence that acquiring new knowledge through fine-tuning is correlated with hallucinations w.r.t. preexisting knowledge. Fine-tuning can also be costly if the domain knowledge has to be updated frequently. +One common approach to produce a model based on new domain data is **fine-tuning**. The idea is to start from a pre-trained model that already has some knowledge of language from its pre-training and adapt it to a new domain. However, [recent paper](https://arxiv.org/pdf/2405.05904) highlights the risk of using supervised fine-tuning to update LLMs' knowledge, as it presents empirical evidence that acquiring new knowledge through fine-tuning is correlated with hallucinations w.r.t. preexisting knowledge. Fine-tuning can also be costly if the domain knowledge has to be updated frequently. Another solution is to use **RAG (Retrieval-Augmented Generation)**, which combines the strengths of traditional information retrieval systems (such as databases) with the capabilities of generative large language models (LLMs). RAG operates by first retrieving relevant information from a database using a query generated by the LLM. This retrieved information is then integrated into the LLM's query input, enabling it to generate more accurate and contextually relevant text. This helps to reduce LLM hallucination as the related documents are provided to LLM and has a lower cost to update the domain knowledge.

zK(eE2g8vBV1n~jn-XwDo78O@XTKEMa?|A!YrEZ@g!Bme#0QKdIP|`|n*eyn@fd^+m zZLK;_XeUW?9$%tUY{F~wZMxM97P@Uep}8ZKxnXo|H@fD&)x^439s}mg&rm!?fNJP9 zGYiAdtSudu0iL9$j@rkTHi<}fS%3xbm&N~Do=I2y%||s!1-~detOYxa}iGS+>cp0izkJlPjvaS zdNrQpn)D22F3@i#er-6$`cRGOOeu{dMAXi;srD}e6%gGEk0d<^g^Q~AcIi&v^;X~H z;jjZ>UC0o>K0_t@ryciwL{RE@?VfrSh)#w0Yjj~OdM<#aMrs;9kIl3Ou#P}N!SE5W z19#b0%oLCvz$ER>_TbUb9Hs+m zvTi#Y;t$2kLSIhhaelZ}!?BG26mT?=Zdtpw&{=;UwXg~N$@gKRQOEd+pQ8beJtJT} zbdlP_mMVPv=G%`d&AJT@s|b9Gha3EAx!*Q>VlmOg%Ke~Cq~6Yyg_#0VAF6ovxl9b3 z=82}`05K-`%*_Dz?GE556}K$?RBhu=QFwCQnSQBILXMjd)hKo9AT{<;hDb@)y;Dhl z5G&9d7>G8|ag>KoWLc^FKJOP^b>!~0tXwohaC!mF5k$7`mS zd1f$f&x$g!SP|VPwAX;1gY-S?0IRmC9>va?6lQUrnryhQfJKpf7W&t0s@U8c&kA5g zKtwMg{z4-agLf)X)zAph%o?CD+<$!oV48kTY*X~J4>praA3gTa#G=3g$jCQ^6K`@9 z7l7(3^fUk@@r}a-q$+Rh-9)J0ztPfy*bglDu(O~w4l6X)+JldNk&re&o7vk%(lqQp z2RJ#BI;?|xUw&Vk#m=XW|A-+y$GQAgr^VianU-VK=}`fcL?^}eWu%T|RPI@_LDN%a z4%1F+%Lo35z54n8?kw3~h0+O;{w!ThEKi;_m?z!Tf9sH)mSXHs#96diIaB8C8th9XR@(O6ekX5OkeXjIH zmWV?qjCs7~PT>Yqfvh2XVX_?=K_#j+-EGBt477HZxT+N%B<52ar@e}|st`}hN)-4u zFrHeMqXD^H%M9T&$MGvUK-6wE7Os~R@<7~rl(55Q&#vZLxanohD-KvAgy22?{>yrT&4g^#@RHs7-RTqynQc_jbg7i>_I7Bp2~_NJS+ zJj^Mw$57(_Tpci=&PGAQ^4TE+kB(H2(cwr+#mSc(uZwBjLh)3CC*O?kOIgw;;Y%nD zdXAQ)jqt1%6lt7%TnrlW@mewW%Ix&3Z~7#`uiQL=X+Al#CKUxr$5j?}bBQ|^EW{2n zY;I-0zzQns|MGk+FPv{>SY6F3i&u*}k4bgXLGeL-*o+mi2m5R-tkQmx2`PlN zG#dgJ3{?9yfd4XLl16s7!WczqAnSKG`3QrXfK{CAwi<6YF3_q8Bp{#feA-VaHBwJ& zewnZ)(8c(HdRXcoiCp0-=DF~&?#O8Ye$&{*1uQ3(wntH>exz96{t$pt`cs+-8)o1) zW{Y>;XtEN5oZsDwa)_Xj)!Iaf0}`>7HM{w(YLqVT-FX)OZL9^HY`m%^L3Y-IU1}im z@vd}7atJ9;+cGWMCNX@{j!cc#C!g2L@sll?y?IRYxWh1*pm>O<%;sv;rswgf7n|Jn zd)YLW-kdE-f=2NsPlo{>FNgg##c41*XLD10zKK=7ke*kDJ(vj+&??qptwz2 z@QVW9cCMB(cAouU2P4XT!1MkIeW!Ka84#B{cRcD>J=!Ac#%~1eu*+G!aB0{txz!Ex zoZov#18_|EV03J_0!6cdbZo5HQ43?pE zezRDEmfFJeh3xUgEF7dQg046Q962`_-MzROy;;v=KIJG`iY~ybS^chIKJjZMk~EQ{ z%JnrZr!~HhLV$1N=KWAQzF7hVKCQ*y(A_c%U7U?tF%=^eW~JQiHY}WHxJV>6Ab1CT zWPNp$=YsVSt8`Sdpp%a&*L!pgXtQL6vw8PCc;tq{u#lCCS1ZCPMR0UEh6pAzB^y;c z8nb`Eor>V$BI%JcPohLjeC_zMzpid{IEt0o^wfWN$G z?JTss@Fc$DSqXGkW>p>iNcmp)rK=C>wp*JWDvb-apVh0zm%Uqp^|NbxsE-YI6IP#bUP|`c#kLy-``3#>J?3o*c^QSTb9t*E zmj>L&n*S&$Y-tk&`0M^NM3Ybu$oAJn&Z{!r-iBZ$!nX?bU~XWeE8eR?-eYN^W9g&g zB2+Qys{QuGHkz#?2NDti!Ot3vALo!d&uAgo@TZ3zZ`y1HeYvDne|Od)2PrNS!m@yG zm9DW>3RIw(gVDAt7vSo$NOb|U?b1{o`Q;{xJ+|ufoORY}uKo2FI8Lv7pF2Q-nZ+aM z52hS%FG_@)T%0b{_T{lTBTg|>94?qSFTliKBN?z93TD=*S4^W$;5f%<<|Z^g*t#^k zdY3HW*%qU<7S-)!h~JnBh-8$fWyjU&<g=@KGJZwX);x%LcHSuGV47pXRjTr zkis>9XIHiEo|W5p>Hl*y(REpt31_o;0}9e%@~ytctk4(o!)rY{XLQm?2>wXb=^d7- z_Dvr&n4~IiyNInI_%zJ}7zCO-Vy>t6)>!UCi|cmy{2Qh$Gp=gS<=t@#VB8!^XNLU4 z7D@x_$=GWV%Q_rDCvsXQG>!HKdUE-ZslzZ*IM7NR1U6eYp2?akoE|J#L9n?oCcG>h zR)q>9i>-M}D4|p?w`J$dtMQW28hov?lYKqb-`^vL##V$$$I2@+P5K<*g9gi5!biogD%5e~o1SHOlGyl0EZj zgTnzH1J3;iY*2X;FI&Rs@#0Z?@I3;t?e@9C-+yM~pKa7v3GS`}$N&>kDOR$%J%4T{ z0xCHBoqPh`m<#v%NVj(9n>d5>xU>R%!s5aqW0Wcr0qu6RYu;-BVmE8!F#3Qx+5Ahm zNJ>M?<{A0rgK09vzAlV@u1iePG&CR<62tD1D1XP+wtx!C*qLrc#717Qpz^2rfpJ|S z&l!I@)da}R;ELxT#jH9uUoksqU*^`^c1${)`q^j8Mcnf8gtANZB6@qLU6oEQJ0g<2 zoUonWU0R@Y#$5b3`?Kxi_ZJLRiQ8k=+r*$B9z)5q!x2Gc4py@{Z;nKFH4R3gx^GAX zdPml8w1;^;u7E;`MQmUngXso5HNPaRskee_gAvkxBdKF2slDnDbh><{e%VVYLZ1h# zh&x>sQ!#!$@UlR1{|{|V=36NxL&= z#Dpn-4w2e)Az>%avt6^aV(_QAP-bW9ai^7zerA$PPuUS6Uz{}}#GAF?^A~dPm%;bY zv0x$O05Fq8Vzt^Yq=9~rp*UqlTFJu+d^`eQWuLCxuE@#o04vEyOvI>i8M<><#f#;=!^LC zT(_};gR1k@rMvKA)FgSUD*pSor4u{WS?1%>St_aW?@rtd|k|#EQP?7!t)-~e>=YKTIHSx!9Z^hATI49r; zdn^W>!|I2mtC!o&y}C`sQpAMYaL1ZoOXVNd&21bc?ZzSB9G~gWxP_b7^p;R)UY^`& zw;peti0?Y3x*xVrqITV!y0cx_(oN?|3ea&@9rb57l}qq^kl@&g`2^_nytmJl+cc*r 
z%dGD43rUllN)c0twf~KWm+{%F>ymuEWkKPZ8KkA;HozUW`2ggecs!5wHL=8v!krp( z4r|}RMju&CGD%1B#KcaLyaRcRt8CG#D%<|?N&N!~CKcKg;+}|D2&GGF+3C=Hvm%sy zwY@}EUbT4U>KsA2nZ3nBfYDl8K;kG%b9?mV+A6*)X?MRaPyz2FDU-cK83rEonr9@p zX#SM7Ez{e4iga^UVfPlkGL-5E=sWunJ?Uu_4Q(LnJwZvi28xED#1edw4&NOPv!>n4 zffZ!qi!+p$7G4g;p{H;yH4i#TGsvn#OgGtS&0qNJrjRD%1<0D6vl?{q)c~Ccf0tE$ zE^+ad_|xtIq=7md_3F;`C}^yvPc7dxZxL;-spP{5@D%#23tFF^){C6o%UTtM^RD{D zLJf|-QGH@kN7CMg2Z%l`d zmaaEsOuFgl9ER&OO|y}2J?EFmL^~;})XHe5%v3YR53%39+q#4%Grg>W#ZizSM2EB5 z@K@;<_C&)u9~#mvZ&+!&vnwD^{t;L5;+}C>TjqJ@ocvWEw{_Xi-&;L>*MN$qo?$Ha6rXiz1z@X;tgt>;l!gqc zp((TEbIBPgxanDxcGpC8$zTkDL3Mg7l0{r z?1yR1QwI3t4d5$lmbYCFqL`Q66nuy%MWUd#-9=?Z7r*oHu=4l1_$!J8bHZj0RO*ks zay|*SlcaM(p?CdpCrVq2j!eGYpx6Z+q;D8bBW@|uCJAEE+lo4K<{!6{AFqVEmvk#l zGfNe#C84G0{l`i^Y4Xszz`}1aO!T>@cIJAFU0AY((rw(6_zY&Dxuu@RKMRt2B5x7? zX^4MvAFu-fGKWCN7krV|QK^0G5#3MXD*4p}JT?z|IIR0M++@U?x!7A0)ptp8Ta4}?Ohmq*FfPAw$MKShmX!vf)=AX+zBG+_ zVo9g;45VbsgO)pQL@s@1H1BoyK#cJbn5|Sz03(^?X zCV-(%daGRq%##(S3+pd2szO3%9c;r38GwXsb{Ls%v2#FJOVM9#7U8aPU) zh_AFslS{|3>W!fY35#kBtTtqI6Z;Y*Ck?B|H}KSFk%IYpxxV-eiKd?OCqiIDOwuGJ z3TDxs9vnWXPeZ+vci2!L>Ls=<)L^hd+nH#D5haC`Z#Z++fSBYc>1K_r)R=pnR?6T& zWqz%79FiCkEV~7MT{c0y!f7D07+)oMg?rmVD-Y{vDPhN+7f|xB{iQj zc&Axqs?x+394~J+R~1F@Do*>vZdpNdk?ZU*2cK)Dm4SEL>Qa`58I54+b`q7y!I_TKcjn%IS&(_!2S?RG0@ZTZ0P25PH z!O2yGIkNcloqZbkjGjTfdoXl-9Z9SjacZ3AIH=Hbdl9*YjYnkELNQ9obCMJ|+tZU@(nEd-`B}0h9 zR=;@!^+6Qnrx2;*VRgDkZ7gs=H#QAn5|7`=MTQrdjGcS0ZTUvROT}*_cehA9kJvVK zCHBYoy-9DXVa^FvcI?W?-QQuU&sA@c5s`C!&yvS-!0F6xd`wb*Zbr@3j5K#tW(_nX zK-e~%IwnV!;LFL9NWRczc8kVK<5)AD zgS4Ci``$zu+TU^5S~j{!lj%G*o%ySK$LHGc-|Epg0=QVK9M+Atonp@NQ};8ifBX<@ z^VNi9RB#GBIq!vBcXrFh_i*F6Vuw!sy?%GXC*cGa zq^jLwb~1HYsm||X`ID4BMoq^<Q|*In$j zd`e@SaYN=A-DM>433g4prx5A7M2crOlyS(Gd#Y?=j!#qIckJ_f@Ihy1UXOcTO@_Jw z_03#pBq`Hm#+DI`#6bGlTLGEL7<7JbDQuz2spYrnmLfQ>QAwcm_3f3S^xWGPVVT5z zoTI(2S07vIYUd_@&TPHx)V6(UHzZUzRNcxGY<|R4zkD@WwT2{C5ZpLPaUD?+7aE#0 zn^_?IAaXHmzQpo1o77f^Y2MpjEXFq1T!!|6!vQC@L*TAS-7wxUE#9FIQ^w3B^KbV* z%pPJHgx*Qk2OPn;j7?Nx@+1}pMWBgngCPTI zeJ)H0W4e}_Qf+GB!BT_b0TW2@u6^#8@?fFQ+ga`g3q0xCp$7=tK*@>8#-7afDipg(R6y}Ev5I@3i#QIaw zZHLvoo~E-JI)~>Pk>K2#b0?DB9e8q=V?ugelS>QfN+KbSsC(Q(pVrIw4sGNy^w=hB z*s1C*+H%*-wo$eClc~zn^JMn-$S#25eOJh%UL4~<>_C&{0A}z}yCe!3&W>QGv=uw* zvA!Qcn#3#~C6ywMtiYkd5%~UqfL*J@_n=Prq4VSMxEB5;{G2cZE|eqO76P^dPRR(5 zK(C&32RZUy7H!e<`kwotye0uOZ&DhKHE{;bbisD$#{<|4E$fS)9b`po zvr_C89f)TBYIcz0nDFzY$Sz6YY0`B|$u@nDiAXI{067gD2M_1GF=kA?sXuizxSmRI zT&+XRQ8{^_cUeoe1Vk#0T+W%QbX<~>XFU(l=?9;)p3d-t$jDMWrP-Q5=ocd7-B_MS zxU}nFJ0vb^APHOK7Kt6x!2}5tPFG>$-NX+O?(w4L*Uh)t!x)6~zfCz#lj=R&W7bL0 zVkz`bSYMCNyM$Mw~~eg z=EJaV=D%X8cgV~|KSV2YP9m7QG=F{o(YiA><1Qq#tHiud+dX0~*m$O;reVl56U6P8 z+lCivIo&a?)ocnT(Jb|^HOjcYXz;qb!+K%(@StDp&LJgqA#wjL=>*#b)6rMFvVi-2 zhqT6hJFJy>b8ws0n3=+sQ1H=9ULI8;+ zKIoAs+-bBdkfe<{o`5)91_D!#AGmG)Ac2u-HyagXQ*@ItV_X~+RO3Athc2AfT6CQR zy*cKuePisawLdY=katdTUVr^}U5-zmt< zPOong=)84AF{|LLVK|CuAlWj^_A}rVi2C4|%bbx%323g1r`YFJ^_25bFJR~TWUL4c zQfIme)3FnLIBl}l`at|`mCtvkZH2<~4QVnp0ZENfK3Wg;UQ5|~i!U&C;$1o4R6r_@ z+)NIS6?r0`!8#Lsm?YGz(Co2g$3AQ9mGnw%70&&>VC|zcVzqp(UDD!##=%_jPEKrAQF!7eb2zd+4_8bOz_9d*{u-zbfx$EP{ees)`qqP<%OrdC~!{gcYNi0(MJpk;y^l+B2vSR}lb+>_gbDFV~tJ4EIt?x}3aZ8V^ z7C}21opEQ|+q_1M$v1oYtT-1DkRpp<8b!#~z0JFxe#e&2&Ttb(q+ePpTz&IU4GNHU ze9Iywkh$rodo@bvjzgG7x<^VvpbT{VuRpeESq0r++BwIXpM((?ZR*htH|Ym9P2%;j ztHl03mQ8K@0MxfNi|&5W=^971Gq~t(vMCQ)!#L?#N{%$~FM!rHT|=WZWKduKy$?W5{@Ca=4giR9VzU`0ZG5v!=U;<*eIjjMyH)lVQ=81Ll?#j!D83(Reb;Bo5!&rgr;xf@l6ccL)#*p^wkOr@8@4cf z0edEl(B6g<`eAtdM^#SCw~9ovuio*(YRtl zcW;5*f9S<<9x?UGv)a}PD+L&D7TFDB%TPei6hiqOb-z)M!h{iv9<`LpfgVe&>-Vqw 
zg`h}h)6=NiGsV{}cVEzrn{7R$3B(i9-|!Kt*9>;fjx5MwDx^qTZ~u-YzB)x4*7{*} zv3qMt<>FboLhn!>&s+`)C+0Si_61Q+n)-jzbo!WVo=j=^@i^AM{$oO!JcDh=WIT;R zx$P&SnB)(n+1~%q02;Pq+E(J6Xm?v+ap6@(__?H+C}PL9c2hYw(|-?ekL zV6%&WolG+M7e6`l_a1s#_NRVLr+zP;No!Q+Re>hX83N@`zY@hfB^r*UOi!*#x`<1_ z`HWpA1=-u_5zMRJTGLs)&Wd-9U4zi! zvCsAWutGi}nFs%@;{nSG{TT3OBG&Tg)3CekQ6+r@vX^<9B(2DBUdWBxOqGr2jJsa? z$pwD79y-}(n8EHQeS_yBwIa2A=NC7n2-(X|=btQv@W`1~@C1fj2VKMhU%iNL#P)?` z)i8(XKHv;0o5LfuhlgRJK6XOPxn3pmSAjhw+#-cc}$gB z%8{lCO9{7(9Aqoav=-y@GbWTyUj4;avUBU5mb*=06GF2hl-sCDV#Pg-B^P#t> zp1ugcS5};Uw#e)yVCUc4Z^2$VPdC9_It&|QpdT@}rz-pm7SQm`+B>f_p&v#Gd2JbvK&?yVw zX=b)6zEk9o=ZluH-uaf?aaOu4V27Rgo^GDH*2lx|SK%` z3+PqsSF=PKHm$_iVw@^2ighPnh_kI5C@%LRJq@8Q)l23mSZq2@@4Vc7RTYAFTI*i6MAHv z$)<~E50i6`_oA$eeBoJ%2`xzCL;s#NkHdhg`C#kjx5AXk_jfp)iF=I_`566yAFh>-gf%r91AO79QuU z`XLKpK@wl03HN$7bzSV4IFsBSOQrGKbQ39;Pyq+5+Xi#vg@R>6lRq0t$3q4A-fd*g zT@qXgzf4w{?spdOA6xM$>$sU{Ux_q7^w0G#OeZA?K2sUhEod9KbJk8WCJYgyEv}FLV$agkgPf%0d(1F7&TOhpUz_QpUh26VwN&poOO$l$F|F#!AZm3fCi0j6rxK+^qo-q!0nP^ zla5R9=IocJF-{2k zeEGMSj{8mN{g((~wdORWFe!u2`GpO8zI#MF#h{5FXV3a#bA5QB=9LBqiRvzv@1cE7 zw#Tq1y22}kq1?*l*Q844<9l~|)uJoEar4E8TX?Dmc{)z}c|{3I(WZa1+JXug&qcFv zEgcs9j`ij=uYBHX`-wL@V)_Gr`9NkGNWp@6g?C=x?A3?ZSU$S`y|G=g-0x6k`N z@B03-)+{FO>zsY|*?V7k1f(Iv^TQt4lARKb*Wl@VuJfDwwcJ8ERzOik-d~df7mZ8a`>@- z56CXJxhN>E1tW12#_omFU0S_O`M~S=VW8ITPY3!KU)2n-5yf!0{a~^7d|@wv>3Pic z&}o`lJ~C8UuX-W)QVuo6`@6!z>v&0e?0sA>+d~fTo>^LUdZ5b>=XY=Q#y^V-?*N$g#OgkEm4S(L~ z1IswA0^IOnjcyqU)BJub`Rp+#(ne5%s&ZZWE^7??E{${&bJ^Oi#JaeQLE-mn1ByP3 zOvgLRFxw^t7sn2Ts3i#fiu5vwT~~ ziOsUF^*65^d|Zi-V&7}TBa}w8QELs2cpos!_sCt2(2cv;Wi`WiW{$%fF7tdh0GUIy zVSGYj-cCP}1h5)72;6z^(3T8bLc+w0|F@Q`P9Il+YY_2g2>#Z(gC3ZD?kcFMSo%*Z zz)G{-7%YL4oEyu8WKzIe{@CW48k$tPdXl>so(%Mm_2yLrju;)pA-l3d2(AfKZYTrq)RYQsTI%>4>F z`#EB@8Xd6Ne39dczInTZY2CQUk?Y+TFPN9K0|;kCnyKH`y;0lVQ}+_V!(+miyOXu*~m!28^@bz@En#A|KHJs#p=Zj>LB(XzUs6vk5b+9aLVi)0qb|bw6#EGL0W5 z&wAU1cwfVQL6-JnnEJFioo2c2RK|T=mV+}ZT{Kd~Eso8e^hDq`rWJKGQI;#oR1PQS zav|`~`Eih^YZ-*m{hJfbwNWBIpZjQfm~zckCn?@w$Z(H>RUS ztS_MtxWtm)hODM$Q-AyVyd~%24hpO`4tr-bTJnE7i>7a4r=Lfo>>D5CQ11W5xrlfR zq7QO#-7W~;N)PDxpz1+?(~oliAA0k0t;4Js8` zl-1m=_c+#*8TwM)c2q0vEicuyyEN!lG$oEe(E}{%13)?(PQ7rwfLw1Bhl#|OtBxEQOrre|MYd{83KGGm+TpKyzB^89!e;VyhmA2J**FPTF<~8 z>W+K74IYKM+sbo8p$Ppk|L~Nh~IHc{qETx*>bJC=SXYJ0iDzd zq<;O3{cKNtzA*mZHX}@28ToW6mG$`fGSLcFu_YagX!4d|&x@xsZ(>!uC0jp(`BEu5 zWAL%f394n^DJ?UPjyuqYB8tf0<~d=Lntq9+!c`?z|(l_=ttS1kLv%J06l_x)|pgpfnHA7Fjfj>iBI+MEP zU&5TLIL>qhAh~=lhF;G^(qT!VL6HdS{mfiPZOfE}<4atgGCrVidi#^@veJ5l&mpFl z`ZYZ8(|*qKoWDJ1{EoV*av)bvN>yHC-6=)#d06NbSUzMxxW#&#_Jv-*o}MnAR9ZHqtcVre#{iV0P=%Swbp{m6atggnO8LCTwc+h zl|fOD6^;4m%QO&lY8YzDjJKZ&G(&jryR5%rm<3(U^aS_f_)syk(0U^+L9%kH$8|W9 zC?@l^iLR5(QO(;=$>v)ioUJPa5>~dQ;nINDP3+H;LMRj+$K)hiEtE1Zm(g{rj&9W1 zN>)tX=w1V$kXsDPuDpiiEjiCZ&d{Sk>OCrL{i%#JG}^X_0Ws(6cj<(O1GT`ofi$+ zBpvLvgxFHH3u$~`>`iJQpChqFblD10JdPGY$ZbfDzDCi=E8!W;EUlA>1-d*bYzm)2 z(ckjrhJK||7X@eUg~FWlI*Cl?{k!okgK~rkP?l!-a=VA}WSHb4Kz(Pm z6piI7k4|Ux4K=3bX$RQWvt-5!>}_O`4mpliFM~Z|nW0_GAtlQQU=3IQ%`-J7{_P>f zy$k|T?h5+_gE*$TlG?B_%Rz~DzNp*rH*xxP;j7^`;qpF<*=;_hXgM35aV0_(Y<&VQ z+X|MkWAuF4j=BjfXoz_bw6249K#?&psxcoLMSus1vz}&o>_r==@dwwdo>EVqT`)-OH2)aL)2QhMYIHv7bd>Vq4WwG1fk0x^TYL z!22uPO)wVb?q;x5uT^M5(RWw5bHy?4cnemhbjI$lds~Cx|2`Y>w2pfdr&fRvHkKM1 zo1i>f9?y&htrfloz!m;xzPCa6@JDMo4zVE9YCxIdPi4G!1&Hvm%0R!COQfE}jIh&3 zKR;_>ljxS($eoOJx^$@05-3i0bIEPS`my!7RaQw)f$69JN3t>*_0~EZk1QIMjS5oi z;O)jAdK3_6slC6&mA$8R7xhztQ!mfviYmzBn1196)(xxweA0?H2lBFNhX^#WPtNf3X<#CD$b2s@6Ur1t#NYKIM=U(SlVJw527@8OmwcdFGPsvXvf6)w|B zDj|h8-+swq5ug_rSSRMewWiS+tzxp-5_VUjNUWIF!bk|2ae~+XoM>{nRPYr2?p*p# 
zEx8G`c+d8_4l5uwk+&s`4L`t=!z6>O#t7X=g+DBJ3bgx(4u>SeA2E$o*LHA+dwO}8 z{KA5r72F_=IZLUn;jY)Y&E5UQNBJTGyNqZqBOI5UwO!#M3xv(4!_mf!_pyk(VaRgc zx=3pb{)+B_LLOfc!K^e)Kj)X#Lo^+nSSEesx8#CD1Zm9+vwpny7bX-;FNbFp3(baj zksK$Q*9|^KZzOV>S1wRIXenlcn>wAE8?K}*rP;x%20>#bh}vB2N5r2%g{s~+`9-WZ zp+(oeL5Rk^?OHV|$<1o1(Xn+4qq$?7Q)|uho{>^jT~Rzt0JNM^-nP_dtX*`?L^c+PD)CJLXf^>hZH*4ebZ&wMjd zULqsW&^)`x)tdZOjt@_N3PXpbP>C~=aXP9CRHj?9ZzRQ(9ET(1)HkBSyj3(Q(Fw9((h2D9}y zT*uziaJYrme=mINK1eN@Pd5 z3W@@iP|$=6QDK43SKft#gw<|TB{XXv&vRHn^w1}KMlh8d89=^Dw z{t5U@gI9B8-R^F~f?m=bh~ja*u@B)-izjE zy!`fGT9z{l!|8pi2dP8S;cmqSYirp9Hxe(gx$0t94;xTVY}oZwyMXoN6u#${!d>G& z(xg)+>9^FIxwe`dD#?l^Okqp^X*ev{GSdC5SHe>1oP_p4IQ$6k3HgRUwYh>VhsxP+ zJkT!Hb;}+R&mBYI7ieWfqyj);tQ0=mlXRRev4*Ie-I2yU1s|o(>C^mWoE04msSZs= zixX>E37N$-&3?Q6SR5{s#$5sH%tWeG+jD6UMXGO8+=B%8a4}6Z6W;r&i)Gub?mV=N z&0uJ=l4#3-g=;8?rA_&$q4x!q*kcRuFGk51hu1)hW+0UA7S;IB|KTCO8*vxpE8$=Y z?p!bP#L~*Fs8h~WMVaj;qi928(%${ku7TTI_6fI>TO)UFVGMSw+r8j% z&n&*0=%qr04Z3g^D!3hy-b{C6<=Sl_!O|627srHn=|slI>soa8rXW&HTot0nw6PpA=fVFw`W6gQo7_F`BKps zfDI8t!*n~}DauChcM_=IT9G*QGBumCSuyFT*0{WyZdWFuAIi5^>(!@!wQK3}?tjB2 z&{BjzpC29S%r^y0xx>I*{GmCun{hNEl;#;;cYNqUddSs?-iqPw`#w@CmADfmG5y7qGGoHujt7!OOHq}nWF)@1(3Sl$aY$W%_C~mZ;HOYhHDQ%gDGBn_uDt9SjkJZ`k- zl%bLwmVrqq$1ZfQg@YQ3l@KwVnxcw=U>HrDI-`+Xg*uG`3_hI7YVF=qulb< z9-H}|aNR|DXE%qXG`2Cs6|`o)0hcxMx#J9RAsr;iR^x|L7sCna4pedyQq8v2`8|>6 zX-mM;(pU2}ER=jlWU-$jdY!TgD0RM#_wox%8?aj(lDM#Ot7BNSOl zI|L^rIUV<}(JpWIGdS-|Z6(NH_hV$u{{81EUdty|wlacohdIcQk$7%RTN1bL@1b-#FrcQz|TnmBd z9Ogq2oFO}3Abv@csErT?ja>joD+Tn>NDq4`17MU(Q!y`r_s@ID_v+oAst}ia_gC#% z`cp;AgLUYM%CcU~Em6gXE1(o>=%+!3cihwL+eB_DEu(P5UDKeJbZmw{2ipsS7u$CK z01KC!G3B@JEW7tF%U#{_f`+4MWNq8(($R<4_ngbrlmt#gd|xFV=8$Y19r#`H65^D8 z<#cCy*`l_BujLzX5WzO@7SY8Q_b?r2>)>hfzzzMa#lc_nvdDS9Q~f`=GIc*wa8mp& zBH-jn?Fs-j5epjkF!WNXyfY8=50p?zelWD%Hu^pMRDCu22?)jRIYj>#EBCz^vCy-9 zjMk+pN=o8;#bKXAH@&Yin<>1rMurBS)uL>HrVU)s-X3)L?RsW%)^Qs#s<9oZLPc}n zy(TX8$K>Q+t9=67aIT`Ka}HD6r3`vh-H=!?l=)H2>6t0ODT`!gUVi*Dl$UD5tv10g z{XpcMbChI+N!8if#am@kAM)IHzlMOkTA#ydv|GNo_6PbiyXz3qsD*#^X9k_%OAO8Y z=4g0j_`D9kanGg9o)#@5MJv7l4-@)+DB=eaPl_~8d{4=c zGlJss?d@+k_bh%y$WJ5Pmh>i;$GvOr{O&ejgxxJkhx{*U$=GgL%H=HY=YDB?airM# z;X4}g>~LJG%`982ZreC5uWq@t@=YdLAR3{B0ahimAlQV`IX zX1Z~YC(%u`hu{fZjUzkY1cw;mnoWoijZ=B?XhJEP?fiP&ro1wg6eC%w9sf z>ioS{do{XY0g&T}m4-99%@VROY%=g<7<%rOt9>y3TA_4=-FdmjfYR~QO?~u=C6zu6 zIA_nOe%w`2OX|QS?k`veAexp%aznphAb$&h&dS2{E~%z1>Q$ z5=t|Z{JF^|sLTGwf(BT4!M1e!2k+!tKT@sBnXeER-%e>Xjs%4|ro<$z?%;`+(_3I$ zm;(OxCWAsm5M>J&49;Nt{fk(lbO{>q?|yDl1LsR?t%J-1gJxOhZ@;^-o5QF>#hv6d z7G!^V>Kk0+qII5I|Lv z_`y^an1xCt%kqx;Cnye99=K_GG|@RcKKQeap5k0CPFhQnadGCM{~4hlbMU;Zi;|(& ztR;s*u1?5tG|d| zF07Efam>c3Gw2-n1N;@m%tcbw3uxj#mss1bYz$}kFbXxSI?WTubg?+@z9eyZXGbOZ zt5VL#RcoHoQuW?05D6T^YV^LX61u=_Xp_*q>g9m|O>amo?6Z4nurt4~O=d~P-p=nH z0z~ll)u5QIqWos|B6TD);lof#!42idjB@F+fIw2yhmsVB2SHBBNnsQGvu=LIdep&V z_8e3I(7@Zu(uDcEgSurL$T4@=1L7*F``CIB?QX63^OVXK+<@G zD}hxbMe)&AHH%NIqo|@+Z(*MLJQ&{0H-XiHYfve^EDr^LaG{vUNm+p7S0kr8 zPEkyAOhI_h;8SM_7oWhix^?OuzFNVlu*LH1J)7VHUGNUw$(uKp#n7qM1;4yO7P9-U*Js*loHHmf-)&gE@Va;%r z`UIo)?DHhYGWhB_#pD+eoj>8P=72>zsj!6!u2_Xe65cf4sow0**&nFrBOk4wYPP}F zGuK8n^OC!~FuXX5w1(y66Lm$a=2JUoY&@qYLn3J`4Y+vYcHQ}v^6ohM>O@}K(cW3} z5Dw$6ynBv8;nhxoYpoB^89~OUXMDhSX?v9ZPEow2=Ea33>Y)T2k8f(HPXJ5wG}P>v z%DSCT73}cRJ^3;*=U!5u9~M_!R(1ujS*?Re-qk6IyDNRfu1HtBXMt`$wtpXdGC>cTtzty0!GrWke-uS$H{<9cUI*+Ivog-fKam<=1x2gy*j@ zRu>;Iqq(qjRyM8}t|)nl8(UhfihRD{(}CUH%YadpCGa_7itoQ!09Y`uqwnqPbhqct zDepjvqV+pV#t!n@aZ_s!I)Cq!WZiq)?e4S!Q(|pQD|oJr;aeXcm#yHjEO#;98J#WP zZZeq1CrJ+t$1!CqHv;hx64^(AL}5WEh$rXlbrOyO+r907fW<{x1IF2>Fl1LlqgXhT z^i%cm95c@5%Z^1<+O6ZS+X-BVP+XneJ`@vav&)vV?e?C? 
z1m_P7)r_m*w;v|7+j!j8A2hac&fI_Sr#=AqO!Zj(2lwyKKNNIFmC7uozq56;5$zlN zBB|p0^1maEp*c2?Ptjhn{T-NbEOQ_Sz?)dWmv@Jq8Ql2?pM5(Ikoob?Thn$cE;zMt zGbc1@7Thz^HmEoAS6#0JYrvF(WJttoTxxyrMHpW<%V?b0Xw|}2Um{88_h35c+9ORq zhIGT&5Va5wuVX;!1jAD&8qK-_&4U(2+1Up^kU*`HGE|lcIG`L9ehu2IcpIb)qcgt) z*K1Cgx2<5xgxKqXyo6M?iOBp~0f3>)&Q;P7O*4UfcFTg+rfES6Hj^z|*B;`sL+zNI zyWQ4m9Mx#UvD|eVb6@m*n5GBM|1ybV;!pCYc!e*2n!3*)65{`6QAWt?+@Qj0Ez@HA z*}}?n^UF6aZ09TcqLTSOov&#g$GNzz?u|BleWK{9PQ^im`)xDFRumGYd=K{9s%R|x z+-{$!iolTFB9+mHowJ2Pq!(xhZse;xIPWHP4REf+>@&CN!-GM) z6M^*RFkc_Wn8Hj;d%RJx*o?6Z^qDG6n+rI(8syEza{($fZz6b=z{s zq0Z&)^}GF?)z#jlX7@K=ICW#o1sMuwv~S$Fbvo`yMbW^*VknuD`FgQ5OVpB@xk9IM#Zq>sGMm5JX{` zU5*$->#pNb1ETAnjB|DQJWq+}Dqz?I;n=$tXWg}xI`g3n2NqyP)r&b^+Ff-IYlpw$ z5XaY(8Mw59fJ=IV(FR@yvB7%MZPJLVp!n?b!pDANiubxJ(mTy1WHy!qm+k2r!^Eed zepY0}!w38j;kx1(N#FsW-TaT0g%hx{oYj)Ix8}9i%r4}#UH;5$^?FmuS+%&Z=MMxv zEq{-XDETT&h^h&J#Qx>ZcIJ1N4kSnT4B#4L9^(FDQ* zfxbI<3Wk({c1?s;mpTYqgF>>KlAR%b0?2HJwGt82)71u!tn!CW&K|3hTKmtcNlBCx z0L%A;gAI64r_|@Jt{|{UA4nG=Fep-ZTZoW%oJJs(ckrZol!9YGuSEx!Zw@ZaPR`WE zwp-t{SL`g_eBX0LQ65Tnn=F3aaXDEtIdy`6LCWTU*py-_ly*O!pq?Eux-_%B=@p6+ z?^SH6DYLoOh4#KFg`;)^01#C25Qz~!`}K+?Qp+oL0jdH_*~=ssf6g0<@-R{uAYy{> zH7aNa8WdW$`$9HM>JJSY&mJ<79WOnZ)K2FwR|s=SiC?WPe#%3V2yrU^3uCuOEe31& zOoP3iHx^V(@=x1V$Y$!8WSOQd69(jY=KItA-%Hf!L#`)&=V{Ih-!1bGN2l%v$txr< z*p$?O@lVO~BcG6SUews+_cpxvuU43eiz5Eb|5P@_4#aK$BIu5dO>Vqvr+35kYV1nE zr%>SzSIJq@O}4OH9}AH2m7ByR_yMYOX_pPwehPCrT^T%Hu{?$K z98*Fzdxwm6)KbiXQf7jnazRj|%7-w#N;&E9&>)JvAn2nY(DEgL#7XK*$EloOB=xjf zN_q0MSj7}2$dIkfvEHEt>eF$ST0g$cGv>DMKQ`K^{82UC3PgLU?z0(Du{9(aztKll zAnzl6Oo=x9%7dLA1byf?W7{?LH4ip>k_w$zm#F@m0Rt%;&n>7e)-ZiMnLAqvZKz2u zw742ICD^D)UVt8zn_bDKNn8@pG@gPEC(~a*JN5#O1gMlR!L@Ve$`rwSonLN+g*Z3S z7)9u&Gnw{_Go2~VTQ&Llg0m{IH|rb7GKB`Ng`~XiK`FoBt&~`0(L6AZ#*OS4O6aRr zwSnOYu>B>vB`EH6d<5)w1tz@&%lbFdf}C@3YH&)Dmw#~WAHq^l9a4{ztwLLrk{hrZ zaTylXp6SDj|BUQ`KTL1`RJ}WMYF#y!R%feYqpaWFVw5{tV}Kzmrcy(cEk7q08nCrp z8(XY-3(J{KvyL6&{IK**aSWoc+%txj%A;9I^aE5O+K;q9oa7RSp27+%8wQqAWz$E{ z$F3g4KTDJI`kxHg7&x#+2C@Jrp&{Q)}povL^c+F(ZN$N01778&C z)Vq2IBRWl8pT49fv<}*JdhHQ@lnO%Ien}!=K1OA^hrOz)>qsn+QoTh{zzh8&23eOx zXYNK6j$T}a?{0H0w;){B*kqLE=A$e3IlI7+xl3hMcPcHm^}wr9X-BzOeG6LDjuzZ` z_uTo$pxRq&Ngso%1$&qTZ0*VB8J_{SdkVXTBb>4@Ee;l{m#fhjF?gb3&T8Wqm1su^u3{4C4S{-kYtI5I*2Lthd8#Jv^P z>*^^Su1e||LE)O|FVQw!H$#EhX&%1dvt24sy*k&E}t$uXBf~!L8j%(68taKr{ zX2Yiu3r>x{aL&q!OI6a`cG4SFvCZn*f)rj@3Dx|b={ped$vC5|aBR&V8y7TW9AXJd zS$2Ln$x}Ndp@u9>wzNr?YlmL#)*YS3-@I%(jELx`t$Kih8h+nLO{>$w*WR6w^M_IQ z8|QF_PN=1PPF94owF&2tz6vZA#85HGroAw}LVtDhm}u>{j|D%jyHBlEX|(Rxxs}sG zA!9RrNdYUeXWE#RuV%x<_)&*7pjg*njr|%EsCt<}k4=Y>ubxw5E1kFJafQBhTrVqa zn#}IH$AU@})9^;keC%v1#W!cWFsv~H5_IkCYyN^?+4 z>TlssigKK|mL{I~2;GUNQGTe_TKXGfF;FV;hw~}=U`p;d>LI4>3z1BM?w4Q%)DZ!h z1xedr=o7rJ9Ix6|Kn3Ftmrv*>L+mqaq|nj5kCQLl{FwEGJEv!rz#D?vY{M858&2cD+ZUEa!7Z(Y@y*W4($N`3mHaE{Vw6{)7O8It^d zBB?XF$@w%y#KLQ_qpElAP%Hrs32wN^foDls?ytTYcK6I+i{oUX^?bjeq8pSPqh&g@ zgptbAV@p_RY8j}jd^6j#|DoK`2I4C|kBH8xb{bsYIwqArs632xdI0x;{HPqcQ}1-# zn^bM4Bzy_)Z1YK@7Kp0Z?W@1PNx5(ul-oYhX|wTJK_aQ4hT(gb69=L5lYaNdDtEI; zeBb^t(b8F@z4TT|`s@ZT|IPAMO$et(YeQJjH#Qp^ey8W&juw!}NA0gH`V`ziE?l?E zPX|?Q*1wIO=3Pb0IxZG?B!C)ySmtu78FN6hT$snXWV7eL>Lo3+B-#uP1)x#-BKC;b z5t6l$Ouql-tJP7Z5Ssr_#MQ1ep)l8&3_?snpU0U(u$HH`dzwm8lpuqQ1fN563cq#nH$N6y!* zz^yV8WuiN2=!;S2T(1cvKn*@?ejMq-VMIXDanwMUkl+zZ3 zQ4I@c6akEF@sOtFwydinkj&BAjLB1gBU2}^<>u3L9LCl!97hCp1gsiF{EwlW^$Q)2!S)-&AP#xATFCR?g@FR@O2W| z&@XPL8i9{6)1+z4BBT+C_%ncrBwGC0-{=ysa}9S&*OGhq3*yD!hB`L^Q+=ehl)G^@ z!P9B`gQFiZyFp9e_M{*{y892l z%AvyQgPbsxnMp@Q+c^z7oz-?#@?g-w;Jxm!Nb2kdZ(ojr5PR3A;yyls zxtXN!{OaP(GCXOW=mrk&-&$bR0T$^W!$yRK0a 
zVRbZN=9!qN6*F9vTfD2`^Wdt_8@q<;<^9vY0NK5yy=1}E$UpZe1R~lZs6z+GdZ9rO zS9cJV_j*+K1@QYLhy}kT9g4G9*VS2Ug*T3i>}kp9&EcwVr6_WlQeHoqNnv9m+P{<+ zF$r1C2=mFaEGmI78HI?Z%J?)j{gKm>fefxF)$4=P<(}LE=i1+wI*S|ndicC{m(RI| z72#JIbIE>~@q`;|_^jU%m|%~SnJ2Vxxj$>|=}Pw;%%We|ausRk;QGD^BGld- zi$A$&&+3}H@pOT33WqTRGy>jJ0* zWdc%Jd2}82*Y+S@ws)0(P$zx#Lo`4w_GG&yp++y`UkRn7^i?Gw>+~e)ott;n%71G> z1ZQa?EFWFZ8`}#ie_*1>XSL1-Ge;n>KC!84dQOcJ2`41Iw|cC&3-2W0Yql(aS9@#E zU5}1L>`v|$M^eT`T@qB!pe}!3{eM;Fv%x2)wJq+G&;JJYKbqiNgimCC@dyCYl6Hvk zKeBi;Br1QWi?Ad?WwBpR~85QILMx_h8k_FLTsVVRWCdIPr?cFw<|W54SI;? z)!p;EiofLfFv1WHNQU){FnT-BmKM8KjO-}H}3zZ4sMUz*I5=k-hmWtx|Cexh>#er(Um%habK z^ZwU0xn~QDD`Vti8O`^sz6uS~f-2*mSR%F^buSC^Zj-Z8Fe zbwk7L7rd~8dI#U%Z;LS}OZ@ub58!HXco&oTSo$1#)k8nvqV3Q4(!S0f7AKjUXc9Xg zeg6#ph)Rm&CRZ`piajIcPqHHw{krw)$};`V(14j?C#dHD_oa*kV{}?h%~6LBl5XIO z>QBRfg&5Aj5gho4PY2((=Bqff0@9-)RG?U}IZUZP>0;q6Rd?8_yrpP-sA9mUIGbj= zcLcQcHlqcn^<@nJ6299V~RR)txUCBzY=GXLU0hj1>L_e<8XOh_4)H3_c<6RfmRO^uKrM z7C+7A{6eiZ^J2Fa+&;(uVp!}P6B{Iv>(DDM!RmBI1eW-NKx*u$=ITr&HIZ3&b%;i? zNYz^mzqzb5o4wb>sg6D%jFzUAp;DQBLpknRoj!?&rit|L6{N=u1~ro0ybaY1c$nW< zS?1a+A#`bCu`KSy(^jvVY3!lk0|Frsf8$33RlKtbEtf&=WrvY-BYnwTF zWs24|H?zArloN=(r@d3~XVU?sIrmkeJ?)$O&RTF<)P~WUt@_pM{&Dzd|8(*kBK59U z8ZKp`E5n&x*-^`lExJ<$M}Z|Gr0x$5IqMt+aKz5-5>!LukQ#LhEWIuX)BGrE!|yI& z9?!^R3aq?SM)7K7oN|stzB#o+`_D1s>V3Perp|!M=+#Z1ONy|1JJ;%+Mh)yhms_e~ zkuggSmb2a@8-t#E(qwaik`RqyZ;i zVo&2{q}@qA-C?WP;LkeR-8L0Nn^aBUNs!K;nfkgov%@7M*%of{H(K)7ygySuaG?*o zv-C6}|LHRQa{lcO=T>V3`?273OrHM>BG_var?;=*>76Mo3?*_d9Sy-*4j4SPm<8hE zgqi1_92>A3^C#FS??&py$kBsLv?b<`{|+z9hStj0&EgAkrC|%tyxVb@XP^Hl`QfZ3 z{Bujr@AI`}*XmBK;8d4{8;({ALdp@cBCUh(J8_mI@^H8v~w zGydOLO=vs2UFnPEI(f)jHv($4OhN@;%LF37`Rx$|=?lpn%gvt#LJ5JZMw<;pqoVrR zsb)U}{buluMJxVvF{AOoW0Ol1tk=ll z-uu-39{pJ?Z4yaL+EEnpFAILC60+uTk6U>(fIl_ExRU4}YkOehN7^8*)@8pChYiK8 zjeA@MViwY3UyH8ge&O*^5Lb)1hC1h_praO;`IZ7=WMbkXzWc3~a8s@ugECJ2$&5U0 zt?A4?4NG%B0Pfd=K(f4b>UH`Hj|W^eS)AB(;WXhPixz%f?UKGm*fB@Zk4_h#t2|2p z$6qu7YM1bD7-4LS&_S7v4!V}_`)P{YE^eD!xHpg8z+WI&Zns-)Qc43O?EcC1SuZGv z4>)}M9|aZv5!SvJ*GV4Xds0=(>Mnui zaL+cyWFsc2Buy0rVg095OO>mzQ;VfEY1b3b=?Au*0{Ru1 z1XhGEj;$;}8tpl(uz@PYTR+xk9{hmdJ3`j_Y)*8f3f`!Py~_m>*H-4^46@;|f``4y zo@Do-)c$Ldg5|^Bl~{HW5our7gA>-@r)S#`3L$xvypq?#BNegeFJXE#>SM@`q;2CB zLIs?1n8fdw<5z))wvcY52Z74R!tOZoNbjr>pE!MLl+&v{+aIsKV#CqHPF525J@fup zWfd~0R9>Qa8qf(zZ5_8`#AUAB(5BX4p zhUH6`@W;WnL$qO^P~8~B*Y4O>Om#-*@_{}#9Vb+XMdFNsFH(C} z&Geq!8yo%)3nc?dccc?U)HYe<*5DURod^qv;m@hVR=_K{o9GuMn5~caku1H1v(P!2 zxBts4`1^%2&l)79G}$YYY=`M8+4(hTY0fOjVDnQ8OPOnhclRub7^D%Ayv*D3Q}R;1 z$?fz0if->C`TD!HZ6#8@75`=TWDA+XR z)$Tn$NIu$@Vz!T}i^|HbmHn4ex@Ii^Een=?)6v7QR`ES!I@sf&O%>(Q;#bOS?tsNPob8lA+ICbghoj3tLi-Kc#2JaAn?T~#+nNQ)L( zy+ho8iG*BYn^0QB##+H*k(A99>I7kO`C`>*T%0RQur*=hPQfi_I0g4Hq0zf_ftGG% z{|!bMM|aDiXUYNr?Rp~iSc+n7BX9hvvF6{* z6-z9>il#3Sgh)T8DoRp*{D3_sfyt>(qIq9|T<>^h*sr5<_!i_Kjr z5A|Q=!*(yXCT*0USA(zxF&2~xPt}#@Cu-f3&~Q}rA#(`VtXu#w(+(4w(^9v>84eaYX218_EKTS|nb)NL`XLd1MirR4md1kCL{->D@bxy1 zUCq^1K0$#!V!pQEalNmt+$70kwrTNJeinYO)bgQ6zJjn|e@tMd=Q>yF@cxtLWf?5x zX0VltHOP0 zh|?<@#~X2lJdkC6&W|6oaJG;*SsKr&+!`{?@R*g^LEX?AeOh?Lg>_&>jn6po<@1>! 
zs!$`ZNnfKo(B<$edb|CLn#NvLn3SjR$*5^~8A|80?_&=c#kg%&;O5ixTV5I*^2`H37tzI?8)&WF96F8;Uban;3+Oq|X*-@I zB-rKM39|lykS}|dE#u{DKEdie15IHn`K475?diUF^EG(1D_xlcUh&n9RJHHx`iJ1J zW9^t~bxR&ee|dppY*r*+BmZ5N*|z6RSmJSZ-VaQ?L3`u6(B%_@DY*6~*usLvi=|qz zb9>^=G@|2xes{P==JGKc-m~3%kTel#j&Q{QwvM)ceKy&q@XAz*>$IIm5h(78B2xra56(RnWOCp z-ACy>qL^)iJt0y=`vt$VM$@BwO45JCt%d%+vE7r3?9o@l9nH6M$PRl`n z8&)K&UIhexL6MB@6=y>5!_JL(i9!GvH+C2$X#NFbmY10to3FQw+F~8s#;tq;X{@{SIHd=mb=?e`{d-JYdfr zZ2f0=7ET0FmCb1HifJHsrO^`obFcE7Ta3{^cOB1hJ-Hqk>NXM`O~Wh22aQ#NXCV)2 z=bQqF*O_KV0(@=DekE;5PX( z)$h9u%H#Z=;&&7=r%i-)m}@}#SAX)SgJdw>N-i-GQDM^dyDN^RcP<{kJqH2N+BUQ* zcWfH_RMvC`XDKeJaW7l0@1D+2#COfCV^8AW4LDrlE>>I;JexY{{4wy9+l7O7V*1!% zznIgVDAPC`_l0AYYUv)`VEpS#W?lR&UfONOnH62@7LIPL1BdwPG5i$7%3m=?#b&|S zufo%&Cq(xTPwwxFa6X!U-N!h5`P$9PPd0{nOO=TiQ(Df zVxkNwd-@fM5>0-E3zy>vC8{S%RVctvewLv@iuQV`~AN>lSwek5*b=dSTK%U-ufjobP!pUTKz zD5HSP)2WK*4Hvf|O2pY@BGI@NeA;uo=bHRgspAllUl>_;B}E+WDiD-z zI+GQiRPvCIqYF5Y1Us<9_H zuPf_fyK49_TUg+(aYB|^WxZQFZ(lN7s2l=4)!iz(QomvtX6nBDYibwJd(PgQG{C*t z+u)tx7et>4PTBQTroq%qFWeJeCnm3hIm+pVshAj{g{4WeB9y?N$O+ z3nk<6F~8;omNXC3Jz#=+d3};G|HEoL2?u5Y?pZ%5>;CS1f8uB!(pcktIFLRC+|51h zgx@mX@dc*zt>Fhrd6O+bmp>mVTo7XnxS=eZVG?XS42;E0So7PfGg{DK>%3LF_2G7X zjyEuCb0pPs{vE8UNcsprAy0F7a@}uF;f3zAM0|hQy&QQ7>-M|_Be(CW@~-%4abr{~ zyXK{q;usLuNQi8#`E1-p!zib!y2*mNHi(kRX1_CFcm?oc?Dm~xr%A~euh_1 z=DRsx=cgRuok%9)MotTYlh<=Zv8!f2!=U8M&$y2fXet%Q1a^N$-4j9+Icb$ zX_kP@|8&K`Z%lFwzMsCbpX(p(sYeDWJaAz~6O6{IvQtlL@RJiCrQruSCV!rPU%i*Q z?qdmVTM?4=vM<*M=7p}^MUF7~wQ?9bW)6P$We%W=YQndJKI@(GZCcWZKfpA6%D(*Q zA%((2Vp%WprRyKBeh9fq(=q1iy1Kk}s5b0+vRqrp@w2v&RJ|!k`}+APrbls8ig(`J zx{`61@#B*dsXb3n;v0g^XvUZ>{e>;i^ZC>*$pw*Nq7ZEII^As}AIF_N=~L<+qs|?H zHSDz5XfX~ypfSoM$Cei@HJaQQT6VNn*H-O_>3U|+&f?uCce}N!@nltlJ|X&)CB@;P zt?pP%JS49py4}U z+1>JY-FMp9&7Ez3k}BwDXhHvgR>I63M9)$?9=tpkKpz0Kj`Vh1t63;?!az^=UeQ|C z;7~MsoU8ojFA!Y0erJQ?5&^`JDW!`1jQde|L;jXjh)RkuuakJd+r@y>yyrhAjZP&Z&sNaWl*Hy3z8>R{P+cV(2YfW9V;Ak zd`7lZ!CtW>amLy0Z?}+h>+<;vJH*&Vm4)fHgLO!r=ser&B9T)UPbIOP^)9h1*Qr=9 zj5;Km^=s~+&kQ3O4%TU|{Su$F@gM;KM66hmKp=#cVY+M@HqLH%k(l)+S z^<&PDV)39Nrm?2_!?f|rr-B+z;6p>~0mt1&XPgFk^zXVU{?}gu+adk3=?>-f3964| zQ?MKV{m=L(|N8xZe{;ffr@HT~x9Qsd<2V2FvjGp-?^N?XVnB}H`(u@V^B0Fk(hiwt z=>c^Ad86Nd_O-gSgIT1cN0rjASO5J739f86zWf0_le#GOk8edl`1$1z)E(aqyX60V z+5h=UzXY=AfBp;n)SuApAK&T<$ur=i4-S7!3jLKR|Mc{4XBS@4oSc!Y{Nr0)r}_wd z^!4_aI@n+A&z~MD{r(IvwZk*e$oYR387wA9!x+WWH04b85fV?|PeCpUQpO*FD93 z%G2>^wW=*MLX!GmERs>iw0>tO3=WwRb{jDX_5gyD+W<)k$%7ndZwZ0MpGu8t<7d(kvI&*_BsSwcSt=C+FhgDj84~YsC+)90AsCcD#>s4TdHoDg!gHfkBX`PcI3T_JO6w#>R4QAxxTsG=@chdB?LOW)bibv$m`DBp-~z?Bf33 z1>1xHu6k_qQ=)v2hK6$#Axo_>i$8znLbFpm zW}nET2kf+19NBcvcmy=*xv}2qHSaF_rX4jiIdnA1-g(g%mM?_{RV$TDN(1N9WP+s zzrOaCHVRlLIoI^zP}OXHSx@dmI(Y_P7at(V4owAA!-5AR{tugYr&t%y7>T!<8qpWY z#GeMT0g<77%Ycd$9WcMtU(0u;V%TudBlaeG2+rRt&fjpdYTpJ-+0eSgCV7gBXv2qb zxfz(&SibToI5mETH|CSniB&iiFB1X?8}>)qBsB}-Od`eSyOVDNtxO_j+oVn%Cp;HY zd0l^g-#;8~q`%DIvk_+SwfCP`K$A5oWi3&&_w5w>ssLW8|IzN$fzA<0mfE{Xlfv2CWSn}FU zROeP;VD{d6Pu}rae6Bh^UXY3ag3`;5?aT=+`#4PA)5k&LwEx69jFl*WrNyM7bdd=p z3Ms%SBS&28?KZFFg4>oUj?fjL?{VM2t3GF&GRt0sbFTyhKNM6B2$cktncEU=8&7s6 zKILyX6GGAeecb=#fA}fa(F}`@a{Nlf(&9aRG&KS|Dl;P^!^_^=+nbW9{Rb4GH0ra} zjY*z9eopJPn8BexPWDd#BJF<#;AX*x=aBQm4Z9V>!0dU``L1^*&DX@#;WVAZW}Jpd z;X>qQhex-K;SI%j&QJhWL?!pXkf8!D(^o%Z2Tkqo0nuu%5(c>F2k#ad=TsKQwfsB* z?7MHE19#Bt-#{G~jZw$9wB8T)-w(D<=}<6edHj#!6C}D~mw#_D!0#O}O@eqZF&0pL z@!c_-BJ>^kN@GrM7gyI-!D0Oy1KCPpB#yKIwrMP*@X06(6@!_*+lC${gVm0QOP{nB z%%!m!MP3?%`6b3*Ua?a}gLU{uO8THQtqFkXv5)XN&bCk%F8w2od@lu9#i5MvIYa)o zgyR)Usl2aLquEjL+j@32b5835GL-nR*Bs3x)d&pnMN~EH6m&|sd-6QLgXve~DR{+X 
zneOZ6;^HD3#D|JtFfurs@?M9_h27Hl5`~ZE9VZ;yG|nS3WP-WvMT&q}_is@o!0!gY zej$eet~2<-xInh!_(Sby+ur@}pCmN|jXBr3?Q55^t<^h#SlM*Feh-144vtqH&h_te z4ZCP?glv!$U%6xdV-~#5VIRc;@QOzq@dIp(Mk09l6nH_Jy0u!HeBevp!~}9;2l?o4 zmjw8vLq#AiYs6k2amAap<`*~$g%aK!F$2G*O}1}1a9ZC4KobE$&XD51$&j_$)van? zg76_r_~ZcGYl{sRVJV!@4}{eJ;zmXcfVtko3|tez*!MhqgN79Fu{o%J{xdnE5QiS2KA zY&Qd}FU-Au*#K8rRS&BHW+3v83*d7q_}Q2k09D_vt*!k$86CpAFpUTB!otEt5$ETE z(76=np3h?tz7gDDk)I$Q?S7_A*&HVQ3I6}WCbvHUp{M^(viCOra~wvY03#55-kU}T zz@P)5{6-<4)jmnCU*orRoOB&-%`-arsaaIv*&aOH?;*glY(yy?AKp%u;`C0V)(jsP-$wbqlutK|c^s_X}205mUo9F0#K z3OZo-cy`dmkJrY5yv%lW&*TSXuc`@rgMU8eUi#-j>Cd=u?)b~McpOy@ST)nl+5 zD1LU4VPE=qNyY!IKFhdh17JodAnTPz3fo3no*p0IfTz_`k48U0;B%F@HTOm_!>y#@ z?Uc=E$!UN?;DGf}jf%vMy1r98vqm#2nprWIoQ>ghBRH?P?HLRALNaah zW;Ue$v)36uA9WwKAa!;?C20NKsRx0-PXiwI`w?*2^z39WVh>1FI|9hhy*Cl(>o}CJ zGq@iYCo%2$_h7h496)sAp1OZdJEJsjiyG04oT`d*P&{q*)+S zYW5^qh%1N$RQg?_JCPUc2kb1W@wl^57O=3v5F9YFoavrYQu%k!&ZB#;Y1V-q!@MHq z4w-2VZX=hcB^UW%vAx*WDONw$JPjlS<%77c3IBM>UR}Zm2HF9P@bkF2y2>HE9sfS@ zV(fYq%aCMmYpb7{;8HMn7g%x9`^$$Ge33Y*vn?rrP55Ni4xSwroxKGnoU#i|`)p;| zNsJ3^BBzh%A-qmlGc7{n(Kbd9d|*HURjCHHE2HP%xlyxmqxJGlov;`0ftt*mr@ za*6a{f&LB}K0npcbD^b34BdJN-gE~&;H5 zq|W&9)b%o*%hMExuSAXjYnDR@wqpwG*4_dnsGd5L&w0KpHwpu0ICw<4u~>q3U8`Gf zs0MmB;9%tKr_rQY=-eZ)_L;yzzOx(zg zulsQJG(;Y zJ6ca{6fXd6i;IgPji)i4Gh*UwJZ+0XI0uT6|NG?V7NMALh(kS`Zh8SbQcuho3IY70DM1srx(Df(+YB{r|*fUB7=ueVxMzA(Bkuk6hKPz9H4{`YwPOrzM&46i;TKmI#39Fq_aF3*&ip1JjX`?kT^RQd%@=C|=$!@+EC(%pYIg(PqS2pt@!JQrF&e@deU za_h$B6%`s>3^kg5sYT4h|CdsBS z6vY3$Wo5JsoGe_bVFs8uuT}p?>y^g{9xE8EfSS*1+6^Jp{uDK+Wn@r~JlP%BIv-)% zv*?TU1dUV!>G(VwbK|>3$;0hYY@U8WvcPuN{j7z<;q}vNHvi~*p#aCsCyy{Owqt5b z6bAwt4Ptph_90+GFuIQoG_u$UZu@|xiYIXvfI!*+3^f}mHc^BdFLf^Kg#D8yZN~wg z)9A5o@IQU<`kkkEC!1=l%<}it3a_aDKJ{M-{40TfCGf8V{*}PL68KjF|4QIr3H&R8 ze_CxP?fk6ZX7XHMPn7Y^WR z@4pS&HS45s(jN#eOlda_2B2K!V@>;Tb^6oBiMZpyUvd#lx$R#M>T}O+HD~S2h zcSG|bj20)1T;v=j6$=K;!!VxZIm0a~-g1+`WfvDHr`92d_f$?oUXe&XUku6UHudrIReR)!H$MSx zL~Rn+bYoKEsd*hd*>oaDLs7^jUSj%4TWezLk=BHkWN+Fq<#L2wc)J;mSr`$VH_JyL z-r=!?R(lS#@aS8N0qzIJKm%>w%axoL*|9nFj=xcCEpZSuJUI+f;aNlMKm|~hb5px1 z{py$0-argjw1U!sqCR6|pkj+;Wnp>k_xq5?x$!Akbk;JVN>0f1JCM2&Y#E6n#kCJ# z3k_Wsm+Fr!peBvZc-RRjtN1@BG{W8yX?41X1w4U#BIcr)2U?ct*(1+r~)$Ac;m+f za%PXMwi4AJ-boim3Ivx%Mfc%~L1d=MQ+X2+@DZvYnuJ)?#Q(GO4= z341-F3}38|`qU{>CGnk7OPNRMCcl`OvelUOvxhXGc{Gj{qsW13BeQ`U2>DulfxpW# zz(fqYw-~Ljgs>>S1nY4*nRn)7kQ3jM7dL=5n!kQT$h!M(>}K%l2!$Gib&#f$7NptE z%i&-5iKDQ-Dsyb>wTg>|*)zVj=uG2M5Ox{?Dphv?uY>87>yQ%M4fUB<5#~)`nn(0p zq_h!TtL&I()U(~*>DX2K{er?7Zwj{HGon~aI2glYW@$5 zX`HJxfU>lmSah`?Z@Ln)PL=s9(jtAG!&!&de7kt|9Jel};*M;@W()hPONt33gW|Rz z+{5eK}I3PB;)JkI=M?J{O`Y}C(v*!tScNWpdjH_weQwc?QxHkR)q%cWR zESHpjZxAQ!k(tWCE^#Oy*yv4YQS($z@2dN0ZGwvI^H)Zakjd4Frg}w7lZ&s#Gg#|lQ%Zx)>|L%qcaM4VaxN@ zvR%ztp_paMs#dx(QL{F_d-G~VFs}27sNBGhmg|!EvfrjApuB-fyLVvnf8z502E0S1 zJMtmsopOXMq6$AvBG|fd5%MIVwC|!b6*d!(T-=W{2=0}_%!Xnn2U+7F$kLjtLit9z zLP3=R>N7!Oqdcocmeb$J8#yxI-U`x8H1{Q49G17n1WMyDoBbWQ!LoRDrQ697_TkO$ z)}fhhT*& zW>Yr%rpV`fm2ib8dU+?}1{b&!Ci-+PP%D}~A}aUU++cKUs_eG#QE>M;`&yZs0oHrC zJ6+2*RjFb#S7NOipJs6ItJKHl1K=Q&_SIKHf186Yjwdp|J{y0zZq|nRN)$$}G>%C= zg0^CrUxQ}OkMDRx8l)0!5_+TvmPg2X4$fDovWsYD9n?Gw*Qh;8H<+StJk|5&WIZ2B zv@C$Se9abKUum5?CgF8!jT=TS^7ZcC(a5W{ToJH*8M5vThZmxAEm!b4HYd@RL}W9L z%W?5qcS(Eog06@caLVc|WT)?7BcI}FAWx||T2SuiGh584;ed(i&vl%Er8@Sv!g!J8 zaG&Gip%JT??CL^&me(|+`c6$k-MApr1jMhZ^8XXyv?gB!o#B!x64B&fkc03~4XzS7 ztIvxf^YP?tpjYVZP&pWRT`Z0J1TGKVWuisacs*k+&3grnAmZE>35h|q;v$6eZB;_Rps7h zU$_=xq~E8xDkz_WESc!tjY-OtMt|T7w9qDU4rll7c4RB)rF2zK(0R5tmrX$;veRFg zC|So*ney%Cy2N|tA-08S9qPxYPI~gkFsP+%u^l4DDwNy;K5aB^Sn=`+8(5ofBYFm& 
zy|5oY*|7c#B3|MlBJ%Q*(BD7=bEFdW?dd6UsrqsWCw&D%5MsJ6oQ~~5@KmPk8u$4i z=g!4Qsi|*P?v@ zj6Ame8I)C#*~QuJ?en4BC7g|mF{Zl1(iE~(l&tV?NdRXl zgm~+a_Q_rQ8|$FHRQec`zdVLv@h_-=tr(!9k#-63)+7q+u!&=7%M?{=k-};Kvy-D& zZ!{}>kt2kZW`Sw=_^6^xA{e&uLGk;pM3(wqmhzO;?LSbT4E2cZCJet);?ZTxeko@+7NdGA-toQ&;q%e@dEs~; z`=F;zSjFC`J!dzf+~cKeMD=P$Pp;D4^`g8tq8$+(HE`TJu%04D-s(qnJ^yp@F&$K~ zKx?qVa@$bWOs4x{mZj3K!$jjB0Hq~CTK+`*UN7=Ae(+?$Fex;mvq|1!d6CWXWUzvy zyI#RmPBK~veRoFL9leQo&B1b&YUaWj^-tK_v*FptwI7+{V|ni$91>J6fAy$(~fSArFNj>iF(QGCX8OuG}dHiS`)WRLk$;Vn5(0-YD?cJ z_S81Kd6GSh%E9W=S@eR&UU1QL_pyeJ)#lYN3}fUWsHh@a#N;P|R%m?^AH8?3*qYZz z4!-i}^KhSDbpNALPY`1aMsJ9Q+z|?Dj~)g|h|gZUOcJ`v0?`C(XwLhk%Ha8h4L*z* zJ0qF={oR81Mg*mTAL_xgAH&9HU+IaNJ^;D$nWP5oV`pScli^9Z#?W=RMVVX>J3V_Q zH20h=kvLX3Rtq+zeu*-vQy_-+LG=0HaZ6XtjF43pK{*7Q$VY#kG=tO^%aeUc;~u;1LniHXzo4u)EBt3q9PRG+TJ8bKN?%2Wc6jl`gJzDY_zwp zt)KgZ?n74h_><>#C^s4iinp$+Wy<3C@p@&r%R7Uf)E#pzN`?0LDBGn+dD97+5yW<@ z;epKGqOPs`y4p~#EP`rE`Sw{_QTj{XQ_p1?Kw+lAF+O)^E`nSVOp)wvu4JyRD@ax-@=uemjLhdpGNRztITz@7%Cpn)C2UP&Y`|%`nzW1>YcaRA zzC+>W$t~H-5Y6*pq(e3#jNQEeRYcDuFI2`2g;F@N0;47xbD5`*mR#k1N6+VDaZD^} zWgp9R@X-Y}DkJDj((eW3dymuH`e{(f$8d#q6rBH5(`Evljj)u@GQahK^II7dp>P1R znn0`abRrZ)YRW_OUGyx))z|W}R(4J(7~Kx+(1QY2L$^ z$k}kv@)0G;R#Dh#vO^L**m;7q&r~)6;YYx5Gp{UX|z*}i7 z?B-*BU1TF60SZk5aI}+`{!|I^-|We_;;MPh4_to;uBg$Fb}#<=Lqb8fiI2ReOCgLJ zfO_82*+u_ARh@pn71pU88M-Xg!dq9N&UK>`ajCr+=r2m*sC?k@KAhiE4y3wykQo$9 z&J5?F)?eg2I-CiF;qvmvI@6f4VUx7>tZ;hJ$3<2(710hN8C~mD`kHmyp=uv5OK}6# zJ7o47u%reGiN#?Q(}?XzV4?dF5Hh${EH9e~Q`R01J1}~^A7mtE^LS@d(&9;njUs=W z9kXr82Qw3qosK%OS+Oi%bH5rpm2cqJcSW4;$=prN>i)PuW%!jw%&!0DWxMg4NA;$2 zP}DYZ*b*OjX>Z}p;@HwW`1fWZsC=g_QQz6#^7zMxO6!m9L(KY)`^sF7FRZGyg+ltu z4#Mn~L)O!bdU#BNcRCj%Agg`P0@=O%rnWpytTxg2TX+<6Ob@{ZycQ45d}cR>_@Ezo zgW|un64Q-7$7Pm&GK&8Jb!Mv{sR-9;U$(7NRWr}YtiNei9={^9hFdG$jW`>-1(_PA zgk^ok6qlYw)Mr=do(^Y~V0x=u)ECShFfCPNdAlXHce3i=Jg}=S;y*pO9hSGZTB@UF z?u|M>o-b_dxD(zZTqoe7@24g2s$xHymn9G=YsF&~5uCR8c2%Jn4d@GMk8;zs1yg50fDyn`%7f${l zJdOF6Ach*Rm_&xm<+`axpKXrYjOF3jVa(f61>hrP82e0AX+nRKYUS;9?wN3_q8D%b znrTcAn++UIK|C>FJwB4nC>cGomdK!mcRN!te*|o+`0DGrw1mk&U0wVWK|eIO(|k1{ zt}>+Pq@=()mkLsvLG8I&S)cG&kh5%nL>PM|#zqk%OUT7nzA?0!rkbVmVJ|B4p`SlY ze$XR|^ya7}E)El<3D%}gC$ij@l+xqj6Yn2DMRikJ>QB-;E0(n-mR6C`iQnC2n+cgx zq6F$q6&JC2{@N7Ix82%mneRq@fx9c`U4l3jSsD(L1H4eR!T|Z{4cjiajqTG{Z1oXs zY3z(KOLQ95Oh!^K>gI(_7ve|!`ui8=dijK-ms2&yg!lybJZ;T?w$L%3&1%(WYc;go$haK?P(=^5cAoQ*T zzu4a(be=SVM`V2-u@57XHs=R#E1zH`y=ECiX0crq$7l!HAi>kq z_d~)SNApyp9CtRiXV(pQpq3vvN@~DM zCZIwF$d;^nD7!uWeYV7m;Phz{tDn}_tDyWB-0#Bn7{2KWDuvTmx@en>lJeLsbDw`x zxF5lvp#CLf#sVu}X@xWyjvw)AM&-ad*eVH_3cb2dMR1p;@> z44Ucn*gKA?4ai-!3}SoI@|vYNoMfic5u`N zIhI-x$zh|o=ta1hlf~A2%Q=D)I@e1qoN23J&MUfO_e*iJg-$&F+~?yPV{gIJh7g#} z>cnQnbLMEiIWUNzCTC@7uXS~)GDJer7-NvtYL;4>4)k&b@(_aT%rZ=8`(uoAx=OtH z;eqees3j!b4`;2BUF8MIm}ZOEPSGqn^HCh%VD?EM-s*-F0QXn1Su!f{Tle?gpTJ1~ z-OFPi;cRCc(L*6TV(MciXlOp%ZIdZsXse~dnIoukywij5C>O9Xzrxt|*0$8otXxR2 zrP;YoA@M`rVG3hVy<)Q*p_msmqKm^!sj&7_*;q5iqM*BThl~wIeL3MNw3FY0Dr~$C zGuHnGcS1aliSjhixX~8dpBM$d16I&BS3Nl3nxn2tfFejL(l3~O?G{`g={#@;B`Ia7n~0wbsaDP6nAROG_}kl@4mhX`UzO-eCco&jjU7a z#9o~H@K+|Y&i#0+EQ`BE!_Fy259y{G-xpjhU#ZtnvsJsmRx^CL6NuI!?aK_hYg-qy zBtFg^bnbEfK^*b}@%4N&GG!LOO;Sqx69IA=dl9+(+)eY#D>FA6gqHmPZ47ECiQUlF zXqEZp{X78+^BB3ps0WcGD&`v*VhgFUz-1_qZleBB*R608PV;6tdQdd_j!cmJCi;CV zxmZOD(NTr0kjFMHC>OoO$B9Evtg+`uEia5o47Y2BJ7!Xw74_RiVGOflpF-Ecdx9rX zJ;D_N?p>?}G|q}P(1xKSbQH75D~n7%>Nz0IrrFI{FJHQyl8kwp#!kvKN>FM-^mavC z_o3H1Bw4D5Q|X>RM@B43n)!{k$o^{HIs%nVYPEX;tr=q9UolG99R5nRT*MopvADeZ z?<>@`xh_l#dp{oZz?oY`YTL`o_{>#j@O}=zZDlCBwei8beID*?w-N$G+2j#YgOt7I zLb-0?CIuDb?DkV-1dKwuUaP^1SHSX#DNa^;luc*}+7f3)q-SsZ3*W+~54ESW(X0Dv 
zpkSv{u2n}BB14z$s|Hv<2)!CKC-&(hLYdM7Gf#O0uZ}u2`FN2B#;o3_3g=!T^1~j8 z&_8c$SHd;$byz&H!g_p$QGs66RE38gOm8K`k_$|h6o<#PgJ0iUPH<>X-m?z1vM}%3 ze8@N$diwBJ{8RwMPr65owkdydV*edJ6dztHtvCGw^i(Gd)!okkvTM}2sBka07{<4> z4AJK!TbZeN^THpsFSg`RyrEKqhLYIBe>w*V^RbqE%oGcXUBiDP*g}XdqR{Q&J_@e%cL3iEaGehPnDqu)&>^{}{W#W*}Z1>=w1r72mATKb#IPUudpca*f#OU|zS8dlBs3tMyWw{W<*S^lOw15*Ej zd%Pp7R_|i?^7?B2n9waw?(+zW5wx^?EH#4boER$Kzsre z@@m@M2LJih|C_Zhaurt-@IRLrr5m0e20;>{Sb;43+qT;-ZPAD|YkdWTl8&W^%u1=1 z+6sRKTP4*rm6^Oy%LXH9eF9GK*YdS(@X^4SyyYK~`YUf6P@!i$6TOzG`idHVzEsuw z^KG!=OU%-y?z)*ik(DhV?`z{FBKQP6^os5F{2xyyIS06)^BdPEFZ|X88N+TQRu#Vy zns;fnaCb%77ugwEACYD~F?*pXlP@N2dPFe>9O`&KvC{oXJN+wk3kYBW+EzSG`*`&G z8(aBu7jDPK|AHNw=WeKeIvR4BByjc<+qbVPZQdF{Y~BLlF3D@vp%1|}d3a$y#cPLw zzVh!W*>#W47q)H>LNoOp-7P0v{4K?|Tejpg!>Na^7mXD2PBpmO5ZMZ}p!uk?DhNdW zxU6XW9^wYmW{B!tjAn(bBV1E;k5CjzRUA6$|VD-+Io@y)tQN2>wlT|r5ag*2Sh0GJ} zPmG2ds1E=)!|$2nXJ?;ZDI)^HQhe^vuXYXoTg>|waeUcBiJ{^&xMY`oz2)~ETYX|k+tw!Cv3vl}dG zwl*-?*WfY=1zi49)CYmK>D6(HX5Gwis8z^43!1zw@|s(CoytFtchyZY#g~5Ts+*)K zJY4+T^gFC;MuEN8468su95iS__jQt(rHo1PS}M+&bbsqG4>){qRJMLDr6N7Z@U6RUirlE z>k#K0&_jdj2ieH#@AvLd?R_2rWImPTO3zzT@Y5A@5X{@Of| z#Z42JndnpL`zCk3E^r+Upyop|GMVn#-I(ld&P^9WJ~httRc%4)kVj1G=~<=1nTAI1 zn2ywT79y>u-8nn z1~lBLo+BgIN<%t*37pb`anA&j7}0Joh4kIUp<5V!Md68xA7H8!y>HsQ zn&5xh(U24n!?G6P@ncN)r77+>ghb8!sksMHAnksHg+KG1QY2W0X2Fl`Lm6yt&A$x- zwe|LB2`Q-F?P94ZfJQ`L?QQke*ATrP_JZ3GSx)vZ@WLKl8Z_a!9P;E- z=d=-_(R6cK&0eV?J&!t#lQGffmAwpIQ)X#%7|%WNV5VfbMk|DiBdWYVvDK4-+Yw? z^zj8ZLk;RJ!iphuWNWhG7M%8IGml~x?XBnjY;~uExrq-`L|zp?oaQHP8l1_LeInD6 zT{o=RRg5n14^KQYUX>V~%7B~6jTr)0SDId42W<9wM4!^{ZN(58kk-e9dxB|s%RN|m ze>O@F+;dtRd1mMpfy3!y<|Xa3Us zw3%GWTLmVoAf!*i2D`1c8!@u97(!GOLRG0;aNZ1i*hyxF6^*ONb79HgwW@n#5V+II1o7|TNe2fMt=De<7}_Ho7bu`hzdF`A)qde11z-SW!r1W_a2|mU*NXmiJlf# zb^FhB&iZxBpoA>D9A=76>g^ptcjliI1u8iCd(vt^3s<(jP<*7)JUR_L)x8AqJJWYC zOTaK0sy(CvO`xN=hRntpQDv%^%5dGFAhVG3q9sM5{@NhnACQ?GxGL9zp&+> zdDeTLOQLlOk7ecEIkjmWkMesGjlR+pFn{ce{Q|d?gp$ERlu&}GYSpwG z5PnHrn=X$*T*dnqtQMEQ&19AiCWGQz^}Hqd?NfJ<&aN+v96mHKK=tvLobYO_VKgN_0=&|dbzgrP3H$nI)!&tG=@}f*O4!)gbJ5V z*b^!go_N-15(Q&f<1b2rO0;T*-X-KDB^^|PjdEjkxGHc)J}*UEvPw>V z6yKy!X?r|lq(IY#$$FP5;IpzhA3~(&U9|W|8PBRu<@&8nG%1o-=Me}2C(-yA zh}=1?V+g*s$eKl2AK{xV)N14QB9snd6>h>s+V@TK)pf0tyR#|&d4cyUMe>E7nU6Pj zj#GOz?$i|7bDPN)?yG*@-d7`XFaQ3Kd_3Nvb-Pg(4`ZG@g4K=-mnxd%gtX1?n020j zgCagxyaLj|Jrud2g4& zx*MAjZMzOog{%^61o@3lK|=sMlUH?m=o+^De16quU3>B>rs#DJz>)*idZzaE z?(IQCEA)}?eA-r=Y}jU0Kty{{xX0jlXZ?y1u@km;%L7|^!Q1hm`tB^iCx#2Y(kI8p zM;R~~IzB3(!v>Wu*e(e#rYoejN0BebN{9X;j)Z|gi16N{%P#u2AwscFKxH(gM{c>b zxaOFk*F%~;sRox9`16917f)ma=`HrSl1#&J@RFVl%L^nRq3V6TEJw^Ii#KRG@=O*e zt-n*tJTn2%V@Ht=NNjJkha@Z3wFG8anf4ekBcEO%3-+r_X>{jOk6OK%H&*R97y&*# zoD4ZCDm19i3V#Tye?Sy%Ye+0>e$H$vf2=x&L=DTk3MTiC-#`V84>$BSPWe2yn%Hj) z0VO~MgcAB#G1eH>pm0tpP8-f7EnRVXRWuS(B-Gh zjvYp)61N*cv7K1#+Cn%!_z)P5?#}h2q_u(Ep8cq28N2f<_`A;vkaNOTGq;%EgU|x8 zOS}&Xyj44k5C4$KVwg0b$}Xn>s3@ZAhBF5jJF3yv$~e`QC%R6(;8l_S@Mb z2ONXYJ@ZdyOuJ_<3M4XzMF$f=o4_e{%rq@-zl-82mEIUL;;nEtfs_}*>lKaXiDg7- zSuS}dGV($6v4<^m^%DT3BRl&czMYcJv5(Ur%kv?$Sx#Su6$a#VEKP}?0mo~D4E}Dd z9vg=gQ>~Hqy%TVEM(iEV&eSKABSrWEP{jz4|GeXxT@IuF^r z>v>E&Pi(w(`ivGNsT@0X$o;v!SAXW~;OW5Zz8=xr6g6tmYJilib*s6vvfK`WVI5E#rF#3{{Wg7+&UDO{{xC(e&mr#cMk@EPf)QomX(^nv# z2WtGP_xcK}f%0Z+*^V1?M^kOps-zlE%@&lBO@Y)hP$*3r^#HLzH~KItQxr$_m^j_|uilPwG-mPBfr-)=_3?;dTEg4NC*IUtgc$-+4jx)oG7!(AY<|SYn zbCzCz^s3n&6Z;uG6eAWj~JcEeyCuY~Uk*IKTeL@I0nA;*r$;Y4JeoXjz09;Eq+UwNeSwSir zbAi~S-Hlx8ssi{@0L*a`DH?w=q&M74TYepNRvy`UR;%$x7qaN;2~MW zX~NUduX*}dQKaBw&&=&DqY5F;Ti0>LK~)C0UCxtRwj1|@FAEGc^1PuwPHH$?Dvr2J z@3$ZEv4FvAS*P9%*<*%7iMrs54A-~Q!*42Fa0*n+g16||#Lw#K 
z{2w1W=Y`A+LXcnExY8r1wwCQB((T`sThq`Rd(QW+rnJfE#py}f!SoCx8#XzH=q>tR z-8++1XJRXe(S;o)aOL;;KB*nu-Fka(iW*fE8gTByl{>|E2(H+1hP0uT?!3hR{F^WK zJdwpOQ5bIuukZ5#l^^d>#FHgbj)mx6`%$kq>-EB0UoW9Gc2Bv6)Sk$xY3#KR?)Jy? zHhQz?UyJ3hw#zhH6-dl3G)CI%P|XDPYF`Q+Snxkft?apG_i=Sr&FuyevB<}mjQ+sJ z!WY-%Jx@CO%WgRcAHNCYdi!wX%Xp!z+aR1X}IfJgsTMoq_Usnf1`w?Mz0kG-|y%>Hpr>P z&02CvZ~DKnW|rEc%)7oRT|E^=zvQp|9*eaZTARE4@Jg1`(!s^M3t|l8zqDr_dF~5b zJwDdW#8n6a##^BGmF&gqvELgR?3!O*5xPq^pY5 zRY7~7{Jp5Yu7BxWio0)jVzp#;Chl~+O>=*xEl)ZIW)LZvHd_)m-q$H$K5XlkgkWdhi&Y`<&2nmN4B_xJU z5s;2yXoirIR6ywghH^weN;+f&ejhyQd7kst=X&4&K)qzK_lkSn>t1_eWcD`pS)D&l5lT06qkc$%?$Qb(E(KpZ)jwwXx zdB9h6WNWf`iZPX+@91DsBx(qV8|mLlAY@6a$~y@Bb+$~UWh@H)T@QS;!*beoi~Un~}OEQF@|VXpHTCrkiquOuKykgIH&RK1Ci%|FUU`D3{9)wfHROxHa9W?W+mh0l1azko5!cn2B`35gw zF>~yQNbIPvm>3pp`lbAC7m6`~BMp2*!xn7do%=LezRXu5Na)+)mt&TtTOe0hZ^9~% z^%xdubi654(hv%v-wdajxsqqEcOhNO;WfPMgcd`nKR5>P0L-iCpnN|m@ExrT+*!NYeLWySm)(3|G~p2HTS++3n$e| z)qkDK%I;de(@$@+IJPR5S((JCIw6SbYruik%mO3wq1V}l+@{(XPe`O_!(3nW3k;p5 z?Azd=86K6)k6ti~A3HJjUUJirhhB)CI@H7(wPYlp=pR1E7m{tr+drCgVJOocvdb{e zYd=d@n-W1FBsPWZfAaq)7iPr-g$fcq%H3IvR{i`EGLa%Kc1K;Ee6an_*3L0K4;Yz2 zk0~b!JTh4bCAgH|PT35$$7Og=KcS@@fvqW08nacIcC}o(7tE3-l^3CNBB(XD@j*5Z z62EbY;?Xh4z{H;7#z)O$J_^wd3!A7+3aL6Z7mHk-Ew^t?>0_ZfsJfMa_%BEE8{UVD zwk<|04(PUynQaGo{SWP-I-L(#p@)PMUPm=L0|x`^dco0*>BY;j6T?#Eo~3@XNzEHo zSuA~Nqf+@2LVhp${Kj1w4&RXwxkgMdi%>`H_hkk<%JErFS|bS52D{;}2`S zUZz$&OM-W$x8zb6#z!h5v61OcBffMc&=MbQtGQYn`Q(?WZ1qq&`^p@COSq^{hs!#$M-tMY?9F76=rrJLhGuWiR zb`X7%HG<-tvKe7`|EVxhPR}lj5EQ+7@C+iX$A^2<_#L^VEaCzD=hw5Jwn+a3VgR(` z3W7N#l)kPe(o2M=cgK6TTRYe&H2ZnOjI&F*vT=gl9%;dXNsyZm5m)^|e+Pk&_XwT) zv9Ix*#XsH)z7D#2*;BtiQR3}nf}t~57RcsAi&>YQXL$s#QRv#5;fOSN-XJU^A#6+HG@a**yM*l1uq6fGZ+V?db zcnl2T8$I+uJ}^G`N-TCo|0Ta;((~F-m-ZcrVZ|0!;zuuhr`y;$@F@xLVoGB$i`LtnTVW{HZ_#!c;< zzSX**tIk4mXGqJ2u#1KIw~%$6CEB64`|{{A`@e&lKD{$fXx;V(UCjOna)7^Ry#rFh zJUVJ>w-fifouy+CCu)v`f5~zSgSYKGQ^;X1W>Z=@teo>+4+%n#DFxY6iG$HnxdPRs zZ?$5}`@cGrZNyMhnm_BzeaBcHPAVC;)w~PuqGoLRbd|w9g{o?q0^gGeq2!u{y&e^`CTaSbyzHEYtr=-ql52B$ z*Cc(fG&7l}GlT7>nvAQ*g)+mwud`oL7AGZm9Iq2ehKeK1K_TR@^3DwWv)_yFJpc>q z#zZHkN&UI_1_s}{ZHCX-a9$JR!CTaFJX78P6<3Y#Q4RedBPL#SIKqw$&qOa{a=$R* z!Z|dwp>3S9u+H!ZY6km_a7v+XDel{yRY-{-mWWl+tdZMICGa<3(2 zyPWv%?3}O_TEq z0mSJZ9Co9qj-rW8iI|fEL4{4^h)X)G8z-#|r}S{hgA>a?I+C(Ah|>|?9yWg0W)KRa z2w&n;X(BCBA34bT#H8v^rFfwR!%am*(>q^rtr)Y4_kS zN=Fc838MzRWSpJqgO*U`q|oT((cm$eM2)Fp`;x|YPiRqG#pSoncWE7`@#x;&$)ykqWyYbteKhQEJO1%wHf{k;Mk7c}BK&`}7#9rVS_kxs z=I$QYb{j8^&thk{>Yzs@V@NTTn zsDU1eC;j+YE;0ZjGE_GviuxZvE5g`xBXYz`xLSNOPiKs3=R?==T|Of_3*R>!(D6Qx z`xe^b7Jjx+4Grc|B&f`938~sRA4b5}LnGA*uL~C53V$dL!u>#8trSc6c%urfX4m$L8hR2h9!H%F9s#94+~J zHHyi_4fP^ggU|FPon@!khNy`l>r8xM-yMeJuJi5QM3UW1oitI}wrm<%*a!6*u_;}U!a!R% z<|9IDq+I{q3fgua(9c&sv?7wT*7@N}DZng1XaPfp_5Ndr4fI49Q0|pDoxukzEW{26 z8&KnW?d{YOGY6F;!}eKl2?w7_?ke_LMs#xr!tzbBmxAoJ0B@|ehS%~lq?)2{MAZC5 zq%+zw_pV>Xd5`%rE!TVZGaFe|YfIsX(msjaA4M_H3k+~F#*z}Z82(4QU;3;y^(adt zL}ze(8}1Tbnv=B_Rx7&6Zup1&ZHMC4?VlI%@i)4G} zt3k?+Y7M{0W@U|#2t~02i~^2j09ol$weN3kghW@I@d@BBS=_41mE}Oxw$UV2>iR@$ zX|YfwDMEU4KiYc`%DKY|^~>D32Q_hIl$b&drZ}Ow%d|8(4OYri)=Fv5v%}SFEkcO7 z|Gu1*{?p6a9-hPe4&p;+d6pW=sd7K>fmYV$RJT8WSk4{7H0|dwWlcHd@_$&YbZg?)vv)aN7h(6Mf07ogPeoDh@zO z4#sa}>gsq<7#XWqJ7y%88*4o2gj5zKdSC<1s1R{kXzNaElIwdyJ@c_c&Umn1PmKvY zL2k^g$Bld?xvKd}@SH#!LMR|gCD)^v70J4Hb7$vuE(WLJ+`P!?#nT%=E?eDrvpB$0 zE<;ei6JE&#^1uDVzfE5QeXOoabOP<4OQ~t>rCjhg7w^SBuVq4ZGYEe^r|p4+^ZV3M z@GS*oqk6H)RZr*?hrs%U;jt!)aGlW&>KYAd311^;LPJ<$reKsz4qog+|0ifH6XXHPvo06UmB_(&U$88{?#DSV_EzQy&oTj>@8ZbrErKi~Z4 zoRG4P_mm#J)lqD9*+8wG)9!%kOC=@iTGl&x>m{7-9rChbVA)qS8y%mU{T7d!a%e+O 
zgY5m>894Ru45xwlbvTHCzwM0mF&&NKidrbXVAPG87I3Ov{E#GX)kiri*ZZ}|h$Xau zE7U%`xb($9my(R}-`-Vz8z>B1eTzLte}Dfw7rQNaE{;r_qMgIyVB zS1gT-XW-JRA%2G{^2V;rW3f7H^T|P~kXtpy)O(^y+!Q+UH4g8UiCqs0jr^QW%ID(( zJc;va*4-6mXEO&l!SaNW?=L%&&&z39C8LVi^BG&jRh1#66bq!7I!hOcXWwtMRqQl# z^fOQ;NBl8!v^J7I(4&&3)ZIX_Tm%6L)uZg?V1;Dk3{Dx~F@gm`@=!(^3w&5)#PJdZ zzhxGIN!M0eHxtU($_)M-g57;5X$KK}F6{wlq2)dtsQ;3(V^A~cZ*C%G;^n0Qwy{Au zZraLD-GaW0D|9lT9ZO%a&^Kayx68j8{lldHFYVs8)XNpy!UP_eHsiHFxV-3PNqzl{I={KJk zzIPeKsocmVn`RgHVJk|GW^vz9QfLPIkQ?N|UkP>XWPn|fH5(x<@1t$jU;o6ET;~Cd z6Mk;?|w=%Lo?3-uR3eDCYDh>AS;D5 zvhyI$7vSaL3#x1sEXE0Sy3ZBYX^+Ti>3`l8qWWQ0In}8@b``uA|1)*7;e;=i%QGeJ z<}CnU`p^}F^f7HW!6~UM_{$QTz+@uZc#E8%2fkP>TMN;;iiaF^u|gD6Z;rTZm5 zBc(gq_eZtl&6y{kdHjtwC=mby+j;n@-uVam_)R~2OSbhab-^QfsNm#&kg7X<;vuGF zT3$*)Pd;oZRmG@})aHw(IYfX)Fcj*slAFJ#u}x&)R7E1!46UVPA_WOsqL1>BxEInE zwp_?|8x8LfgyV|`OO62wVS41d^Q7YCU{zvhL@);}54*yn1d$!`dh)-GDNP5AbFk@8 z{c~+ip~ahfs`1F7E@_TiAsyhWmT7>}HRx~GxA6J#C*0Q*Dk6w*8zSOFPs0JMihdw~ zkY0(`Hz>1>yHr9E9{uDBqTCFr*4Pu$|7J>6u=<7aQZTc;xEtGb=XhS*>rwK(RhH3- zHR{v#*GofD-4Gp0b$>CO5&DNgrWGKda;*m7)&1x~{%>{;>}0+@1o<__*_4 zjoqSJm8KlU`qbu0d+J$@7Rc}em5}*7twA>D7~iCc7fucu_F06p3-{n#vIZEg+Fd4) zec$llQ0fpL0YD_)&H@-qW}l7{wg(<|_Yo^O4NhK4LM4%Q(Dtcf@;JYylArLo21t9Z zYpWuBfA+g%lWSJ<3|-(CS=Ibg4vD;6DcBVYZE#Kjgr)*2u^%wmMvZdzv}I04gz3?e zmBnCl9K#Y!9GfId5`uL43r#pjF*|71QO-7rX0GDPE87Q)xgH^#Xtrr-J8@_hd=WX+ zrH*Kb6%XHHwl9gaO~V&-(xV^t206l>J^juMp4N&Nsz88z)fx|JoxG4cyeH}OGm!sX zNLmd^-h4aZY(MGejjQAj4CV8ktgGSQw5MtI|14j$>jkn^dQjjv`N{T+PKKS1dDdBj zeCCP`uh|7NAT77AtVr0K!FRKoqRH7FN^)ti)^>~`Ex5OD(ECT zzx*L7w$5LDMpVr%I92rK)3Zy&{@qFK;Y5$v(E(eAciRz;O1cK*#I7*gxht5A8whQ7 zD+qrade72#`e`bye3B67{A)POtggZwk&9hnSbf9Y@J;BBT-3M-)hGz;DvpvYJu^qA zn49=q&rzC|Hm<=!S*jMPP+Y>sMEvjvysD$a>+I~@daDaNg%C4*^HHTQ?*;_4$jZu= zexMfS|ATkLu*5vqa(ro28BQ};A<=mr7;-reCov3dl*`@_DT0leA&Pm53R@Uz%VndO z({&tQUj~_vKNx|=5%3LFh~@R9C2*8h?Bao(6FAhXZdR!_yfi#c?c}{e`CiypFkeVz z-mZTsC2C&@!jOu`_N)M+8HYY(k%!tT*+Ts^Jo2ip&`01$;O?JtD|P00R#sLVU~rRp z_hReVX+Eu~TI}0aYn7%YBh%K1`Q=Lg;yw(W&N$of4sjChj}PVRyh)+cEb&VY=DSHw zzg9EQRiq?CXd-(hS&hD~T&kz4hpM7Z%TQbZk#GwM{8E|=Q8$wvH7g$$1q*B?mYf!+ z>|~tE;=4^*)}du;l@-fJ`rI{L)wSUK(yUYl2n#8)G{W|kI@Iw&VFgoOoUe0n+uynL z!`3BoO3EHq3+yS(h4m-g`6J9t>BZBgmJZu2S~^kdYyPCl!c{&9_E?R_?1#*MNG*?U z+3Wu#`Vk-9d}B5rj5L?E)~nU#K-PHP1Y*{#>K#}k2h920P5ki5YydBNjeGviRz+@x zNQxQ)sXA`a!yxmObh?fspP;tWWAQK*B3!v|@qLhVddM(Vpt`T;w)_0SkAXlnbP6oZ z(U|;l8z=(mYSaY8TKeR@zOR&h!UJ&Ep@F=e6C280L zAP1POsqyx*Wj&o}WEeTXaK?L^n+)O6+OdG&@Zi{|Uh$L?)OLFSoy!(H{?~09h6Cwe zQGJf@&n7j-Kdup0-{-;Q&o(bR*T%@cf($*F6gn|Nz2kI<+?!klDeB#|Fu&3FNOa-r z3}+>pRRG~wnq&{P*3TjjhIS@u#467&tTf+v=_=7lJ$b%+$Q#L$SAgv4%FvVt8mmZ$ z1Pa;eY{6n$Z5fa0@Dy!~w@d}T*{=-M$^x)}b@zoOLjDLX@80y~5v0;jtzN4{-=rL; zVvIXq@=pF_iDYWy674kjdfnidHO|{7fiARG=xx>B&0wpnCm{h-($w>C^g=FswP;aw zp3cYynypzm=;A|l1}bF)QV4l=O}@!RxacgD-Rl{%Pr_B1Y7afFeW!2-3kxled36dk zJLa^XHJ+sI9B{YEkGtglZ+ZIB*Hgyc#c-HM5>FCN4oTB6H(?i89Sznm+HI8Zi*T$R zS;GSAdQq8u^luE3$rFP5DRd?pg87-$*=JFs$cy*jmELyS1nm-u?=D%gVdE(8{z}feABCK75qs7l%$knf>g$){TL{ekJtj z!XK|2Bx_pSCS{G4bX18_ESN2~r$IUT-4oq-P&cWGdS{SfI8=V)ZvM3{ElA`ua0FM$ zV52x4w3i^)N(f-^rRcVZZ!+y}Bl4vEx*;U_Me&+99E>(QUozm(v*%oth{eYw@oH_|La} z+<{ptMeEh?G%3+3`b6%!2r|S=?CdnL9*)d8@FR}!(W1ZeCSK9QB7ANx7j;AE3L4#E zqBCCUD}TZ26NNn7AkVf>K{O#W6_~=^n83~PCi+8ANngW)^FT(nM-$v5&wxvGCjVOTSCxfo5?@qFq~w3y^W1C3;<@UM6ugb2VLCl*C@VXpRuzX9ux1NdId1UI`-I~w4ZGVGo_`sh z=TXO;x6b(B^Ss4r5SLWOGS-SITWO6UpDi7~h)NRBWj({j4+@Ky05hoPge%E7U!LL@fF*yj!T+- zO0BFdlH;ma_!0}@7t4K}qKRj$p7Tf;c?6aBTkwe&9kmFcw1(sPyiguh7q1g=%7WJ| z_aMzLeV;FUFMMr4$u7NpJ5=sZBZj&*ye4h&_Q~`cU3p{8mK1=hGcEpoj>bbK2XF4n 
zphpL&ojwlGRgd7^6=~s|*LXAxBQbtOCN*N!cf))N+uP`zRTTfay^zp)9Z|XYX(ota zI)31Tt-i6J#a^A4vJzvxh$_H%ar6`j^f)pa|3sFFqyVzCG8EzX19o`aI!_KreWsDu z-3P4#BQ>3aa(vR_Usq*4k;OcrHUguf@r7^7)xd{${*X&r8!3TAEQ@OgyX*Rp1*~0$$7-%3x1g)8P;%!{{akuOD#nK zB@O|8|MicVxkRd~JfX6OfF0-QXMUq`FRxZ?^L9w|^z4N|_@FT&*Okk$ciyI>IcPte ziWHIYAuCcUEtP{>`+7U4Z321%_0VoIG*1lV0<+CA=ctbD^%{Tl{^I560*}f&NF9SK zheM!?Uson_if_c(g*}{IsQaROwYOR}wTZawXVVpkxDJG_J)0ZRpIXE~H#|Gsm80q! zQEJA^P?g3tdF$p|gdP}wO_R!k0)2f;=jvAO2r;S+CIix$sPqY zhFW;TTHEFF?*I>s;B|dF{t9%j%~hOP1;jTJywfB_85^#b63?a-K+rUuRpOU_8u39N zfKzqaTQ9EuA*1MC#tU;@(Q(xkb*S1N7#Tu0;VXzz>t`Mu?(Ar@)klTPB9aD8p{iSX zV8&_=8VU{hni>&3YCM`_WTW%;9@{a#UvsdaSnF$D7d-jiL*My%`6ggImh`?BCClDd z#T8v6{q+sl3scEo{QQ)!egNgFle{oR@^?^dD*@F^%*WL5)*pi8HY>htNxFQ1fvv~K z-cLdN)`6Ez7{&M;GTt3girHeJO6I45op#_YqnR>KiPX&I}2hsn_8Ou$2+)E%*e zUD+g5G>Ql$H;_@-#VZ{&x?c^^q^IcFTgc1YH%RVsPpgrscZDcX$ea_T>|`Wzy7K*` z0==#QtUiO-ja#IDIJp!uyqS-dC8`Q~dQa62h}?sEX=nj=E47I0cS;N6vFq4dx|4{@ zvuUH&rq6kUupHk}3kkh#uM=8K(`;<9a*Qdcy>G^)A9y-DuPNBUY~}(8ptwmWZ?i)w zYFEcFIHcdBW}k3Il7zi$U9APFsHG|$= z>KAfXqjPme4`!Nk87AWgO5-&HtP8%(-1*e87Om+omYrG0rwU-HqO3N-pB(jHfg(47 zQ;QNOxkd1YQ>$CRo5AER!s;T^O#uF~V2yUmc&YCm$T@P-6W8TCltk8UmKjtdBj5jR z=v(%eEbriv@=iROG^ib426rMbjC!7kGQREBwJLrdPY(}hp^rv~zjORmN6yGfbtj}6 zTHrokAfVUk&~nZq{MF#y{VTBk)V5M_($YWLrjN()eq)*X2s*Tx+@|L@ZSWF?BV#m@ zlfK)2mX?`v=8XsJLv|M)4Jo2fz2&_W4duv24-EqbD6U{N1z=QBE4g`z?rW#UJ-t#J zvlE#c0~Ues92?D46x-}%W=u=Qi$q#|O~z4Wgat4sQF=G34$*zaJR8yQALqvg*hnNn ze(U0&&d(8#p)ZBV(48KX$k!;<<&r$&!+TChtk899)TJlT69m1y;O|XrK#tkbW>pjn z`AgO~HkfZ2oLkX{GGNPbg2MWvjjbj>2G?hXs&eU{afUz4=FBtFHk*T!vC}*5E700~ z-CH89Q5ry7@l4wX!qc=ntI-@z|Cm^R>+~2UfpqgH6R85<@~|gf04v_)D!~R4X3p&8 zzG9&>J7GpSVd1V0t`DtGI_W$XBuH`EQ|(m1kz1L0!6f_fs35(0;WycnSfMA^i*mb) zZO3;|te4NtR0oUXH-A!@sGV)jKwZHf zp1kEgwYo2!Fxbc(B$#B(4fPjpTv<~EpcR3rZ$~3*u%F$Rd(lhKRH1{w*1U_=`+2)> zN`snf+<1p+SYGvnFjka`Wz+Xsc$*p+($eN|us^drJ}|;e9FYIab}k;Lu*%5!{V$OE z-|>8*@y;1yLwAM>FJZ1%Q6am7{_G;u#ZHlgx)4H#M<2neSLW(7%W}1g9f>j#Mr8Ug zU}^EsM>W^fN*V)eWj2p|$rC~pXt5NMq;-yXYkur&5~)K7u>6`xs#Y>&`ho5-n~$OM zsK6Ht$=EJuPBA;&lEb;N^ZcVW{Sy1qi#MtT;5Ao><9`kzA`7Fzx1vW}r;|NgV&n zDv(+$!aTP`0~pa=Fg*aPTn_dq%OA8F74Q{tn)IFhk~BryXe4j;wLaB^E(#0bD^5~0 zhFCOIhHyu4xJeRZ-u=LPTK7jWsp#5AH-Wb7e!hUhUKR*kuC7iuSoys0%On->!jO3} zn7KL@`p+qG1E;jT;^kWNFMED`dz%q|NSpnd_~`8G$yAuf<=Lc4=-MlAj2Tk7@#rnB z3DKiRd*%GFoBMZmABU(iZu;WPOssJ$-)UoJXujeyCGC-!~}|w28GP7qy!l?5w#*khW2KJ zI8UGM=m)?oHmpXf$cS^EB80lu>gI@9=DVu9-R=Vx=qy-R>nQmHebc$18fh==1C27& z4UY~?F3@`M--RPlWXxzYpRYJhN zJmNT&kPltu1VR}XahLU7EWCl4-A8}#&)MZns|K%bp5qlOQP*IGl)bs+WXQNzWSewJ zf&0t&YH%wZ>1T$m0a=acRTbOXU?EvnoAZJ_;Y|7(q=T{`hpCENrO@7N4wM4zFZ^eE z{gh67pc?OU?_U}4?{5QpIX|o^+(bg<&AIyI4ZPXrRs&r!GUm`x5em3uTwQ&~(? 
zX!tmh>YAFN3k$D+zF#MX;P2b>-!ubplkP#Koi%?EnPGJu>1MjmC|o#Y}XvJ_<5{^M`OKQLCB*JHT?ew9kWYHf296^h$D`c#5-0P2s z!*OiutJIbslwZC0x0D4g<8+ zB~+sRN$HP4n#PZEyL@Ki9;k?xhy~yF;^FXq+uA-}uCIiJR=Q2?Nx$xfhK7!6^%~1k zB(Wk@(Y#pa<>1l}>Nb+Vy#FIjH`#rvY^<{S`rFO=;PYZr#P|Eh1Ahd-IQULb_dMs7 zvZ|{7l*hU!--fw1*SbRYj*sg%u<4gDoa@2?I#e3Z{j=Ti{_+>^oPh0-i}o=J!pLP> z2pjq9S$j^Ku&!`yCKOKYyN+XjA6giJZ*wj#Z+q**8Ncjob$S&8F84)~107=IEA;Sm z!$SDxoEr%+R78m}zEA^ys)RO}VVNIo@5AJDQWyLuzrj{rsePSoczj8roH05n=ePAb z9fOg|^F01>1>rbH=R$P^pyOa2B03T8_7Yv=yKmyc58h8l_HMJ_ZplVrIPuK^x9SZtBgLoa%m_XR8}M!p09P@hd!y82nTqD^=@q@ zTvkhhXe>KvWn_sE2`T~Qx0tDgVtOI3jgZP+hHGNRKyrFxr{QYkC3(-~{&5JvX#kf7 zajIa#y6MBi!`}G78`C8q=bfvwP7?3lJ?m0t>Y40)ONJDQ@3pnNyBYG3vy-&rN{&On znFv_S4V4uSmQjkc>iF|;rBGB*kUaDN2qIp5qIRbgXqvuzk>W0*Yba z@yzey{4Yb5LRslU*~>Ey2OV9S>kF7*LnVzlTk&113B&K8zGo_9c3?&#o8&-lipY&o zBwr#VT;@pe{NJDH1cz{s%<1&zm?)r{DOz^Ojp&R==v6~-pMZg0^K}1UY*x_y@zWW5 zwXDRFcaXydRoW(Z`enh4%7jpQyOp^{nH!+8zp@yY*?Y~truzXa=L06#Z@h<8EbGjB zIL?+BAFP8heJNzZEeuUr#}S)6X(hJCDlNEKE-H(@BcFv4EnAFyGgF#N4fU4$*jR8O z&N9wQ&bV~q>qq2%b>&-$WRqn4_Rm|sR!IPq^(fMWksp8+g*V>~?>-V+07>sgmg`to zajJoh97Pjlw($}_G|e=vP*gd2C`nbqT3>kN;-IR7Iv3bR>2tLUaCAq}ucMWPGq7niW357OE>LpE2WkHZ6humNKzE6dDKUgWzkgA+O2ALDzm zqlnNg&&{_0V&sbjn~QU*xzPl3(9tBWi<6oNxC=;221 z*lRBeR8S>UdVc8pG8D2;edo{$m3r_D{aecZwlw^|vjB9^v6-Jg6}^2Sb^E14y9aDr z_B*j5qUR=Zt#J+naUDL}+=gSlx`udSMDCUIwDopv5Bty97naOP2kae@+BeD&S=N1| zebIsbRvdtTe_?)lhMt(OZCYVMr`kGI((_XEVa|qzm(uWBWj40qvR#YVjQ0X3vSzrH zYQ-uf7ohXjG;|YH`Dva}GNF3~6Pf8=w6OTvZ?)cJq-|h?K2i8tv^B_m^dK)7_HJHPO)SEeJyl?gA9FEAl%Mg%#J?x|wELdTve z({DrH%i~dIhB;K&E{4FnwAi=OltXJ($c>ZLv(~v6iZY(Sti9hrg`J%YzFVh_gOdx4 zy3EQy@%-_nX_3F;Aw3X+1l<8E_2Q}me;puZ>9${dteW3T5_a8zf;aYJ4`wQBPuVhs z9N~sHeTHkMG}mN6sp5Ga`tik|5zQzDvZ#~_$JKFlBr>gVF2gRX36aX$w8eppv zGKC}0E?PR0pTMTp-?q93h1(`jCI$hB)yxYzHxN6hb@EHs1p1#`(q7X9 zO8zjkR>4n6R}e7Mp^vHEkS z!D&=r*663(0)?s?bYc_O`p!F3p<}u%2z5ftq$12JUjFylvjl4sF7tykT6TGi(oUJ* zA~gcZN2$+#Bad`{v`}O>6!q}&DL3*qKbSF;M-E5uEAOa!lgUi5MgMr zf08lmz=0-|lPyIWM;}Xw(Jfkwv$D^(&gV)P&I{lJ;cnyu52k)ZjT;i<0~QX{Y@2IF z51Ges<#Mr@>B*F3_dX>4%B0VUqd$s^V-{Jc1WMjQ&JMJXs8edg-uk!vY4*bH3GSZ( z&kYam`RYw!?;JOSXHWQNvG1jeXi^~W-p3U)q7o9cn+wu)XhQ9{^zDf3izST*Z@^;H zmEo9z5@+e-MzinrtArgm^cqiK=(e%{D(*z(Y3HA^-?kCi+6~lqap1Nom%{X+3dUkF znba-zIPz)erJAM2CTbR)tY%nBw`|m^4KTch-H|a7d#HZs#cy|*OgFKxyD_iNYFD9Z z>gbF1D{-;_n-KP)3?BOKGG~K;{nH7T9`b5yIQKzLfr&(HY=xES2qDmJQ{hJRc;c*tK4mpvc^Uzr0fql3 z3K}WLRvLc`>S0b*NRc`Lo9uf@D7PH>u+nZ}3_eRSREBlu!UU3=c9pbY^hl1Pi_ovM z`Ja75M2))m41u1?=K6eL^#eE=eZMYO$!>{*(m7yU}&KQ9U&E&)lg=2`(np|U> zt)1zTtmx$>bd}qYI!lTXFDHrE#%r(6SQZp+=`0bu+x}f;4Po*p7X)N6O@g59 z;$P%qP4wXx?O3nn_h!m5sW>W5H}v2IdLOnbo3zQ$VakqNpX^7W`5-e6B~e78zSJJR z_-fBZb}_z?xFc7cB^+p%bqe`X`^Bbqb$AK4zW6*nqRwARhpZVClqibL6fr@v-+gcQ z)D5}C=aN2MF+UiGVzK2?8>!zc(?*XE%H4y(ul$z1tmeR*$T3T+hG^=Ht+Ezwb};4q zugl4di*Ss7T8M!iZI>&t2wFUM0Tz8HcMbE?6b@;C=`wp@{{EfmW$~0XfD*1H%2Rg% zQddH;>%&?Wqxvs_H~oQY^fW&(Zv49P{#A54AwMAVbrwo&Gh(s{2RQ)1{$e0n+IKH@ zJX^93X}C_0Y?fLzlIlH`G7^=^A5KG(7|ihwele2dxw#hr(x|M1>^B;0^{MaOV{)+@ z4TABn5>G0-QheV=4wO#8`(ph)c;hJgpX;Ui2-By<(oHQuw}y4`e19q)ykCd(M-^;b zlSF>i$Wp`UR3IV@+{omk>AEz)T)mN-G2TpDC`bZo6S~e^o0u%JQyHGJbO4E?OX_XE zg^105BeP2!>DfSRJCZ4cORDdNpTcDr2k>m!2cl%@tZZ$2Oyk8P z0Q@@dJkERMnv7rL%8N$l@8wtS7Sd zl;QwS4HIT~_T$YIBZ11x@sQFD&*0YLLs^az?a+(O9ByX%v6LQo<4v6=imcHP!5)>z z=^~Z2Si872mC(kZkl@GHDikyw+(E@tZ7idnYo#ei+A*2Rv~3;Krb`x zr`c9*+PTgIg6Z`3aZK*e>^SoF8NWKcuHgPcRPhY*B>)X|trJs$+d(I;u_Y zeW_JXZy^BxV+upbhPFlIo+eq%jojEkvbPEuw8Jh(#b{h(EedE8BJbU;VEjO?N8>^* zhAid(%al%znGqa|@wrC3y1Kf#vgDSp?G!B@M;pFK@i1{#PEH8-06vYLkOKu|%Q)`n 
z>sNUL18Nn~BmO0q;m2Ba>e|}i00W|psFV@Ym5OK^8c0r3R@M5oskamHhw~_M@pWZ$qKn z&yzen-b3sag%G28wyPb^pi-X<_gSY9H%SfrV9zKi24a~-r_#2c>}}U4Hjd)6E!=68 z7W|mZW03^-qPcd2&rpVL3PdW{458NE#E7Qm+IRYaw<3DdlMqh7oCcC-{PXpD;D4o- zZ{-0;N-vS*CFZcqk4PUYbZXH=Jr1i6LYX2zU&X|+FfuZx@maPkGN|RG{{n3M#&?pf$Ffe}t8?YNyY?WaaN6zQTJe2lIvGA*&n{#PF54ra!?Z zjMVT#o3*XQXWg9rkJoh4z#DKH2&B;$#PP7Kc2EV4w`M$<4FV>awxwAm>ugh36j*Hn z4XKK>9~RRfzZ*zI;=Y}{&7DNi#qpDGpR*NcD&$KVcE86`)@*HU_vq@NejU-#%3Eh; zkF4^jhsp5_`@MDPC4WU6ys6sRXmQ?qT^R~j;|&S!r7cti9flvEgPe`>9*Ec6Dhd?= z+>9yLcVzw=7UQfq#06r{Ll#1;wV#Wj6ALp!nS20NP*Qs%&c8?(U{*wQhxc1h3e3FM zk42O?0JXVp1E$S^mRhI)J%||JmC@Iy;%r<)rYQbkHCe!8^U300zej^?&=`oDhb7uW z#73>mxImSzg*^NlgDAFV?e2tjQv8cK_WjY(W~-|`=q^`r4;&e6{<3G#un{q6%cmj zW?Sn%uTT+)vkwn-6N#Z}!}L(g93WIx=V=9+0i`XFHK;4lBuIioTm(f#`0d3FP(Be^ z_mcUV3`v;`G+U(o=n5tgTWx-jJxPtF**XcH7JSRJb#+zytV?}>qHdD;(fjd%cfCU& z`m{AfQr*F2<{UA?Km+=3knA@I0jC3Qwze6Lz0ebEKE@Wj$4) zxHXeQ2EH8!OHOz-7rN_1%Z+afmqya~eToJ&+n}GirY0dGPyYP+YurHm#Ysvt3eavf zE_kkNrX-D2Tm55O`CrGhX3vyxSRiG}zVu#xudL>wf}1&&(Cd*5Rot3DC%3TmvUj4? zA~qwDntu43s$)nl7QkA%$Ws>XI0|Z*2UXuwcEDGC6+s?9R@wxx{TcJZRCU&Udg^op zcKx9@@{;4Epgf8bDIn4TY13#;G{H^tK0&5|tji)(5m`8LbFxLY5LneL`(d0`XhV6# zF^GN5(7%M(!7Jt|*^4*Ksj$MXPxYG++LV}0+HmsmhW4;yZJM7O_lX?%7Y@Gj5twuH zK(8L5#|`qEh%EC1xE8#7WBz9Ol#;+94DTDNIwglxoKTTxL=w`!@ER!;2tIf_oFWpH z99Y6Dr>RMZ9DHx(dgU@Mgx|#+sM)k@EwDfB^QUm?$Amv5Eog3DfRsYTREGbR=Oi&C zP+OoI-CqiZNgdMSITkJl5H95(5!p|aEXfu_Uxs`>#L#=~KIN1~n%hC3vhwY)xx=iL z@1rW>=UO2Kn1LX3me7~Fyj~Zd`Rgf_O=(3IQ0ayCjEp~`TaKP>oa-1&r|>A*WNqN! z&@L~9ZM~<`@r3>IO?0p>seT_gQY#iN2bFvq?zn>NH8f-#23oP^vtx+S(ZUbjd`6oQ zlkeX65I|;Va}-Tj@TFf?kct!8k)^`&&8B1cpIw#E)SGs!qd9; zlFFUP-#MvuyLa>psk{F>3!z2cBmftOQ#^>n?wJB8O-PdNdt$qtfG z-q$(K3y5K@h#7gtR#q66+3qA`l6QVt{ulGw^>*pR{sZBr3_Oox|Jn8h1bK~z#>(Hj zU>ZfnO9BK(Qf9bI2Jn2WDPwv8kJ7x#(c#I3E;icE=)M^Kn6djTD?!dnK!M~kQ*C!1 z41RLRXjg`)mj3ww)yG;gu>)B@w7y5aN$r@nnRjmZM*Gs!s_%36Q}2e6cCt1RsqCB2 ze_xV=O0$AGaE^$9k~7%zQLt5a-!P0^tVTS%lq}xG?DX`M{6rX>Tf-uhwclUi60;aL zZC9k>$KRiLY$cCCW&*Ck8rL4Wr|`t@^X{5RsN`|pck2&35!8e+F%&Kznj4@9(bVvn zo~lzvSz!ytTJ0(3Wj#n+*LXikvj5~vO*B>S=I5I0+)Ud;{mB(YL-_6~-5z#ll%R{> zBS`MZhe_t)VUo4hFQ3z1aQlC9-Bb1m;Q(bD5#!+IjA? zF8S$RXy_{q^e>6pmmCC{hO*7Zl1mty@4!x|ZdBs0oROuy4lL-sZskt6^c}JVDRBav zU0mMQ@X@uYTtxChV#BCCnz*TmYq?kA@{V#v#NE)mO$f0E^5JW#}|8sTrh2O)89&$F` z14J>@oUJEM!k**{fgRGyk}obi4(Dspv9Yne_VOdE1Ah4&EE$JU!<#l2Y^(Yzbm}a$ z{?*D)x!2#GL6#g$x`yKrOkr!rpoEJjD%A#I&lZY+GjDQP?5^gM5fKJ!I5BOirKnY? 
zr;3lKjSKm$GWzME zuf@>UrDkmCr>0C%K>*@ykuC7^nV8Z;-eu_f0xQ#s4OZ5|TDTuPuOiIx7}?9#LfaFY zS$=%8Ce8Y~hLD>MHnWaX2N9Vx*?~UcaR9ZQBC*eKBg3@W;xGy|nAjju>e;$ujY*iI zRX8v6*cwJ4`O2FYeu31j)baYBWN!-oRf7N#6eU1m@^>ALwKW`CN5( zH<~YbZNd=5Q*O&K%7a#(l}O_++%>GKVy7s-8#LaIj$C6B*LHq@^sN`@GYlayfsA>n zWsd#{6?vuN=W+MIjx7%r2_2!`u&z-6@s@h`>qv@D6yz}?dJ%RnFnalDYotWMlpHT7 zRbQw8yVlR@F3sI45^~Ei3O%{%ix%=ms7@?=4uAk!iFuk~m5Y8VfJ29#BzH=kfbk z3pDY`ZhyqCu~roYjU`oT-oSXs zexd~6Ja_}O{m;hVJ$ccH#hCx(dBjuUH^8QDBI2ynT_RQORGh#0ssF*uf(aGZk_jEi zW}ygv63y*8O@%4M+?>&&?Yowq|M$;n62C^xUxJYOxdKlAA6s7;5aqhHEsY>CFu>3; zNJxXiAkxEt(x6C+v`7vhA>9lNB`qBiibyFTl0yj+f^;|1h;)3N z!wX}IK0MH{ns4A0g=!uq6?UFos#546IzAy)d{H#h{}S41@wwV zQkk%wW$}(vMxshvbz56;b(};5u*J?-Z{Q)PHFq~VyqKY<>8WV2Z=|tr_#wXG^UtCR zo59zRoJYJC5>UJM^-tu&$!d1zLVMI60E*=SS!b!nqH1nm0e+^d&rjX#%IqJH8~16Z ze13gP6|v~pyWD)(^taz&g9ZLn7Yo4iYTi11)Z&4ecIVo*%wYc;lK-Z5s zqw?MSl3+K>+CW`Qrf}h;_P?4~Hr3{5g7sm>A?LR_&6iMK3JfXHZ5oWq%a}mEv=;eP z9%^n%*iZHk4!1iG-Jx}M+zU;G(@I(pSL0Dhm%_x#DH-YWEn;X zto5zch_hW4gVEOeVk8BvSsT2|Vz)4`A`5{+>X$jJ>z~AH+e*EI7Vv|2!AZ?VCKi(p zOt-)H%I*#5v|)0Ni|u~JsPG_9G+M1L(i&MhbZ)N2wbYg{)zc#f0HA00DJv88@`EBW z2?qJ+vn-4W3;P$L2`QQqWQ%WGTyf=PW=?!!Z|F7;wuysvwNgksmES}}5(8710~#jM z$6##;j6jHjK|01}ITzyTic-7B{50Xn`PAZ6VDaZ`{?(R7wW8SalL^I{i&n>4LO^H# zEQr;`rv>%)yWc+#ZmJ8u#{KyYEGu!!_h@lpiNjwfR)s?6zNY~H(f zq*&|`4KNB8VT)+7!m050gn#?S%>~yB2*=k)lcar~@c{(8%>yYpB%yN4>)-lNDZ=oM z4}GT32^chHNZ7jihU6Z(OMB#+uv}UGtChM>sZtd892RlVIIU7sqr)TuXD7@yRa7L- z@?g(|3aJ=w+q5+O zh_US8S6b}WG?o@xy$xJ7R3|vsWw`@&!vEI)0uK!SpKtRgFavNsQ*d<^D{I()aNg5& zP8##5c7Fd23gnpT35MdI00#PI%n|?sSDi4o_?Y@{Fy>QEoI~=$)AiE92NKaHo7bit zJnFw_R#`anDZ6^0?;S_?WHGr@>6C&NRFJ0Z8Eo&rlIpeaz7!&0tQz+!j-PZ2DHH(u zolbZ&F@#57MS@9gA#^o z!%gJA`n}~O;;a{fE>{Ob2CFx+T2>}hz#Ci&6=9&)oFg)dN9gI~K-;1^<)Oy&et&|- zaktx|%L6EYdbwbi1z+;_e^3+6<;i#J#D`oPt?WS-2?Kav9Ov`0@VQ>slhQ(?&fHh) zJ5_?rERp)9QQ_J##SzWKxS&Xpmus&FRdd650S!_rItTjQAz$lt*aX*dfmz=jNBR!! zd{N^Exvz*2)n?|J)=Q5UOW)G)?5=7TfL1OoTzl_~{-I+1_cki9L=F4C7M$R&v=Fgk zYML$_?5FE#05>Lato=<+US5!pg4%=4bUvYto;ovZ2`i za`a^C<;MHN@C6{Bfl!^cA;hJVMJ3Sl@N$3IF{@{f1}$Qd2OL~v|lGSpqGy{vKh zT>a?@82IKN8dGq`Dnwr!m1cUcUPI=ousJupw#j?B_s)yM`8!pF>807!GaB@=$W3m!P(7P=xcCtc(Oo&6SiPmQQA!D*Z@a9gb%ueXEM)d8T@ z`UoxD&#j0?Fkpe~Dso6`_swJ}H2fQX8mFX(Cq!}fiq=Fv3vU5=obn!iobPYWEFWV? 
zpakiVZqt2ua4MF#Bcdi9T%-F;AwCkxAQHn-sNV8xRy91@t3k6q6h9c;PovwcAkrCO zJ|)we=y*4x`Gk&r^@6p!f~Vxd8|1sTs?t!~H(~`BY;hGTw_#{OY+kX*Wo#YlX)0gZ zIC%xLSz>Nh9}9BJ4bg4xI000eXvO!QpRAyV6t{u9_YuUyMbe&#uatur5YaB_;#Kmh zg&?Ytdk0k&00$3I3(D(3WXakS>76jK`hXw(ouH4s5+w@QOPczOn%Z{%2q?G5fNNh@ zAg>H|%J`H1Lc^!W23{atxu2}U83bs9DQ2%OkKRKtweG27G>eUU)<4shmfceS-toM^ zMax%|G2jW|RuFa&xZw>3qm^x+KD?&v9Q(MTZd0q&qI1qmE|ob{J~9c)yf4K_W`egW_YjU$^ScjTY8Kld!^FUVKgX2dW+(-R%-TINO@ z?Um>?*q$;26>~{fZ_R9qPYn|RQs&I^p15la2@L+=D84UqlCnFP)CPa6_6hK0-JQKm z>thizkVnilkj9@WUuo)H6YYK=>MQNmH1;%gHxt=n}7k_o)^qB%&5weB>)t}z= z?(2H_@qC+W0JK#x$o(5H`Ve8dRniI2w*gunP+kYKA=th9{=Mke1)uFmonHTLLp*@}Xx8E@ET%tAqj4w=x24p9*$6TM(4(4!c63el*E2|>} zXgdTNvKMZNDX~uN&la@nKY!k9B^8>>^4@L%cqw)r^ITge*%zBONFb0VR@21$Z|~+D z9Hc^~l{lz|?0DkttWyAT%5z=>v~EHGdQu_HTIF2Lhkgf8N_UrCffaVp4`-5o>FQ{} zH4zIAI0s$FWWQw6M);=|gQ0T7{!3i=eKM?1I{el2*Pg`R)r!4rx^%T9JYBD>xR%>f zrqD;^?{rWC<_Car;p1<>{R>42U-z9;1dAIJXxfgiS#%n>5)ldSCJC|C*%39+JQ8=g zxf&*$W6|;KF|h84y~Ft7C(1qp3uUmvNgJj_YlT8 z<+&meaoI~OVJ)h+V#r5)f6JGqZT2eW#TpdOhP+E1BzAy0}9tmY>Qu11Vg77Lac*_<2)wqrnWrXg#gg5#MHZ zSO~J_1e_#)|Ndv2M9=EKrg`fsVkzt++Wl)YpzzzjobA_%(7I9ZovAq9>|q2b zlc3VoFPqiC*zbeH-mMGM^BdVHMPgwF&eeKGN`2pvuC^yeEDc+Fm~7v#h%)?HR}mM- zTBHkZyy6|;01RB!X|k(VM0oVwi-eZtYArpNi5kj)@S06XAdJ*$1lwu3zFN+)UX9!h zgw&u1&dLfsgX6jGyXU3fMSBJ-Q1THSNdm2bjq4?JrV%db}Ut0}+O0U*_+y7{cnVrv?INJSWZcjsD%f;soN= z>m50_=bN8WD=~b7H3Kt6@GDW-eCot&`5Fx@6ehPD+UiqYgpK=NoTlTBN`&*XC%&GmLP3+0!EWB9Q~F?*yGF;)ZQ68ozN!=r zB8vryo^X2Dw>Qg9o98P6y-y6&$h-}+GN0nr=xtR64;OB`+Ki&>YDD%%_wf6_^Lw$3 z^M_F9uOt$^MAJWhSY=A6O?B&PN{1#V)AG%9NeGF$xKNUZzkk+7EMo5C1wz3$!O@7V z`VxyDZy(ie*KYuZ;u24R2+%{%DwDtWMH<9`4_T$Qp9D@`;7iS}700j5@oZ+<-y^Zm z0fpEsD9{_N5-iB5?9LgZo9T|7PMeX?x&6?`l)-L^t&N1m?vVKq=_G(;OjbW#k7JKe*!^9r{T4 zd2S*1_qRUEvyd=d@EO zBOb~OSjn%|gEHOndt4(c%00=Unw1Jg<8DJdn_#p18(gxVzR`U)Ip)iGwqB7}_x1_hIb%Grxn8BxePFmQ@_ z2wevXdOXuZ*)V4=6Sm&i;w|bBeYxPrIU;K_nmJ`WvZy~8D%pv6FMQc<+yj4ete*IF znN{xT4|?#2r3UwJ;C#bZjH81NOIDDIFzKuv3G`Ep@o0oZZzF^m_ADu2DqS}4T00|x zGY_~#D%Lt{F~5;Nl^!4b_M#h&eusEmowCQxoBA8w{U9v|OhgLWCx~067`ksk`_&~< z>w!%)z1O2ulhhZPBArXeLDa(q|73~+ZAt>W^8->~)AQWk9y9XZr~=qUlp1xW(03PP zs(~ZBul*K*j`pXpDJd!GRaan1Qe(yM-z$~*B1J33|FQO&gg0Z-3y7-Bqt!wj8QjJo zQ)=PSrml7A3(+!!KP1q;A{0{F_QTheUl0{(XH=jwQ)mv(g#vY7N-DqX&yNe{ApaNf zc>!M(t%#C61LqwLN#=zr+#z9P@_T&Lm!T9WPj4Z&Mu~A)Pqr>#IGlumoL-Ox?R;jC zd~B*5uYA4K*F#RfFBiJ(uy!T~~7 z#7!5cQ@9?}!taUHk!+Pf7qeSGncL zILx@C?C(Q)fszNzUQ}OL3k>+9ZDju$5VnhE)hyko$P zuW$F^6?;aInB*-JXmO`?ku-3|bd%)59O(y5FNFvQV_W%MLkJqImb=alW}BOyl|*wO z*JaG`Z?s*Vytu1k|8bBv>y;?@a&c?-$G3`#Td4Tf=p)_Qy)%i$ZzK|EEoeOMVf7o3 zX6oukkV9<*Tufg?Dj?5TCO83UIWfKx?jJV%PO8*fY4`j{l-%l0e#x2(##QIDsr-`7 z5nEkS*?q7RL_SQC>pUD7V#UZA@nlcnW@*UeEaP3@im#MsI`-t`VVu;)2p zo9t=h#Z_31H6bwCpmtf$2QjQ~P_FCitpgrc-y$V^hU|c0N2tx12#nHy6lq%w>ek{k z&ofc8jQWbY`P6U5_)h`cefgyYV({-HM5ZRm(_ z4`5~h-6S<5&%7;|i4{BN7UKgg2jXF!ID1ixBQEk{nM6b-qHZaits z5l%qP?5^Ex*DkhA_^(#Dc~YU-?JL|Jr|rW;om2xgB&Q8g8O8jJ$;6HcCQ7|D=sb9j zm(_4?AA&+XFhg2s^WCODUm1;?shPA7jMbnIy*mEj>}h|Ho6Wv2>ml;?O}OKI`6ZvV zBYSWpjUI$OR5I8+X?Vs2a(P(;vte$ynhQ-C`!vA=L>mlVA=eu~bvrDWY`<0(%6nHF z-qDz-4PGv%4e&0Qcz@dJQ@HFKQYDTe)fzNJ*;o*JrtoI6zR$O1~p9JWu z-d}P#W!o&lC^eVwquFVu(L;R6=E!bi5{Y~I`uCz78QJl_;4GcFf-K=AZ|)|PX%hWY z)ekIZUjSwv4KKpp%Kukm3lL!#0zJikL(uZ~-CDOyZSu7!ow^aZkQPJ`H7w7&bYZ|x z7SrHWZwqN54B8aFkvMqfgR`jp_Wfjse$n&K*i(?fk6-G%*y+K(*ZND`oYT3ZuIuWZ zkaZ?EISN8PtNOE92JU}pf$_Ugm0@*_JFAdY1kivIqhHXy1;ZP@tELW#_t(DKQC?2R z8j9fFVHN+0sbx$prq)G6I1Yyw-R?4kI33}|K6gr`voH3v9GChAb(#6e|<9Q9T zEa5ZtzCFlah?R*Mn5r8AaT@h#U*4e;3w0zys8O9{?}!nu)QIDDNK}!BUhO-0Lf%pN z9RA{QCeHJ@GIg4Sa2zmp+IRt2%%ru(h3g6)cy3Pcj_fJ1x)ar2QDj($i 
z?JeT7>d!Xq8HbxHMp?hWP;j`TXukNefMWR9l`>w#jme)B%`mT2o|FNr$1O&8)cY8r zvH>;7@d)91^Dmz|9(!*9RyHAhDzPud0Ts~==t4vhHAF@DiA|6>oIYt@=UCeBA+j+7 zAj}Do(3&&kYkhTvq<7qn3?gtFgB>W!%4bdp{GF7HMF4!T_tdBMt=`{8B1pPiB(s^@ zH)k=Xz;DOY)S*FODO)OLrnHNDQhVesveS1u_&2R{y~C*AzDWnr1;5DKwVU9qHqeBs zt&^$9O4hN!HMiYqF3CP1(y$& zEO zU1axc`Sw*PD4;IoI1WvxWc!mFAf}(6%t@|*3`!HfGRmcJ8lqoI(_{U-h0~gvus*%KF}=n|4w&{ zcu6}$I}F&mAshVG+EK?KG zZwqCsH3~y*)EdyUJ2eTYbg$2ms0xwa>2_(|{OP9@rahR+KQkrJn_}D3M7F0;4W14a z3g;#9MNO)$&C-pqNu=y(O;wF^kOsYGK&DgOh;-XtNWfIP^8;X|>jSd5wxWhIdI-gG zHs?I4y4*b2FVj0IFWN6uYCh^ez7CyoUL!=ps4x*i&&*qFo|Cl6HY5z*^00v-%0A!97^f7?E^D;V_PjR(2 zU5SinTR=5Lr!&!#wiT_UHM5vp%)Dtec!plg%O`b`dZeVSI@I(b?2R*cI z(dx$Zp87@>2)3oRba7&8MZl?iLcUE-!QSXj7IwL~aVX>Jc#H>BTq62N3v@BTxkk3@ z#SX`67~}dGhXm|%xlQ-VEm2=zuhrZz59rvQ0!l0%q182V63Ln8 zB6(y`N?Qkq_CP$Mfq{X5?QO@0j~h$#>M%RAz8rd$}3 zy<44;r|VcKlA64^Y2R(h|MbIrNtVW5nE&vW&FM?ZlRXMb-(49JE(Kz-TkSA6PrkRM z-tbw|OWsgM#a&vDv1ED_lH|y#-LKR$OcG*3@^1UBxiC{F= zI6QFxvu-AOjA{49|ORDU$vgLmoSg9KS1FL%{D>e!~bqL^XDoLa{;93;Y%Nl`Vlg6a&v$P zdpzFn7nUjIkyleQTJH#MfXh)H`5zyNT~?*R8Su zku}V~AAm@AXvp4}C^l+X{R~vrc>3JfrTZojE70jep9%fz8R^t$hUdb6&L7GU3~0AS z*Er+D+8HYd*~Z1>!6ZVA866rGKdOKGQ%KENS3t{CXD%LX*1lWm+G@3MO6kLr<^y>V zIkF3RHZVTRXIn1@zo)dw4O?|q1>3fhCx1K0%L`!ccdHiGU<|IAjvNM@YSQ7mulV9z zr^DxYAs5DKrQU>tS7)YtKEm5{C)r=xNu&W%tvF%9)kxE)Ad>J6`Fcm-B461kfx`zO zN9f`0$HM2cvj~?ny$!$kvkJ}Trk`$}`^u&pyOQpX`1B+D!LdG{L|kU2RU9WRLuqrP z1eK3F($^oxUrmUtFab{K12sPhimSJLQ!x;gWW$iVzsE$f%{ygVoL=CK#8^+8J)%DW3b?+=ElSMW}l$U0#^+d;p9kM(pqJ zJGT(f&0+J`0Af>oc7F@Gz|9=|!~iNS+y`>knFMgDLOJ%hgMq;8AIP*A2C&_j?*~_3 z3FY0F9Wr9=kYp#4xQ_t{2A|jIxMX}5w9|KJ=|SQ3&(YDEN4IcpI@?o0ZcIX!YOB}i zaltlSx4~9RH^~S9M=tH9L!3|RMOun)t7Z}0H9aDZuOF-Ny3B>$6zqNH(&{ic&%yK{ zl4%WYMnTph`G$~w=PUrV9I1bsDEko9T8QX z=<%-RdchtY=kb>x(snV(49eB(%6jgVvB`x6L+xK_?1vmUv?`l+A1wlE0RtF>wP?ul zD%~h9r>o6K5BI8?;WgVJ0q>)|6+}qZ3AI8~{s!T+0;f4fv*yMqZ`3>wPsp-M)y}n0 z_6^Yai7z;Dn30aO|5a7|Z&d%U*?=3IkQR<0a*+F2i+pXNhS;89qv|Yy4KLCD`Yy<( z0abln#x2`+XX)aU=UMmhhSc{IQFFnCr~y`h^DJ$mx(0jYb!HN0v4HjyiD8sIm0{YH z3f|*wd{>q~5clD+D<0v_nJK(W5e}`JJx^f z>b#2#k|PpPiSP{Y@xEH`{VZS#HK=uFU+K?w!Z33D)c^RL#(ir- zDcn%Pv2K|b%R|%$YA36Yj`u*D*?Whi|9=E|0+GZRB8b=b``QS=@aUyv@uy&N#{QOV z6Qx1KuQ4L2djSdNWI{>|b87|uw@Rm6A>l3rL>tyF2~L}fFFs(;@frH zYo^R>rKa(PX$zKPA=eW6{aOS9XbhmW69 zwwQb#{2gV=))x*rC#7RQitLFuP{eui)4ND|Aa?EWJ*462j8geVR316^^z8y66bH^I z^*8mnv-ph)D5zG`AO!6(i+^O`X7r%F3G>rk=Qc7H%V{Xxw=jE`-l|EYb>Yj8JL!Z~Jy{wZzSRKi!+gBkt14Kl7?k zY3}m^!*N+dG6{nO^tm2m4H-%enOzSa&pzSER1^*O@YWGfUeLzp9fm}by;)($KG`)m ze1=by(dK<+xh!sBxQi~x{X4zRCvzTR-<4M~D%T4{ZhpqKNfuO)%)E0h_`!g+M$F-_ z((~8B>=eMEab&6ynyz>;e_h7m0LVB6o1RUK4vdXOq>0%q08x_sGP{8kY*Ojs<)!NB z;`1}!JM9#HyLgv4{T<96Q-Rp9DHs;C-mqrzoWnVvy}Dq1@)9N4*8BVdK)a)lzU=rG zmz9ZM+}JDOlkCK9FtJ*Fze}74Wm{p4jybBOD^0Al_70_gRYti0Xsjp|YRU5GL-_P1 z{$`)-fUU(Z90z5itzI~pg?%-AYggQ32vBQ;`pjs<5LZsrxWE!)mCYqhTZl>^CZd)#@b#7Onqt)YX z>c4N_Iq-q?jy4Ymr;48->p)rrw`aNcylq$GnW}T5@bgfO~NOd>XTW2 zmGpakO12{dpl@nFz#pQX|S|YJ%(;TS&Pd1KkgWRS8iw|zqtA> zfZ1^0Y7(#0o9S$n84s^2#=smR*37F@!RxnYh6B3l@7xo_x%w#a+BH1^6{T|Fc;L6y zqf)(dXLRnVf7#btaB+vZCRexfR7s*sJ4fMGP*2sYiREm2RsdXgA9nbdQcd*zT@H(f2&EIwtZ|)k#mg zC=}s+sV5Ln$flSB6If@d*<=0D=Bh$Ra5;rP!iCj8>@MD#fl_De*(IKyCY^avCh3NB zP2T3_(3tOA3fpgXG|2l)zaJqusZ44nV0N1uDiiw?_rR&1LbV}|w5n?MsJad}=pj2u zq=I=8ouFd*n=isctNc|xMOxIF;?!NA6jQPzCPd>B-6ox7!P)(9TPkA_!j?a(89&Xk zK!p34Tew-oR}~Y5IGgOEJTZ412pi~39Wgg!j62*4LL=H@WI=RBgp-4G^m3ZDtUX40 z`DGu*`NPf4LKe*G9XVGk;14#%2t$lDX_;?7P>T}lp*53lQA@O7M0(>9tt6cf1&z?n#BL#k$X6mDCHbFfaaC zIAU#SYK;Z4Jl*4iB3Ws6c_p;wy^Ccbe-Uyd@B0)a`sx7ZfQxGl6Ys#86p&##0n4vVDzJif{*av|VMUvUoP 
zBs{&##aq@wjk7a^#?0qFQO$4Rubds?W-*FK1m6S?!r`g%6?mca(ts}?zjoL`)^b%` zdWGp`+(wO7h6B4SO|(vyjhSO8b=6n?+IBbSO??&odeCIuH+bII7iq9W$Tu0HrXvF{ z0>rbq(kumCXygLVu!`0uI|V+BBs zd?2pD;PSJ+k;^Uor3gG8Vd9i;5W~syEKLr@mVY~+vIyL#5H=aBV9N5`<<_@&s!*L| z?GXNsy#H0WfPQE`tdWQ-v>CAMa9U7#;`{qiVsi4rpfYRw_cy9Xbq)oZ-^a&MHd$VT ztfvFvXtD5KrCQ|A5+)3!2-Cd?q%y`!YWn(0_emcaFTQ2Zix$3;lW67+4r*Z&6&3B} z>|+eBc1bD*r%m}p+F}wk7c(?H`2JHsx!G}$1C&U_v)DQ>soI2Kiw`(o_;^O<#J$l) zw#(fXk7YV5VG%1U&6yxOers9d{tTxO#@(CwfH>w<5qn95QTT8b5jkWiF(1cy@hdBk zkyiTC3}Lt%d(T1!2sI8V=*ayT9>eg(?L*E}#Rnk_@1-}NQuVA7`nYw+nfZa9S?boT zAh%c#`U?v4cqRiIdM_qPYK;*U2qafKE#3tW@h5gDV26H7-1YlOi+n|!Ody_9Z{b>N zt)tcf8wX*Pkie5UvW_EhXuSm=!3&wYy8MN;>~>ds-^I>^fgocu;X?S7IKPa&aG{)h z%sd{Uc&HFZt8?v9XF8{ldt&xN7KlLs=-N89*c6am08>C4c;W9q>Cy*t7zZB6YopqC z8k`IOLfz5QYvJV=3}xqy0%`)R{qov9sE2riY#C}&569aS_TF{<`9SH z7oUkc7et@&q<*~k&-5_2#P!s*TC`=br}hT%SGGrZ`C7?sj{dFA73*bzDwAVqe)3^S z7}a{8>fPtg<6jN&->p6GTm3b55gc#fQQ-Zj;t)|&ilOx%=(Bq#Y%D~uXB&3E0NHgN zN;9KjxpMg_G4$Sd?T24*DbZ)*U9Z+H7a3QoA`n_UTyVGczu_%Wm z4`rN!z3&QqZBDcwp7!`5-CdrQISmkraIzu>7jfTQcdq$3;W%p=4e|$NrPY!T0s{fDYCI+Wn0dPi7nojqOd5{R~#XV%u?g!)lD85 za*5hp`!yjDrBo^>jzO;97XoW!6ggtxX0d3#@^?j`mND3%O^Fmzt9adAxUq9NICv+Y z!&IhlMVK)9h4ylq{=?gywGIWZH`M;eLz38>ivO@ygo|ZbQXALw;^*VN0YwUSO5!=J z$D>5kiF$9Q)KN}bJHg`bQBw&AcrANSwA6Mj-j45%H^p=BIYQg38ek&Bj{e$OS^?O2 zdLT>Bb1smm93Xl?=OTXHzr~$>jcPR7wNH`=t8?swE#9C`ae3-rU-nmfEU=oKJL9ZJ&R?KMg-Fo9Ug+*o*mpe(A^qViox z5SoKTta!ZnHi_u-NC7~@{3buFG%7;zkc`f?mMPgl(=>`$wzeA+u3~xrI8+uoI2{c) zrC3lWYq%aM2?14Ve&M)CN(SUjC)n;KUw(zju%U&B2&1I9AXDm7&*bbpU2A;ISr*By zXJ&G01)Ex6ee75Ur8EkAEO32kAbhZONQ1mEjOKUY^SPNd5eljIWY*P zy9m16GvL|Rt!+7`=0&OO!KQt#dd}4TS0j+ve2o2mGEfH*99&bNf!y6~qkw+pF2Ev$ z=fEf}ZQYk4`7<8C9sx~M&%o$tIH^zrB{m$vdEzzZ+E8 z4kr-GNr{eF91dZTKtaav+QDIbm&ifTjfyq=Mg z{8f36sNr^H9M!y%XgurE`J=rk5 z1Q`l2?b(F;yTx%7p?F`zQ$+-dzQVI2h7f^5o$#zB&r0ny16&{$=t(#u{kG-9uh&T* zIsLEu8N+~`Oz%{4KkmfJSp5KHDr94V(WwE14er)9GLpI#A~EBM_t@kG&BWv+5^%a@ z6zKpG8*{ZM$!8%q2%~x6d3u~I?VUvh@Md-(8=+CsscrHw83L>cKc2p?oU7J1 zO`bmK(B+35q{InJB$kM*6ac&K>!RmZ1iJR;6We#3`T}KYCl6cY1|L{jLm$i|jPV&9=}Obk>bU`m=2{s^KhRp;S3oy*+SBr&&HrvRa z8Qk0Nf?CHCOUNA6^)u_#vR818ns_i98X7u}7}%)m*NVVk5x@FnSpoHk1TK)@XHf6P zE3=2vgsjFaESOh)0dinv&&K z4`hD|Vrn?VKGDv=UCMX09(UxbM(9JWmtC_0Q}Zbm zW7gMPvtudS8Ih_)_!R9LHGF{yp>k7&{QOy9Avn)Kgo|YI+}%60m05F_xy5*axdjeN zqj5yc!7K-%hos%!wf88;n`G!W;b`^>D0z>ynqtt9-=OkKBcwIPo^B8kC@UFZG8e0b zVc(*TyHM7lou(9s6e|V1sFaH5bK1f_TL_(}OSH<-AN;O<~TAttjG&d>ni7 zZG=%GQi|Gb$XPpwT_Kv^rto?{ughv~a}r+7a%*dETJb?5+fcVB^M6Cu|H6%$MdWfS zE5G0C^}Asl`J|PT8UQjm@uUqtpN0D$6HzB|EJWIU_(E%X`aMP1q64jsv>z+SNilJ! 
zuTL}b8C0T6D^3Iq2CarawMj%Sb^vPg$UAl;sVd2D-`>LJ$HRc60tzZi`|(Nw3Yp_S zhM}(o;VSz8ot-j@R)Aq<*-Yr2^J01YI?lgO+6)MH##7qGoG*^ftB+WMzl(ZHG@#C( z@|PAJS1UTll^i$API%1_9TBX}Q)n3{E~%A1kt2qkCUtNx%u+z+U0XHeo`c_alqA~e zsZ73qUsZx-0sF=yRD$7HjUTuc^)#P@8(*lMZsE0dJT=0~J)#V!yXusumuK}fLbJk* zuZ=c#?Ptctm)+{yt?J5Tny&c<;v8gw(4;Kx(T;Oob2QJQCP$JkWl6DUk6GE!PZ2NU zmonBu^}pbp15E49r-2S*GQfiYoq7g1Jn+I_{cbDqc`R;yS~QoDp#%+K?3RI?xWY1R zjsDXU{u^Fy*5=A-jn2Pu%)I#|lyQgcoO*qD_VTbMTFV^RzydZY^o^IlKUe^qVWR20 zrLT>bryGcJb_F!n@WW#YNd$x+hW+VOIUZm|?AYXUV7I1U>zs=`iK8gSgK8CQEyl{m zNbNLG=W2B^c>v}tiwjXgGUbzPhAF=gX|bhm{?pDdS>U${RF9kDeLm|i50;vXIAZ7| zOPR(C7Z9K?nXA|^OE)1iVKVoQ$i5c!CaUI&kM0wsEs#{9q?6TlHjV!*sOos z%wEFQ_1nn0?rpQ^HUohHEs3a?%%#v)ExUPi^YIa9PGuZJ(cdZkqv%rAWFAN z7G3nZcXU^OhfiY^9d=2p;jP@;ZfhYdW6313#DZ~vii|Y_0yWflTbO+^gkUs2H>-Z zz(5@8s(qj>VqN+aFSG5@bjFPJeCdFvN03Gk!+}QW=YeD9Ut?&eSfIYpX6{|+h+J*j zIR1O%58=|aX*~|Zw$7HnZiE6FaH(6?%vJi-uRzOf&{qU&yUA+m*u&pbZ=Yfz`o-9j@oQn5dBaD7 z3WxMmIVBO<l;c5=#_RuZ9{G<37d?>QT>GE1c)y5A$AOg6;2{Sy{K-u}GF2(AK*T zQl?>&JoUPV^NL$7ftXqBZqPKsZ}9d&{9I@~Pw5EnG1^lsr)_`0LRBt1A9ksmw8BFQdmX0Zd!tD$4X6 z_h{OKRj*V(utc|r0o3{Grs|yY<`uMH%8re)8FkxoU|mbKu*u#T@eZM_4Z3>jwaa=doK(WGVGlvwEeZ#`>Ru_$-b6dO5nP& zr6o%2kug>7=;W+|IK@L$ddG}h zYjw)+ZXOBNv3`~7w?yx8G^<1ke)nY@By6XXk zO6z@PvZK7AIsQD)zKz9~?Z>{hQPg2_L6IeaYituiCJ;0s2pw8}r#xNZ7NeB+#-8=4 zMm3=nmroPDAgdC+7tj7`x`p+UhJ2X>D#JR>^*lF0AEH{Ga0W*LQT{ z1MXkL`&!MZ1Ziw;4&Xr*U97OJ1E;1+dzF?}&7eMe^2uQmJJqN0B*Ct(9oO~ylK<&T z_4JY*UApROU!1Qg%eDenQ5Op+e*E~+^K`u^S_`wqVrwRd-g&#?2`d+>LY#Z(@gZz?1p3*i(h@zHj5V78ENF(%Gk%`WWeu;zTp&GtQxk;?2# z0*55(&iqt-64~lOL3-F~^_$&{#>5zgADx)@8%hn*_)9h7$vMPbg;G(ylA#MA=zwRg z?aiO3uZ7|B9UAra5zZD3E;=FWb7#F~fIp|=$h)c@wA^CGzV5gAJCaFVxszf44bA>_ z8x{ir+$JXc@j9#)fo%Sxgmd!ymA>-)>LmM$e#d+EOc>B$fF56MR`tJz(147WR#q1H zOiv>P)2%kRqSNoZ)G*}bp2l8IP6uVD>c)M{ogwmtTM z^jm)g-1xmR#iPF}sS{qHr>s44_}SIXPzIk$DGUsZoSr)@NVOPyw#}1babjn-Y--6> zWfa25a(P9-&1yqL1WvPZTFGa6myUe`#fc-tI|!4shj zUAuYdD|47?550O4-sl1SidIw~Lvp_+WYM1}qpPm^+n&7K)NS`8ah;sOXDgq96sR4i z`JLy(Fb6ob2y`s27TA$Th*nO~^Yrkvwm-7T6|b2nRoz5k@>S}X-CpLF|BojSyF}WP z1C~ggZSUNejPq;FRP0g2oN#00gl*A0){h?${09L3w7-ZT+bO&q@N`|B>R%3Izw#Z7o%ycyC-?aWGaROjlqEv;xxMh3h z*VT<@C^G@`jq!X*2gJLr}8d+8AITn`9>)s08CuPi!;Un;>2+E`pX5 zQ503Pjc&;WVnvI9tPE7$C}8_g#xndN@^$c(*}QYo^H3(*z2_po#I$O`7yuDtA)hG% zpJ5thPck)>x{brhP6)P6)(K;hBN9j;O$9#q6^r$#cjhQxIBQ;8G6;~xpy?e5L}yd| z;a9Afi^#nt70MWm!fdK=8Xvbf#5A&2Sf^@Kt@Ad9Rt|SiS91@!VNHi=eqC6?!(I*u zSLpm!JjZ;sCNU;f^n-1x9T8aLJExb3QF}W&RdXAQ<%xb<2NCA3NtsM1BjYn>PB zz+z%zb?09*j()Q|P|Q)?-@==J`T1B1OgiTnuYJlV{A8W`=G`If(-tJI^wI>Ky`mzI zOY5n*tVQrZlpYXo-f)`9;Ihb*d;7y;`mXwA^7xq#fzGZ`HAHwai0i4C2$xlq7s^<1 z7#J;ib)i6Q=}Mj(q3+Q{M`@AK%ME_)7lGpRxVm%9(L>rAjueDvuCB|m#rU&)g}>+E z+Ac`Y2n7lBa}3QMhTV9b!) 
z`4SWIc0>gHhQ?Tp8icA^X~ewbC6aD6U~93_kML?5Mw^uU{fc$}ctw+F?2o}eU-+kslTh-`% ze^IMJ`X!bn#*IBk1rL-ifrA3cB6Hcpk%Prk- zngRGlbPq7e9zdbDIT2l;@nj}4wU3^LK>Z|bbVNZ1_hxt9z(xKs=HNSJa*J2t4ENQUN77#Fg$SwMu_DHyMdvY9aNKQh`d?}t3HZhCYw$qt<*xyswk_!h z!bc`1!rY#%(fR@08dC-9@k?FK-+$uoO`!jpaa3M@$S4p_$K^`M>|r}>CbAe0fPS^wnRurtlPG1j`PV5Xw@Aph z+?~O0JUJ~7Fhk!6Ajd7OKLf+ds$0G5*@j3S?L(zHA54F%j$9`Ca9H`gjR;xXZZlJ) z=s+_0()BTr?jV6wCamJagY(8YGG*RrU$$w5oKdw@!gFuU?u>%e@m&v$Z5O3y0`1~y zsu+sI^g@EvI)eEVA=_{72yo}(9^da|;(8YsTQ$P``Nr=}cLpuk z@Do|W<8#A)p#N{csYTY8rR%KZ!AjxBOv$>fPAXIh$aQeU5HPfb-Z^K`yJR~A`b3@9O>c4CV9q%W3i0r037EnN&i$^+l{UM#megAc(A^FRjd3LMM#6w^8*v`oJXubU#rqDSiIxv+{Su z4t|rBkNI`d3;Qk)18Yrk>~{>X$@Z%>8wa2A9{03QdR=sv@q%=a^tBtM4lP z7PQx6mAKEnrnU%iw3kyMu1KW(!u__Rp0I;D@kY8LisAOE%zJh8u0W~Rl_q_k5GW~% zO$W`h6Y9|d%FO>Mv}(csmn>$Y-`{yrUGnZSWdanzL1v3Sy@;`}7A$+1DcKQX?e0DT zg25S~6&!exa%?A=bE$ue4)_T|>~Hx*=K=Bmdl@h$k$&%&FTYYQkqCR3@w;Za?yO}! zIlCCK%&+4wzt9~f(-aZ)ODFX^c1sg@FBV{TvGS8v3dn?ncVv3$@lmYF#jll+8A4g} zVoE~__?Z_CZvd+=O!e14B$w7jt~eg#6}<+;>L z>Gv!lqH~K}Ns%r0=}Ojp%2>26P+UmT3v?XvR-2>CpQPVWAh8y+S_9qMIPUzM3zTUC zi>A3rzObH4akc@loCJV*$oPs5AIPs`ZJ^w4liaQ2{tU@xMY72gWmcM`g~6g3$*4XFGJ)N&euzHAQMHAQ$$DhSx;Z5 zADx&Rtn6gtRkgu#1&%RhmzMRjm2>zc|7%WE3l95DUY zr%a=4^j9KJIfh7A6>pmg=h00c6s6_AnSVRCRhZy2nnqZF_F@uXbx;Y9(qSdDOq87_ zRGB&yx!*~pDv0fK6pi9YwjL<3>os)O*LtfQ8=1CNZ058WTuq&WJ05EDcsYV<|}F`@U3EF zuBoSH5)pE{2UHICoA;p2GoyNO-Vk$PqME{eA#%M>T;+9sypEn95_Ws@+?iv|%&ZH! zLce_Q>Rk`QCs6vSdwA5hh1h(E+m9wIeUSZ6*Er#@fas_b}99Fv&njS+4~Y* zP1|>z^-Tvayk!4&h5rC`An~P=1rQ6f`&Q)NPxd#Csl#61ev*L(x^RWaebT<7YPu~> zx$$h~(>isG{ohZcZ&77;RbP%E<-z<>5-r z^6@G3M?Zg8r|M;wboM!QrfP^>4f{3}Iz+b1ap`BZDdbnW{b)R{4&D0H)(~RV8ZvGXdv%D;6lmwoKDaR_)TlJb#1aS!`5aMk^*N6n;8!9QZXh7B1R& z`H&eTK>PjWpGW!6^X6v)9(Nikn@ho;fX4zfqqA0E>3a?KS?RQaM1Dga;aebH0OAKv(88n<4Ws(=2uLZ?f}F@Mu6<5dMwx@YaY8X?K_uA*2QU! 
z6ksPZc_q7Qupp904EBR>6@UfSt}EFMD>bkgk{+R>1*$pD1B?>aRw_ezsV{=uPB{BG zfYzDp0$?!&9XfBd@FZdjfGGhzmHLsl@-?i8yphw>OkJEy<=q2d{39U|dlQlM&KhQj zOF@dHb&F^6tM&x*TX^)sO`Ck7pL}rX>Mu)voS)V^8tfP3SVz0YZkvL;!Y@y|ULGFS zEFUy#f**Z5^`EVs<+kA`Ti>+{!i|djN(GAnnTo&0ojd$O%h2_@2FBC|LNuo-q!Pdu z#X8>PA#x^>w8x*t>KeX6o7}#pL_;|5&K`+6F#!X_g#r~8WT&d!;Lpv4e&t7>ub;Oa zm4g-3;dP_5`!G%Fo9a~zGgw*FO$;j(>>IE4ez@&%YG9kmYK;tiVB>|@W#I7buRLCz z6DR6nDp>T2*JHq*4r2Bt=?kNV#LPHOuNCEKTGRPxuAZhB+r@p{LPIo-Pic>yjjMJQ zI*Z7~RSrPO`voPwbkVULaZ!6MD=U>1T#VKKFPA_BNWG#5m40_&{1fX(6x#QL-i${S z^ea}~rkWdU?O=}{qa3acW?sZN(L!B%MEx2CJ)Gg6+)eicn~HvpjW$THi+~j` z5yNT*vp;cZsH4V){lG^930rp%E2P~~C=hJ@>8tU#YegB^zeNWg4-V1;x17Mf(F9#> z%Gi+;LPT7LjSZU>?*fVt^!12e6wxu9U8PI7mx+Wh?}HW`3bh^!>LG4pn^<7s!3Ify zS;zaD7&gU?*z*Hh8+9gT-4|bJ8X}%D<`JQk5kk7G(@8F|brhh*OUU*fm+VPDiFS;vpD^ZY*xf zk}PLOv4iU93Bd|3F0ianrlLzk-0rZNa046TMw(w4YlBftNq)l4Izjg46TYEEq_Tgra>#^a}5MUHzRVw~zgljWB+%#>NG z@zR`gL^+a#_q|CZ$?)fp1O{rhJ2vs?Fqe=AU6a<2wL*dWp&;9yn!4bC*6Z?Uqzb%- zHTtY9!5CVfbS2r;3vJElggtXB1-mqj)>DMIFNcNEehj5Qv%XS;7)lZ$mg_3NVqSg` z+NTM8FObS$Th*K7H6L&7_V!a(9$KvDWIg8%=Bvj)OWVmCv$~=e-ue0Nv68}Kv6pMV zyd7C1@#f3%FMl)29);L^VXXaHw)O3QWsZR&+(QrmgNrRDHUAb10AJF9mEEgEhl+w( z_gTPs7S?uxxNmo<$)l_ksYv`Ko{2B_j+%=*9VWOEQyr@uWCB6moP_J*VI+)rSpPb{ zXp|mcftzEk1p9`+3X`I32p06~h`1ak-M1R% zOG(>V`DoDuln$^7HYd#7G}cc|NC9C|R+ce>idKsVbpa3 zsm1TB5G7zYy=KiET(%>0KJeuxjk&_zeD&s_*2nx0+#(T`RB;OYcV2+*UAuPHYS{D& zDn>3yo;a?-1ukcrdv2{MWEJUkdd{>Xlf=%e&wd$Z3usc1DF`Oajef$5+dCuwi7UY` zRMrKLnpl4*7+8QKy>Lic<9P$8gLQmd)A`9fL@gi@>#I|DR9mzpm7vD&a(CgEF|$Vk zGF^-Bnz702(w(1%O>sJIs#UWgPBNmeM3;HNZ+hZ(^jgoC()yRB%<}@ZId`&6&M*cT z&9jrx>i@55;a@R)rTBN(l}_vXH-EV9l`|%w15c;8b>X6>`y75oOBeFwZYaHmMiq$a z1F(A7mM83yj}Z6r!v+`ax>j#d(P9Osr@YbLRr`9F-9#8D;n_u|JVM$?xh5!Fg{rftqz3S>DEjO2@~JiWjh*YrbH(4S< zA*3Gbk-iy!(!xYc3v?5hJrgq--!@yf79q0c<2U3gcV)@Nt*Z=8Rdts^mWL z9qIb-=2V`#xIiF%>=I4mpWb~n9f8LW2Y2&)D(zDGy4+D5mw-uy5_Gt0-&s3-&7d#D zYh$EIAyvJFWndyqIh%#5S3+sap>-sq%@M}DktLa+PUe}J>NjmM*;)p5OvFsmvzMa! zHO;L+`(Nq%dXVoMj}yPz2DdMw-j`YX)MQ}K_V4|Ojfs=(ES)ewy?yy~TF9?q)plSg z`$-rr8Rtnqz1EtR*IB$lx>`|YXk21kU9@`GL37g1i~k;t-3_4 zASMtNY1=Q%?;^#cW(H;?`Lq$&x);>PVMSSsV!E&N9&5c!31V6)TfJrV3NcmwY?4l9 zJs6se4%hSC*9!LSRq?(J?X%xjNt+=9>_I$ZkFT99O8_0)+7JPo5=JDQfaaqDA`G<> zBKt+eUDcmZ2xhUs+FvADpQM}CGwB=NSdFFY z{rJmM5HvH&LB^wjEHNFdOY&jC{ROuBj(V6^i-*bzMl7an)w5i8Jg}1jbhGL5CgXOx zpj3^D2$G>ZGDU42ZbmsGp38K(P<9Sc0Jlzd#9bcd-QXcwf&2l`Km*Vv`RsN~i(Cg@ zyA!OF0Dw&0^7C2nG-%}fSF>TN_>7ohD?bUaZ8+NU*~a8e$MLKC#d-YeSe~Vwohf#? 
z!B<)ZXDHcm$Oz&5Ud7g=P-Nsp!@5-Y1h%(``5P-%s5mUp&z$y zlW0?+Wh z_JYUeaJpNiG3tf7Cd9#yv*vK5m@T_sR}AQS*7gJX(ZGnF^)-R*dp%xh2Fc$WhT`sG zNRTP!eIT!RGk<X0Fee6NoNh1*$%J{b)E?L*y=9{dqp&Wqa&h) zA{NR(s;^%0!Zmz0K#$`AR&y&idNh?-XjbG!2Dl>$3do$gVZes?!r5PrQ~$TT^d?KQ5<;920AV2o_< zJOq(}?Fu?6p)H32iFwgz=i6dl{qB0j0B4ed^9s>N+8s?$>h#$YyF&$QHNO59^U&6T z&_4B1;ZwN&XVvb!Ko%S0fZa1nm6wG5*_LQ^mY*GccA1drHz||u2oHUI#A{5vHEnL+ zJ~rD+Gp%2DPIbJFHX}A>IzeDg)RA)co!$lzZ^2jXXYPo}^t#L;Fc}eapBAbg-;)p& zaAk%~o#LS}2y6%OgB6mH`}R~IB!H}&WZA%z#4I;W77k9;?Kr$PoCc;ESNU_@f8{P; z8sKvAL~{8%4G#M_Li60AXon`v1e^L!DxkQP38y4E59 z6!ibMFue6!GdOBjV)qA)(lW}W^e$*ew=Kzw@)J0Mrzg<-3?8I&@n_`=~gMPqkC`zvuboeu+hX}-S=MHsQZY3 zmDgh3NI{cm*`q990ifuy+XK2>iOF6I@0uQwoau_)QneO~0kvWfh@P2qgXN=d*dlMg ztW!3T+K^n3+`R`mKL(?eDpEA*eZPD!+lnK$l3i6d5!!=_JD7AW+cvv?(}6QSAH|Sd zt|R-MQThCw8b{;@k1}nRXMfPiY7Ju6OQB*lQy-FOT7S*LtANdBjSq|9VDU#)aHByU zlM}bg3%oE0p8FfjSENs05GgilKkP^hERc>Hj}Z;06NuzTR}qp4L`*btF4Y0}Da}~h zX=+>=epb=o!d!#*(}IykxRloOq-gDX;yL7{bE})Lv>W{1eX&Pp=xtvrk!@@5Kf4<8^lL3#-_C6!f zhcj^lF*N`!^$)goVaAf4i}5NN-{N6C6ajDH@B8-Ie^brLjoPQiqDwB2dKa>bE@A2) zHSlm39`H`Lx?q6AF053nV^r!L5&dbp>-@yj{$70?gB!T|6mJ9t?PZcwiem1G+bS=p z81<~KyqH!Xg4Ck6+FKbaOTC@J<#@QH^Pg_7Qt*vwSX+d-GYiO^<}sf5SoLC@Aj&e! z>OVE$X0h6}-v9V2JJ(FqZb$_9fA;shD{&U5erG*eN7wCeSoACPP;OlH+qRQnESOTC zr*mL|W?g`nrAK%L^RHO|5E5(JaQrWGKZH+Vv>~B&`j_MB*-tLJh5dj61NTul#c~83 ziM(YM*s8uDaKpyzT1$%@T#-ZQzN?fH-6q4>{h>!6A#MN6}=r5^)-+C6H<=6AEajHXl5Ool=RB()k@#H z&DBzz+QSo$x7=$!U?1hGNPeRf)bL>rddFq#lgL?NcW}9SW6;X-+BUoM>6O<0woOnh5fQH#&Vw(v7wh z_=R0>$DXg%w4tuVXJ9-BhB+ho<==-AQ`1LtZEuPIa^D6dpIo#U zH6i=_caLwMQukO16z*E3+B!(2v*rPOid5@ReVtYEx&g=6WL^{>C;D2F%?wG^-tFzc zp0ZZEl1n8I)v__%ond1XU_XlWVG%b1yQTWml9|%?awPu}S5#LxJ69E;n~yQ(rOaft zb%&(37@G8n6_IJy`@0MN>S#59@JGr`y_=)1m2lxlU=Bj`stBuc`2;P(@W!`Db!6-a z7!*KgDJBF*(A)`GmebP|Xsq}dHTCg31qKVc9;2J3%%%@k z1DQwSz0U{YMH6WY%_D?VwFGz>0oT!iSWEAsU@Ke)#8xrmNE;q3jvb-p@;~T$3*F_vaz=gE;gg{g+RCy#FFZO50y`w%YJOLK%$MVCT{8>Y zK{V*#tH%d}1oPVI_Cu_tmL0y_}MeXQbA$qQxUXq?5{a4cA^1YfXKYu}ryr@a{8YTEqB>39Q3X zu7kGnCgt-k$C!P-ydqMTtGAd!Oez!;AM7Oj78?pI3PzlwNz%_qQ5=Tz&D5uE=l2(d zYG{>W^~q0&kHu4zjNJ&^l@cXJ~N z@#?_ur1{x`+hC6!Ez`b#Ml=l@Hw@LswoVDn9~uCK@Ik%Arc@`YaY zz{4)36qe;@g%ZvlAdJ$CGJ#3YeU;kn8=ML%;0WY>0>MY|2~YeMhSv}~^KH>0)WWr7 zZVn=`G(Sg3<@g@FoFP_J)qoc;DS56ec?57c=jUVKUD3~qxlauVUp>_{#XLoi1I&x? z?k{U+%p>?hoduyAs{lP2bs{h75LvN!m?%;?sIn>_%bw&)V@7_#G|HHt0^;kSZ&%38 zTh@5YDe~*6u-kpPkUiR6jgG@L-Mbpapu)hHz}CO*UXdxl4E2e3BJ(Bq(jP_EU}W;J z0`lcUz+Gc3sAF1JU5XaavNQ1PPO0lW60i2UtE03BaeU4(%{46(l=GdF1C&wcF25iL zheM|Y*}qJQde_mD_dlkI3?FT|I4??(79QoiO;rRXX0p?6SzWuLGw_J{82SccU(z*P zBWVe>hF2jBe~6u(R^QmsNv%^Mn8CS`bx1^Qp-ll(? 
z@!#AkAbm{x-E#XR&-m}C0MKvFq{yna!VJZ{J#fx}_sr;BGfKP~G{hNMY(3L5B1<_C z!p^k|F;JLpyH}SS8b{||b=JX?o%ePTmp#_qi8BPBthi{)DtMF?OPH_NNh>EdFIrpJ z-#siZdU4^-sP+_wivots`%MDArxN^v*#)#%?56}v!%i_I($q2N- z`;O07GDWbOqj-f}Y3_v6md;W4$}5b>;>A99D?xerk=25`BSZ+m5sw)cUUr3iz7VMv zip}>fw_a}G{$8$eA|uf!6onANp;V(5fIJCeQ4bh`!maHuJ*y$zqiTuUofB6{rS#ylnZiSeEO&nr5rX$GTqu;N4*@U^6KF zh$a1ZviZFGiRAbpfeGX%G7XM?^z-;Gjh{Jmy~`M# zfj&TXh;u-y_R{zwj1Bs*zHUdo;Uyhi5F0dMn(D{#;H0#m4%f6$-v+dXC}C2%^b$jX zM*cS=>$g1Mn;sCDi22Hn)?7f7)h=;$`As06q2A1Y7u3t(s}le|L7hNx5B|q}Ns?il z>Sphx4khy=1^{4i8xg#5t8%nOOo7(NvEWj>C~+%k;cdCc(2I?-Rgm{xYn^)*{%T?^ z%H2Rp0LUryyOP47GW=H7eEDidqA2+YKpXLc9j_j*9on=qzCJK3Z*YK5mC7}}!$8SUu4zAwx8Jm5?}B#T$7Ha$65h zdZ@?e*{!$smcdq7h!8-XE4WW5J^Nkl((9zGnEn$&c7u%hmZ3VvOXNL{IDRxS*t91Q ztWF3dl42c}P|}^wAXh)&Ex8?6a%UO`-n$O$?T*Gmx|8BX0_cYBVXz7Vx@vULTEzPR z{lB%D@W?CKxm2i!A#hV(ao(+ zE}X7hcn2--Qu|LCdV2M{-S2B){Hd;t%MWhUtO*<<%rkiZ5oiD5Y_u`gek0Md=&JBP zsRrT@Y^8o?m*@xnNug0OGHJxL0PAOL4b>rKpt@+<$0QJT42hl<9?cPzhtn9Ih}DT^ z_itD;d z03b37bqKWJB|bSLa(>_5<|uS z$~^c@CZ*^N!E|`r%Z$yYDFUX^&4XXyLlv!{`wbL5+o+J70I~7xpy4{?O( zAekO@7AL*OY;bTlZcz19_*Y-C=_We~2BMb~26Xh7)jKdxt$5@$k|#VWqw*z^kQ{rn z8t-KiT|ufuL~x(GcK|#JsOa7owtPy)n#2k3QfW6d^Y7t}{Tgck4UGS!ka?r8T!r^Q ze&C%(1{NJx2M?5Eemzy{5q^!^mXnyW`l#fLqJY8}r@RsD`@w^)Omr4g=y|tMJ=N}` z*M?yv<3#AoNLH_@9)Y%afOjIE@^zK1k%+3`@H|R5KTvq2&)KBOXG?o2$&#_b;}o6I zu>(|V@2!*HUUvJ%VI2e=iuVE9Q*vgQX^I;0I;o&dq!=AtLwkOO2ppU{^PH~^`2}sL zuV14hpbgpuYH(_3B7lH9Vr?3(dQSLtU!oH$RKs|F2R3k*r$!n23rx}m8?yN*(-oV= zE|o`Q6Jm)JeT{5c+cXY(Ssm@3@Z)5g|G0B}pWl^b?1|s!rDps%=Nl~ju?P7gMKtDO zYnnh^i>9?NO5?+!G%bV3J}qpx0rw!!Kw-Z+i5GfPeI01l?*#dx-b%uE`JMn4AwT*5 zai?amZPKEX2FRG0N6P2#788eJaHTxfqvl~)s-8?1abZb6}4p!3wz<4qfpU;jYm zwI~AAW8C@B(Q^5z<0nzuZ;s1+eOthqqahJmi8-4?@Kyn1K5KbU&`vpX7VC2OJ6ByU z<{HFfd<9A}7pU7T^;_bBbrvWQj9KkTvUm(~4ao#!{z4JMJcMjmi0mGm4+A zHtOKbjqO?uBBwl0TF}CT9VkI^I68Ey@qAzsfTCrf)1TAY;%V(nVz~-4Vdy*Xs?odO z1TjJWF#z~!WsP>}wRY||@U8>c{G<}+`C#Cd@~)YnyQ-}bM|4fTUnvX>glVZYrG0*g zth|dzJpxeh;JbdIsF=F7wW6k=3AOXC*8x2T-heWcs4?nZDl{rnp$fYBYZ^$6pb7Jv zi^%z}CT>=?W=^KV*FkYS(as4}uYw+2c=r6vlsuBne=!mBw^#WOSe)=17K00Y{^+;} zNx&AZg4F?I>HKFN@OcWm!suZyycV{wt)46P2vlBSIPF^*%tl#In6_gmOWofuv*ac) z6TO-}A>gNJB(tag8V$B6Q3)`|T(!it0fF zK&zL%&_BKuYmEua{S+s`N!W5$I;&3IF4WNqF_xB=xJXzr>8o?Uh+y^U%u3 zf4KQpY5J%7p<$gM+ES*oo*By<;uhTlumlgcVE`bH($-lNRs0LNdYv z#gQ*Hsrb7$F$BSkk61uPWhfroqi#UGzskwFL5N491;I83^S)0jdQ!ED2#y)_>zBBZ zb&RGsOOg;0&rk7tVp-e|1+6+%@S^0*vkXtiuH<$!#4RP^;MNnbp!j>}Ywk$UGp~gl zuV_~mCh~XCrVo{UTWJXV(sUm%!Lxc`;~O!W1Z<~_-CO1>Ex&3!QRn(yr-!NAw(b&Q zl^LTZHrSgT7|$q=erUGXh(Y3d6$75mY!gj!Dl#Pm%Nkfeo1rgtmp5}lnV;)jsrgw= z3vK0{gqs*M9Vo!LG`o6x0%Vtx$abMtl=aKBMdq=y=i_6w;#Rz|v}8EU&dab9)ylz$ zq;3?I0=BGq;EzVd4xKEWXh2>LutH+@^Kfo&-rRln2l*0;A9m`|f*}7E0uzN!U`ve2w&Jd7N9?3ojy`w4ST)W}FK>EA zwxHA>=*GHtWyc;)n|>mCJcyxW$BReb9hm6r=iBhi8BB^$fQ3`)Vz7~UePY{Zy9g+vf^koa!*qT%t-->tfOZ>P%+EDsJ*0-oq~k%L1wAkhOoZ(aaw zU29e5d!09|aF&Y7A4JcB+CxDj@6ZF43U#(qlkuRWqcB>i#jff{3e$+qPzBRF=DfY3(fq(MGb1E&7t!y& znJ)6GDA|>UGdJQ(1T*@6wBqW5v6K%(!zn#GKO}d{YDV-Ek_+z4H2)U2j+d889!ynB zILjs%^Z9{D1C=Lqa;fpl=UILg>B?siOKk%t9aclL+q2gNEpl+Gz)0SLJ}NgTCZY6l zESab7t5mO{_~+;>oi3MqeWWx-FV$o^s8Goq@(V%(PXoBDBx;x%tUqk?jw)6d+ZSj< zpJxH9;IYiuUnc5pc(f+Gae-<@6bnaw^{B6;Sv~i((5tiDSZ_qa7%G8V#nW$KnzU=EM4i&g5^ZTu8q()x1}> zUMrQci>~5tf(r|nOlZrlz8JngmJaWrYwx5$r!f&zwyT*4EtQ@{Glt)849GYn2q%oe z3gE269}|FT5?rz;=aGOTGW8`%ic-0Q>)_G)3GfM|w3)LR4xa)&nh@$z#Gm+9q-ajm z3mMvJ&w5aX+fa1rxS1VOZWp+!&Yf?Omq^Uet2I3u`vPI$@NLK1ivxtfWzf%kEUDP< zlH`}hkKV+4YO=KQquP7@g@yWPaMUVoduyEidCr}8H8&T3WPMw_Ej7RU?iK9*SB;HJvH_b`6@?Uy>SNWWQXV^z0C-_y7r zaQ0?(<<}}DUKJg3k>RruUZHwQ6F_}Xt@s-Um9?ma$FyMI?F;9Wlnhmxp&?glr{ef+ 
z-R`Oy(fi|k)Y9=;f(ML+a;+WEdw$O7+}vQ&dof{HqS zn>JnKWc|s+2_&b#Ygp!en8@Kxz$M_NU#0xWYggaP(tZm~_<}lzt(o?3enB15c*u>gPkyx%}u{#|X3hK%Yr_-(#av z%g-lcPI|-X>m}Tb1$iN_Oj?Q#18SG97c~DqEe@G~|Fh)-xrO`Mq(4Fg=ilGWn=*#~ zWEYA+)%kz&=tmH7NZiw(odmftR)w@gJi5{RG-2%iV!DH~b!X0Y zY+95CW{QbHWNROgM2JaK)cu@%A}#IIMwqm3WQV5Cu1->TBKO3U=&6Oxkx3{6rDJlQ zM;CDz|8YKni|Y6K2tnZ{oySj9#&tQ`!^FPthQEw>Q1t_U-{JM<96Br%lM>FDA<#-F zYtQU?oHihJQk)q{!dMWpjPlcvr=ROWd%abDx*d?(Mp;?L#%=V7Sz7X$j<+$v#{jRb zIE|?R^Vc@bJxmpHmWU(MpNFL2g)nG(P7MBttHB)IA#Tmx<7xOH z{>rV*X#@1JkCe`6+lullM1NX_|8|P7{&EwEVe!K}=`%HXEC!kt5HV3CgxmwP|7mvW z$ETmEfwSVuQ^od9I7Hx&_IN;z(No;mHemE_eIJgi9ii zk7!b7ojlL#NfmcZ_n=a}|LczazTp41>4V=kU9#&ED*U&-SF8x^X_E)S{PBV+lrHO; z{Uo^r12`xbdUC!6KG{(G`nm|6W8zPk8!h-fc-n_!RKH4Q_GRNf3|cZ8apBD^)Lb}^ zvrznqKM`b5x8)=lS}G!hZ$4)D9EoqeWjx4z269|~Zy!B;E+~~T&mKgx_9+YBdp$e* zK=4JJ=?CkrVws*in%#{MR~#di{5z9^JEQ|WBrcwLvN5<}V#3L}tAs>u?%IreIUfnh z(%TMa@-nzVngeS;!kt$$S&SG4aN8(8ZTq9vz;sbFZi4qeHL0{y(JUD}s2XeKFcPs3 zj~bM$YETdkTQRY)XUyXcb5v53Zzug=9AjUe$K52TRJ3#Q+Wt`>>{E0&6y!L;kJ`fJtB~vaKN`AMZ*M|I2+mX< zHng-6x@hIC5-t{|@K-6I^xH=SB6p}if!wxET2;({6=`cQzm+9QU@9@+%AMrTTQ$j% z<1&LEr66-sLWxbMr~63<>_4^=MCgxh6@R+@-f7|$b5#gl_Xb~in#QVx@!13QTq6bk zv+||PZ|~hZ{H#>jlDJk0CdYTEnWO4&y_U*0$P^(l9gg#Rsu@Aqb)p6Pq5I*XeK=|b z$#^R_h2xmns3UXgSaGz5xiyzUU?BRu>bXj|FYJ< z-K?k3Rjiq=n^QsXSmq<4Qhl-Rxg2`@|0WLV)4o;NdZmLiqm4)(B}=ugLEAJm4iGEVriPj$H*ye2p0s2dg*ZPbiic*8~)?K4Iqno;Mf+@CZ2@54Hs z@k@9ISI{1ik(REC?eR)vBxUQUkVJ4KWxYP@uf+!<<6O5t+;jLoMBMlNy${>mC%e6a z?|oA%!)<*5rM1g_(Swy!rjsas1t0u zuyaX2Z_OSBT4{dveHmi?pN{X}MjLt@+!}L2v3~uRF`ZQ~q-S4Bs|A@fZ8PK%^?BCe zV#6xWK6tx?qIokfURP15&PrA6>{EU~8zt-8%XShuM+s;2(Q|dSl7-G`&QNdXW zKfcc8QO7MhOf$o`wCA_>n@d&rSXbdCZN%leb)tYR>LpES3#-EH4yT*TV_9B;$Ia{7 zs!3o{pZl=wRo8S>tq}Z5v$rLp7kIdYRM2*?en%B7H62MyXKc*4TR$c(9sN@*+`O{& z9IV1!%`~d5xV%ta@OY5ghzjiT*b2A#3(effPiSj`mtNjM0_~YunHYPFm|E}Km7xsh z2VdcDKjq=bzDouKGyJ_jD4GkrxD!zb}ZFgMp#K(B&|*rE5821 z$SEPtJ@>6^9iE)9uS+=*6x z3USfVM164|DA_gtSSG=}0FSPQ79c_mSQqPPk+g|4V;swT`X%(w`3Y{wo-M99k5g9G zQ9clevmiun>eyIV4W;&;Qyg@9?Vk%{(#{>jjbCpyNF6hW2TUGy^qpM(JTY`+%!ll$ z$$=hC>n6Eo%-RvdbNHc#=0L6f8(azA82@iVQnYpQSwX9kuUNrq&|<$3IV#;)cSY(pDw>gzT!mYG zrc$Eumo=aLVde<}dlo06>5b2H~q`MmghVE{py9Gf~x@Ab|lpat*K)RGJY50DebM&0&c^}XB zAD7O>X79b?UiZ4!9t7x)f9$G%C~>3aML#)^fWn0?5a$~B1NVF98d*M$f4EYbixt!I z;V&E5>b7*x(yX-`Qv1SvI{*4>srI?3PW)=9;FLNQ(PvnWOu0mMxs+$GUX5VLzW8V= ziw|z2lsW7>l_m#e{l@uisf{GNX22~Q(+d&=<{>9*;Rjt&oX}@u_d@Dezf2eojz|!z zJ$J0MNHx)TRWmn3RiAvDS?RyEb!teBoZa`y^@+aWthN(0Ay_cb1Wt`HL>-ZK{rS;2 zA0L!oj!_4ap0Jn-M$V%Z#F~TBPdVXJr?i#@ut5*RCsQbRq0hyuCSLfyiC(ABZlMs; z9w2(hst&*X^bmslVQ~Y>tW96;8vN2rUb^n|-RIbm7hT>)`ElRH6|~I`;F~AyMU8pr zsBt?@EWUo5BactFeZ3z*T!j=!{GsXK1GW?`#?Q3=-v++ejQDVl+fMZH&kG5VBaaht z0hJfA{z#sCE|7xe@X|c~S^Myi8dr!$R*jk~q9~D_xZw)TT%uT5-m)hYT-+x&_?*A* z>#YrEf#M^Rr>%}`df-Punea&LAviSTxtY?mu-$-p$}W;sDrGb(@t6Fz?a^fq!A%G9 z^BGG;dUzXo>HX^MqWb*O`%DUoM&Q{D=+wGxh!s8v1c79O0jPQseI;lAHfQ#jBLA|u0O|e!C4b4kqhkTcC+t2Ni5AZny!Cj;sHlI2R*`q7NJYLP>s5RroOd~Mc8+dAVRZuA{}`i(^L@N;WX&bK zM=`iatPHZ7QHJb$5B`3_MCg*!?@m739l#C|^~>buOXPQ^Fl*Dl+!rx^_D3!}aT>RJ~(Un)nY! 
zd_C4Q(wpvLP^H#dzMS; zsI>7-1hGEDU=lNoWs6mDT1iwhnX7&L+OHvziHk2@5{_C%;&}VilvzNk1IU2jSGAb?M9HyQJ3Oxwp5fjiIV2QQAMODe3DE=GbqENy<9W=cRx1 zLaLu%Tg2#P3_r`48~p$2nvf4=`nOfN%e4M zyz-}`hmAFI)l|jT25`1`Lw^_D@fx_{&~0@|G#)4a75@X!m=-Ci4Fezbjb*{xD30f8 z{>UR8SZS=1a9yjsnDXAjvgPMpLpI3qL0NcLDX_ON9{ZWp zmn#zKO`L!QN2FA>bp(v#<&cnpk+k1H{PQa2Qall!Ik&}lqU}&TEf&@<$iFzewekgN zyn~;h{rp0x5U*~{n7vV*?1G=$d#GQn{9~{Mf#P|tUETg=rAYKACvbwEB}?uEt9<2~ z#1q|bZ%Lm*xNy9)hM^T|OosvCR~{DjqF{paB}4g_YNlxA?${YBq)e|T-xpjln-$|T zxvJJN*?odZ%$D$+6S~Z6??@k38tj{LMp{0iNBE@33NJGT>m5qEOA_DX$s*etAemk! zl*DZu3@9o@qk+3isad1oDiDkr);xw+gqPXlntK=(!8sVrClf|?B_dWbd36WIo zi@oB3Yes)TAUvS%QAOO}Fu?NLE~Tl8$Fxl)peryI-lh4`w1;EUB{!hAZd$t5Wvtdl zPB;f+nt?;)a(8ke8%kn27=_HSO|Y_iJ6L{BOf)IVu8J!y${E$eq-nfmn=;A#enidS z7Yh6W zECLSD_sCH|KImI1#9nL&4tUhy7Hj#i&s?4|DZVr*hdkCBX5v91sG}g{ zee{vZarCu$YVy@3p6yrfQNX<4wKO*_c0*lWxT$R%)-`F?Iw$1~OBzC4qQ6csIw@}S zToI-kw_1f^Fxfp{kUC=0orgB(ip0tAG#{`6DWmn6+bBib2;KneK<{0R^CqVAO(<`@ z>9Ak9W_WQn_V@W3=p5t7h~Ak|*Jxfr(cx4t0iD$aFk1G%51%>z-YeqQ%ISOJKY2YK z#H$|t_%uoEY3?ZEL}Xh?aP&*E65Amop*bk0ObbuTNEJm<*%*`hPCD$t8piS3CMRzB zi_#YjMEKmsTia(Bwt{Rs{S+nUn=r-RGDAl^+u(4oGm3IkI1q>!;KV*uNdvr*d*-H{c z69@+5)uf%T$!SxY+`mK&O?!n(tYZXOi!k)U7BC0tB!$4b%q%|lmQ1FqXkjI2TFINf zgENXrUimD>IEtzRNsc;H3y8hC)R&j>^VOfjhSud~L=fB)#R^}c(A|Y1~p+Y#<)#-AyAMq$*H|kUluLJ@Yd!P98BJIU*!?HsYOR8b` z7GlDAQdFwJ?Y+!Ukw#)JdOHz|(3rUlnf=O4IXE?De$|g3v%mMiP2wV(L-s`(P2Qy( z(twK!a@5!e5?V7v@IgKmY8=i~tdgZRx*v1YSSXP7AakPdMEP_tute@r3P~$s=T%S; zc7C)gmE_LDPv#&R`z|JlTVk=<`q~k)h63U%v)#)HK%Rp3j_qOs!xBnN$+SJb9egBWx= z)cI$1@72maXski^t;~7f+sQ9VH1l9B=w+tY`1YbD#DvrL$4F^C?M7h?^z z6(wrU&PQX!c3pdIL_t#n-4+bi#5EVsy#Ghn;V)8FA3(O8d*)qid3XO1AjgnxgVvl+ z+)td9TbEFEP$ioVud#Qy0gp_)N0M6Yn9<(Lp>eYAdLwcA26v=d$?kNvZhl^`(W0{N z=-8Dv1^Cj?(YeNo%u=q&6X1bfuc~K#BD3&st>y>Cb^L=VmrrifEX<7qA0nH(Ct^kj z9X-UsxHf!6`n3E#V^2vNS*%;y08#0}hIiJMspT1k(BbIO?D5g4-PWTc*)wAzd{$2h z0iNIut&H{VGFr-mc9BCQzESE1!xX+S<^!cP1%`To)4;mBDsaY?Yi9hb>cjEPvYPz& zFTXw6@DObJ;G1dr{=&;faO+-xRVP6JXqxiW2bT9_d)aLDY{=q_Dfw}~O$zQo>skci zqh?n)r}5JDjEryK?7XVwp7GsuOC#UbltBD*+)oOf2=pm}3AO8+x*+4#Q1!xTy|2lB zmMJsZFEQ(aZK&LyNn7RSZeMD!hr96ldZpj~JaUgEL2Fl0XeLuO z2Y(=n3ixX5{!Dko@+{QZZ)IL^>qtaxTcT$fRkoC=OU^sbKjpd2Q!Z`U;S}cEeEoV3 z_nQPnS9OB$aBkW&>(OZH9|&1r@_tJ(e`W?ji&FC#jJV1(&sjq)ls)P5fH5b%Bu#8O z@n_GP%L#NFEtt|6MKt$_gb6A>ynfoQ7KW5HyU=aZJU?@f!B;m8mt~e6CqMDZjbbL+ zkabB@qQPj1q40=0J($z*g79R!B!S554O;z_`Lk?@3&J(yb*ZJBjF-oeawNjszH{Q) zmb21^vK$vl=A)OV zEG?SZYP2xDJxPHK;*`J72|x!q1u5!LuC6NIE4X-|@1(wazEU7yi-44wwZ}vA6;E)A zJlgIlnKYQF&>V?+Ef)br0yQT}pMNS_!gteiZ#!=DhW9g*kdTn1pY%Algm2r`(S(qY z=haolRcpF~7B(}R5OIf7BoFo%t%*!DyG6wEcVk*(L>b1~N!w-CY(}&`B7W)`p)Bl0 zxFX3!An~>J)y#BSncGd78JwP={6H4_>t=+x0Ko%XwB?wX7vHO2E&n*B2n$)zsEAxJ zavvJaK}2ocK(54RD@#yD;Bb{buDI;3C}u*ooh~XPXH``f!N>SLUvP*G^c4@!pL$UL z!AshsAS@U2czA9XT@CglMR?XfyFtC#IBiw2wQJOufHf^CWL~q+h%{de+*6B3^%$+- z8hFdZWqsBfN*mL^KyuD+`)iaF?v>%^?Z#c%2?W(w%($Rj?f(L(Z zVc#&Bd$iQ@!&>2o1qOUbO|}|m2!X_!RbeZOKn;9u4o>+StR91_sXoh94Bb&Cxy#We zCA~9=hA(^H zoagMwni2YhuK;zH_)F|#ZbDC0N`IvQ>4!umFm-Be9`I~(5Oy=9>@x#hQLcX-E%nR; zL@D_3%$)!36ZI z(2$7SrnsBkmA@&_|01B7HGuHWJf)B_^(QDL6ki;=O68xv4ZAvt_v7pnnlh-^GHBg0 zsP{YUjqU&7I>!iqM0vP}aYL9U$6~mUoIbbp7J`%f;8>y&qg?i&x>V@N-kigz=cm{8 zLac|)S6QJq?>A8qP?|&4d11u8JNR;kd|Wq}j^Hk}b$4x^CE(g=7q+kCoB3S^tof!m zre1&kSU!xjiaX^yJqLZP|L}F_C_km2+c8>E(^6Ulxu=TuEBIGM%K-Fu+iOaU zbcW1=K%nDMkc_!0BZFhoQw<``){h+O(Ykb$F^eG~?98h$;b8R+7|=92ZWN?Ie!1zYHw9Eekm{%c24YMZX-eaD3wsl0 zHb3=9tgkG|a{b<%gb&`x4W}DM*Bjp5S#Ka%+o9(Q;N?t*}i;>}WWWjZBC%ICBJ z94k_Fc#Qm|qN~H?z1JUf>^kKREY3ei8i@8pi?b(lyKz5xHHqotWG?H4B2cHkeFn?C zRLWVtsGTn{oMnYdozGRjKO6ysps!A<|0?57%w+Pl0Qh 
zb?K&D+SSM1NUK%?LeKC?=Vv@#U#+{eL2QGprcE(AQV0v>2kO22p6-bjflK`$)L5jR z0@^zi;U$D=`9hGg!qA_R*K!2C|C*tArt5 zhpFyG@0Oy`*JiShPjd5BDiC2I>@k$;c=&7sg9oG%9B&H<9NvsQK$^{SYf}>Ss${oO z;_LD%+!fU%9{4Z@^-i8wUiqHGQz)lv|O?%0HB3%U9 zM)&QUMsjQvV@5MlMk#_0&-R_hgVHD3T&^oWo*?V1N;+{h*GBgkklaX~A1o#&2#beo zeXmUN^ZF_|V!0OQN7=&jNK_}ZC_3R*Q=_zv|B43}c9w;46*xERS_(~?kRib*^F4(b zJjr(GBst2{xQ6Pg)%PKYW|HtcUD&+NDgNw0s9k)uo#~ePEKEPfm5lrI$^e=Vq$AKi zq;7T-0FH58T?js#idG7IWeDibK9q@b^F7z49Z`C6b=|Ngu~JiKu9^3x4*5+I^fMXT z)>UK11(?Yyj$8o;M$1Iv4+uuGzkhr>PdBx`+AW)fa*VLhNJbv@?Xz?#PPy;<$@3NI z;gj#JjXVL7`n=h`E5}FkLtrqmSBcxAxoI(f|9Jmy@ve4-e~y}wtA-2tVnC3r;U*{O zF=FVey)0j(S6u|q3>526SxXZ;lI$e!c@c2-FM4Y7*`Nv{j_vEvl!F&HgY`8<~E=1ucOiUiirw*c~;t`~c)yot{b`;o9b z&#F<>f#P!Q)b27!SLMT$DwTVApKgl*k=o;%o3+Y;(@V0%>K>FY8Z|P$dy@*Ot~BI) zZYKO&rfZm47djp!I~vZjU(2}$`c7}Jn$#rJ+xj%d%+^LTZ_B90MphK|E^HcWQMgzg zUI_*m_8XAhG?VZjC0W29E&1%l>%6H*=4JqUudTkKd6>tfE9lYHr--=f4?#;o*Umwg z_c#m&KjL}5IFM^*xF0fu(bM9ehOi(XLH0(oH3=2^+@iehU4E~qX##7wgbGgA3VMld zl7y2;)H4RFq?{F?Sx2PZBF1RK{lXPeniwqGiNKJ8S+o)LMDju1IcqG#Q)bcU9XV53 z2ljbD^<|{q+n?|!;j66ZgYfL;a-R|c7c~OA9YjKABQ0Iq^INCd1^PGL3tu#Er|oa za}fhFBB0i8l$7}}U2bEX2a3>BqIU)7X>u}jisco+^-1(C|EB4Tm)-sY(3P27`m8q_ z|3S`jV+N3c9w$`h%=9Mag(~IbirieEki~n=*z&|u66}_!VVjLHtd1I-A)WWbR0@>TT9%f}-Sa~=c23e3Dm?0<%I`(+wBkU+1f;X;5^86S^ zquQukkmH)mkL1sTef9)Pn?!tXzUOOS3pJ!51fuvk_u2LDAR8qGn;^5j!`eKHDn72x zdoL&G-J{Sy9DTXwbshWU;SpAeD*8tdNMq1$`!=qzaW9PWK-}Ole?~Hr{>OoPa3tnx zSyjr*d}o7a?fXnn0pxPk@_i;%3J2)8xP8NN8jnmegR{J0RzeuH%UWZfs^tSex~%vY z+|iRToVZJfLErFh!�Gpqz>M{Gq5aK4k?@03g7`9+!9Nq_~)ink^l|#bHrm*07Sd{3A<( z+eEp!KRmy$p`l4IxLq9^pJThkGF~%W(>=&-Ph2=p{%H_yR%58o%a&^O1J^;eq^8#Z8zG= zy0!h2p`{Q>jk?VYVGfW_NByU1g^i4_#xG6#hN}w}SZwHVNQk^)FmYczwn9#IQ&HS= zA7?Ed)1yV_T_CkW-$<-Z@^xG6;e`O|=~c0xU5@{zfDJ(a^qLf2 zqWSX!?VfN362!4AnP)NR!?EEHFnnD{uh-4o_O`QS3(XuZdjfN(9i;@G@!B9&AH`%NAzhwF2Hz;550m=CM0>y z+KHzp#WEX67Tl2hU zyC^4oef%P@^s_YG+cWe-umR?SXU<+WQ0XSD=B4a*Q+j`C@iXy|ZG1Rl?Fb|_w)cfG zGT#@$$kxpnXts-uFS%?z0=wU)hKiDRo8_#%5B%ZfE%NSt};5K7Y#>xe{?|%BT=tb7#HIF1b;B$R?&$wqSz4U;A65c#+~TZ=m)yu0Ll8J zf?FHD8jtG>^wG#io>7mX$ma^Nw;GS+qXsieNF8ruyjH^~~(c;e1v&#Bp?+sr=ClKR-^ckJb8T`;8^s$W^BJm%!$&_W~(L&cJ#J~^zb z_^`S=NsKmQ12K)Im5lvJCUmqgESL{T%$*=tw$(OMc5-z3#Nod1AwREK>t_F3TQVD( zi6faPG1<(ed?S$vHYxMS3V=hr3s121v&m`-=xiM|y)e)d*VFyCoHvLH6&@Qz&CyHQ78|0+E(^$eKEf=i&vR^~)I*urr*mu|fbKIO!#_(G z&+hrDQ`Qa-{q8rFW%;N`vMg*C!RWrIGd9hvxAYN1w$?a|B={dH^gAyra6aU)PGN}a z%%jexu2)rFMR|5|gN19oJH? 
zk4@RYT+yg!1qu!19#|(6LX8R%Yc}5?UW<66x6%Cx)~8Y=d+?30?J0}=+h@uAg8r~R zmTj+7Ew1BHk8F_s=O;aGSTkqi;d=uPs$&plo4AGPTc^YIm|3|7;aPOZqG+gk4-EC8 z-{E$l93UYffXF0yLf7dqU!xM?qzd;)sdm9QSv{|(ICSaTe1}6_8AO>m7Vk zLJB_0OK?_E4q5CSiEw=?qZGdae2GrA9T8tSG$)zwhpJ1Olje1-`h5?$9gWA^e%-2| zv53d#IrEKJ@KBQv07}`(LG-ej-q)Rfb3NDcv=5+g&|{kSPLD54ykKKbGf6oFGkz=+yf%LZP4LG~R-1spH#$hf0+mbCAWwDXNy zSkp6K@BJW6y1ke)A9C2W`k*#Hdri}rw-$2Z>LE8V+^tg6TPIrGA{53*50A`PAc;mC%zD$dx}gnyrfYR4H)%*n_7 z8k(|ktkQTs)(x(TYn~v5k`a#>!CbP@*fxrV&GQ`Zwp%3&5xUj8>Jq30J_Sh}U%PBJ zLh#W+;G!J$MPPG0=jwa0a`>@V>8j!k@A7i1EW_%^zNk&aXBop0yI(b{S^rOn;} zLvh3jk{O|J!FA`bX93YzK^Q)AuSgB}PJBFEuc<rz*~~i<@(dLQg`}-@^R6U-`tQ z(gP)MTYSCie7NP~t-YH0>XF*(Q25xvDbH4xPzj9mXWe9TKkU{nLQ1h9bZjDt^It@8 zvx|`PYSC;xFOc(|=GICPkfVYuAucH8+Bp+6&qQ>sr|Ov7yf#PI8V_Dm@0h+LWzx7= z&b%d^pmX$_Xq`NVKq#Zz3T_9xZiDKWghbpa%ou>K|2#blIo08-j=t67?YJ*$jGA+k z-fvOTHk-7qKHv+mVBW5UW+sO42ZBo>5OI)z4yh-|YM4i(h^C(Q79f(C3;cy5Es5Y8 zQ)It^W2M*63wuZ<@}r-^>3&N(WA>dqxO_3E>s8>C8EGQ}*|3H3g{Rp|99MW9H0w|W z641qel-|NEuQ2j`{q_jr9sm1`AWjId)w#Gwpg*vc;Rlqv;S&FYs;lck1CG}>M_vo2 zfy5mHm?F3~28WLTGn^?V`-rDq=Bx%CpvAb#oeQ*lJo9};6dfcD;Y$(nT~|3w%qf&@ z7n_a08j)+G;bYM0>Gi8gL-aTvszG+twGbsdPoXDnX?uK0rVmh6{y(S-%`WnD-bvhC zv@Rsa91`HZheLxxJ968wpI0C)io8vDEgYo?<QC)@FPpKM zI!kGQh1rpOxpZ{1TmKOZukdlo%a`tv5aVg>R|c|$Tc>rF)wcN~_;Uhp zC~BV$?`{(2u16@!>oP&+EFj2Lw)toEr%uv%(##(xp_pzy_iBFSy*utx`=%9f|uGDa*`KHTIH>si)+`;5f^)HM616=&2NUBo1+_SgrQ{jrK#%b&OX0o7VRt=&~Bd-CKKQ%|7Nm|@_^HLhws?4YCNbkNJK1qGk3 z33dHjB3>u!dig2vGQ5#1 zO5{gbCSh{B+LY!em3mn@_L*AAOQARznwRj6b%PvxDaId47t(4$VA%JJEHA4 zM=wIX=qydZF0tNaj=GwIK@H*5hhvL~%ES7Mr+L(zDqd>9hI`{Qr?barr#|!OS)q9- zR>4^WiuOC~XJb`;v>4u+43uYsZcs+#nA^k83j>s0qaRfr3Dm$utChRgLRdc_=<=*v zwB{G99XFDSZ&&z~NsE&VTGrudva130=^fZ7SvNLfD=2F5wN7VHPS?;a0cjDpk zGwd{(p8Vo6tqQxw^Rj2+zFFpd`bnc%XLS%h+_{@1%x#YkS99}jIodDudkr-kb5rnwxQM4`Ms^EI7CsjcTR4b>Ai)$0bV|fHb0ncrVWwursLy zB8|9fsqm}}(vrmmG-5kFo{^AXw3c1;Y_-4QI}`}w1%z6MBX3U!fGT)m`-ZNQWI-4$ znN|`rJ$tB9_x#iQN;C6)m~J6}1_JzrxEPtWGu&}kmZM|WCJSkL(n$yjmp}_HJ(7xD z2B{1C)@3im0O)&U|DCwow%wC7W#UY4rV)I{f+-@8TBR`BTe~aD|4Ymezl#~UWF!iI z#f%o=ak>5}h3k82T&*lV$a>vax(&tOh`z-BAZ5x2(F`3Nex~PPzX*gRhlP=*aJkOW zkWF8^HWD}mf@4q6;qd*!r13u$Ykl%ud&{?*9IJQDwuz%S?MdB`RY6-LsWX=d=ENDyKsdHHG-0#@iJ=LUn`ASKE}*z>1SkXDcBkGRqZ_6$KQ)jC$3CR zP*Ves+x9|L&PTqzf}pK@w9`MuhQ@wlpAZM6@&z-!=Gakb0ihvAQ0Z{>AF%fRJolk& z(f&~i=fju^uCO}wCE(j@ewmp7lv|p-Og( z_Mnr}lqu4XJ}4W=1W4LM-UEl;*LlYxXW#D2tl@HnxJ z_iEJaQipFZDQo70rZNB9OrVFAKzJT+j{)fgrH8YlaId|J3vDOf?YAa_B!LqRDXn+8 zhOcqi3b9kpa=gEFyAA)q+-)>~4)i3~enmW*^jf~C@IE+m8|KIg%aVXl*;GtHr9~JK zFmJ!Xz7uQHgJ0dHyQAeNsnzON?`Q>|(5_r3{BSJSV1PMFND@qN_a?F(_KkW=XFe7% zW{832PDpp74B8v(vQLkxjal-as_$_Rej*9F1xm6?J>gQ*#nbRC={F_~@IK2qZ$gCl zDErBL3;?bwQb^haz`T_HaNo7A57X!={bit#1BFevYR749^q?`@!~^3=JfABa*tUP3xp;Cu~tluFP^67BZqCWRsTbWz~ZLYTSgaSZuBxED^#8){t^{l$^rSACOG*!VQRYOqE^z2{>~jZAFauJ*sm=v)GElDNPH;hoBh^{yd+Gf6nE}qB z2C+HZ;ec?x2aFg!bTpcM(jVu}rh;s_$jWQ%x{r(2s7`Obr(e~>z{s_u;B0dC-3ohp zbEFnrTqMSr#95uIgiKb@xV`(smV2S9HJ%U`0hMSg_$5bF1-Qqpj0TOaw&dV(b&qoEP~5la$DTPv0y`Sa64YCk?hS)bBDva~(T{ z*$+JxWzx3EBe5})RwNG!h=!vP%n??Cn;&AA{`#^TXmr&B3W()T#(~7)L1*C}{mss* z%?H@f7gG2x{EUJAsTMx(Wy&6wtbhKT^QIT0#tt%yzwGQBuBWB;xtcrtw{CS0(GjqM z{q%pzQ3D3WgyA0wGFU)>fcOQsZ>WjSQ3B|#(g6HsJo-!~9-Am~PI6L&o?3LPI46U7 zb|BUySdWj|@sbz|R)mK}4k|3pPt9|{_#NF0ELKJd$nJTc&B^|wG7BiDS^RM#{rw=6 z`g%KY{j*a?vJsY`w4R6r4N>~heYJ3q_72tJA8MA`W;9`rn8U41NqC8-bT#j*e%dEoAMRosM6B73$%r2sl%$?@ zg6jaiWDh}7a-ghT^0K-IW;BOrc5x0mj-e8d#MlBXP$Y`&G)HYwgHp{>HFvQC%4Js`h z5LoA^HFZ)JkO#-{P^^zjj~0kr=pL#mh%_!Z{HjQU%hRp+X)4-z921U!aRzwQ?l zI?k?}MHavaXbTd0TT6!o0agvlUrfMAgj?7G-i~01S1j6GaiI7Y6V~;-6Q;HBEc}BI 
zGUP+fJ_#qxV#W01tUorZA|Xp2s(wABJe%u|Y>%fR*CmSF6_IT$VNo2!+te-k0rx|N zh$x?{6y#Kp(sF7dQcbyu)Y&8Y_ZXj<_-@hk*aSk+zddKa38lfdq8vrO-5`}vC>{$l znGFzRI$U~D&-xL`@P|Bz;1w@VB~J++c%)Q4z=$;+mK*qBCOauPQg7kO7~d{-p;M?P z>(K8tcmSEt#01!I%FUVk;XnSkbP-`d%lZr~38+33Jl5kH%pb%r?~B16_ncm>xv+iH zy`8QLYC5GzQZk;&7Tfc{&D_IA39WvejOGlv#QpaL%UOU0CvVX`5dX2@Yt42A8o86G zW0lwq?VRpHYOkigj9@M{YF~b4Uk0sXEgmKtoYr;!AXoBiBR-s&L^NB-Eba$_9R=89 zaYlRl-%5;t>TK=KOV>`*4kP}NX_&MBXkwW4l8ga7m+JU_0Zn)Mo22c2lVzr-*+UXI zP6b@P8mj%UCn4F!Au1XyfR9#JL#?)`K39Xc*9lhH%GP*@$>M%H?F8JSVJRBG*-QwD zWD5OlZtcPN_BAw7OVPjpvXB)V(e8ASM=Vf+V@}=JqUMeHtb4L(C{BXX{(F4sn z6qhw%RdJJCHLfyE^xH~1nKEQiVok??xzGQ#Vz56KAX~%~afv=2BkU$6!Xgs^*?CI8 zhxch5AdW;izzqipK#fDb!mQVWUAGeyPapH!;g!DE-X376E-k6f)UkDc_Af=BaDdI7 zm(<;yc^v#Q+QmcseHAI(0ehs&F}NPWyGkdo3)!|ILGVzQxq($kgXG>!)^u|6GPEvI zO+7#|Ey#6`?J~sGm9n8)WBVdxIi^SJ*Dw1UD*oDefIo1P_t7iMUiBXtYe0vX7&8X1BX;}d+L!UG& zDF<%waS0Nb|61VhIP*6m`2+dGKWY63R;W5h)Kwj~zZ)6fKzQ85A8tvUNP%p9mWIw? zyGZ1Xxudv0ZK!FwhBH%(6#I33Y17lI5O@MLsLPmqIw0m%;|4KFB+G9{aqdol69*_a zD9Kyp1N_@NI|JV-7q=7q+O~Z$;#1Y}xE@4oC{J~aAPa#g@m)+X*7v@$t8J@-imgS1 zkbqXQ&D*qf1Bd7^^|7LINyenZe91>8Hio45oWFai2G}&fhUFKblSKCDkE$$s8BpDY zV4SCzULL634k~o*(d=Pt8}1sR_kB6(TqUOlR`& zTrMD9%NkP=&J@|$(Gdn}l;bmIt|xr1VGExr0$~q+t9cU+7^YEFQX*elySnvj{37@o z%|Q?-6_CXro6I6ZdZJ%GZ?vg+P59SCYJk3`^=8jneK!4F57-pshaJ!#?ojQi5Oy-4 ze?pf3 zG>OW3fAu*!BbUXgOOpn=y-{Fh(`ZfP=gj8aec|n2&*ABLW}ay_`3My3g!2h9^dq%V ze^KchgUb(#leihS))6Cn=&JXMd*`j0z>k7@83ZHHhvF~%8$$nT^1~E>o0*+cxj+8( zF+)BS$cwyN=vi@61H>o6_mH(8W3jQ0>PQzV*dG?^O6QA;&yDpR(4(*6hZg>lpG8n%V) z-6?Vo5>XD89cqRn71~PTLO#uqFO~j?)J&OHQ!A=cVgcqRzRl6AQ&Fftr6F4X7vdQ( zJ!><-*f@_0yPW<;tU5~fCfJ)BwXmn~{e0yCcyn+tM@kyM4K&muUFnvMMm8TEeEJ5e zgsBb)BZD*IxB=OnhI)Ej$OCo?BwJzoE`rj62eYM|`~<(3-2UNj=|E&j-@#ov|8?;J zOmvIpY!v-3nZp9wC4Rh9|D>6;{q3=o(2YHFS-_4ss86IPs>s47_vr1UYc(;^@X@A# zT?#DI2i*b>vHxGVOS&rjyTKpSES%B5;gq%G{rXA`<L|0`t5-G%9b8+UidUmr6} zM0t|56qAAN=KiAD=&58#RZI%)434R>?$ue;86*xjFxsAt7@?=m47?mzU5Y&->;n4GDnh~!kd!qQBA|bR z#y}$;n4&TH;62_f|I3}L;{DYJUNBQ+urm2FQ2QE-yr+$|um!*G(@z+R-;m9Zj>bsz z(<^-mxQnrd>KnO{5dpx6D9V&r;&t|L3R@DB*$*{!|B-Zl*~niTYp=rx7M=AyZnpn> zQK~9cCNQHlM3M636%y6#6fUx@wXR%V`}q4ImBJRZ`_X(@k$Efveob3oVk$EpMF z^gd0WoD80w8EU1RiI4Qvm$P26I-AZ%>U31D;6wli8-ID?>Qb&5ABJKT)Z`kjr}8RJ z0f zLoL&^kN#EZSqt2^F<)+KJd)4v9(~s%38D1*9z^ALETE#+6{`@*$6j?%YjGx8`qI!S zxU(o58y|(|F3tkhk^sZk2oIim`}S)=Q*STQD*DfT{P$M>bk$#J{+EYvR{L|k;-u{Y zucNEP-iSSCX??Q0iOgX0QcUMv*hWOGeP}jnUkD{hilRn6a>~+IFYL(%l?>7k-Kcqp z(oaKn;@1?B7_F^~`cZzl`~Qjxxj+P>iwIBr^=GxOW9<_X_6V@ir4eRg18R0))ad{h z8DiLw*X1rq@9S8-ndtIHhuV3KV;Sm_>K|h9DsVAGPYW38PSnIt7AK3u{YN_d^=tpW zWA_u_l$$--cK@zfnZx}bJOc=`{vRh1XM%7uY&5?Mt)-5qyMw*dIzH zw!d4}KufCigHJ+ePO62t7JWxCt%QOELuUKKgN^Ec=1 z_B^L9u)|;c+y6M)KW>un?m@Q5t--usATJ;nVQz{1ptnypJ#iI&K~y}Na#>E{jWqGH z2VU0AfTG8eu_4`4>P@~9_u*c{XeeqA1f28-7PmT`0DTF zY7v{$EPz?HxZ@m2AcBfSFNqxVN{EArwAH$Fgwc|$&34{u*wj*BSI_cNdmh;sPjN+{ z*g9_n61xKM0N=gaxb(-e%KyiDU)?3mC6z^rgDrklTjrbk>6 z+(jJ7Ky-7JafTAa!TC^~oyJuU9**dw(ey3zK~ZtyT>#Wj4HL^X)|j%kr}`(*{u_n= zvZ!{mJ0_mZ8==P zwBDmXP%;9STD*Jg2~+*NPl(&27iAjG9h2G~qDd3C1mv=qX&<}z{hO^GTgcO0&5WN< zPW{p&;r7dLf%El9=^+k zx_Ew&>rOl%xutT|(h{-n0!n5^0`V$J)&CCx3-BG~wqk#Cu+^~-EUoh!j;@ruSHXh1IKa?3ndu) zb6EI;e=9QX%%#B_=o&V73+d=T`f1#iYVD^zk)zp`Ayz57X(D$}k4#(`KGq?K(>Yv{ zl^i4hVkE*x69M7E+Q);~ORR<-g;cf@7)8sEJ^Obo3p_R_1Gvb}iN}%U&oHGa*{)Cs zn|Ud6wVI1!{Zip^m#6aFjlPt?d4pDjhVCFPO0XhRp4hfG7W@R;Hc=2~)7XQxic1)V zDHoV%%NyfA$m3t}`=^~w+uos{dWDQBnV%rx4IIEfCire=ztrb{`v2H_&!8x`t!q>< z5JYkk5JV(rCFcx+WRMIZTS=007LZJXC<2lt2T4kfqSEBhWXYkS5rhVs)Wo~m^X>DV zeeUbtU)B9}tNcTCQ4e(YTyu>%#+dV2PfIGfCtKY+i|hVoZq}ESYH3pLG@biIHlsII 
diff --git a/recipes/use_cases/end2end-recipes/raft/images/Num_of_refusal_comparison.png b/recipes/use_cases/end2end-recipes/raft/images/Num_of_refusal_comparison.png
new file mode 100644
index 0000000000000000000000000000000000000000..a860e5e078ec60727f5511aa343e1f3e9049e0bf
Binary files /dev/null and b/recipes/use_cases/end2end-recipes/raft/images/Num_of_refusal_comparison.png differ
zg;WVn22~MQ$mePpt$FLt_6x;Vah3IkZ*SvrbYG@nOizE{Qr6|kH@6k)2EM`zB>CkOGh+pMS^B3b^xN7wj?@+sZq@w&G z{{w>fnYA4~6ZkNX9Fd>_^niO1QLcX;sKI7Qq5CmJKcB*y3?x!Jz`5DHmA)nd+|HKT`M>P(p~ICN z!0qg=FR?j{f1Lt+*=-JOMJlrVPKS$y|A)OdkEgob`$zAjBt;P_QzfZbNs5e%RPI6= zjTs`dl6kg5lT4MYNQMT<5Mh;>217DtmNE0Z%=5C&=lbH__r3T2J$paTxp&^ zx>uwW44_l3;oIFN`xJjj{QVrSqH%(ay5x9!$bd0(`&=_|SWZ!y&Oi%&cFB%+pJ(4p zdi?F0@IS7RZLmg+)^#D|NC`5^9NBYW;H<8Ml;5Ci_1IXu_OvH4`PJJ$6HP3=#RCP| zibo2?NpP>;3t*r4FDtA3dih^7PxEWp|Jw7v4Bjtm|K|hi*HL%pteu2~nc04SZt3j` z{+vaNN>p*m@TBwiKHC?KLk0Eq>cBC~-(40K>K2_led4C99b_o_2Yu8vD$d`EHAk+d z>;*XRl|!maruUhIqzCc-V=`yy%$9x-RB2*JZpluv8>p>MHdK@X0fr>w^6>gpOI@Wi zX9{|IO$0SbdutQ4IR@$y1AK&Xjv~p;t=qO0<(?f0@Og)g>nmFd?Bg2 zd{8TW&P0?@KB5DlW8(IWQhZX7VK!08$#Q3gC;b^`n=UjElAC!45dxh zYIOi`zdXFn8qoH}g+Q(D*O0!tF_-}gTD%Yv`vz~fc@P;T{4yDGhLBZM!G%<`eX z=c3ea#b~Jh>gHxaJ*?}JM*n#6gtq{!d~X%PT$L3l3>W%yNhb3rxVEk4xu+Z|ntz*J zCs;ZLX-GD_4uD+(ovwbM+_@W5=AcY9rr?2WyeVz!hk1vXdxuy%bnGXVZ~Q^;u@h35 zB=d?tH{pTjER(p06K7)j`SIXC;8_m3!t_2ZmvU?WHr$-KqysQ)6XAnYmM*Djv*u2_ zcA0GRefc-h%Ri#E4XBo*Uhbhg6oh-ZL;G&#tkmu>mvJb!9Y3Z+sx9b0BbQlMICRDYj4!_`6`n;}h()wZ zgDUIR0yyB>DAEgZ@vE4u)8!Fsx*?x6olQoPwd~2SeVBI~bcBV_rW}{`A~Ncn^W zYT)%_`47%auEBYU*`4^uTQq?EFtSwdWvz@(Mo_tXh#VuFusEC6_d2v4;VCaW_eCXL zEqZjU&fbcKb}f{Xw>>lDO4w}@^JU8*)}R*44B2Wv-KGprV!MU*KdOC15@6=3_EFEA zigBrV{Ge`*?4Od7QXT^HJJw~kup;b|7adapDBp6-Pm{}agsKN7^{-yt4FXCJD3qs1 z%y+qEKNyL6d`{!mM|0SHI)TwvgncHaO`9 zQK5@$froQ){%)vcj`-q)3iU99T6bwEJ7GeBumv;^u5li{v11s}C^NOGIjqQDgh@IS zR99a_JOE&pL#01;JvC48_BoKTSM++5dM-x_FOB>j)C~9WFFp1Dy8rJ;#4LW{3;ycn z;&5q~Wd6j3Zp%fZXEtTx7;`;niV^b#VW*XbS00j<<}xraNCG75(IHY*dwfdSeSyTR zL-X!mI9fJEhdY~=g_x=0#Ui(Zq50fQNy(AVD6%9@v zR|w$ti%ZDZk5~mTZUnR|p>MmRuy&#LT7YaE-1_ZyV{}gZ_Z|irq0!L*f)pc<%(MKx z=okc&i>s@O!A2v?5&-tvNX0TFIQVOL^`jj3UIeB=3SKUn#Ylh}iJM<0P$rWA3M+Uo zX7*Ymw?{2@UY-AMo>!&Di!my0``zbL36ho3Z(phCBV@wVx97+v0{DjL7VUvf%M=40 zP$s~{fZ(w91^DEV50+8e(BFDooUTANwxz%*&?+h5oGeZtQz1E_w(xbBqQ! 
zh}lg~QBr8AKNs#%r`dVC{iuVjGgvlhM}@n`7naQgf^nW0_jPJEV=@<=giJ3G zVRWYVBkyrO9kq9#EObUtDl9w%%B8mCF1%(p z7=A-YG@sf%6pps=+~%Va4cUh*?ZoE1EvS z3%qEJ-xGh|{6LqU`~)u>{nm%yj~)_#w4k%=+%s2#8u!qYzN@VSu_VNobvz)yp_r*~ zAA^7Xvxj|xW_{$?vZdC6Qs&>OsqIJ|1S~^cyk8^ELjw{dNBW%CvThC$%KxC;J^j;eL2|6eSAm% ziu$D-7ZARET86jwTJfM|{mik;sqQLT*9+aSGs~ln_4kO+haMgmken{};hOI@YRzQw zEBs{e(&{jDaU36e#il#S{JR%1sk2Xzp+_mdGMy#qdeOtd`3b(e23N*N#D|?^H!N-E z@qqy`GS$A-V(*K{S+@ZlQQd)yCB`x=gb_NzCS;pwh{$&N!r9}N1R`s}Ai;LZA%Z)i zfR8Y#Mqnktp5gFe!+qNojLzmV9ZB0t7UNlVPof5mMD}|)knH_b|I6Otk2}`*@A0_4 zJ+WzWBH7WT?(<-bVBo2?Lt!~td-`_0+U!-$-Sl3f$T6eHHse&*A+cG)AqktlpN@?o zLIwRN6aoq1WBt!Roy%lKIk9TUi=VIhRImCR?tIM2m@qm)FJO8LIbMU4#|UHmghy*| zeYscse0`tUdc0H!DEQ<>hj||hAXR4gDj*VMz%MK`L{Ky?TjjIzOIv;*SfRr{)w0z> zsKb>8W^t0vrd_c$qA_EBdo4$X(U+u|udjK99ZCrKN8%$^O+!n$#}&4thrW9YHHnA~ zH%Z8pY{F4Ryf|ff205SHh=U~s`dR|1y1jvcfx^PV5AzS2RzF(gxV)XT zkgvONV#>JGvJuXfGi}|O{OuyGGPO%D__O`{47Ej^d%RHE(@`D2dN>re>KDNqGwMQ% z<6{2#fU>z8^8C^?=%^j#ruOwGsWu0OF7d)7^4+K-#>^DR2ae_?!~kMuG2)U=acmhO z)z+O~G=8|C#&WCQL~h(Rp10%Ihx}Gf@kkypPIxoEa2q6aX$%^B^fso=#IzdZ;3 zn`|RD2T(j8JaVFsMPqR;kt*TRRRK<8(3%65?&G0~S*roO^8nn2ACuz$b@%we-#reO zP`+iejDCg+1zyl(+pB9Kbx)rx;G+{YtNl_V@XWc^U5+K+dx%ArxaCuI8Hq5j+lg?3 zTG!M05YDwe$IZoc;w6tC(zUY!(J7Z zzoWp9U%FfuGTXlJbX)6U6ZV_LoNmcLzU)~#m^H4~2kWK5jUDJS!+uBbppIFxjHVOB z*{Z5GS8-RIr+S24y&LvQciX9))}XBGyt%Q~3~`u=EYwuf#`y zWRD~0kn)h-r!zZ1n#l!-dtP@tTyy!Sz{r1a;x?S$;U-aROqLS;s+ggAn?@|l^sxIl z7sTBPdZsH2;R+a;&n9#>KfzN94i3JD;OnEa8Ct7uf7#HNT z>vHB!ndh=ux(#Th1Ox1GaFX2)Xr))fTy1!qGUh|Cxu3^lBRZjEZ^I4Ok1}X#!kUgR z#?mI&AaZ}trL4;Vk$Zmk?v+R51>ezc>xHW-noa=nPF7=zvQnJe|6*eDnCYPPLS|sj z;ls11si64eJ#pl?Sc{{<)K#KDLfWp{MBUa0neUZphM+=1GJpMF$DM2ULA3D7XtIu( zbP>N=awR<80*#gXPTPO1Q`Y65s8kfn2PdEdDAvB_q+V-egLC^H#3)8qKxl%|7RIYT z%oo7asX+P`$)y^p*n(iz=uL+%AWY8JO0K-nXXGmS7G*oFoa)3z5 zX&f?2-1bz=QitsWknBlSM)Z{JPvsQ+?Ke94Q^(~lhBdy+Bx(OIOhm06s@Qr$rh?X& zZDFs8>8DN2H0Vs9fX0=+(1Z{C>ov_Jw{u{f1D>#O)_hX6)sRy>F+yBnVR#Is$9=0B znBiSvSqzAvU`?udBf3Z@ER@c1v(Mi>iF=faxX}>lJ%Dv8XQoQPp^xj@|g;bjtLm;Z_RM*DysKFRz5hYFbx zkjSk5AFR?}gY#c(;a`LEYjBp4zp&B%F6sDR8vkD_=hw>le{kiDJio1RGf7fxM$aVX{+ zPeNMz#n?xmkFc6uKJzTEUg_JOgTbYGxBS#^$~*F;_3}M@W-Zw?d0Rb$#l$tC2yf~*&L-&-*;GgNTh-Fx` z@z(ounk((G72@#go|F3W&=rds%gke;7pJ?=47mz=&>d9ls3V@GS@G!0a*d-OdZVgi zLfpZpnA=vr3b`Ke$;f?hl?%zW@OU2|K*!O#mpAWv5^3)AGGo`#hwJ3ohDo2TsBHLN zk1-V=Mo!r}(eKHg#N#7pRvPO|hOmL9 z%5F+lPSIGwzyO;k7{92U^~a0XYchY5#H|eQ8W>=PuVFQ z>f00aB@Q}&m)+1v;CN6Bd)f!l$*TwtF63jPNZ~2EG@rRPjP3?VYTr4y-f^$VKD~Pf zup-PJs6Q$x)itbMgA3#G^TYNl8oxVwEI0v^Q?z8#$^f&$0ONK@u7OL^;taFt;E&R( z8g!SBj_`%iENjR2k$iWTxeKZxREoJELecajPW;Iwz5t(mef2FYLyw)XRyUno#RQ1{ z4d3~&>nv|D7uaAa%O0@_j9=v@ix-A#aNccUMD#*18WrH(N3FRU4L65>#80C?oHDkf z|J#a41H#o96WK_e2l-sEK0^sq?lpU@t61^KeXadUQPbo!T>5c5iWUySCfW9RZ`%|FRp@XgJ!ik07!Vvfls{;D ze9~4b$%b#(*-_knSljgXR0>#tp-{;cVw6Z6EPt)aGWnYypMnW{e8l}<8S2N|NlI$M z#dr4Wh+AC{Dw8zbe+F;A|BSTJeo^j9pD(r3Yvqckxi#+xxEt)`cDGJHQ~3(^=%X=7 z@g~+l1`Mmx5;?Dd1-{#s$hFMn3(>HQ}W5?oKh zg}L}IIhgd(2{I^d9{1Vo!mhejxOl2rV3bl?Jese?u3yY3r)Xe#kYOdGv#uKI)G>aQ z$H2B7eFqtpkD}r-ye36fW>Xky(GILu|D(Y&2~Xm1$>%eeUE=b11RdUDQ|+XFgWCu2 z=FXc)oNnquY;h_q&aU&Pn{o}shv!GFr6x;GOKIl|j$dj^&URiH`OvlB+O#mkFJM(9Qg)pjm@{lez&K^R);5#Tn?q>`9F2XB4CaFlV$q?n@iwtFrp;6Wx)2 zslGBmO%$Kb=^&lVIW471adM5hufEHENW^VyndNqXQUX=L75z5K<9?65xS+qB_5pZT+)Tnf^nmEoKUE@ zwo3$M@AJggq%Ey3H}-M=&~%Hh&^Pi5xsmBfr2qK`%>664evdvquy_c*G8~YzXP*Z^!HQ`zg^KK{W~+C-L^Z3R=XD`5JAF z9NR|yg^|%N^#haRx2Hrj<%-?UF0A;z%cGUCfrd5%<>01V3!1|s4KE8PB7BU;BUr(F z%Hs{@(A_R}K@CIwzFc#9o#p{4JBb#zBBf_`pD%JcvgZbg0;6xcS6X*Re)Up$(b3w6| zeXU7`rsDcnOrz$;2d60!Pe%Am_FcK3u)@0Z4*FJ#7qu52!?FGzmX2AMQciFr-|RXI 
z%pYThUop3f*@87KGTWtA&e%>gj#cmaA<5p+f)g0642*8Cei_}~`^6>cim3jF^_p`ISDlP3^LSNo zg>IKh#*k6@Sli`j+c?&ifTLulb^?m9Xa#}~J7z1ag>6F}s7h50=;xT*Ng_V5(?r&D zV7@;sJT-0a#)$J3l?UH-phDMwYceiA+on#QcQM1MAyHb2&POC`nSW3Y5AC6yB z91LS*H1-nX0=8-<$f&&`lr$*`O&v(c^EZx_d_Sn}&EcN5rOuTg93JC8L^yt~OxKh6 zEZUfib~IWb-{FqmWx~Pzmc_w8HMdHp06t#i<-5&jgZ~ar_^F#MBm36iHm=sf!chtS z2R0Sf^>aFCg=Wo_^f?X`8?8q1fRj>zNE=a=mpiMRD|fBYQ^_{nFSS($sLna)IQ15FeNtd*9v!ik%ArI%=xBHwFh<;G=aFQ!%)ViC z!vdZ!;3AKht?^_=;w7F-6V3Cn+xK(;__5$iiX9pYWFZ*fFw*($D+;*Z$U{taSKpy_ z6Uo+?%^qXA7y_U5hnVFYV*R`Q7Hf9YW`SkZpuPsj{_GHIqJ+(!mK(#ioXH973~*F6 z-yol#AZLtuDMVytqNhRMp&R4oQG`)1xJ@(m+l2Y4u|(f?1=P%31&;u zk_d{7;xmcH7F^l7t6ZduJUkQkFGR;h8cHM_b&=>c+GSs=;5rz}Fd9S_DiEnDWW){j z+z}Ws;yoj2WVY#j*7yqmhU2o;V`zv}cgiu>;qKe@Ha9yYL`a)CHj{imj&518PBvhj zx>u9jXlTbEQs@+pt{FaGk*v8d5vxQnBkQDm=`Yqv+QeyA1sZWx<%;#~*olRyLWR*F zGBq%)jx60+$0uL>&gm>6qNIAh1wIZU%nS-spZQboB4i%-2bTK zNhDukOoPbsgXp1loK4?3Gx%Bn=_*>&}(XX+G9~w zudnQ|TygOkrzu8UBHmcSUg)?K$xd8mzWMe{H`A1jInk|J*XoDu0@?DZkBEQR{Y7o9 zoRMjt(XRfo1EFz8!ktf6Br`p+TnYJUnPGJ>GkxrXbDaa8Cb?ViU!Ci)o_R$l+Oxpy z)YK?2X7@#jcu6KB^CvSJc6+8k+eheYb-)#wk&K3;<1;Zwvor@3YnnB3tmpNm-WiIA zH}RP`$;k?#jgF0j$t4K?_NNAJ-tkTqhTc$F=U?SOknK_B;jk^R%HD4h;+>l7y2t1> zh{^8!rt506Gv|D>sbQ4a`ICnG*j0JdCRE}gyZA0)*5R_-fd1tRT+@1XcZrJ<;_kCw zpK~swh>_W;0WV6mrxR(o9z~#Hxly}ey_IAUI0G-h*ElgqJOdbP`X8p4Ga!Nt;Xqsf zy~N_Zn6!n#K1Z@)ij^a4)L?kP%-8oWbPie?n*QqC-=i+WE~>D+EY1L7Gp4IP$tBMtETm;zY?p~sj&}rR24Qqg!|D0>(5X7;gpG!=3F9uW4`m_w zDtG^eHa}LSsyIln-sediGwQ#Oo=A@*TL#1x;LKmj8$$*M?_O%bsg{q+?zhH|!z%f~ zUBFaBMhR9cN8*9SDyCHI4kVNslha_>9JHBifpCiLD14eX8Iw zGCFHQY)QMafJU-rJe|9l^TD822)^(>X8d<$^d)U?V)6Fi;$rZouSE0`K#lRXm-Zb(h zf2NV3?#pi;ZkqaN-Q-BtFA<-j*y(q$&LnBxG?hYi+L0G63Mx#sl@}G~Uy+FLBYic(h zbaZY+hf|9p{_|_h(bzf1=kj$YlvJi8yz5imF=x&>7{}})@>uG6E5OkeKD$6JIh2;x zkEg}qgGR)bclHuC7K<`$5yi=;=bU*&6w~&cMT_DqWZ)lB)gBI!6LdJ@@1$53*4(_Ylq)?)E1 z>HjO~|10VLE9w6$>Hn*;_^Yz`tFrk2hsxqLr{<`rC)2zd$4R(y|K#9jGf(0HrBB>gw5tp*Y3PsA2q4bPN)lNKQqM!& zb-t+1u_Zr0A7Ln6`!7V?k4wyvKtSPq78NPsqhELyKJny{sf*+~M^WhIKj;mw$NYYn zo7bdo*kTYXy*~x+zQdMHQ`=~9*L);-kW=~!DoU@yWwU#LhH}942yz|4yOezbZm{;# zf?F;P@6FXfdujFPb0)XFIU~1)@}R?2|H_ps@1BBJeev$*HyYq*ikP%_(c--6+9uHV zA3$I4qsO@q13{$Z%KN?OHnvb=uKf8^>D>{zVp71Ri{^Bpr4|l+&mwQpypMEAF)S z$BN%l;1WBP4asJ*kvM-ZC|`+MXHobSeuet-&eU)-N5n!WKb>$fLg?f{_cHwR+qZ9R z-)M2$nCfeNpo3t`k+T0mUHjUVX8s7*LrC3vrV3A~0_kZP;WQCN{Edce7Y9KcZlgV< zoUUv-Bd5m-A>gGQh7-(kiYLD$UqWO52KDMlnvShmgDX>P}%+TNAcJKfRy;FYRm4XmCD;Hnr8Tvt%GRbpi;#l@23XXih#(;36h{= z!BsZ)4FM2v>CGT?YIbhBGQBQQX=Z79 z#E-%7dcvP17OmwJ3+b-9V3YU;Ch=U$ZXX8CNAD!DH(stqv_UOm?b$zg33^erjn_-P zRCJC#VtFR%Es)@q1bdGjH|2^Shh6IR=IuPxnZgbP z@F%vx{b5;_%?Z=LF~RxT2N zbxi9r{ZfDk>#RK^+DpBUU?>%|671i1Judf z#X5x0HvmonfUv3MB`$nR%sauZy#+MkYuoN2g>ch~LwGz7qJjtt3UW*Mm`Ypw;bXLA zh=1-vS@n;Tdwcs6r5wcFDR#w4?zL2HwF-4@P}+g4*m=G3V?v9Ab1^gD!WJ8hg06-U z8WOY2#NGSaqsqutYcVq=t4$M;;Upnr??oWS6pKtssD-Oh!n@zQs{}eTmSH7@QihU} zlDs<`IrN>KGiC$DS9|!wj+8ibkNrmh5(KBHcM%o>sGA*H-5fX_nyf~{r?dDMXYtgY zB#?bq&vmEhc~=8SEX%3$p@L4YwUmlpDZT4#xZmh)$pB58I_>^7x&4o$JEVmgw+|T8 z;-bpx?NHL>qsN-515@o)t$2x?@wT6I?hlQVnH>w$8{6L+4 zhsEX}6GL@!Q9@?H%*9;wqtNMZ+bNLCbw8I}hW;JD)CqWre!`2Gn=>$zW|b0Ycu8+t z*6+*JYgkWatRpT@HN^vk_L!WKq2a(*G!eETPEJxe8vYL3UN|m>fdJpv>2O3)7kWAh zI4pfd%B4hfsB?9NbO4n?_@%XLT2#nom3Rre3M9@}pKcQ^Ns=Nwc26!?BjYe&;kuA% zV}AMhvtq)R2A|QmC*_568a(0`%PwluW2;(KPi69iTo!#ZOCqf|s zx~}=h3LQ)_vnLxV<58}f8$ElgwW^x#sbiEF`XM?o#$Ba@_b#)_XDEt*^)dBmXlOV( z`0mRKYN`;8IV347M^AxzC)I8tnmQ5D4kz1v#PreiL|0~=X3IjiSfOws=Xh2#Sl!Z5 z@?vwY4;+AQBTcz8?9^TEJr3^1BaW20NhE51l4=RUN+f}iBuiYLLXJ%9@Tsu3!XByG zM_F>mYtxTj9c~D4F&{k6wg*cOa&4)X&SfSnbj_yCeaL4VE+g*QJo8?xBp;vtS?J52 zB%2NB?K}7Vtu=S-SHu=y 
zO3KR%zbo)a-#hP*Uz||~`!{Y+MW!gsp=DcRswIz9U9JzLxTyn!DdqmH3#F~THs#U| zBiU3KDq#0ibg8}e>{LbNon#x#S5t%iqa9Y#InPW|y593UGJRh!fCtvo#7HuiEB3g$ z@NB6JwBtc8Si}yIg${rnr}f9i>06DqQHKhP4`d3;Ec|3*8Xl`0E&d=)lMn9Zx9u_i zd8eU;@&EvjJV_foP@iMo>Nm3!z+57S!w7ja;OfcS>taO(Of*;n*|8F(9DL04j)pd{ zR<3k6BFh7PIELyky#jJa{VelzDK)zJV`F2n_S3yD0}AZwwAr;jSZvG6A*c}nacMg; zL#jICGo3S?Z8g^yooI%GR)|PYbW(I9YOBL|-QHJ^wF$feAMe#8%i%0fVzsax4n!#VTbv;RWVEA5Gk$OI7w$!qKyf{oo=JrrpGGW|uQF)<- zOlfJeO04sViHSi5;kW;Vqj;OmLc=A*(_mF|Ni827VyT&iKm2>@pc506=y{<;-_cU7 z0g6&fy}-2AsMLBeYIJHZbTt>pp{?~#QrzeWOh)H+sR;1vZmjOw0$18+^w^65dQ$$= zZcb%V!cwEC<0`1?G5lsYMT+E0@n}Fzp6jZCa55$+g~bQzRl;jSF_Xmxix#J!dSm|| zEy!ry{tkJnQ?#wsrb{pN5z)IamL-}iFlc6oZl1>u5K;K%HM3Nk19FOzWo1|iqZyfC zV>Y|N<4~G^apaHbTbflCGR&%Q*lPO2@EoR(Qz!F5H7!tbCweqn)N}j%N;QcU7$_eQ zu^S-Sh6Z1=OB}4E`1?gZD_7G1w;`Fc(L;=aw0UI~M*|AIm}HJRe$Q-~OmOUxY;|f| zEhT?bRsfc=YOt21cVyHNGk&ZahY`LiarCiQ)1%0)9jmx7^5O=x3pg6SZe-9rIp&CA zAK^|90M*DhFe*(LFD@bCONS^I6%7x!xvsVM2rK)9D5d6H{Jj{=phph!g zF$Pf_yql)y_0hwq?;z|xy14}S_kDR=%^Gh-Mv0yVQYOsW(n^4T-`9kgwLMT%fFWuM z_)mSwU)}k?y7OTx{OZn!T?w)8ukQR`5TIWWpkEN6l^(EP5TIW;L*)Da!WsU;8UDf< z()_|1{(lC{9Bf&Zvcd&`MZ~{=&9eXZ0-Mz!RT94;!Pd}JSRe30QPiG>hPg!oYrju z48!C-V)Qh&BEpm5B8C@I+`H8RTF@@7aw@7vZ*7Zi9{stUPy zQWDd48gHamyz-V3?Vs4rha z^vnU-R>6toz2MhhfxqlgQKUuSZys5YKyfO!Y;6u$uesJiFXly1cAeXismm_Pg80!6 z^_Inpc`g0^I3I>`_k{=LoIjo%9-5{Eu*J$4y%CdCYYvLaxKJ!ySTGS2T{xGT#cNca zxx<-5bVImG<|$*SC(1fNUaUFfgtp{nQZ=6`7ka3#kg9)F{7N+ylpOn?Wnq~f7^nv? z;qjMH7=NU>;Gu}AGMg2d7WPiLPFfm}>vA~-7%2A)tva2<2ZKKj4(>2(7zuEf={}Gy z^;*TniuFBbWs>p_Mws$S$R|}ch}jvf5JLP>#4DQO`=hproEuJf#wC(~m0nt#RWCh> zxBEM8Pq|Z`s?Xfb<#Da9X#2QWxM}vALS%-4BKd(l9gpN;hoU%xW)Ao3BUZ;}FW(G< zMG&b0x#N_aeX8uFS7#m0p!fY9p#G;~bgAiUaFd~$D+JvYsPA2E&lMghV*}1>pLrJC zayHcRUUS5vqhJfcQV_qWuECyjC*if282bhOa84Wc-2I|iJrrTL&!^0L`?Zk_ zot*ES0mS_ZfQ1tn1H1GlG92O~qg+Iicvn`THmt&o-ALTVV>A47^U-ZGNA{aExKQf! z>msO6)E7q$l9J~`HYAo2>1ZxgeV3f52zKc*wCYVfIm&LxA=el<1(Mt>px5^R&y14MnRJ*5Y($&krtiP$h{D7Ua&B zYi3d&4L-4vT@Er^MTBGwvQ}q51zUl*{=ab~{70{Za%sbnMA<)w44)wL@g9cvcHa{N)S z+oIelJ#3&PYqr8*E~^GxK|AD@8z6W|aIN^!^}sdK^Ck{}T9u=aB{qi!n)5B4|9OYF z>D3F%#t^G=mC&U{3^8a*EI)J@qQ zc)c*5@AVv8{=nd@%tS*@q#@6yKj=BvqgI{6N>7e~Wr8m?hn4P@WFaZNAod7*5~X^K zfE)8LY6_?o%kTFjZs{+>q#yRdNB2z*;K3rhUl3T~wa9=zq`q&v2fcptcmW?U^ixA= zXa#M*NW;heJ+OG=U{Q|k5&Dl7FCg(a6?sFMmreO0$h=>b`AD(Qtw`qCg2xI3f4b=s zGY#4Zh?OJ$r!b7$pVVFi7KiAt0}9m}VJ6C_*5KxE z&a9A#oJ9mMkykRd04f>A1RBEjG=!k`FsXpeq)u_M%su}TYlkpx;{9f{xRM|mZvyc1!6lxUGl(aqQ+<2G(FPm6&(09>j9z3K|r4gb-%k$kH%E->$? zZ~`0Mqi}d?E%Y1(iZ3BSmS)23gOEuBT^`a@#flpkPvXX^D7cn3e96IBq^Jxbd%B!_ zBl*fPJ;+ix`=3rKCuKzP-OjMV9EQ{I(bJTYx+|wBu47lPzjw>)qyp29q2pF$G>@$z{(9_x`9V1Bf9(m^frIG=`u|VJlIySVjIjIq44SPXObeB zPf&HEuLyetEC8@gCGanj#GsHETCxhg%*sq4b11VTXN9(B3!)A%2|fVS0mCpR8Iqx= z?4jG}IIi3Pwc?c~&~Gw|z(fF8&SF4$MGPMMFR(be$zdcj*AR`LQ1c zA})0dxpvFUOo51*Nt}366v}3|YF8vTQBX$PzTxi7Z+ z;7dr1pt(UQSwyaQx0w_4XcpOAw^zO?i^w*N+%Ci)Fgf*n_!&0v=iwYWpSx&-^xNO> zLIZ^+7F8J&ls?NqQ4io<=W89;Ktb20x(1HaK1cuGJl;bEaG&KESgk)$i?D)7D%Z=` zYg&f}Q&udWD%cQ>>>xGxS#OzAOA08pijoc|?~_yHjc4PN-LVAbZNhAd+$iRS{OP7k z#7Jlc@4&b(aVq?Uu}9^Grko;;ZS-#0$Z>e;vH+98jq5cF_Sj&vzA_WY95UVe12ww_ zT)y?&Un}nXqrUkF)i=)-k8hPz47W)!2ev1$GlvI4^+5@ByY` z4n3G-&qB3Xc2vZJozZCB633u<{*5F*dY@NN9e-7wNQK7FIpw!2ea*f1;e!H!@0fs! 
z_ix?~r`-`x;yHO~C7zkP3ZR=wf9 zfNFRHYjI6PONr7%?c0DDWd#ND$d~DdV=G7H4m?<7caao0cMpM?x_XqfS&Iz_CiV@& zp>$=no%kO#Di?gfRXM^KigoTR@%E)U-aoPp@=8>3)@e zc*ErYagYkw9l#TRam^Hi+oW?gyR&x>xKB&8n~I2bleXET3SJk(`~yMEKO;#3?|}an6oW*U*d~ZaaTMsC+PXxXjug(gtGhRvZ~#@I82)GR8Zh(Uk=$ zQz1me_3X)7)ISfM@?V(noTx>2c6K&6G}MRJ|ET_^ch3MYF#1g`$9km4li|QU_>2@{ z$q!>lW}z`xLJtqJRsW`2g&ux!WVn7d0?+B=(7IVzj3c`tz4xsa&xJy6_6v78w<}Tt zb~-0hzsuwOLqy7$Nu5(`KP>yKdauAa+pXddo;BMYiuLObBxD{y5*eSuH0)I)LK>K~ ziLvDsmvwZYhd3rUER5@x0Og%w)oW#>oP+e1*eqQ;I646M5p(|7l%#R{Xh>#y2 z0o;?3fVqZ-^jkTvod)1OErPzLA<Br|aF-uX~*fAo$*z&K!&}GS0m*_2XSa1q%*wZ@TCl@*vj<)UPh9 z8<(?B2f^7s^oXAlh!*=ouLAx)em({7gH*SGfQ8d#Fln@3E!X+RbdYKu|!%GN4~{2)JM_@rYGZzdnaFI&r9`aKkkS->2@pUs*th2l3(f|gE~D-Nyy z3hp+FvSp{h#`p(}Aj}7n0rSdCm@o)JZSy=$rcpek337Giz~oxO+>@l_Il!G{6ZmhhR49(UQ}l?R&8T# z|6jV3KOA8pUEFC%9S*Ta=E~1B%Qf{jgzc@^I zZLLicCc6Dj*FfvrZi~qZcAnSEBo5 zTt_Q)p1_0TS3MZVZT^-FYk1F0nh@n=&h}F%D1n+qNuQ86coHYb_x!%cEzZB8zE4iu zDj*r*-!-I?U+FbnSlLZy&^M|;hgUrO^oI&a#)nmVLdG8P4eQ;TzwG%(c2RZ|TSI~- z&=5W8Z;vq|ucQKAMi!#e1qpmE_698O#Z(ggrKjp0hsM) zHkv~Re3&$2C2uNoS*t6B-Dux&HAH;;8CHw3tN#97a^YU@B)R&$+uMr{`Wg-IMc{xA zh?*z+#J37A%$X0F2Rx< z81-d#Ov-vqptjCVjb9FJUqOcMRfODINp%ek@2Gx)MHZ;cwu?l}DqVJHw!_qzBwI%P z9UB{4*pQ4>2f!#2CZDnK%C_Bxmlm>kTqSUSqh@SC@&FvfRs3O`HfmYI-8xoQCC_XY z+YSgS;@t&_^Mr8MU8?rvgl|MM{ekYaJo1O#Cai?r#`e`TYcU2y-Q~G|TPv5tC1)YO z{|t_aD(GHhn)AhlbtQ{t47cjaW`RHckzjot94~)wT#_9P@kR|7C|Y9?ltJSTd7inm zQggj8YjUOVENa?zmloQE7Tb3ii37GNC5H$l+nUC^RB6%i93f=^JccmOku*J^>mOCf;|I%r z`E#5pGp04%L6OtJ$|tU8z` z8FC=Qo%>^!@RlG4-GSgGAO}4cSw4P@VPg7wuA(pKG=>X|jFJWO6%xk@qTz9m$PSj- zdi&i6N=1^c4A0gbq`uGfX(o*Q1V5HCt75;vpPgos)0IEu@$TfJ9$V}@#X;(G6sHPH-^LYQ+p-WV9c1ydy zUFwIZPaSfkr&ehabDkpZALh0ey;-CNwOq`bumwM_Nh>6_yOEj>?*_@@bu-e+8kV(fx}d@U`hA1K_i zZTG7j3g?B~C2Gs1sO+m!#4hs;$ShJ%TGmz39nHwZ50{qaH+3ZjH0*B=7N5*Vg;MpI zPhnn14S#;nV$2%dZ=x}MCnC;W<2+X*)0No7#W0O2_kjk<%ud(4#3&n~d(vcTjk-+V z3A!Bja`0vJ{OAX@2lERp#ibH*Ex&Bf9J3Yz@$6w=YFAX|Nu9Qftnp{?+0xyj_)Q{O zsTTtrj$4vj_TPwE&#xQCtR9j;L+me$SG;TK8>Os}?>f2pp--s4cTj)Cr0POt%7v6< znA=Kvm;{OWJ!P)>+cGx4AQzKfd4qN4Xs9WfxIyU6a&plM&o_+Sl5b*V&4NaGu zqN}bx;xhl?ZjUR>;v0=fYVPEerQSPt%G^`651cF6diJIU6`%e>9)c9;^5x7_N6WCn z0WlC+ozs3_pq)Y{5T4mqo=URnC}6)Fm%xsswYZ&ocT)8(v%j&szj>zLO`-6~u=t1I z0gVB6>!PB@4)!v)thp0K^Cx?Y*gehHDl&Y; zRB7>0-C6xLmGKyeKj0saId@klDQaf#FpxYc} zxGDJ}k90M4-HSzqEpz?BvXkY;6rgxMCtss*`T&>x@jtzR5Nd+!SR)Ce*0 zxX=7d>j`&3+V9-1)n{`PWXM_L?R5l7UB_nzA2HA;X?KcVx%k}VcXxgL?7hk z&?jww7iL~~>KO;W{q^?F$-t34iCm$11^!8=I3=g8JUbq2wea%z?St^i!TDXH^ht`5 z8&8?EK559vY@x=4c*xnNzK)KI>wmU5HSTNtI)i8|sUxkUv(V>KSN ziuib~?nIHUcJZ0>5`J!{@mo!$j2~8)OK2STI@s_&?$ywu%kIqfQayY2P%di`|0n*f z6kcs>lNY7Ukwfct+?0Job1lrM=(*fj=YE+jOst{>f!a=$tI{XZyw+$v5KAGwbywIP zPB?vtoW-|~LVY26_Y#ABWtZQcrWZdu24_xHx_|3y=Q$#OUO}mLUfjskDt&Nb<0@K4 zp&J4IRSS=-o_>+nNG|M?I5VDkqQF$J?^u!1(`}k1N2xbgt)9#7tbijjUr;=cVHNEr z*^k#foWuO81jVlsx!Z5cz6Fh#q<59w1>yvDxr27CqH1O}(c8a2c5?o#Wi2IJz- zw;8%XH86;tWe=@b`Z`d$ zmkfW)5PqTenWNv`W5_r(akg_f@$r~ejl8%nh$J?_j+1eFodJ9sOdojy$xC zY<;O5yL?jl++~J7^nrtfY}7_G^P`*I*_H$B!f&YIg}ZBEG#m9#~@u9UfccEW2h)k#HBWm@YL8O0FP;PV3nB<)$BP{ImU7l-*8w@EoB z&nFwRT~Qd8XU)ART7m0#nXJP~yzE+|8a70z<$93Pl5dY?{zj}SG$h|sG3I4QboIlP z$?p}5H=0D(wB}vr2P=47x9!z)i~`{TMKs)5$YOAoi;!IA)hASwb*-D|kHAQx7OXEJ z=QZw&x+Sq?Phv-ZZt3rpyC^OmKdhm2QCik(H7Zpd6`i4 ziW+YevK`%rb(ylkI}hK*JMWxcXyD+dwa(eNo1EQvY;))?RtSssYp_WGRwGY$C(YUJ z@ubVL7y6^`$-`lhp7F-_Ap;_}40YM(O^W9p8<&oUU^`+>T-VYGoIchN9nAiq?^rKr zENBxv%3(1kgA^#}pj`m{?t{t_k_`qS{69?l|lpYt3QmJPf_~+#3*AN!M$FNwER8!ZBOE_9eaBzUFs<=0bV(Mc}FRyqcNTFj4@3d zy!vJGFGeAj4}h!C1+r{sG-!AYUPd~Mg;~Nd4Z`d!`Eb6=BkWefgyba3@U77Bg_zVI 
z1SkXbxaDWESRoTH*O*J#6v~WDStSXt7Xn5f?LAkDy|lIuZ6!XyFSVxd=318}VUExA zpxPvDZBlQc^B+n+WnL`d5xRN@0;X>m3Z1-9Z$-<^*ZL#hG;SK)+T$a?VdSPv914z-Q)bicHvud zH9Ql3YO+#>DL)a7!q5y?lUA0N6B3|izvtY0O_Fo!%^xD5X`u9rDxXau_BouWTgx)h zZ)tHnN^XQt3;Yx_;4n_7p7$sW853hU;If7{d=ZU6^I1=R-Ut)6dCw;xK-{z0x*T{> zx)!eW)o*`y$E7REg&fUr6Sl#q$1cc_bv@D zx!F>(*4>Gn)$y1Z=D!mwSRn9~O)M_M~O$A4B{p9&sU0Op4L<4|e z34%a>jnk>(rw4bl5K!?^@P9&J%nq+$R(d*TXkewYV)6>XiSE%KF<(d9z|xz738JdM1o_cXvnZmcUBgTc2?;=UlfAIaB;fc zgqLbTxzEsz)8f6vaHy+KHUtPzoN6IujP7V5SbuGc+-_O!Iio;lo$Xh)bT9Cy#6u`$ zlfUv}B&Y8APlS?zj^0;Z=ufslPST7)K!aI2wC$ok=$7l`nf0jNF$G`m)CMw(F(#c)YVgX}%}xxkrrGU2Mb|NE zZTkX)baDcJtR8XA*#U+&8JeUbK&7kVTw$Gs$U0ABR(JX7jAU(%jp%@zezABah`ujr zs?$@L4z`EOaG5b}?e1pldXyLM*8n#tlT?S^KMEUo0kVY2>Hl69-sVBYJ1obR_^c=) zAVYCu;*HptaSXPO2;V9ZmLe_NN1C0m@EOA_`UiHzrnV=q(yp!|rtkJ1P^P`;WP6%RMlp8uCRq;0DgKPR-x(o`53Bl)Seizc}k$hf*-PX86i)ObbN#MQyxv=1@+_s zXLn?t!v2y6j*#j(3*#?x9Ux$Y)F3G-whf_QA_z-RYv!D&;USd2f2(~>uu}P;yzJ|< zTpLj$3050QczX*UYfh&)`#5W#_)TSnH=7R0ayyX@Lu9lZg=A9+JAuRrD;;|hv55ue z=8z|jkowX!*dk;VH+Yfw6DASOx&bqG!{H)B7=H#7ao_6$rw6UOV7 zwD1qHP>hCC5?(DGe-tYjLSrK66xZG)E&~^KZ9}uT5Pe%ZX&cUrc&2@ZTnwE1a#*b|oiT_H+*$TL-C91<1;te{w)`e}}*4_0lSnWj23A66?ICnY6UP4*Lu=CTp;+lqr zEw|-uXV{E2o%5~B^0cy22X>oxRZkVDUG%Sv<8|H^8 z>L4Cae}`kMsIu08!z{ACZK9v8pLG5-p$d;QJv)A^<@yWe;z_fevg?*T`99V;#EUvE zr3q5GGA7gZDG|0g3E`SdN^iQr;xC=T_j1N9i!rpI{uKYXmq z3T~E*vpZf`xTGI?2$ihejSFALlt>r?j6mRgI2F76FMOV(tu8z{E1Mte@e84Q3M?nPQ~BoJ2F|BihNc%2}8I#ZB#Ahdc%N86E|DP?FSAKwRBr z#I6KB85Z%N5}51^(Sg7DQEUO%@5@C7A{T$zj{S=$!&@t=pMV5%$6CokG*<#L{x#&? zG!cc-<%s(3LqKdzVOlVmjG#40Y52z4P8#_X9?&}f+sh+=7msL#AxjfPy-I_be*Nt1 z04YM+vgM~}jBXHA%@Udb^6lqzn)QD!))YoQq+(um@NqB-bHN=w1nQEh?B+v(05qmc z|HpYaneNt>C*a1b!y3NDa>u+v*m7`jpO}b-I1S%WIT@miP*@bi*O94r9ZuL^cr}q4 z)NuS)F|cV+ya}`v^q^WZ(fA-usNq}lk?!~MHIe13isH%s)hc4<)3D zav!;)aA=JDJhv8n9GlRhI7mpr1uowLGh49-xIiq~s|N^BKX9+SfXlwYt8NhaZ&(v* z=y4thk^WPwy}tyH@-2v@1qx4qI%&Q5v&s(qlJ-olQurG+^Xl~j9vWa$hIA@{&T4IX zq8FN#v;{AZjH0F**-i=+RH3@RWTX^!A>{&RGICCiAoa^K1q)GVzq2;e7%upmqz{A| zu|E)Xpl0#!Vw=h|$JRfCWpr#lys4jd`z45`3Ur2Gg6}^NM!{PL9~5wS;LbD`G7*q4 z>A)|BFe;nmFDP{5v(C!ICO+aZ1&?3P z3~wX|(FzB!u6}cVf4D8a$fI7=r6A{Y9+*Is;;}Qxh7{da>rn!f%N;%X z-@f!L;!8X6qkddO6ly1Ek^SYTA6 z#X7MLwNQavwc@Is%lz>7!=4v(zMNSgYN9fqxO`!3%N?&xPvS$QV-FBg1GvBz8STG4 z-S1Mdl5=KRTYO)$Zf*A|`jt;{m*%TqZY?X@>aTicv(lebrtZlxyj*4wZ7-p4eAeQo zXK}dY9kGlvj9@K4LAM_2)v2uFv+y_wC|jJuB44#xhg*a{>>K7YKSh(=a;0ui9e%d5 zjx?Yh=G#8F4{ivug6%EDwladBjCj3Fs)>7M7XXwI>3L->);u(B6QCcnPCP*6R_#)O z_Q^^Aas7>`ut=Rpy1IJ3x3H^fxwY=jqw2bs?W|$^ojl7bJK-6JSb0|t#ZmRhVVuB> z6Trew4>B*`hA;&se>!~BIabopU?JN{a+UMDQ$(fX>+Bm}>&228wRx8!Lmrf*v6NTdR@E9m_k{B_9=3ZzY%$%qlm`{_zy<*{8T?vvz~U%X5am_ zL#WNT*J7E|bFOmRw*>I+K4PUG=$B*+p1BsZaVRDKJRS8)JhURw%I@Q-z=1+uk_vl? zF<*h{V+&Gyuj~)K{yvW>Jzv*Wzq9*b7){6l5tH7X%*{>}@Eo35-WM1@grYUzhJuYP zy&ChPJBB7LPy8H=jVe>mCJwA1hwGUfZD&2uGu>)tymLTi&P+;&AqA&6=~}dX`-yp* zjo==HTeJIkK@xJjhPT$F4iy+Y8uPRW&kvTMztZaMw|P~;D(O|5Gb9zmcQ3{-f&1z1 z1gzVedRY`$mk?py`*+xkC=h^)L!^BJ?I)HNIba##I+=$68NV(dL*4+qagbOdX1kdQGIfhB0`d@X+a{J>{;7Cn}lP^eRR!d2Sq#^u~Ss zO zOP)h>Zg<5m(Y{~fGn6yg9xoOnY^A}2h`1LcJ9v*moPYJU(iU~=r3jlygX&W6mu-=$ z!+eI+XObN-b57^C?nfF79xDymc>01SnG~m_ADNW27d?SE_Jc6(l1loBtN=h9UGO-p zc8sMaz~)-qPHT3;%~*5l@&~EwQ*I2qdBrQ^WDPEEx$fBEj8*@k{7LKGh{u83jl?#4Gqn0) zjQyblQ%9zbhiShIgcxYh9kxL1+nIV4f}TI--nN3PYu6d(OQxnt z-bjR`vf`Rx1G%jKcZ=|o29Z_I9EiK8uGjsF8zvwY`5Z%l2m^w1k_22VA%u_22W3G3 zK~vr~)<$jQ?wz`}@%A;F>6OtNvUCf3CLbSabOdczP~7ONBf(Fj!W~;&<~lsEBFl%~ zhFE{pPj(%AYswcfT)(_+b5Sq-48(jMmmu!A8Fy}mA3mn&#=l3B9(PmS zwi)h4&WoA0h({G4>i? 
z+E6Xt84?3&UAQwH^*A^Pfz&>9^_Pe%SVg@m)JF-w^?TK( z?^I>HT!2%&qM)4ZCT|Ua-=AjGObMi(@|JY#x6$%= z(E02D!Q*{FjZPByEe%3v;S!RG)j{9;MJ{e%++Jj4XN1O@$%RPC-)crZ%f&<>0-1pZ!0XG>Nlu2TrSvC{!bYomw^6d+eB%7v|(|zary*g!tq}O8!~Jn z$5zsv!tMww@OoI#n(m!JLLLV9Pu_4K&M!Q5A$si7|9)VoX16Iz`#&z^4f5MV0*DZTl%Swi>|k5zj*V;ub4Qn*aN2;6*Y<<>}M)pd3Aq-QBfklzhEz|ZFcVu z?&B`=LIbCtb-k?)XQ=l$t$jUoAtt1KB!fyE$rgNagj4wuGA}{L(7eca+%;3>)4gC-6J_7bQg$%c_gsN?i<3YzbXZtemZs*xx zb(1lt&9-vIW_n{ia2O{3nI6zzaplRr^mQU2kk3FMS!>th6v&M>$i3Uj1>ub8+OPO2 zv#WNE#) z+ZBbj4=2vw+L5j@p5Ee(t=xwj;JU(`Q|Lb^5fRS=5&tMTEK2Wzv)_4JQq<;wn^_|| zM7KMeZ0$cPilwzJB2wo3al%C`{SdRKFYoxOkM*O#?MYLn@(t<=*Rg{3oViI3uA(HX z_n)kHE1Rld-}mES-$(syLs=+|gpLJBm=~Wl>dH{$9bn#=@mr}GW38+^cCx9rXF|DQ zOe}G|j=E-+%56@mCZka5_4kKoRvzLm^Ze15p6JVW?h6#PL?Vcfsw@JTG-=HE>sm*S zQ3CIN6vQ+MHoP~Naji?Ma0T1GJOw_B+|P&9elh<02_-x%d&??YXTh%d^X@IS@Pah z81utzKXK>uFF$W^f-U;P7ER^8k9$!V;dz5sM2cXG$=7>&Y*q^gOaveL*xOvJJnNd? z-=TG(|JzZy_FIQgg@=TB>*^fMt#XMpqK~ZJ^51*f-=o!LqZnkuVR6kImY4=he3Yj3 z)0Wcc^7LobV#0ck0td2*Rr5##?~#EPi|_^k;;2cwjS(Vn>s&XO^5ix*`#iVH!I+(K;o}nhafT|Hv}kIFS~N_ES^i zYK^U{IY#pyGeh>{?ov44dMvMBVw-kw&3NV9MDykTp05MtTl zO$ku>@P_kbIVG2LT>R`WHlCO@y;fR%c^6W0I#Z={r^#}*qt=U~No;{S4`grcZl!O9 zE3HXZD%JyE{RXjbhcy7a&T^)hRf&Nl8@zKa+t;AxUJ@zGqt#ju`Q{6O!1swNEM?yp zH&1xgCchyCIzM2jdhs}-=ME51fpW`?mEnU7`81%@dTwM+YvAgcljXbFiZsV(XFn>t zCH_gBZOG0fHqkJ41gv$r#37Uu^WJI*f3v0XJn?<8OLDhbPPh;?HecTLF5Z#8K1Ys5 z1CF*PeVg3_*Z%PjEbaFTHc%>7-~N{AJCBr-gtt#eXTl!UW)Y!o@`xRtLtisrW;N14oMozpdA!qoU%XCkD{u@XtmWej3t3Jmjm9df4P6&H3(o0x=T0=nDFa=7m~`?}~5>LOUNk&G8)Bn40=gMi!!82_4Zz>Ml!M3;JAD8xFB17RW~6>q`GU8BFo zGt5_d2xSoIMg02FZp3~;-vI(`*!?;N!BH=Om7q9i`78uJ7;WoAEXA8GP1n$*0DA7V z1178rc0BXFmW#=UDG%Io=K@w}_NKAm%%dG)%-1CXYVnBEdxHmS=K9hOzHpQ!JCl%| z%jX-K=(8L=ka;VMeIRI+l|DGnn`+9-`D5SM9$2ARN8ww0#JoA=Mwc1BvJgf>He^Uu z{kQgmSVkl5+71Hfk_#fr?F;FG7^26Y!q4R7l-5VUSz(h3j<^wY}(AG05Zmex( z(GgCot82$N4l&6phM$|B;wK})F-+Re8XG&!>uCELS!Y#zcNL%nRfw=I3WZQt8O!oF=rL?};!Q2HDY`iy}O za*;8N5S>xidsANX*|6HYrGS)2C-O%E6sG7uG&n2YX5SXPLo72fd=k$&8!UF_)Xw5%XqD zU?ps%8v=?b`3_k9*uX~fphya>Ids`(UW;Dp>d+0G4!4U(DST725@^=L`Lr^7=j|jp z)x@>*VUBC7RAD7_soXj!J?u?-$E|DPGVqT?zudFDG*W(D!5ggvCYC9Y;%ClVMGq=n zLyfvoPlnWi7EV0_le+O`u}>XRp7H)IgEK<)aC%vjo~YM&%<{ z9|u5A`^~G6XiB%HS%$$2u4~@+Hh;>R%#+J1hBA?jYhQxOKiqNwDb~r{(@PzP=@~vD zJI=FX*EST=^;3r;peJx8N4< zfpeQE%DB98P|1gPi3+j|NDeA%O8N2BCWR4qoT}S^7~XlxhU{JX5ggU>+T7NBDY}r_ zVDCMNeb`GGiMI(}WFhU}3{phK1Z(7@*>sfO?DlA>1)H+`OxQ@X0}E1;u*OP9YXnv{ zka|+EA<>&OXZdBDvwF3$E0uNSSqBd>d;KcoS9*~wq?L(e`-TLjnHZ(Kc%IaDJ4vi2O)h1)sM^vB}ocNf%5Wa5*Om@5?xnou+i6Uh0ti6 zt*_NH(8N2>jK=$>{Jfa(vwn$>#Ogh#nl*NPYi){0(llWvx;SA6U+s6^@XOgoA`MGn zi8?Bmc!w=>Pp7k7yh@%NWBp@FaMRHppH`z-W;?X_V0)LNsy1qOmetcmd3aA{&MX~Y zlp)KH6O7IWk6RT)HBUADNii6sMPYUfCwt+_oSf}SPem!(_Pbn-`9RWMl_}hWu_yjm zu^@LI-$k42PNZ>-(x}BLtNjvL&N(}5dxM}DNp7gS--J>iXJh-2`Bv_$wuvv&29YcS z?>(ZkmvyAR6gJRLL6~y9rW73d2=n9bA3nX3Vl1rC#^-Ml{%+5;@ccf*PXL$SU>$u1 z2k}W7vOUi2U&o+~@KL&sgBb+uSr0N5>fG$+nX+Q~(he~vgSR7mn`iTla3}XW!64WB zBX*C1wFTR{j;a!nm zx7s0}=uY3FRg9t72Tt~b#e_sC&vani8@z7ZLDSuRzC;Zwst*5gSG*@BPAYpOfK&1- zyE?%fmr%kUQO83je(Qn+*7JS^f({JX18{E?g znLME%&UKDIG0dmQhu|fekH6c3ne=1k?|a!M-=Yfz!UbS#keng|!UdU&Ru+)wIt@QQ zz8j7N*&N@`CLcl-pQm=KKD3!gznx=#%6wi{cyNJR)_>;_9&f~7Ka`&qyj|q>16`l7 z;u&3a-<>&)H%Dgpr0{Jln`AAx)W6{idl_mivYZ5sDw*M~am?no6fCWvjPQUfJ4rTF zh@@6{&v;*Qo==5()b<#@V)kass;?WyE1kq58tZQ$xCFq6Z;gL# zigL|wCC;prp5~ge(p&MgKl=9W#w%^MyY@(FCrun)XeezcdstC=%j9#6O+)hq$Mdsh za_b9J1gl8BRlen0}9jgRV1f?^QXy{$=Y6dCbb_5E>qg?A1wZz>3v4 z#-RI-u*4&uMC_X`zb|>U1Bq@i5SF+An#yZaGRi}jydgoCOry2E4ltDeNXI0&ta?RN z#+GmU23>c4lN|(X`&a(x?EVJ0vJ1-ht7k6QHch)-g;T)zSXR|CU 
zdhe5_@EJbIOabD2!h_U+l!n6IDM8JzgTrP=T%0D89yH988SZ*L3s zLjjQ?JX}z@c1>gm45cY81Ac*$IgvO0g@Sg>K%vB!Vi>I~hWLGEq5}I5j5-q@M-??Q8ZiIQ;R=A<3)0^!dJx?D#tEa5#FKCjLEB7sP z26=bJymryp_J@dP-mEmACcWB4N9j$u#`2YZaICgKW;=Di^ow3PZG=?48#@_g>JQjU zW~KJex||W$WV$~)1?5!$RMR$;3FJR!0fp2X)=9+&%bgJ^T zZy@S;PUykcI~l&^y93rPW68^&Wj_%(M@f3uUuGwC&?ovvFj@5d9|y7bX?u4vOv%TL z0IIr=6|3P`d3%)Z^VQ!Og9nGd&lb=caWmrD(JtY+Wjcn_^UJboWTK&0Fq}QPU(ppx zla{<=q=QE}TWB<>WDGW2u-ZF$*4LI2tT~*ngEPosAA~v^MV7N3wOP?5vdI|Gh|a%0 zXV%?6QiyUFlILolB= zqWVVVL=B$#xe+asLW2x!(30fEp>m}1P1>wvlSmf-f%A~qLa~y7X?lX!amuCZ_omF< zE*IK$$m)ckK5mRYj%7~%BS?GmXA=b?_q(Hr-1pMsl^a;BMO5dMy-e|r13jVl+YXjy z6l#kPmPm)tIm(tSqZhoQ;@0>>h4^)2HNtTV*{Tcu0IBaMyeX z$Ixk_IbdX-k7hf~!K;#yJwdkJ8zbxZlrwR-o4r_bPs+w;Ja+)kuaNz&L(tz#`3M+5 zrB=;ZlG%sOqS7V>ui9Q}C9Lt#&z0?2kooQ+uJL}MIZh*0a0e=F?}7Y&{j8JgM`4sg zYc`}X4RP5(oJY{iQ2ADdwjxu0Ijhbv(}GtF9ie8uoh<*veOYZaZN3scf-Ac}bn)mK zvx-dY`S`Z=y*T@eRK5cCNn$(G{j@)Pw%#UOCxMA=Xbgcbxg{k@j(UD{n=kvOTC4BrH-2}U{QkN}trp=Ifrme&yZgHC&4V*hDxjS_%vXMlpvd5*G>-%`|7td< zWyiTU=L5b*x3c^K1Z$BRHY#cnq)uuFK&Mvm{Dkw8vCJvKq~+QG7##}{3yvIkuek82 zLtSmz$Dk(Puj7gaF{ZJt;`Nd>gTBmKP@b>|X3HR|*6q2%gVeUgsrGFwO;THDBe`Po z;+j`fQucZTjdsx&uAd=Kd7+jIW*y`(y*tCm{~YbNOKE^(HhF}FGGN)h-VG(YZCPPW zX^oIGj+pKkN2D-baWASqwSb6~!?h3?o=t8(t`R4)qBHsv(wEB&X;U}BX}Ca^9~r;K zRYVvZ(h`-6Gth90!VL^8xD)-Mb5!gA5B-@$>l8-wL2lvHqsq&g+sO)TCj~zsaU_L$ zu0#qN(<9;c5b0bpr0Fxg8)Nk74-WuID^rV*R3{Q)?qo5+~S_@rS;T; zmhLV01AcG%WFgfM+SZ4Z-o}C<7PLKkv_?=i2yqD%3{mZ#LLg-l*y~OAb3d8k;|1MP zV&sXOQ08aHC*RaKMW5U&J-Z?imA`CN)Oa7y(B%B$t**hOOHKZyflGM9GY#t`s+yTL z){t=01 zSip4n-PXMI$2DaO@AU_?-|k6u(;0BPX+m)-Q)>1^sJfah2X4D6=EVfh&Fl7w3UkJj z5yi7+^0Ae@J=Txf)YYvwNUdJKsWlpe^kV&@A4+Y~mC$B*ec(d64>}0k(D?TB-CQWK z8xXv#_C&0C5iC&~a2I<~wXu}z6E3R)i*PD1{INf^s*JIVh8U53b;*X5IWjVMBisf} z*hqbw(<)=Lo?;rCS;~?J)snTPZMo}wwnofu3xCKO>U(mq8vd$wzL4GR zdP>WMr3zlWP9MomfZ>P6TFW9YjWJ+4*2%fegD0VV;2jO(#0D279YTc(M^(JNGxw;GkY~*TshOsi2<&{TW38-y|mtyATLL3535rMQ5=rpLn?1(=#})0 zIfoRMh5?fU0vTI2eO=a~rM-NKrd5TEtag1pS47K?y3H;h^uPi={#{6FOY9M+QYUJ(|T_GD=NNT(iI4_9RmGswaKQ!DECITj0Gr zfSD3ff{XqPpvce-*lk%F)m7aN`I@s=ZYy1kNL(n96n63vYv|@f@6q*gnZ54iqT{;8 z-gUZ&$$xkD^EAg;M)L+5Gq1~En&0v%-FlYXZB2P|CG^LWM42Ap(tPgR_QX@S?yDUt z)cNu(-F>!MneQ8q^36G$qFJ+08_ur1RLLpwcUquAN4;c)4?&*%?%N_W3-M;XZAbsM z={5`6?4WqgH|r2%{w%FB9wQy*`@YjNJiSkm1@Q5GnxffZ?PUAzekYRDG)(Dlhm`-} zdSa0xPOB?3t`TCDy_t8#30vL$NVC_~ckW9cVjuGvTOLfWz*doJD=(OSdk)Q;?a?53 zYbH8xJa8()p<&Ft4)5EWq9BB6#MZy0WLoPds!7+lOs6Qy5=CoJGuIur$1*AZ@KuS) z#pd`*o;8R`lFUjWp_%K|W78I$Zf1bLWM%%gZN({!kyfweHs*^xkZ6>u;nXhl+{Lm0 zu?y}RAVqlDD(1*D2<8BjZEXVgfiw7?Q6vk*aw|}k@T0eG0_*d#EogJ;C%_Bf2V%gE z{;q1ag>7f1>Uf(%Md26H!X?)RpulAsQKK{Odof(+?Ne%*|8Uwi$3VcWto?;&~Z)Hb7Yipbn(v>sEy1* zmybnWjy2Mbfv6mod!OxCo3Zl*&pWRnO=gyCzh2J)I!I48&WHv2nUDhh*-g4`M*@_$ zZ^0H+U)z~tn&6;+^=*W-Uo?e|4rmTck)>_YW?ETl3aHm7%N|+6eh*U@j&{-jizUCk zv-Q8MlMkBFCeLcr!@Okk&JjzL!FZgkpu};eN z4y3=LqMoCx1BV&c%#b!X-|-wgCgvs^sQ5I7FJn~$YrCPVhDSW&6-%?8a;_0OVyO-g z`~;OmS{;~;{LBno88nhPdkeCR!5;?5SejeSg^xo|nm7ESKurs3HJ}YEL;>@KW?b&e zXFq09ybPeb?=VNxj(q0W*QQZDlENar?6Zteq-;q-QVwfxKsK1V{9CR8;7$AjfiJv{ zq~J18yrHO-9dcHoV2+=iPWU_q;1YRr{1;N_OXdiN;{HHHL%bBuIttimOPIRS+iID< zWf~>7Mdy?VUPY#mH`}jfDtIKKGD7wsgmz}%DP7^vc>Dnl4lj)r;a)TPY>qg`{?F+ncRayCA0@UjFmsFtRY(QnGRP(22 zLt#5cY;y`e`XkwYF9j(xE@1z(0);{UQMDjFRfytvcNej1cl)8)OTZ{VpGEOaQCBiy z9a&<8=2r-6!IUv0D$eX4>qe;)e_gBh6+fJ4xtZTc{-aiOJuN6a<57A74=VOLskiWw zBfuB$-D!s}Lv8R2`h`U3b~)BK`b#&!6kbpSr#x67DY6CsulGIozwmDRdqmf=&6&h9 z^IHaS^5&o?6@~cMAAjaxDbf3UX}7F&XzCHW3t4vPdqZd`nbtIysrJJ#+Z|#Ls^UpWQy_&SLJGsP)nP!2xbsc?@G;zY_g87i3#3sva#IU(j3!dR2%3pd zV5cgIS@A++F4Of(KzTbjVnaHk5W^m$VGE#J210~S5GwO20&~+a?I?;0y 
zTmWW0Vo|-Z<(4pL2xq2bRS3f2 zsQesdl~3;b`IuYSWaW1e2A(=;z- zhB4C=*?slxtzBI-^;JCf^Y>{)9dqQvcqCtmZ5oA%*rnG;3MotZ(=v~V>OgRhv?<&7 zrrO*kKz+yF>USW`Vaj01Avs7m1Qn1JZe*VyA4Cd)-#Cd8plZ^{P!P)*fme5PN_!?1 zBAdJ?mW_(B%tE=b%)k7yo6G~yL@e_L zfBbXU1ytSQj3VC_9r3`uOBgOc?txo3MIZeYr@zpE;HZ7*p-$5?)vT@$y^d39mg7-& zM1SD=itXw;GapnEsw62yu^l2l9^~akBxFAy`=}@HR=?}k%>An&D9yZYnbqgnVJhC6 zCfC($%WmUjpK*r@wXlZ!c3>rL!^5AulZr&5F`thpRU+8Tuf}Rqpwd0fqIfd!MuVyf zA?OoSQxb*({~h4PR>gv6=cQtqzj2Y7O482?JU(Wi zwXyKpjl*uhB|r2@ZKROZ(j?}DwGD3+dQ!w`7`C?W@U3*dS%FXRBp!55M5UxlH5>cZ_eUr9z;Z5 z((+F>e_4k)2k!wrNUZ&IGA0_0`8vq010xh(u4};{0t$BE^UOrN-4{qFL^KvbPy0PJ z9$G*T?jq61e>U*~1O#0V3Pg|d(@)O5dfp~N-t24J^Ix(|zOxl>FV>9?`Eeb`(Xvs= zb9>-M>q6>a1UQ!;H5YC?#!W|_q{>;_vWEGv=M`^L;yDOz9>Sy8mN4Bd`_ScNE=4c|U~~-rBp}!@HX5_ni*-B&^S)GBwrv5y9teVbBZ6!M zSc7ztW(1HX_FA{y$^z-_Dzr-8u8G<9(Q?Em>fz4P^61O7{K(KQz8V@eIxfCf*lX1! z%Hy>9NC*afeywlkaoJpsuU;U65da73`1K-|+k@TD_?Up+$9d&RIgmTz5hpb4OAL@&VTrQVM zzMK*b5CD9J4G8!CGi?ua4t$lO-*^t7o-VczlH*Q&#@+5rGjH)fZ@*QL5VSG*`;-$r6TX z)>VtGa?=P|CbV~SP>adr!w5+MlHBdO@@+rl#tJN$1)-Uowvklbo%f)ZvnMP{JpK#( zsRnd1cAGZ1=LRx>rK?!ngUZQN8v3&sv|#B!9jjcf5}+*97>cGSjeHuz+4p-i83Inh zFJ)712HN)85Bo2k*qc95VJY#&`$UMj+g8u@*oT9kJg^GMe&?85fBazCTAK@*a~>C2 zgn`1kM4)4iQgH;ByG&{F*v1FnwUIf zPZ&v2VCSIfx^W{}+2#FcW2CK$lvru30_~L8`{sq-Hp}_E7~hO+{e>*u*vC~Ct;;D* z$Xu%}5s4EBy9E$-XJ5z{>$$9$)=Y0N#Q0>iSGdsLAnCqt_iQ0XW3DWF(CJZ*V(ia) ztB3CWJu=!JXVtfgB>a$CRM(@uTUoE#O1=r+DnCeRJBXLI^KaujRS-p5W&F5QcKZ-9 z#SfG4HhPaDKxo^Np;vNxefL}=H^`v-3V}b2E9>p$7!%K{Ys-5Nj_GaOWGM4ko9aYzk<27blT&`EuN{OC~eIr!zB*r*y z-&~-LxKPYOeFIQaEa&~lD+1HCb&Wf{JxrylE@lIre?-4LuQ6C9r5FGcWnIlCr&2n( zeiGmV-#&FZOQ#}Zk1kpA3I|MJ= zI&~!+K`#cAp0oIRaxq2GMJ`D`OsUI9koKS-f=iX$V`*MQh8wrGw45Z=s6U2M)3T7f zB7#Yw(U6b*>g)nnx=?3N-)4D33Z7MT+~_Nj*#NiAWVa)q`ACx!9e#OB`#GNf{15lj z1Td=q!*;{Kwm))9lWIz$IDd$-E}EbHEO)808k7^;=1 z0u5@7;Qd`j#(R!-n`Q)id3oJ8z|Fa?6<`0DsBsHYpD6Xl{pO&3#s6>qKyr$Ss_eQw zJCMVw%ICD#+syaqZK? zN1qOiH%4jZ>XgGs&c)6LuQNw(e}5*#ji~c|o&K+X4C>Wk{+tEy`#-gZe+uP(F!CY0 z5u5p-`iDjb-kc0_9J2Nc<|PYKg`1P94?EJedGx-LY^>&&qv*FftDm~d2}f~uJ5m}5 zJsCt_5vf}%-*ch4`Y1FKE(cijH+PmWXo}We7DNgSe`Pm+`H27SuS3YBAG^cHt#-h- zzFP+O$$SH9d#bC;7mQ}ralX0pR!Y{z#iix#+H4;?%pj$vG$`ENobpWQhl_?db$1s- zMibsBusNPH|2NzIS9(~$27QVNybE{XIu-VfpC{Zdv8$z@%6{35h#6e0l7jFfxK}F6 z6J|YmH85l!9Di_3d0}p-$(6(3b$3T~YvJY=wg1T`iRqnhKQ6+vIN%dQ^s8UdEmpg2 zS6`1EeMYGez$oE$O0@f#I*v=1Q+#iuX%vRg;|i_DYaPa;Sj+de%DLK-6}kQ=JB8q7 zB)@zoE%bgJ%#(!eNK$Jv9i`?B8XFsnm)>WfPlrx?gGs#XW9JO^j&haU<}4hc2;(zW zY8kda^p_5j`~}h$(0f|2e&yDSQ5abn5Mf@X4wK(Wxp}jZD+?;*kxZpt23!_%b9YI6 zPZ`Il%?3_x5OSZ?^)m)ARhGwbc|y-S6THIznrVLh_JuaQaGLp2pL?2Sek}~_XBUT? 
za`x50$%|6oWLcw_==avEJ?B{3dG3EYfP}7jc~=piD|FOLK6W&{cousr;_6}Qb-(iU zqlGZyUw?6=1_HXlR_amhzqAB2gA`bP?xD5O)~Ki`P*jZjIzd`|GY|r2DB%&Z8v6#p z1ZDjX9)tQ&hJ-AaEnA0Se}R#*t=Ttl#g7yYrOA?_B3Jc_j$LYHIrn_@78X6k;SA`b7QjWqk9&Z#~CY2Fe%1}S#pfhmxQ}n;nA^%aB&|ksNyFfHI zHyZ6UE93yT-i;z$RswpXwZESeB9li~*~{jLbq%gkEd|%;C&a~F4-XF?!D2WA>o|+$ z4H_dazhSvGIX5bF{4Z`4=7WG^7_4msvXf;oETx{M@57;8q|)!a)G%nrd(=-I`>lRR z4@U{E^a~jBTbE;e@=2Ktbe&OX<&jBkS~E*6EzOAo4ElBSYI{>9UKR|F>KJ*Bsu zhqD+x2Y)*Zqa8~%?27Bh9RG#59L1gG;mrz7^sb*n#%%u;QfYz(?Nqwl4TbD~VUr~7 zFoOX{dzr5fnf?10Uhz*OC;y7ty^SPhO%j@|DgV;bf6Y8c1;NT#%{Cs-?^O@yQ|Jr^5ae8iW^q0c^(UFxCk8&z+forCzV>#|QO`P2zjbSM0Ok+82i2EW@Vr z+;PFzJ&9y77{7CNy%tEEj*MnT?nSI{dZxi~nXloOIDNA4XlzvAUgIp^XfPis9@t3- z_$%$h`53p|4NE{#YJ)B-*9yOi`jfppt`OF{)Wc0trweEDdOizAA(xY#m^C|-Px}1L zo8Nmu-$az}(=nc3GSYeBqiF{x_dy(ZFqcVkK4=%K!tIi0>O1uejJXfdudJ+GfAlh(Xh&hC+wy3f}d^>5~98AP)Vq% z_$|lexu9m=RsQNDgN!xR6-A!%rA84;u!yyAmlRt@>1sFvHDiv29VB0RsAE7x&Ff#@ zB@Uq<+~`9iU$gg|%b77Wt&&5!J=^?gF{bS|r+xavBTB@swlxuh*rLFQ$3>S(#>1Ux zmpl}Nqr@IHQFCgYxmJ12X0rXuY~Arc)e5?kz`zT~<+f30IhB@=b1~sneNa|k;9s~# z14zsji8{DR5Cy|iO)dXaqhFy2hFQTS9hr~{isva|J-QkJg)uk|QX&jkp9K=~789I|}=K~T#IHU6vAr<`T}$jT zkIG%)X0>uxpKmaP9))yHl7^Her|MKvTc!Yk*}F^6^2CX$Sn00oh{qT>eJSwB^VyVP`D6^tcPlfcBr`rdY^^`?!mnTS_EPpgFhi zBt7CyI7~jfu~^+5aO}hLv?k+d(}d~f-!q3tG6ef?PSyOnHWzXSx?=5n(2q;ifM|rK zK}VmggIR0O9YbL%B2~_?^DPnL0aH$_5Zdw3u7%11X$|20#22kE}qg>C4eocE1 zp}wmL=)_)MTbk~i+ZvsVVt>dk@1A}3WP$4MpZ>lIbUf#4(6EXX)vp{rAi9$aWRQvc;CydB@^0CaqPHhdzUcOgEGA%kL`Y^SR zT%r3zdicVU>1LE~#PK(7zuxknQ<;+}ypt)F;CE_ZAw$~n%v?p_*|`^~-?AJ7R>vZX zj>K~t#d$;cag17wgim+^Jc+_)-W@=+|ge(e||=Y zBHeo)6=JVO+0>`|b;lmQkUQb;>G;~h(cs~8Dg6ZH)5_D`DjQfXFt;-hDeB)=ycjJQ z*`>1a*}Rn4&bs!$AKyWq_$CO}Qy)ZaDcoT4&7~HA4Z%U9FaMhR&R?Gzh#u;EEBfb>7eM;;lpDGMY#a;#s zwGX!MbqM%wUih)~y>Ke|1c-)VANt^S88q4Y7yVfT5~kz?N8hO=v)=i!raox?!C&(L zf~JD0lv65V6|anEfW^@V*Eg!I4k7@e#3OL@buLzmRSxoho9PqI8~`7j_*tqCCv9MQ z*#E|#kL_V0-#q+cbz!2f%(khwpx$3>y5Rry2x5w{&35!0O{R!jZKpO8n?1y08IHpyx^O24-JOTJ$%8Cp!UDH zJ3(ExpMTB>{l*$@Xa>Gi5BdNHrMuOx^#bO8do<|B)^c~y_Cwn=Z47B%KFG1L(6Kwy zC_T$ic4-3|7{7dda^C09N~k|gId3a4x}gLuDrabeRyD)Ci|+U}(;vA5jZ zI#=rlSlioj!%lg`uHGV(!noIa-{c|N5*Dj;@Q$%J5c?i}3n;+?<)Uu*2<(ucLygI%yokrQI1-10w!->gvz>+k zZSN!7%}J-vY&6zl@Z0n-eRLv+UTL8dI{#rX0ln*=?q?mhbQV;7J3`j_ zTIiP%&aVGKi8n~WA5>39I#L&ZJdapWaGCTrH}=d0jr_B^ELkObj6VrDZ7?I2n8}F( z4$CXM_C{mGP6e(x5FGhlX(X`-g{(3yL3f-KZ)ii@EF5wic7k~Xac}wuzJET*jnW_X z&`hGqv8Z;gWPK}#B-36%P4+b;%Dy%iNT%iYIT2{L1~`xI58yHw=RCeX{#2Su_js;$ zgCRi+oG@p*wH!DZs>jyeDW=x_zKvk=FZ=5^5SLQ&wnr=b%v{h9KmXOM`Po2>WGPBqg3cwQ2_E%3X&5@RS z^$QQe3H#;@ITk+Q!afy$pU1!$PB3hW<6ZIg#8@mB>Qj$Df13RZNXHg~}^l zb=ybr$L3O>zWCEb7E7lbVwWdu{{5I&77C9_j+ zH&~<0L-g;V#wxRZ#N8N$lu&cBE97GVf+2E)H^$pyZ}^gz3|5M zQA=H0S#u8n$6KTpmJ`j!JqaTrhCi1o^5CLX4}GfkB)<4%AF}hP=AAz{W(CM0cKsv} zrRAG+`CAr`);{LNuk=0Vy7>NF%I~Ej!?3#8DgN^~LmH-Dfb(0sXuFB#F!nb5vf4uD zJ5v9}5v<4pfuGfVFwhc|f^+$Mi)iW4&# zg;+`B?6F^aD%As3+_|I*9oG;yMt?WAkBuU%sN0)q9>;e275`eOFQ_FHmf<+b-`?72 zE4+R9;%p*~!sK-s0M6g_z1sXZ{Aac_gQp@?;O!S?iwXA*22plAQY0>`26q@aT!dp zw(mA@Ln?@=afZ>ky140h`(ko}s#wR`kw3FEO5Q)`j4a_-c}L=9z#!)jow-%g(gZ%! 
z5hLHP5EGj2lXW3z9{moW`=YWzjOVt|@0%Fb3zne%;jNkN@UbT+YAT}gUEqefRQcI~ z(sHH?Cn{Pd{@MK{VsQ9Xguku(HEV#b9GHSpGJtkgUhBZn8)vBmAe${6mR%aDo}8LA zNG%kk6VH8n*B-X(I&qw7A2PhUF3dOVE|l(8>TAzpncB?zy#jDc*gHskaou9eu1ym@1ermJf?MC z-d_LqDP?8E%TPEn{mPQZpY3T9f;5s>!8LnoVw0*Gc5&PRNa5fYu+s^fYR0$ZUS0WZ zor1~x)a%tcFQ{E${k272UD;u3fpEhYAUZE7%nDRA$_#>YJl1gS+>1?=m&CkKyEt(1rNysymgb zCA#3kr`KXCQxC!A(dkdx$!aC>LhjAED9SAHx=pF45Cj{&K-h$%v`X`8`&0w{X8~UD zzv&wzR(X`em2@K{Ih`$lPwDm0tb3ys?^3^Erj)f)b@F^zP zYOj3^m*A*73@nS^%(X+KM-RQdU{aGjizN{4DO-TqmL6s7kH4;L^lMv(ShBot(n8vY zU;bR)HvaAmasLtbM`rh}olC1f6i<~7dH-37**Ahil6Cg0{JP8fSdn?~K8{#yAK?j* zNu~i>uCVu~GY8G$TTyg^>nWBlMX$h(Ox+yo>qeQI+kFK0(!ihmGBlCIfx<*LlGF33 zrE5ihFM~c-(%x?P?4QMSh-|bE2{)XfUEWM?%t1Y{630#jd^1HHF#+Ne=(gA5mk72- zc?+!jwR%{+2ZjtCl98kS;+J7rl_`JdTb-YuVy#I2^vi2L+Izp9Pbva>^V|I(12_`q z`l_IjHMrO}XjC`necmVPOzE%+MYm{vEp`vrI?&!RGzjmMgZqPwt zmEpWvnBi7w#|;f5B8#7Am(rt*9qX|1dx3tVJ5s>k384aa)*j1q7nNT?oRQ&ks!8~kh09eJ zFEB81-_?``{^xa%*;B=e2MvGz53LH z+cPBXTkk+u!h$DXAB%$9c)}SJT=d?y7W^vTlpu7{P7B05kl-WDH&PcmBwGlDVn5w` zc+Ao^oOygB_=aGFdGb|3Cd%TnWjelOXdQYO z^Y9+2SFhZE-12Nmf0@inf^0qOKNg{cR|43u3$5&q(C&5p!03}bY!bXGR?Si`Ywmme zzs`vwbhRHdd4W*Km%&M~H~0XQ-+sg>8byUu@)naM7MgBqAg&pJZq}B5Un&G%frX9H za$r$ySK`{$bn=hCOQ6R4h6^$jseJ$Zuv@5@G_PIe08~~~_E(7KSeVVtk6)GABHa1} z2X;-{%t*Btj*5Qr;y*u3F?vV8MIguogf$)`yV0Gx-?NkI0$2X+%3lxz*ppOln1h1I z+>KxUM>fq)01zIge)e6PO6da`%7mlJR?{S%E%R%K4}Gdg&_OuOHz);YNdB?giv2ke zCx3--YLI|u!=3|sTYMoiLLAe*z&7WMo{?J9z&Nz}v2Pm`oT>|4n3JlxBzg73JA1deUxqW49zvAoSM$zi{q{ZfgybF= zw@93rj(OMFwo38isPUTivz8$Fv_0loxxeS%6@#yh9NV}v+Lt)z=x^9ppgGy_+YZ}M zp@{eNB~q?^$D%Cm-8{TASjF_WWp&Ym?Pahx<(lcA4fn_6`2B*vE@w}{>$l>|mH&z5 z{(8Y*m;ZhKUjy>j<$vSz*MR(W`QP~bH6VXo{x?2<4ai@Y|BcUI1M=79f8+Dlfc$m& z-}wABAb(x{H$Hz2$X}QLjn7{L^4H~mm{B`-?`1~~>e_j4JK7S3!Uzh)l&;JHw z*y=r^D*$Rgs23FQ=?J+ery4j#RtHGMPr^_0VYiB3Iv~H8AD_GbAVh!gl>~b^uIu4Ct4W>Gp971=XPx{j3Pqu?*;6K1f=XYYjnb6aa|L z46w0Rz1?lrpDldsZlo5^U^#i=CFzy1e!ez~#r_wnKQJK5~r?rx4^?CZ(V>;vwc zr(_P)>BB%&oPBI4E5AqDKcRZm8n7LC+AJUutGX@$nCtD4qxT%Jx^$2`-*V~%tUnV_ zYu4|Nyq)o)Jj+0EUG=MnFK0V4Y;$GrIJZPK88>FgCSm{>;a(>ejM1+55+ZVx%i`XG$1EXQSPsXX|bU1x=%J#;)AIDH+k$X9k5&4srM8pP$@5tAr z!=KAE5y=F?L-YJ7ahm#C<@X0|{*6wvTLD?HCy;zF;QxGrXc!{0mz1p*e8q5qQAXs7 zSeVLV?=R`ez&n!w*Dv_;=$#6t2tRe&^U=aYTj9&RBZa_uD*>hMx|i8feiY#M1juHt zBY}XVO?X@QQr6TGaa%+7DuReYvJO@m7VON6L+@gD?CrNiKgM%K%!(1X4s4iPL`CG| z(dF2u%e~J1re+2N*&)EB^NQOikINo-(>MAK_{i^QAAgi};Mx>0aPJo2k z@Sj4RK-jW&s&)cb>#1Mx+&|lyDU}~MLzCvIgnvB|GE7I@L&Cd;>xmM$oX?A-9Hzunx^S{Y;oVL{C7UXxn(f z^U(vTP7b#kdVPC*KnR|?^EBQzn3qPKc^#+CLarEQ@{!sUjjopP?L2+xOZI(-ouP>O z3S^n@^z{!`pT=s+RJ+ciyQO=2>2(_9K$Is`95w>lmk^IvMo01?`Sj6Ao2tqpORb9s zDtSQvX;dDTA35uBE1gQrLE^eZr$D^Y&GS`b5kFbZ@9%qQD+!(TJQzW}GlTvA!Mp$C0wOH~mG6a3-3MA#lIGj*R3ZkZ5sNwxNlQ`al6x?jGr<%zM?q0~9@olXBa*^y7esIrdGsLO ze|nX0$#y9B-PYGjGr+dRV-7}gUv?S>yZux7q-VTs%Cvyn>4&|H`n@oVF! 
zqTa?{4eV6Yq6Q+Jd_)z@Y*1llUlV*jG1!AKHG55Q!% zqCV?zxF#lBU!Fnq0NSbf)msFWu0?IaIYgq{_YUZ3C&qqDbI+^!a&kkb6Ntx5*3v82 z)2OJHzgG-c$J9!V;d-u?^WD>Ixm9}dNdZDyAs!269RDSPj^T}A;9|mshMlZ;fKHKp zb9&_mNZ$loLnhq!aOYXl=I{zVV!u@U77IMI^9^m5H0-HHjj6CTkP8aPAgVw4_`yRP zf;A+BOHTC&oIl*DB3ps5b~C{7#d%A@a;vhpw7({setGs0w>{RE%N`fx!!>RDZ3T`I ztqLBDY1vEe%+9qC>VWKATE^|=y?$M$I5r9^pBj=skoXf92_kl!&Py89VoaTt631&Q zt80^8MJdd%H?+0+i&NHTiDv&((Yb_D%^{Fl79oR6wqJ;}u~4ikvunMY2@Pgft4 ze@8r40R*wo*L?q%s>M##|8Ri6cO~d$z|?8Jwf{d?A!uc>63QTu_n8Difqom;B`e%3 z=^4;7I0`3orN3%LEg~v9VDqygXnD93o`Fj1BJMpyt4(8C`&&@QYnw7BSJ)`zCTmdP zgy4vM6D7BoQ(hvJ3%6oC7Cse%W+urDy)nK;DHhAAAT`|F?2l+eS~=V@yn#qgMY+vX z^liA|uoH&YL!vH(sspv}gf^8JGdO_AqvG@LIEf~7u=9k=d@VSD) zL570cth<3qB{*U&#!dv+^wBp+HX>UmC8H4Tb*V*h>{4hpHODmhz^?vu^s2k7O?O2I z<#)bNtr6O4xhh9`|EBQkRGkH~jz6H#0_L`7hlmGfsLDtin$P2!SEk2kVuN^GLzVl< z!~ywzn;i0*;#NTJG_T1Bn%uPNukWNdyb(A?Sr~IDO2SIgFRq99AVzk z$d5gxF36I+%Z?#~avb76m6K2%M^d5djzqt9D%<(<3`x9m`rg6f!=*D?+7t&V>RlpX47!E?!T&r4xZJ>ctnunt7=l{Q0QJN-;b?d@|esJWYh48ap8Mmy5G(p z{Sa>uuEy3NK)x@dkd(v8v!X8%(L>3{7-|qpTflj@s6?tqBRIsNp6xrhSglnaR$UJt z+$T_tu{l(QRx+;G{cs4rmq{*0XyALUA$Z`kLKZu1Z1I;)DvT9J*cCm$()NN%a9Sn| ztD@(hgY=MjK)~M0Iw#V$+c}JM;dST{e%!>XK%KLQn z?`6ZhKeCRISw|kySCVWnbPt_Ae%_VKiO)&HbWhiAjbo{FquC`6PblT_0XyTtK~YVv z7~^b-Q9H&iUmq685W0hZXvYqDAzdwX!tB^u*d^itEz^D0X4R4^88KM0CIz+ZJ>61+ z!2dwg@T?uVwtY@zCw^JdB7!pwp+*&QTb+Jmd_ z%&-Pq+_m?+^uN1vCv&Km3B<|s*OX0XOerFIu*OUk>0~R8DtA}+FCH|~uN`{A-RY*u z8l44iK(MotZ~K_{j~?fbfAs|sC_X(>Kk-`|_#0RNHB5d8i;zcJw!>nk z?j@n%EKuE|1Z%IwFpnx}NIg2C zve+ms^Jv{5aumQ8hdNzA;=@a?PRU6b_>vaOE-gnc9Og$H(Rd&JDWuLznP7GpbA{oj z&>uYZJY#fQ^Of>Kv9Lzso>1}W-D9l78Wnq6L$jMf{<)9&2HGxhz-q>V-HGpxI#2aB z8%1mML>aAhTZLhEgvYqglstWF&X=mPg+r~0*FI(CUywgEByWapG(9grrt4#W*=?kW z=+so*w)RZQIb-e|>V1`JvN|Vs-dsi>Dg5k}SwpL!<=da4^$bud=wiGe{Hu&6=^XLB z#*PRTdsr(DwoM8I*Hi3HV?HSJjN~842bRgQ zI*ln#E7?}db!oQ7_;F4cPTjlhJuH%(G_Lfa){On;+7G{#pnu^25E4820YKpD_Lcr$ zPlOz2MF4^jThLaxLo zuaGc$XvAG}9aR53eB446kA_|DM%3>i+uM3=Da3upn9y+Zq82G0Ptnh5t`l9Xj50uUF`%XC(6&SDn4z9aa;1D4=7huiY#L6X zwG4XMQb(tBj#5t4ZBB_tsW4{p-qQ{a!_)Dxb5Eq7zDFrRUvNOw4MuE3{(Ue-W@?B){$IICJO3s#(x_$hH)|IPC!qU{WKIco# z@sl0o+L=W0#W2SkqQ{tgDdYJKX;^(VDzr_o6Peh9i>QDb!WUzZGE&dfBXzo`LO6>LRD$$*(5;jg4~2C$4I36b)w0{C2Zpay zyBOR^VaKr5acUj#{Fw7BB%0ssu!v{Xo%eIT{Ewg6Ca;c9b6Q7vG$k;l z^K|#^?=9Xm)}}F*`K)Socn7_)I-6A|P3H6Y@=BmlBo~BT>_@!6))%IekZxj;oHzvwHPs@JH^B7QJ61)up>)wvy!0*$_4k zsnjMnt{hz0S9)aS)YGY1C2bmMW(1YJA?m~gJcWf7qkm1jN-1gPQEmLvs954s1+1i>zVb8>=u9I z{^Cnqb$!RTt+@quTc-P4pHNDru+pYUl_3f4wv6T+JiFMFQHNvz55DZ?JPNZ={LJ=d z)pp0(E3DF$n_U;%6>i@B(ky!`#RucLqgWn*@MzAXXD@KJ5-*IY+-eHZhI%^B`s1TH zZJaM$bN@OoKcH1g$2cuzpR9R6+d2+Yy47(dmg{ikGqMK}_B||J?N)0CzTVpWo~@+v zecft7u(XcTDt+?l(?$91zi^RkwG(Lb9Kj!TK8OAt2aKXHq9mN|yT#Mz2o5&79L0)w z%3znhgk3p`C}t|6PJ@}riOlx~Sp5WHue@9eeQ*4b;BLO$bdREoSE8=*g%}owhMHgi;{9&1poRn(7rMwC7w#zk%!x?u~*_n$BBgvAXYS z159WfWC)@~Bfd2HT1o5}9XwgYwn=#?MvOpSJhB)TNZ87j+GQ&4IiLN~Q#q+yJg62D z9P%`Fe<>`cyq{-@|E6<-h-oy|Jx53!yHtFJV75Q^HsNv3Y#UTa$$e`jj$KM?jA>NtDZY%qJpmZFA>lJ|#Jpg-_-<*m%RYi4Xubn-J?WDU7vBva)W#L- zt(y1>?`CU^7xeT|9ymXB+#)9&lpS zu-`uv{F^MwR*R%}ig^{=_CN6c=U+}ILOQ6#2)@r6t`bQa6-kSthM}+uicyJK!-W{? 
z?k|A85;*07D~PG_$EyqOEmkLyjLL(l4*KfG@Qc5sH$5s2h* zI})C>0Ce9w^0i=Ng5Y;Rv;@C!1#>{*y3wSdgOD$_Sbfq4e?yaUZ-7g&26d7WQBhr+ ziPMW$uvI=X-cm`2MwZF9QMEW2TUJkQBpBYIoSOUw(rt4epiOz?3m0$77&U}P3-vv& zxERiYiN*U~St`3cVVRB5qw!%B;Fai&*^zJTEy4)HZiV=ZSuMXfwf5Gs6#l* zuoo1zJJNsiL#YSm23|0%(*5Zquhxh_)$3csE){SG<`PYDDjx)MqB-}AEcR*`Tzoht zS$qp05*=+sP|gjd`mR?au7Mf}JD*BAa}K??jOqEEN~@4N&-yu{ zt;P!YBwiA-!yW8Ar=mjfR1vsF##Fwt-d>&HEA#aFhV;>Pp8h~BZN2VQ#p*wVRG{YC zG_2#=f#fiLqIAg#aJ0%C6$Fzn6pvQ1m`6u?J>5gXLhka;0p)H%i;B(se8-gW1vF~ZV zOw}kr=7Ru}w#3W7Y4(sfUWo1lVnbd3z1IsKsySqb2U9d8c3h)$DbAZ;wQ#&`I+-tO zXPi(h=Ia|h4wbTEipdDCozCBU;u*C09Hq*~R=khy0&hlREwNYqYHq%GsEATZ)MO3T z$m1DbBomU*b|b}m?X~Tqu_pbZ4S90ZV9kEq0%U{9SZqz55Z`=vidbKUq#q@*VxG!; zYgy>5S3hCq#J8%MRhE~lVO?90y+~zyQ`Z&u*d!plOmAo#?l+7MeVeYCH?obMV^z)nVT(%|`4iwKi!gfV>y@q5+QWaNbzCTuhZ0&48acRaCv5GCg zSk9wKlT+2vz-^SaOXv?>Dg{HQ2*5-H4cET0gBmBMl1gw(3Q2}rx{>0S zemXmJW5*cjff!1TDBt($_+4hps-TjZ#SZ#ZfgX+3D`)MEZ5g+_wl4H!t9z*6ouegA zIeg3QEdnU%z*V{zlZ5(l?s?D~F#EFSLGRiFot=BNuNRCQdUuyX%qk}A)Z$H@7D$NT z$Vo*~h606NH3p)yrt)j~Cp8Ac1KexcpME#ve)l^M;(zm(nu#QF6c%j!&c*o6HY#4N zzXPZnLCUDx6C++TuYo&n%okov7bZdsZCvEvEXk+b>i8;Er0Spo^r*UWwb)` zx1cJH$+6)_fXmgRZMhrKC~xBP1xQFWe9cC8)E59qfeb=rU6C3!9|{v}^I{1tLdWS+ zTzoWNWO`3q739tF_k^lWQ>!O<3WPOlscZ;c*OXZ%7j1V^e0%M(q$9f#0Xaq{vcTf(CF+yIiTIKokN1fLX1C6#mnZev3+DPS9N z=Gwd{yXktzMpU3Ib~GZA__}8G5Y>-<>CsgKCa-Vmaml$`s((ir68Xri5S~x}`w3ZI z3Fvk@v{T4#ZT~jFmu8%hDA?k1w}geUnlL&DfAH7Td&=OdDoN$>fIlFkyws!T_iL}DSm!3fXgDNIU(5qt__vunGHqq7XlNvR% zB@Ot^5;hvQU8Hb$!;UPKrHuByB)Wr*@qrIMGj-15Fja(1Q8Q~_2hDJnWbRx${c3vj zM^GkcX7B85r|e9GvENACWr;@9J+MnF0hx%_siNZ^?Sd&XzHFI(A9Et*VwFu}%G{hG zGVbkN5*F^!K-E2s%3LdxGpOFTrhcJaz9sk!3G3Hi~Tn*}hhC3K=WwXOS-BLp`xL8A2x*U>u?27U_ zN)&q<12QD2zeh;PnCx=@TK8j{NYu_}`?`DimiN-TctrYBENW0Zm(;fMr$)m+8>Zk9 zV5c);v)lis-)VRyszX@PJN8c_r5i_JZdyQF!k&$lR_h|G#l%2-^*xs`Dy%Qm9FhTnT z5Xvy{K&YGV$m@s3aw|VJ%1qoQmcpDDmLabsl1L+j2MmiVduB$RHQ`4WrHub zqE?YgU#opt+XRykmSoP{2UN`R@@EH$ zwqFlaznI9#B4o$bJyhb*Keu36Y(P)5|Gnu`a6OUSa1SIi-gmgS`nigHio)&{uxIE@ ze0%g%q%~xIx^d;g)F@|ss(eh#IS15NJe64(dv#{@rKQ`&2LYlxhoZWr7*u;vE$DZq z=RewrY-UiDk@!oW{K^jdlY%v@Z=**pM!Hc?#h3?fwl^Wnrac>=$3kfKOyi z$bIoGGCztqQV!)?TD6pt3@Sl^6guS>`0ef~gV(ICLd3?iX-uOa>WK!$P0jnksn*jf zNfhSEk-GL8hd8GZ*~@$VZ+BeR9*eAvW&+Mxk~VJ*XpOy{@6wAy;z(tOaggt1o@Ryn zeL4;cgYpq@8c@gvgYLke40pH|CqI6bI9k7GBIZhRg>-t!_U<+Hj}dwbgDD)YUqBs( z)oz}Yx80Mhm9zMqbUuR`*k-pdcc;Uq!fbONLU-Vr>p+&!0;$n#WSlsc9F0!!Dp)$b z$}<=T$n=BgS?tqNXDZm8oMC*wH8J4Aw5#MGJ-5m{-O+d@tP;)T@r6Nad~?4{>(k=P zclVI!F6T5dsVC{09vxpllO|LA&D|QWJLRalVNp}L$OhO6Mn(3Kn}$2#_4x2CpkyP4 z)#t0b(5*;~ffkVdov8K;`u9vQ;j;~pDH3_K`=2DkpIc~H$3~Ar%;7G2KWE_TSPLBR zoQ#igv**iOSazBU5aW7BT6>vQw`L6UFzQgXHwG~6($yDkfsD*M()|p+O%UvC+i8+7 z^I^+7Fn6_1q@F%_NP`($u0BOu9ENu|4bZUIF@>o_;%SWo_AV*U4y24_cWAJqoj(ai zrrP|p$`=Pt9@60RcZ|wtPa~Ysr57sjH2`k`Ln^Cz4l+FE2EJlE81|@3#mf+(gp7ym z=DvEJ!M>)ML0`+K-~m+&L<7-Ko~iYL2`=`b5))Z^2j*~p+kv-~d-1xVa|Wj2HqHVX z+!+a|T|_2Ly1Uiwm$!9#W|#X6#nJCy>1BsUb6HVf-)hT;&a%hSj%6gvaZ;|jUvuZ8 zd+>g`z<&=(hRbOU?_nb*KDY2WG!MY7omGn$beF?Y_l1u zDid#J*7W|8kYd{M2vD%mptZbb?!V9yb82&(RWj{xQOB;NSQH&p>8&NAn@+2!C;cQ^ zV#nAxW7>U%X5ApEJyoLMCZ*}2)T{Y7TR`5(##m!uFmA?ucX9dwEzXj0(8l=jo-l;T zi?YYa(aASt`+fhqumq%eWA$*|1D;d~G8iMo>BGgE$VM*MS@2?t%g*%y6fkE@UkIe7K`XlQd7S8K(aW07zqTTtvAIG?Sscc3oGkHTfBJIV#yrrz%ZqWE``o* zC+Z3Fr{yI2X%;?OazlibMm&{6C$_@3tf68-3)H9OU>0@}>{)~|8#;7Oh>GdceXHX! 
zcoY2`@?_S_<59xV?|y85ZvhP{;{XL;ctNNWFi^JaC5!t?xd&cv-*%_fUrOxEHX54* z@5*V>O89e&?;!BIKHEaK&N1ZBV*z|;;3gmh;vO}j_&0VfUnIwO+_4I0C zHp4SAlgDj_s4-_V zprVaOY)1f6fGV#20UOFqeg(8+j0qjMO{t=duETT9pUl5T1W1cKXSHQrR|}4qk~4RS za=3a(dw__w<0hi=Jw$nf!HWd;1O71k@}&|m@kIe zAjO}TO9XSA>@6H3XT_I8s00W@w$;Q3Uq0W;qP&e~rnofRfKmk^m~lUcK8KOQRj*ht zU`G0F*;H3rR2C+t5uqQaLi`Yqy@e=vBqxg(@ncuqQ4I%^7x_One`{EE-%gIO3bJ}g ztGtZ&0Bw(!>E7a)vdO)rR-*kU&H|tx+;e6)cYAE5e0hJEgG>ap*C1e6*nH4S)o`Gcqw#_jqf*w~<}n8Wj5$5|XDBfk*1Wm~ zE{8q#rRm#3WbGUbwMFfqOmrWWs?2Ch8aVW2A5^zfdZeFnuzZBhw5ZCAAU-V%D`)uX z?#`237hf6eOtP0`s~(!U|1vy{)9XW+yi7O!#b@Ni^{7RhV*+Gp3QaqtuYLc-prk;g*P*+|g>!qq9Q~9XmQ- zG}n5XUQHApGNIADV-+PZC85nJY$kkE;plZw*=vcy6~ew&nf2vf*ZGx!-h8=YCB`?^ zLFNoq+6=2ZbUG4R%7Zd*34b#wCEi@r^75au(W;R%Tq!^*xD8(y?Yv`EvVW>fzk^x1 z*UIZf(jv+g-XdZa7ZotQ<48xprg3I){G3A74NtZGQ-Yqx@!ZPBH0^h4P$nIw)bi)< zy0N)`M&g}TteMYI-*KfWygrGn&uVM`y5=eOXCdK3ebWS>mCT9d}UR_OU}5~w;^Ko<7oA)#;|Q9mtAjCyHg_;S2+r!wa~I(lJOykz`IZH zbV|45E4!Yer|Y|xg%)zJv6AEyN~BCH1LRDNPuzfUjH6K;H_NEgZtB3#_EK!Ojc0gx z>Z)V!A+wpQlo!awmLuEGl4Ef;)5W_MEZ4hG=S!Iw#;Ao-D343zygAPCMZhWHZsMI9 zCc`j7s+!O}O+WMjiJW6}!R~t2U-y$sghe{3(uCm$A>1hYAw>$4X8a&eC&PDrnI8**C=IW;E_=okFZho3@auZ2U6 zJGhj4G+Q3G?!k^f(CI>SVbk{Hm zP6u0D2VSo_W}nPUoFyYR0kvRP0Cf<29QgmQL=(h!DPKtZ9j1gC=y7=SVIWm zRZ=Gwj5s-3sE5sR5w7rtHSJeTXduYHkI72&7ZbMeN2=b4;6C=+lz%!5L7R#7&fmH|~q2ov`B0S~%VN z6sjcAlfPHu|7c7d!m?h^I>0lbd8&M}e0D)(FZ->kcS;8e z(}f3Ee-J^6@uX`%sow6`pq%#gb)Dz3Z}dkQROz5^&6`&Q0M;6_NE@Bh>mSlD&rjncCOL^7)po!*B~q#*|_8*lL8! zE^L)Oni503^dupZU=&~*;cHfraR=(Fq z+RhJnuKar}4HGC_>hc-8%vpNE^&!ZCnV5#&F0E(f15Kb#^U2SVvx3;9MpiylqgJY9 zhye$H{tQ_ijD*+O40;yfkrU&uU$q7nUDtS1RM6p8IqIV2qj}N9n9D$0)hh3Wq4)(52U3pL&#f=YF`F{!F`k8HvGG zGWmd=|LRjtp01EZs}KA5>YFY`@`PZ?9b&K#j#fW$3vm0QM*SM~96j!S{^j}NYZ5xK zq60kBpCA>1efrg-eTQxbHAGb^UB7z`zZZQ3Q<3>Wan1kN5t6Ee$Bncf^!D zy>(K6asBSS2OUfmiFk?}mgIN5q{jWS>J-Ubc8wWZQfrst%-1$$YD6JP1lULFbfNVv zug*CfodO#31i`?DKLditP>a?px~GK}?;aAe^*zY=m|({HUgKj(lyf5-H)_(I5B8qHxF)h*yZ{ZOlGs--1(5wIB&NZ^+NxOGpoM>71+a7UpkLYn@Y#CYStO~89 zU%0YKsh}`KCXFeMp zn=!W%bQ;>8lrul1zcc+h7Xex>_)SN%#rC;=?ZaCO;#*d~U_Kp_c-HdFBrlfJ7VXiS z)&o=2gCtI!ch)_|-=nk^3!F%<+4=2;`rF4f&$^3uELGsleIm1Xm+)=S*_+RwE?IYd z)BUIA?`E>xu{gd`=;&9+YL0muzgz$f;cn2L{~lvlA<7xJ965W%F4&>WkWtnJb6Gjm zfFqn%Q3qYBfs58>t(CYC6Fl=k%A;vs(!itse0**R2Q4%@4%%W9$}vMDZPD}xtdm~@ zj}I)1%G>!=YTB8My!*Pg=Uj}0&-$DRl5AMOCA{O!>{HUdq3;vw_q45IYmuHIv#f~S z{xZAT`k6lI_rENR`PT+qytQoR3e|*z<-aZlnjLHSXS~s`m17U*<_{8o7c?bn>Sy zGdl~_PW+s^fjRkT9p{+<)^-K1=OTY+h6r?hJkn4b^mXZ%2Nheo4UaO5-}2pW@UV1_ zbw$9-MOBM^ZTcDbq(%KI^1P?E0r&B(Xe(T#Z!^Dm%TC_-zJujvO#*YSpUwT!xp6`N zm*xe5krUgB8wz!ub)WtykTc&Z;``K)<6435=0M+;vd_Ofb6K)>sNIw>k83af8Wz|6 zW%4VVREG9{8w&p#-up8DW7gT;c@9=+Dm zw)&e7XaUnK;2P`bw-Uj-lYv_#wV%I6nTcRzO>+URj5L0<;x^jcMWd$B4$z)g;GT<_ zpz|{|tL{#**;)Vp-%H>~GW1*f%T zJuB~SY@D`o>gQ54j~FikdQh_P%G=GTfqXy&wEap+Z~hvG&A{obx~ZUX*bkAQLp^~9 zMos|k#n_v+_fIpbCg8E$n?S}@etEkaRR(xeIA|veaA{~}=AR!QCx893vhtE6yX+L; z-N7$bR<2whxFC%sBI^Zct1)mKD}Srm_qVsr=K^o1%LL9J=5I~*dE2*ArWknB!?DxA zLD8=zz{8uD0uN(O&MLat$KTDmbKK~( z&ScOoUA6zunz^oY9{^s6DtG--3#V|@T5n+MHX;hROX4z6^}Vd)Gj7TDOPjA#lmMOQ z4%&i!q#SL91ZZU|@POK-8$0{ak`Yesmc_vEDm(K10$Q}-ArM@3>Q5bm-Q!z-%f{j229aFPIUlIb(NARox@zaGq(EcdF`_8D7ojBdk zWg-T2(2XPR*saI($*n+Oezty8ok6URj{zMdHTy>gY68UN{sj)!z|thsy@r1+P9>-k z7b1WTN*0z!pT)v-I>d_^?!eUew&VRml*P_?qC*OFdTY)jW!zzinw;K(4%L_aXuE}2 zA14DH)azU0g2xTWv9mxHRJ<$iYsXr=WAXNACLk0OqnUtE+8fOTsHF~SvHNF#+1m?j z3=9m3H~z;LAK*K1CCOuzdjmrw7e`c9lA=Qc10xd)hkycZh6PY%-;Z^i_|!nPYbj)` z>}(}yqJqN%4TTKLN4wQ<+X{EkiiQPC)x-$65v00tal?YA9q(rmVf_N8rfIVx@cJ5R 
z6y?dGKI4HWiZ9#$x^|lsPsP+-a6VHYUrf(Q8LXK7iiewfM{$|YAo{9iZ1UwTS?pge!m|wUUz9cl+;w5y zjJ{6>K<-$*(YBXAYdDmZnE7!U(%TSC*NMDtLN)IDM~FHKTDH$IG6Ms>^f6T4asjANBTtywOUwZ!udXERry#3lhs%oL&^)V6 z>=b+R3P$`3WlOY}oR}&7e|dIp3!l&YiyK&VvZ`3)xSm z70cn1on!9YwN5v(`4`pVjY1o))zXa%n?qWSve`Btjgp8!#Tx5HGW?vuIYLpp0k}#n zC7>Y_@xXEuHxFHP-!U^u9!lck%BFT^BH~gLCb1jn@j6ib#5G2icE*Z;JmX zG~LoqVXc5-kV_6};>2rfG{MWGJh#^?1*0j*ao8R4qaJh)RNjzBz+`19PO_6=Hmlzg z!m3mym1hpXWBhWZ2fL0Fd1KeReA+t5WCWL(u#xdi!q!%dniy3&Tr-<0?57jq1B_;S z;n6(^?9O|r1DgCrpG1=%?aT-P6w__uSn&_K>RS>}n&GXviV&43NMSl%hh9)O)3{tW zo#FSs@zc4gL+7DWPE90@Y$RN;dNkJ8-yL<~;GurKp5FZ8=z9N?CHuT@ zhg&6&zv9wNjT{q3|66G=!~#jN?a<@iq=oJ|fXXRyb)gUZh%nf=cE2!ogiQ>T{dbdG3iG$gZjWH^U++iObVu!?Osh6l{3r z)5))NCqFR9w<8c%d8%jX6-J=Ug2K`pvB%7Cxi0qg4k9hLEp-8Bm#NYu%dn!!a}xN% zgt~zhcEjm)IR4o}>0RgD%h+@MM^Da^>7E zHW>tn)ZQ@Ve{AqAT$a1wwa z;|80FEGT9yYoD4>r)xjsW#c8(aQR>&x~n&p2NMDI);0c%N9E9os&&&D2}we7M6>&G zTB1@hyR3oKe4CCt0wC5_NSoaYm7;1acVEUylJeVccrJ@kT_E z1*eUO594g3j`m(8^`Gd>-$sGI(5cJlKll?BLNB~#8qOb3S|TlECjWGAwDS~{o*j82 z@GIq^4U0(XH5~u#$9^al0rC^`fpr%)5go3tcGS(;#@=V_H2k?iXzAoH0D>Nl|4^vm=NW8!vGk`Z1;*vh947Pz zb|%;hfcS%Bx0~w#;b>f!6f`08vxai(>KfaJ4BKVUC4fbG?+z1mMv$LvcV4%!inEWw zp!9>?I+q(*HXi%Xp*-vh3ov13`9U;x=Zd?RE3Mlww8~gTPBnw-e6WJGkyJT7wRHeiSMotAm;K zycC|~^Wvx(99qBCixSQzdVlx4{Q2!N26d3v*Vq>Uy!m{2z&+3vM-NH`($2l$x5O*S-1Rc!emrJ{AX~rnO5BD4D$LG298q6?x<0}ZLJ!7*u<%D_-aFU zNLYcqaQUhK1%kVrO?};(JF@5TNcn zrU>zf>_Xn%Iz?G|Fxn(})=7OM4!~8s*=aDwyW`$vmFD*eH zuW@DKVu_;$Z;tO`(>aVE90LR-bc$-`{+x7pFBqE}Pma%Jn73qN41%#kDL(pAoER97 zLbqRrUYv+v#H-)53}9H2m*BHBslaRmh`;x|3kQK~@`Hs>2gERK#PmxymW1mM&s!=U z`FwVnsSJkJc!XXx4{Ao=z{Q*U{LZXfvk5$QhqdwJ_woIhfR2bYDp#xe2uwSbS+JJy zH3<8|DYKmRmqV-&e+>7Fdwtff$Iq8bZV>ZMWT`0CxJ%A1w;HlkxSz z79pBwH_a(Py**;mPn#7nIc}H#e2tPRqm9RjK$%OW!0{2p?KrBq`^V1PcloYA`y4;$ zbr89gy21omgWo%rVJP2*K3ZW9XugAtdFb{~KlPvxAX>{YrqdyGink913YyXy<8M~# zIZOfdZ02s!WP@P-ZM5fay(rQ`Y)>CT`qTvj&~-OPAAJmb(|a1gP_U& z4%Y{K%v#0v)#rS<3-H=j~tU4XP zd1q`QaYPK0u#tupc=>t5=0JZ8p~h|*Uz9!$lB50Qr}klP%;0yAyY#c0&(|_#YkJK;oj&f?^3>~w=l2_;U%Kx?i6}EPzAajTmyOekjNe_ zqJSxYp{1<6Tj*2^wXJc=ia6F8$obw-^+W76G^AdPSfh^!%p{zW^%o1dzD{=HJYPac z=nyQ6&lnYsV$zg3N`zd9dNG_U%2_0J7$oKlJ&fGgHlsUdA6P&}c^2Ox6IG{F zCzmW>(WGN*qiJ(gcbKzjsjb6QhgpAHR$d`p7Z5^p=uIZRE9XK%pUu7kAgbaFNSld zzX$vlCj1_Uh@3xgS`jdK#j)~;;Em%sK?nc4(dR{53M-ra`Bgyo={0Rj%h6J&*jCLr z&Rj?F3^JGdTz!S}*o#{WP{jr;K*c9X^tXONl^Bv_aS?ovwDA-R+n>eC97`Q|V6y%e zk?q@oBF;5yv5UU7tKBS(;NF{M=^F{db)z7Q?}cuFUr-tfb0;&9kxY+Vl_2HP=M1GD za8vQP>`^Jq`!2-A%1;5H_U}_osRiSzDY;R*iEh)wWt}ww?l%Ge4AD<`OYiBjiyD$` zH)d&XjwN|-H}h>pbTx4bE2@kIAfY&WuyY&uXlb6v#(r=_z8js?Pg#|}12njmnp_Na zbu!0$Ggenkoe=XGtGzEOD5QNP=;rRJXQ-{+DB$ca{8Wxa;e8|~38&vaZ*YF2hl9h| zR6#AVMmqE^l4|d7Mcd>}|c++?9 zf@%xf+%V$?<_kY@$UuV+;DabLF$oiq(<*uKR>ZY=6S})?Q=;STfK*w z6=Y4zo4F2hQ-y-~ouhDdD=YTJ^mDVFY1_3)#78$&@Ey30fSdNQE5ZLH0wiO&8)|A# zQxLt(xuwwGu*{hWPDH72l_R=1&S^T%3Y4(mTT-+-_fxz#h?%>doGBytBhyKVxlX5X z)A~|6R2!bd{Y&BT9Fmw3^quVY)>pEH#AMwqV{s95G?W?}9t~thytc5KgNl$D3q>_0 zGjp-bw-%D+=^_AnI&u1GDir-MyTCih{1!5^Su5uBVbLqqevhgkoy;vlBhRTVNWY8?{fa zw5q~~>nrSS9T5VAm{PXwE`aJ8uwKYG4j$e~_iOBrCUnd9Ro807Em=O31Jpaq;}?KI zY;$u*y|s+Bf@mmd#7N6@WhKhnstpP2MSio9g5R`fGqq=)jH+Y>T}6sN`B@FL#LZ#& z%Qf&(tXN|hxE>z+vj)jCf7G%FuQTk(G}JVpTc93oLK%J^ot4%+Kp%MYPKIHdF7WV! 
zEvVE$YiTJWyFaCvpyw!46RuAumlCjOWJlb^%d*V=m>OpPy5syU9q0gX1=k1WfGeoW z8e`0)>zh_e7RlX^_DjQ$F8>&=^&hYRviblxeNlNo z`DX9hldWy$cbad@U`r_ZC+BXkd;{Eqj7$y)Y1^zW2*VpIM#;E6qf{m%3gC-^vN<9v z@wZ>(HgKHRosZg_H$&RBnSztK9TdYfj0*2jgl}4w z`LIV+6Rk)|9P$P)&$gZRYBo+(>lHT!!U+lb$lV~e_3_e3a*U**+AOFj>dDVrTLatJ zt;7OX4cIIzyCC4(O*z6y=my;^EvagVHC%W zJB_s{DW#sDID!QNqqfcXf`iZBXtPg9fOvmC$xvHm3eKASoWd?(KhE<;G?_Q2!^ws~ zr6_;FJkkAT!_3CMVG@AyQrRL}^Lz1mglRCL_v;2LjTvGQ+3r|xMqgoPogMJsBme># z59{?G%pbaWXrcfI19Ut~fW0zvpXYw;3vpQ=L{5@HEJr=0`*x5iy*Al5yy?s<%wiG$ z#oXfY9y{1E@JiE8b+z>2S}h)R_-aqO8d~+^gh*kDIdmy$A zAN~zyKX9VL-o4KTMs8+F&3Z#!U; zt)le3v+OiT`Sm(+jkw!P;^j(5zD1q(b|vSU$Wp1vddrj({s`u8d8^-hOD~b{$ael& z-0vGltzRMu=2>(3#BYhnG;XHU!Snk65K=uU|Fv01@^*h=6aM{y`IO(phAho%M|m#8 z7fAFQb8ULKnV7X*I>yA~=_VwlU;CBHRH;-cRS-TnS3=Wk2n6}-T6?H9_ytLOmLIKs zi@R+2UK12ynZ$zv#*o#PocdZMbIgy>Qd3%=qMCkOcAy=9GT0rea~A)?UOPl%p&rH! zmBYB~C4|J+u6-o##4x$Nupeq7sG71DeWLsGj03m97T730GFzJo0blKjQf=E7sIr)d zp;u6FX zZ@wJ5&6?_9_deaHX6`*}#no3I0i-)+w{$;g{lrd8VCNv-><3!|Nl_o`vlH_sIZHLy z&|Bvj^}4bH^kr^f8PtLCOo^A>2qWIQR`US?Z)H6ra~_xMN2kNQQlaP`AnEdnG`Zv*ihoaDt}bSJx4#-aQ(vhd=7K9X9}hB$oO^}U#jbBLp6Yi z7?xxP^KQ2Hf^vsWHQzw<7&>Jhw^xD2ulztEIz%xi=G8ZvmC8OFQMhY88^;Y6j#|DV za_%F;3)ZtheA6==AQB<)2+Dd34nLD1yifHEH-lGzcfCs^F%9xL1WOKb|DZwc*e2P{ zUVLH3gP2X`FJE~#Z2Odu{94lkQ}LnpOHLXuVle+`!)9FZv?_W{-i`xckggM7J+EDUlw#ovfc*rWye!*h+XvDmj z9w=poygMt<$+vS)&8w!*KBs~Kd95e(Fymc5>(2RzsM|c<*ikNfdhfu)g-u0x*5P-b zWAu^4amj3Jn)7q$YOL4@Y6li>M(!RlF+w%W$TKB!b#}5HLY7|dKk5EVgwNexiYatJ zO04$kgthuLFt4{hU0|pZx)PBs!4T?4!qCm_DLV;E`ot`D5oEuZWOiNmrSisjF!V(5 zDNG&A(I-Ja5s`Jfbj)$vSt5($-0_YNlj&ANk8+FA*VtWPxNv+8a22E1R$1e{Lt> z|Hf8?bPwz5uz2v&2i@Z#^GhbDeMCId88qSMt90p-}0y>wDhYeO(~;wJHsbe#nTHZ*7p=VDBb92(nzZRhrR4 zw425H3_m0Hg46*m?4eOe9jTNJa^iQ!$#ERXbaG>MmR^AIFqeN{pLM8&CmPRJR2EwT z#?tQ?%RH)dsFO(2MrB{4D0}jMnV;XWLBD=SW{+%_qucO>Tbp=T?iI3&Qn5=qAYj}? 
zK>~awnhMY#^)d5b!=(STzCdJ;M+Yd!7qz_3Rprfm3)C^4OgUV;vDNzSax)=VIa9TU zo=dc$qDZgl#7ChcrG&ZQjx0Y#vsZl`%HMc`!g`i-#~+UzUx0h?C^L~|kA)d;CBe5u zbj})BO_O?W`keKk=}NlHvF##l3z(k_EUYTBotNI;P|$6EvM*V`;YA&wmHl8iV70?v zbEvxKw7>gheV!4aU#X1!_I2r%5dsUhvz_!>NjWHK0QLKWT%y>0O1l8ryEb&TQD0vn z$)(_1C-xT-LqjYaikZ1RLJ>Vs$=W<^??fqU(U4No>?16&FdJt~PEmQ)qFvL(YfD?> zgZ;>(#^g%eWS>9HO8alt{4XDNfVhMaWFqn>b(Apx1Lm<>^&dfN{HgTBB0DVQY9BdpTlnjlOA73E`~s-?$H%=;*c8 z9Ub*SPR_Qp(=tLtZn_E*6+~-@?GZYQ)m188)hXPb+SBuv7_)AN6}GzB%Cvp9O@1n3 zd;8Lt!=Ih%vyUp7?3@ad!Gh#)ezwct0OWJ+7x=mn%wrRYz+GEm)~FfTT#X9iE;CQ)yU7)jPj6hjRY&XQ=UyO zFXr}7uTg>cFGhW>22?|`2n{9%96egsYY-!sQk`4@<}Ca-H|36``wA0&H!Z^4U+epN zNnVDPRMUK0dWn{!S{lNaS5>7xt8SJL)6uB^F=KjRR@qxMW2#YTKglH0A}aq>e7bAl z0c59s$9m=FDj8YKX7Hposebhc4tH`KbQIcK@3zGyOR*ETaS9>+DQq6%(+Phk$mK||n{`(AW>h{7e$Ts%CmD7?`c|G z4qDBuGegOpt*tD}F%Em$Nw!Lwaign93Yds_Xs357QO2fSw+>YSlgY&Oy2|{EhRP&a zsy-hTj^p9uS4;+{MRZ==+>E05KWK}4PcS*oDu-mVuQ_)nMc$h#P0sn?3g>h@B~5L5 zAFtPifi<|U-ZYsb^`$){&yW~}M&$18Wj+6P=kLAztRmuz>YL7h(Md-1!W2nBrzsaY zdC$Ua_9HWjZ$nGVHAi>YI`x11FbaAW_tUv9*(a+T>=J!#zF)wi2$ID4nR*R7%$-8P zVqI9PK^^Rs#S#P+UfB7H*!L=(VM%WPeH|x#T7J*l9&pIwL?#9w$idJ+2|1ph@b$Mf zoyPa^Mlsr-$|bbFG*tZbT9cJ}+qyn*iQ#du1}mwaL>Y?-rUh;i`9(>6-DBc;OI552 z&(MrVC7O+$9zFY54=dZ~>$NO*p03@8b71e==eBy-9qcUGdY}S+Veat>gzWStIaW;belWMzFNBTcTp|lqcGDikw$6C zB{El{9#;yP83n&PsckmOepC%VX}GX`%sj8|SM8AG61o*fpBT!?NFNi;-FS z^Gvhau{Jy`<4PzV=xa^2qdii+z52huMGy$KDpA%ZDcUFx5mg=1A?m9$@0hE8gV?OO z47u7V)!PksLZP?->OfilZJX{Z>8|ZRM78rM?CyjnL6oF@(MST^f+nUT5$&OVjn@p7J89zM5>LBPQlZjvzb<>)Sq)zP*ZBd2ZDF})e$u55kz0ihR zrrT!eRiTY1D%%M?dl%pVJWhr6G50_NuBX#FKSliHiV7)L8gNBV3cR`+en@~BV-?bf z-MH0|RS}Oz5;@-SH`wUN@YiWU7Mp>6qVwkH3~cE`j#!Q?m4H%_(++1ipC*KEpca`q zkRW3hQq0K|i?h8QW%z_QS~#;VQB+t6@(gzAY&qXY@7kZXkkDYYaLQY2DVLb$@a8b+ z%E~L4UY7@_D6ZpFKWi*mWXeVzm(j{TUI{$K@?W=QEFM07X>oM?67DWvPi^+Uo*H>F z5eV@_kd9GHjt`fsG zT^5@9YI{!lZ8iJ^q>JsUWm3{>Buu7ee5k9=>lm~vcy>^?;A7m7`i_(=b1}fy$UZ2IxQXnI&WErR zrV}%2^eX$>lFhfeDP{(w;BHwNd2l&X2=Y(3W_2K$3|O-A&8|>>e2o2nS(E?jxO~5f zsNsdE-G5y6qxjBX%K6+pa~^JfLP<2zb38hvMoxD`Nm z`-d4~$lIt{L>5NBHe?!krkUR0*BV~$<7c=EQd`2FEj9lj zRm@I6mHuO0@#>JM5*}xb+2Ix7I3>!~Xs9n06cg5w3E2GGqd`QOb=hLA-y9G@ttTp# zQk&p4Wim8TFUqOOht0={^qkgIS2G`@dc%dt^UMlgr4}AX5W6QEP(cyXZ86Vzz~8cJ zpS794kei?028+5#7OO0~$0sS~@GqtMCe>L?ks(mj`G}oNMy=#$ZJq~a&7pgX^7{`& zM+Sc_c)&APLXW@Bg=(|ABn2gY+FQ_t%}#mFx&9+RSy$kM$V7FdwYMr$ho z`@J`Hz`@p_*I*T8y&g-GTdRtFjF%8oz&lr!UMMf|*3^Mm37aj)#wi+6%Q$fP@+IdS zYH~_)4*%-HRw}VW-a`(N$9zd7T`c{!ol_htEM}392aAh&GqsuD+V%@|(!#Es73Rk> z8+^+z(ahmSBvtrM5GTROLm_^XDe65rcS^v~i8dK^7t`bXd-;BA!2dU5_{X+fy~jI) zumrV|-`5sN?krb;B z$#dPXwI>3b0w4bP))x6^C6Phm-RaGK2I7*e>SO~TDq@V z-({$dMEM=`taX;u8>Zjx?WEidj$xE`29X{RW=hRFM|-#+N(7HPMB@DlEo#=H$h|>+ zN$IskkDg}GuG_fI#apO8RfBv?+J0Mj_;$t8AS~{jEb3{e)b zcI~A>F%4Bj}wzDeR6Sf!Ak6aV5Rt_WHbK!H$_UP-~MNC%Fi=kzjD1X6! 
zOd-!XXW$~4XVX=;#&j@EcX>odZIHQ$#CEKRoe>uiPxjqBqNI9lg=OqLen5{|VN)ex zeFzFkMZH{*aJkZj>#=z$MtTi)ji3i90imy23tpD6tpB8~*7T|MPz8=z+O+@Z7tld# zs$n73oZR2s-+#No)CUY0h5W9VY=E3bz=j@6qa;()E|NT=EYkIcbneFG93g-pF*pdo zWgD|0lbQ~Y>_yeH82}vNqXv=KYy8s|QZ$u++mG9Wj6}{^BmXS(gs_N*v+WPv?N@~U zY<)i(y&mN#MMcY^A|@=AIUm$o>bKL)vl>21$)BYPyHAUwLDt_k!KHabVmXzm2}+y( zV(mN{Wuc1yuMKUpoKL03?i;bdGQOA3i;s0=L|0Fox0qUBWZ;J0;OXRJ`PML=l3fTh zBp*O5KV?Oj?QC8_&Si zy{K$sf+`fu?}@Oe#jt5AfEg$rZHExPiabHn+hIrg^N|NQOk z7k{SEm3iB``hB{Gyo$sf*D_RNnrf`4u~K7bZ1@`y6@BKN^s;Q4+2bO++N4GugljTE zDp|jAzs7KlL?W(A^U7AE{+hpYqt2moEJ~VvQCq`KL>sZ?80Lt8)fmMAcf{cyRh!BH zT`ig+D9xq?csOPhydX#iqBTw!8R%G754;0>EX@-p$o#m4@!rTaLln)Ytqv&l`$V)c zjsOvv!m{){nf#u;tJ%=JHlFNcG^0d;ofQ!ooqMDOFcDin#KGwnyXpyS@&S2dryx*m zjul?MgB#Q<@=5aQjo@TlsF~2@Y z+Y-muTIunqjp~2JANL~wkINH_`km#zpFc6gru2F3W|YmaZqd|g^I=3NRFBYKk%>y{ z|6%PdgW~G8b>TqJ;K7|hLU0T24hg|RaCdiU92&PEL4&)yyIXMA;I0k9{VuZ4K3|<( zU%h9)_wM2kMK#d9)|_KJHpil4ZwQB%u~>gYlk<6~l7{AkEpRuN*2tURclukcT1<_b}bNKy@Ziskoo zd2X>nF_crIiipZ;CjD#frr&%UfKtK72D9BA4!E#ghl zjJ@MYAvv6Fs7vNzLAlx6hit`0mw%xi8!ve@B;WgJWa#BZyK$j0xVrt4yj@lCJEn9( zHiEa(#gQiHrkaQ@A!gbQH%;MhkmNsjNzfSvC(sx#{Iv))OXP$yH7?Ii$gs`2d~0}4 zr3#(6m7RxLA6a3i+CaAn@+2}XHGG!fA(G#YHn{K6*3SQ*t4&mpEF&G=bVtj(`fQub zahndWWI^EMz}!{Y9#f6OGbj#iCd2T7+kxa4qzM1YpDNOI#D%Kx9ZO8aOzM34h+$ss41ez6)V^nvzP7a^ ze>bF%A>FzB&o?2bgMhdTtHAEa8uTK6m&z^mgAyR7ikW97_avc;R>yeXKKd7^wg83C~_mXkY!O;Qh5l?#>>;O0ts zs)=w9VAQqPZ$jrZt1wtrfxa{!qaO7=A(zPLVJ&vo2}HeE&f*Ch;&|Nh+au5)5Gw?$ zVGOP}hrS#!^%V3_59A#3FT+Jh%y*X!0K~4FlLW7uG2^$QfzDzl^*oc=)F6Knt*}?x zB^!wMIC{rm+KQ(%u@tqo`FM!eLSP{T2g3&6tl@FUJv*daK~c4(FWgEQj{$G`Z84In zV8BZfkV9ts<>?CFHNnZ@I6G?%kB4H)uR5c)jm?<&qv8#=bHvqC%AYrLvvTLz3E z{r~pqzx6Fx0R|`SH%&e6_$yjuE5}3ZV!?)8s=7+dh~K>Nq4cQ(dKGi2M~?`M&7Z^C zK3UjP@S+6*jY&_`TPTs)j=UcEAH6}a83F&N@5Mi70jP+Fi*Fk$JHKQt$?Ef~wop*x zp4idYUD^~}a;!0NdE#SjzJ0w=YlUiR;q)jaiKJdMV^X_UU^0#PK$yynuN@&N2MB=Av7<0S&VkBgl{??r_&*?_a68?XuA&*t8&QPdp+g#D4Ks zH+n+wa^C@->9F5UHXU}>ryWrx!SI#sG$5$OO-IKI>}v-WDQ4*~==fEkoYgQH{tdfG zRM6H<5}MK%pMs^2x=$xE_j_P0uy7M#0Y8~ag6KyUD#GHM&W=ubq9X<((_PVzwN6Zn zTX{PsGVfkKpoW|X#L2=8fY{bw=^C*T@Xychs9LF!AFIiQ1&X0(3@+udc*n=f4tD%P z{4qenLUr+t7&%mlyhm;;(E}CZMLzK|$jFr7t5B$wZR>0ZvoN*!6~|hY@s}b2 zt}o5tiZZL>X|v7;p;y%o`7fQe?nE-^G5Uxyo}j$}WCPI24Q>qbVj6u>yu%!YstN~l z9j`Jf$gkOaKLSvJOl|86Y*)`I;5V?|VS%Kd~HaCXX#Lk-f6ek26yCPIV#p+$rQ z@3eQ&{IhnL0hIpdX0O%>3JbmY-S*qjK*N;_Sv+gy0opVnGCFif)?=Zm#&RX3(fA3j zuW%L$W4rftvmi+NTV)k9Ge+%AeEMUK7uBLzC5s5mLqSMYtBm3>_ba*fK<^H{j)b*XiRL_M;Oi>rtjlDTV*GNm-Xf-*VrSF1ehh zyut2qi7{iZJ)tK!Zlqlp90o^W5VYO-F->{s#C959LQBHqfO*Nu0-YE@jNxEu>|sEEduXLl=FK33T~T9#mM!KFp-fqA zZk3?=z-`#SXp9987OvWLPLVlr4%EL5~sshdo1pA+(nPS!* z+)m<<{rD)9xrP)c?RD7*Pk-H-8S;41O0PHa>x&ZP#%H@gKGF)9weRyec{QOWI>mlV zY3yS$)_~Kx7JRreYxE6*J9RBBI@Fxgo(vy%*&o|87bD9>c8|Lf^7{0}2SYnaeX|Ai z!E}`UzkAld*{=iA+mo4y>|OABSJAP80@~LxqkoZh{2b_>3zfB42Now5UNLmk#}0yG zHC~2)R84>pGZ`9Wo~W<%)GNooIjf{4$H|_5ASK; z9Pe(zD++X=tXodyInn&|x*_n-#vp%7_L5pLr=kQ0p3o^j(XYkI7hz(G787Yi)K0ii z>jRNFclP3TXT_|}NCGX}q9NsuzS7Sgym+?&qoh{s@l@r2g&K!__@p6XH@0JjI}~w4 z*}W!%Hf>zvu#QY>?ap=or*`@(FTbi!IO$aStPDg~_-%c7pHZ?rN+}HAcNsKk+zK&c zA{nhOyU6OOjo&b}x3#h0r847yf)7T3^X$E~1kH_Gr;@~!eJH>fx0X$VvTQ#rx(0b%SN2j zidL285PE65{b~cx(@??`CN-9qQC^R}HNU*U#mfhkvvhxfL*I{;k{!I%ir7H9{(b0)7@N2nT5*~N9 zfhx)9kx%;8>~%qx_74TWBC*|~o??$^wqR@K=f0j%Bo0g?KP%MxyCNG@86aE|4Ris# z;Th=Qga&U42)mQ{U=E~@R3e0rW>5=)@z9z2+6n4m{z>ulzLITB3^E_`2zXPgJ_UA= zJ0iqkyu-V>dvALWiby}6Zxb_16ymd*+^<`0C?kZk_u00kOml)8a=EdY`DzZVO@GVB zo<ypA^x0WDXL&ny&qtnQ6-yU>rW3+-LE*eXqyFv)nz)Lk6DR|pAhkZ74BSaoou7)}g!u;uG7OPLI15X>m~3i>?XzHz#GgA$P* z{U*fIMXV@l`jq_{%fY_Y33!-?ZVw_$> 
zc~Ma@SxtvmXk}BDU2xA;tCPMwClfY4CD$urp9eCsQR2ZbbYY1xcK}f4-Cd{9biJsz zID)pJS7Vm3B~a&h+|m(TjVGP(Y}fbmu#NDRQY;&UXCZ6o8U6>CJmq2u3yY0C+gK|| z&SVE40A3`Q5T;6QWA7a2)%I50v-f6lKg$GnC+=Mu?MVjZ^8c*-F%c%eOF5m6gbeKJ=uzXkn7$$YN5*Y5`M$X39m(#wAjV{1T zeDqyKF6jHy9g10+%O@Bz0fM7FE>c)M88RVvX%&eGf97{RgeFIO`+h|~!L^p^YgX2w z4HtGoz9Kr?K}e2h1`1ndOYBi5b% zYVSH?f<9wFE5GmLdY~Rp1sPw!JG-s4F&iJoKGKa6+EMwU;z~DQTSXvQY7U!VC-rUM z=>A~uKjFEVUs=&(L0uBy55H`2^O+QwH5OjTWpQkn(!*~S4 zC8Zj1dw!*7)*KYmp1J`DM0S9ohc$-9BWk0<87j3&cZ>z)Y#b(uWHR?a9#)()i7l6w zG3C4^fs!EqeOtLd$suaZ2UqszrPZ3v?dUf5A!je-pi3A3h(9-fF9s+v z^zcVkzv5puX=wBz^*HXy{vZ=3-g0_Sy|D>h6?gH9Od7KJT$d<%(n6gMxfp^XX%jFH zi)S|%OV~G@Jok;u3v~B#F?1EW%PWg0gsB+)xX^FUc*GC- z_G#S(orD!F7&;1vDFIW?dGv}2KphFyX2iM8Z72#eR=wS`vpSm_ULwEuCMjBA!k=|y zG7~7@o)8Rj82T7=>4?6U#`-T8u8zeA@65-Wt-c|IE%=1{-mN*z`Kl##82B6sJFe=< zy+n(Wx!jU+jF@s7f7fW)xkP5#`SvXiHf>-{nM!7{@jF1&bAxV3Ys2ojdSboA>BW{O z3xS@WT@FHga(kK6_DCUqe;$wb*xrSAdXfb+q&%j_wz7lPCo$WCWHWR`Je}_IHTzhJ zw=XGZ3E(NS-K!8z96|*p6sd<3gAO;L6)~-sZbfLt>eY8Ugzflb>bTD{uk_&bmY*!W zUXc>JwOz5?yIQd(h&mg={hkcSB5Ucs?IVhyD;h$Yhm0?Se(C!(VsMN&;st9<`NZj4Wp;rH)ZxIaUBCxd=zjl zDz48>qDIv$#^qK^c`HFuI4eAFsl1;~Zm7+iEyKYxR(LLb6EIq z`vY%~=ull?PzAwYTot;w6G50bdCFc5BCsvjTi{;41f0<@aCG}SWPUJ!RwF5d73gv( zbU#V^c9{4f&$1dDkU94LpsGJx^o69mhywOdIy#f_j?Q`XXeT?!Q?PHtwrRZ4^kpG{ z6`Ztf5o^=ZSK%RRcnbQo-EAo0~V8$B}TGOo^6WIrqK7|3J-3cMd7l9mk z`^3;>IU4eH%}f0rEgO(OFN2%g6Cc3-dIRNc^s!)gt$pHPDFaph#!BeHTCu0Z!b;E{ zz4h1(sl9{l-pttWxAjrR*PkCBqKg2WdVtEhmPSZ{Nu)9P%Rh~dD@??Z*!NOx1N?W( z5`y6BR!7yLc$Q$O^ee3yM&PAzRzV~>=ThsFp+AuDO?mhg=>H<+1FicE_vtbL~rAoH0DVfxiQ6A1vk0?(n2!$-o&yxe=QDBY=MoRkL)EL>{DUs9Cl z;(iYg=-vD{CM#7)eIzb(@PSb10xvAkAxOQOSP4~?I>c<^zl!?axD2pjphKSOqF}Y# zkV*&z1n)*7+=sd}|3Uq&zyMI(MQtdk)TonwOK4H7;$P7)8KV$3nidXpw=xR~TIEA` zoGiOsov%(N9o6(HPD>509C$%4d?%{{i;rRZ+pjci7nF6F-(P5b5lxkf2Pzc}xqjbZ z`b%4YLrre&pTkvZ@n5hfbL`ul+KysH6@)KbZA{a9b%Sq@F!0lS10;0rDr)9j&ozOp zjFvOp;fGI}QrM|lPv|s9cehg~K+P2d@4m7qu|PEGOzDG0Yd$yj*^t)E#|G;*m_EYR z4+@U@BIBF|;-tu+k$AK~a|{ji`FvDe{}6s{UxE?&eA}*){ zv?>ROCSBuBKdd1?$(K#t5=D_s!|F~m;eJqH4LhU+?7e! 
zB*;aE+3F$XNJ@E0HK!hUSr>{(#1qb@#O{D>@>72nYJPKeTC#=kX_V0){Jj_xP}bTR z!FFwG6&SGV5;~{BIRQKdO5w_$dKUZuQ1BIPe}ah-vTNu4FD3H@tUD6cvIpAtyuz23 zuCA^c49{9+;nTHxePWWU6a2$m1&AX&wB%Mc;?3EFEX7saVx$UF@rrZr=9p)C$%?m# zMWDyc7XUgf`bcHsWSbHTU4cneU|P{pi8%nFzjx01z*11#3j31AdGa} zVq?n5Ik;p?z>M^|7S{DbDH&>ICk3va6w9Vaoqa-jzB%Og-&@ZX}ArUI{) zUU(zyzn9}Vyy^hjI0YSd?Tp+q{mzlQSoll?KVJ{q6Qgj{^?mcQ_JB4= zrR&Ivw(70UJkw?Jdv8%#kOd_ra7)O~dEdxMyT-*h&a~Iy*;%uldP;tQCllS)`h=OR zSoz|tS2MJk_2MqDt#8AQ=eahjG@NlcG&S8H^*TLo6LQM2vHkAlME0=ZGstKx2a##` zoZ2$3kb8Ve+{Wy+IU47^8LmavNwdNomm3zPeJEF|rt&uqC)V}EA1<8c25NKs8*^g$ zdxNAit4Dm7Odg%5ll1LIQ99w>35kK&(?3^HAZ|~BnA>5qk3Ci`v$rFu{hrbT4FjOX z)3ppE_x)-MYpUY5H)$=i1y{|P8859NH5(-<--KZC#4lLIoGY#3;nCWh@IXWKokL$9 z1a>YBn*cl#WpZFCNFtNgc+g!~*kLN;dZ4S%Dng7HdJ zMP_2NfXR5P-Q=Sm^4KY7irzuM;F{oFs%F0DIy1m3@b!y!?4I8MvNRNaYJ&Zev?sB9 zU&hP)>qov_^sC=CxU$CY2_7$A6U5&m&CmYwZf0D8Qus9R)3njtOoo3ax*axgwgoN7tvp6Og!D65kRe%lmO9c zE#81M0BI-fAO`7|a#Ih)aUi|6t@w_a56Q#*dBr{e#Q;@{KuH>-=@e`;BYag2Jkmp=bXH_jW|hvgr7sAdUTL&RD#q=| zvm;F6K>?)Rj{0AdQAWe(Ag=d|)2zzt4- z*=o`c+j|a;T?96imHXvuHkd{+*QkZ+UR3VeeMyZcO?5Q#iTXVa`W6XlF*MLCbPwV3 z))MORgXZP(Fhy3pzlgrBrY{q%l}2=Ped|t8>mwdM$Tmh{^i=tlTn+P?UfZcXP?Ujn)%wFno?4?S$jKFs6@a~7`{?zOZA;ky?E3e7G8 z97yc_a2)vON)hpMZw@$UY#Zwtwb}S^v+Sbpx!ZngwS0sT;%_>x*FN%K3%RgAG9Lyc zg6p+5w2<<3hh%}7fRF3u5(f{()=!#!>Mc;E2WaeF)c3b>d_W+;3`EXX@FbwS$WW^N z_p?pFiVP?Qgj)%c5hiIarJLZR?h|+5hwG~8Pi?~CJ6UTB`oihgfoSB8+r^&d%n8Hf za+AneY>0?Au#$ca(!3NnP%DDg+7911c*9Y|HBX2@B?5&s@hLKm{})dnvW>-ol@I_Y!r6#BmluX3 z?fuoB`-~NS{!Q;OZnL@%GX{k{WfrSB`2^h$$%Z*iPnS|$p&D(d`t}pphj;$Ox)#Vr6u~-sVlEvT(7QWj> zM#a#Ev&m~nkvKGRubPyJC`eqlrUjihgx4nV!pB~EETb%v-PFpLuxHS0MhqbVzo zp=zfP%FJyXaz#P|c~`K`Er~Jf*OUBqS$5&g%dd+)4_}u>!dVXknsGMm|17l%8k_vq zmn(O(P}!kl{l!x-r!F02{G&fDJpQnW+?zl}~?>wv-2KMY|W?dsE zQ}G|*wSZ%-P9Oa4T_djoHZZPT|nJ$8bH@@ zK~e91$SxVA_pwb`uWGCh@0p=Xon-@UcwZCb<6PC+#C5nzTi|3Q*hOC%kj@<1JJ(s*dZlQ64QS)eL8@-o0`WVtF@?^L2iDHnX_xtbs4DnepdUkL&mf12ntpzuR{ITUHSSo}^m` zODIR?35%1a(umxStQ@5u8)(iH4rH2y;?|l%R1ae}Vo30|D!@JR=wg^U}W(N}M^x)S)KH$gX76F8rg4FttOICtvknwSKE$yeCw`7$~Np{xJGQ zu`2vef$5|I+6EOggrJL4&miD?Mx5nki!N!ih+hDS+1%bzCx5+OJ2Vgn`vl1+adPKketL&tdQD>?$gNwrb0u z6LxlCf5S`}S#eV7trTOsm+8Gw#r(&E@KYw+8d8-^<(3AGUGg>AoQy1H0?)&9lV=f( zev{ZD=Hsq5~Ou1pPI9!~NybdFrr;*3EF6>@wis&GRHM4NzMnx0_$mhc18=srU? zw-awzfobE>+}~8}o?#IH)SgkmUw(W3T7@n({>qhjf6~U^E}#+h61ZN%K*ZK4$otT< zP_XwM7}hj*BpKG;X{5mS1pCil6_qA~%#`hqY#r|l_e8WJMIf*nAZB_0-1I=J4kP#m zh$Bpt?*naZF66?o2d7X(Eqi_Z8!d%Rb&|>ee(YJt%hFLY!S^0l>&izM?=_|f7xHD! 
z3s^tFjqgwG)4#H-nedP$`++Ez?lZOgH$KGw`YHeA1t)Kqln-gIDg+WV4Kt&E0f$h) zoVZMVH@6MCBVcf{8qbYpjJ=Cuc*Vf|= zUS(E1>|;330px`n`$+CP4eQ8ct`A7L4ul5f8W3_3Da}i9&Vz%oMi&YI`i9P+kcOa7 zhA$5bds=C?Vl{AhedG$a1Y~*7d(Z%Thmu;Kw6~#iaAlK*etwTm1-a!Vmt>O;!}s^n z*ieaFO!F+a4NYq5YZk>4$~_pYEehee)7BlT3dH;8hqB-|6l4(jqb87Wz-Cd1s{4Z& z{dd)XZC6Omo=sg%$GjX`M15cH7v|jD4hu{OEDn-}wDkI;bzt8-v6K{W%uk%+wcS}u z$LL5eTs@If`vzByba6|F?*Z9Ne?7_nzLNgs8vn(r1E<3bxS#*7DFan;wP?efHuDDY z9}0t=ahwI|TIoK++qB{3@qX5~#d{^E;|+tC%_Vr^Yr7C8OaE%WoCSwP@&gs8)1RX` znS#qD+T+9-jt53le0Ll`3SmsUJvsJU@~8*Kp5Yl#0ms|G5q!PfqIhS&qsVHX{qD2* zQK10yS}T=^#5P_qJ4Pvf_(X*@8uul{*YO;0c#ZB|PI-8mclc5h>mrQ0ZW5@XvZyEr zf7pCksa9T*QaaJ6S-mY}N_D553VQ!Mg!&z31rzrM@5i#KkkaMNF)O*O)L4-AKL|B?re$=7s94X!L8-jMV|6z?iL0+Lh-@ zqami+vKQm#3pp3Ym@6@TI~HVdOx`8U-O>{9#>EzS`{XzX@x3(`O|AgIkN^SFs9 zNjfUOj%1t45tC=mR0}$efu|dB67aqeDJ&>g0?L6flDWF#E>i#dP~m%KX?A_fN>6SP*kEJ z_K#~2qyPnH+PfDVT4OjE85?#J4^vZ|eTu1zA*7W;J{J0%ssjSW6p>WIuO~;iv_X|> z7|RF|5_QCc?&Lb?x(Qqy%VEEgtj_{Icz(hB^#Q+>x{G*g=KR=<$m>vfozLFd#`j|1 zP`d&KR=cTaHuX{V0}fsHsU(c)h%&tCfFQdp2=oNIcgP&tUg^Vgd>PP+j|`VGvu}oP zz`dT=CU{_6DlyII84}*oG)=4m93R+;(&4mQe>T>ANo^$(dCCXa4 z(E9}jG)MI3?f@qvDDYE%b*A^nh9ue1FC?8$b+9e#L;j&Y$AjU2x(#~0;6kyzBab5q zSJ*b2^+Bz^uPn@$kz{|{CyjlSeAb+7_b(W^#(O)%9dkQc*l$v_U%80(4a_?9mBZqH zFCmx%x*i?Hr?1khDivQOdD5ux$_t0@88%)|MM1! z2&R7~+R&>gd7ZS=t^dP0h7hQHR&lKNKkEr1fD4_we_ndsT6}5-RDP*7Xx=J-F zkJhFBE9fo=K6#a$%BBu#zr}rePE_z?ln=Mz<+QmF1;cjn&*OuQS@Mfl4?tNbPfPE= z3?d7SjyYDpE>Y*8y?c*?O=0@`t6`ELZBBHd!|ZhCR@X%#<-@+6RXu)BC~3fW;#3&} z5f3sYX28;&$EN3LtqpP?GZhmRY_WbXO~l~Rl7FebM6Bg~L0I6Jh6{w~EJ8Wb@ka5H z=)1P1fr`m^i-5PHAGLU9f(8Dcz|wk#gw9Amg6S&-Gn$a&CQ_F=5BhA|uPOR_+)T!X zMR5c(+%BnqPLBT1>-5)1=PzGNPA*3&9=rC#u1VX%P%reNl6wZX|3aQj7^q$v4gOjE zWgvi^k;%Z~W#i|)XNZfl{3Io@;9}1jKULvHTIls;KJ2RcMx+lY>^T-)pZ;`9glKOB zy{V?1#>K_4`b}-!M4J<|XvgUqyvIrH&V)$38sFiDV@gPJ#^+s?)~|JLy<$j?K!Isz zmnF{LbN1j`b-Lp-cq9g8oE*D}<(iyiWIoC$ozyhpT3IH~T1;?orwuO`?($P~5z z)t|L}LLKS4ZIO282C-PI&bX4|pG9Zd+0>f@dFIQz?OWA3d)WusnBEW-L_)l%Cr7_}e!D6b3fLZSGrub( z=4)~won+MT2Drd}ZM1ICZ@27`5Q^Q>!t>JcWM%K}Og)u=h?C{$oT5>^DKG7l`O&_S z?)5diiydRY=oyl*eA1lw(#+gpBE7F&P$3cU8Ler@jnigvhfVFejYp}>cN?TkVL8L~ zNGcggzj$@%Q&mgZzkDeEx0}ab>}WDzgHpac+oX7ICr?1Hf5Qax;c=8A!W&u$iAnN< z3Aa-WG-6jsr2DOdbXYC%;`lq^3WJI=m0aWr)|_0I17hpqF`@|UfAsuD!QWG+jlR`D z>}OVzo$9f}7jR4+7Hh4R!mHqVgf#H! z+IS9h=IGs*Wwu{6(&}CS{1vBxdJ|n|AJ&}$U)C+Pdm8fol521vnE+3I$fT7-&s7_J zr?8k!fsAckZRRan8~CB-Pr@)NGAVDqi`$5*4{OlV=QX2V z_-h3joh^V`c1rajR)8;YC1X4?0!Bs|Zdf}j$BG!!6du_7(A+n9FVkS9UIJI)PCV6i zLdP9J!HQMxafxYs*f~cGkTi=C{GeJk`;EY^(psW(Nv! zj^K~{7Hx~)QP24t4d|_Pn*m|oxAWRoTYt>)bxGjdg{?!So^1g>uGcF#45J)HB68lU zJl2+$SI3v^c+ub8oG#pkRO%5b0PpcNl<9V{1!SSg-26p+(MqGfV<7SxGvE^?BtpxA zICxW1PDp$giHoEz8yNIIy?Xuo)vB3kfI_M-0`^|4oE(C!e3z;=ykfh&UtGH`H99tw zy;7VV2?&I{z`sxkbW03|hLcK9DC1U*{ihr9LeEE?_BZkL2Qxa53+4244lmK)o0IM+ z9(LLj-9IIHtWkXssm*D;ULJ?v=~k=BhRh-U_|w?kHGV{cL|hjQ6}X*Qr=JDoi-P_l7HztjnKj>1hA9HY8M~7O3NB zxo8AQCE5|ZI<7}5_+T7dY!HFKww%2vK!mc%D9ZV`C^6Qk>(REhxw9D%pBAXU#S+8& z8}Cc5>64y+>Iia8r2*~ z_z{{A?(Bj#?CWY$ib?a|v%Fw&*=NlTl4Hi7`5qb1IFpK^pyt~4u0brpXs6@l?p)h! zlVeDq5=Q@qmF3W zgCr3@^mf=#wB?efUYNe(ZXr;M+ty-n#~rKJ4LrzXk+)&`ZR&@_l-q$2@lb<{UP^DY zNIegC^G?%N*Oe}=kWdLB|BH1xzw^zG6GT{7#01MUcj=1-kKd-VWS& zgKUO~U-^wf*vFs!H}~}YjlT)r291zGVOmwnXaN<+4lqN73G=La-JQT%M`_ao?6$C7ZS3lGh8YJK6jP;nn(aeni? 
znL{5=1s`a;XYl_!E4S%E!exA}ew;o}JRC#}z%fOr}t z=UPH8;FUL&5FB^4JZ+ZQAdp)6a@berZy!DQtC9ztMHuK7Ey`rW-vJ=4p71JEMMx#O z@@umv^gLq_KrrwCg$XXJmtPAU<{ujK`Z9L#cABHi4Nay=llarS-fv2gxzrz|{B%86 z2vKLOD?AUG)S{bph4<_ku-*s>fH%Hjd!D~43Fe|D1Wq>ZrYQ5HNigSRlkMClMS2Vy z%RgL06k0+1Sy( zw;1=V_Z?|nEY01!+kQND*)79};#Gr_mG`T^~ zX&kr3bH@VCvG1;M47q(DO#s_6J~8eBTGSwOsNe_&n!rAaNz5C^D4HMf+Wxj>bVAGY z+CufhtZrqB0bg)_+2Vd8aY2R)tpO=^rw)!~Osw#Ob+0@HR?&xBLkFJrQ~IA4IPQ)g zuWQDG07M!-F3V5&HOL4eWNCQD^t&0XnIr{5j#sLpDcI#=t2dWIFa!CJ!m(-*+rM$!#^6+nC3cv z&pNB4^c4Q8#dZ3AJiqAiq0AR)oVTJrKv~;I{XOpji=9Jz;A`typ?3Iw)W2XVfB)XE zp;KY{7QZGGVeQKJ35Zb+u+C`sjXS9lBzez~;z>-7u|b&>NA}mEFEW)(Z*+(?F2&QP zfoS=2Z~p)&0WI1A4=w=Ur1CgNV+!QDr`h;ED0h1x(3wE==-u<|0j!hqBLK){j_($g zN@O79IP~7l70~y?JoiZ@dG4B9r4Vi%mvP&|fSCxkpS(o$$Uk{qz;AkO9Uv$N$xrwN zliQqMF})RK;Jk?OT}683_fat5%|5oWd65>Tp+`NWq~(28ILYGOmN&0#A``y2t+sqD zATEOkb(Pr=>j91w%rYg9n!sxNn#{-7m)f&M-~cAbY}owSV>>st;&NI)!H8U&22er4 z6PL(z8z=>N^N`=O_!${TQ_AkuMetBDeTC&EK_l9-)_bDoNIx(UlSDqnNi4c(kwx<< zvG5$7OmHjq+T)tye=v|d_}&L`7I{Fzq|3>IPNi5b_4P)JB>vYX|G)W-e~B1`u~q2c zBHtwbBf=X{+Wu1HYRqL53z*x~@s1uvlab^G5ex{;F*ujlgRBRO?3RL{3)^Sq?IksZ zA{~W9)_I+m4>cPbxk7)WJpKaSh+er$6fHD)mMV*YBNvN+hhul{XF8hR{P(E=?#Q5@ zEjLoTi2KYc(ameQraQ@`$EK0o8=p6SUHg|u3_?+(e}wYtH29`uZX(4*Abe1^Yx()p zmnrc)nwx6VfnbA3jV1e0szn%uJu@qxjgr`b(%x(X7((`iMqR!}bMu&uy!c^Ea5oqh zL0$Y?fMWBN8FoaKI#E2gdr6-4@9I51Sr2|EMPh%RZnC-7m!R2vfhv_x&}ab%Jn-5I zWXrI;47i81ae5HrvhFCiRw(6YifoaPzA+7Bk(S>H_Exvh#rl1;i^!seq4ihF55F_Y zKQGmKgxV1)xazc_laZ9YeD?x~(ftP%1K>!Q3bM0TJ^Yq4P&GAP#RTe38|vcuiU4<2 z{@F9es(>H-+Pi7cPexy|4w%CVH%V&K%MX{aWz+}s6rUn_6^Xy*5(ySf7<2{&lKOK| zh!pvcJ^+jpkd=i4vQR`n_WuNr>z%rp;crgXHt-bq{c{$8CDP$weEN2om+-K5s^G)m zXx6jFceY_oUBAJA_Zk}M^O^A>Ah3`In&6;LS{~art(lq5Sk5+qg_3uCc-mlHO2Y4t zFeUlyogikzJ9Z@aFxZkmA37}oMHzm=-;$as9hxOfzAV84yhL#qzGLTylRg7#R&h(= z6?=G`0vy6dRML?CO5(xz{qQ7KmqIJTG?Z?)WRDyt^x8vxKn>oX-?=?3t(jzp=#|hw z9=B`ES0QZCb_;t072KD{N;B{Z^O_1Jhd3ro$#zUiUjQxna><>Z#?yw8`wD_Gh|#TY zVzqc#GN8|qldC(GmMSz5RuyxUsX6bWBhbykrTb%sCrRv~O@@cE-6>Y6S0`28}E$^A;(So1NTGxq}9;-G$i+mU%cAh0kc zOxL}zH^Q+%!s%>1Y)jmLRga3)PM@C!yoKAHG=E_8MS>ktM`ZBml(N$}-~o5=P`Td> z=w-~hdo&4Ork$iArQ$%MBGY3Q6=Tx~K1}>=D0<@%Ex&K^VA+I*zN~*UoE^UxQuxy? zE??8NBsc-bCbd4T0LK)SS+dHo%QbtnL5@vKWA>+;U!enO(HF(Qud^=I!gX!?kA<_U zaE{x{*RX~k03JLB2tbw6kWYYe5ShCaVKbI`3WQb9c`t6LRz7rUHWo$D1{(YszaS!- zem)zK1!Q&)Rgs#t1~VS ztt+$MJYeI$fJO_dHH50Zn*Mspc%|6k0sQO`F<;?a&$>lJ9vK1fH6rU z7Z*%wGbLPxB=t+nFJ^%gjn5W7a?{QoQ;?0cV#RP%f>0t2*-TH2cT){EiE$mm#B{6m z4`JkqVy{PtO~_re)@}x4!jKfBgs@g?Sz*Gw@yJibQZYu!&-1EzUmEwZ71R>7Fo#`l z9w~jZ>E8k8jwcxF(F3|gof{ij7tg<{0G+`x#>u;;sYUwN>{goWjq(uwkAVP}I#28$ zp`~AI5v#vITv>B+;JM!}D(H+Mz74ut4@%<;0Vo3o!PA{te_!6!BSvUO!|#@w%O81i zCpFeI5i1FiL7hzyvGD60Xh*N7B7`Y-c^ES%s=IEfoaVfu5ay3}@yyrzxDm79vW?aM zF+lAv8}z@??ZM@UF~L#CxJm9r`CmNs^zi52OEp|Q0niV$xIF51#em7UIirAAIZx3c z@k&%e&=X*{K&I+>bQmX>d`0DFdn=MJqsb)=M9S*4iOW$PySF_RdicfKH;qOj_;ImL z$2TUZWI9szO!84~XC+KIgX`T%F@z8xtq9UhOb3 z@!ML;D@87y-mpwi(nRl4OIPI5F5pV5@^KWyR?cF{?cL2lRPpHE0oh)%{V3l~iTO{N zqcVdowv{pTv4d=r4&lWLr4)KbK1RY;kDLTSp{6gOE8=x?t2oh#wN$eD9?uqv*Th^9Q~J;>)i_y2uy!+ZJyKvC~$N0Q^>4&Q^~RAbnS!h z9&4F;3qjD~&uUbPvu$kTZ7guQVbiLqwM_UXF^@!|5*>&)0+FDWmQr*QhBr{X`DWzL z!wuF_X@A+KIviY$+3e^LLAyz%UwoT2@bwhH(-=CCB7*_#v;IuZbli9NxnR7q`sNKJ zFFU(Gfs`&z>%k@c^t4Yn9I>tf;-*z}dP7EwT@1VNAv5h>{| ziw@}?34viqhfYUQT96vLMN)x58l^-SV(9Mfm>Fu|{c@h?ec$tY*8QIQ-0OG#Vd)xJ zEUs%`dw=$4fA-$Thtc9@Z>S;OMyd6iLaRH%_<3LSHtb37w>}d)@t*h$-ju-tm9u&$<7gWDP@*T( z2Z!ximo3usYWa(zmrJ2>l4p-5pR{Pk)6^p)nmFcf28wLB!F|@|wR|y!Pt|Jbc|1n! 
z-wSy_oH=H0_(nw`K#~u?KlYpQT%V}opQ#a8+Puy2>nm4?Jlwi!BJhIsmyn5pM_}J|1va<1Az@LK8N`?r`op9jxTS-QD7vTbzxn9y1M1>-Z{Qb z*3-=y!U3WA6y5qSGh4%2`8J{n!2VPWp6-OHxojK4<4Na)x+(`A;dj2*>0xUZwaG5` z2`PIogSL5T8GPChvfl^stEw9hftdmC&b&Hx!#FrRV3X8Jut)DNtU zwq&cxp1U6dv@OT5b9UX??v4-Thy$(_+g-qDLhl}CJ)1Y5#^KQRt7(3Ug$#=)O^l9) z^oVZ}cJgy{WSJTL?XrhtPh%ISGOAo@aMO(9 z(Uo+ubJ?|~#v)xSkY=u?5xs66Be$gy|SdoL*(8ZGAqsY(8MSE^pwX%~a-04M4J=MLM5 z@yj|czW6eS;Q{rp`q6@$jscgZlh8scs&NcyLFK8(sLJAE%nRWcjW@xkt-ihROdlf= zkcWfLDmVb^ppaX2K>iAelqcLGhGDQX?otjr=9P4vY>R(5sb?%Ad?r~)HK1uiw(mNb zTLjnz1#MkjJ?LjM-H1;*^~cnAjxMIZKLwrnNh6^69^T)+1u}LZan3&z7p=c?~=;muN8NRLm$4aoj_ol{f<2}2VW0>zFcnw|vKLa+qE2_f2w3&z$ zzx)Gci%*D(e95B}!wJ76noJihGhledl$cWDa+15SyIq}LvnRAlraDPfCAd4b=n#73 zUZpbl`*J**sIbze)7>y&IjJp3`=QqQ3}V>5c1CJ-PHd8hTLQB4-PG<9TKu9M3dLd^ zPzOINqpwfS0+XlB4w3RV@<(KSo436PQB}*?g1z`gApZfE7vi+Fc+V zto-5RSQgo|L7Q?^7!XD@s#+jmQWj|C-8;thcb13!zjjuTUB5<=V@Mw86(hunC~m6NCUk?OYMD-g zi;T|lDMcsGGkU_|0aWu%kvOt((%p3x*FZdvLCt3 zGLA`!Nubz`2mbtPV9AlP#D?6M%Cyc=6!&KH{<6GmbNc3}VMeQKUSh%|X7GcY6j4Mrq%hCn=#|)D;60%-=?ea(A*>hL+ckyG$14zlpZIDcHiPiX0U?ezMwfcF4u2 zyi>x;lVoS+BP}kDnw*e$t=iGn6({aC@eb2;^DtwkVAPzgy&*#~A-!gr8^8zaLiT&O zMt^>0FG~ElS1q&CJgNT2^eNt(=?h&wRcwlbqd$KRQYquEV-2E-hRI{HYu&IjG^Vy? zYgbB~)@1vFEdx`6&&zu-(Gj z=k1@IVAlyF{C7n)+DKk%$IAu2D zUN=+}-DO$uYIGW=~`NWKKaPMFj(2m2!c~APbN>RwOe*yLWZNRvonx!>b`6vYP z1y#G+;p|@kc(z6k&E=m(`WtlC85~c0-b|?F1DlTM2+8tj2sECuaA!MTtRuSZ&u1-J z8XFOoB@(H6+(|PK|3^UFEM^Y8h3c8G0d;$+DQG-vtZLXR*-oCH0Y63Gtv=eCl3`QX zbthaDf|Y~w^OeZ$h1llO*jDgL%Ny42=cUN5;uD|3+0q}3wEIebc#8*uRxWBGA@U{4 zMkj>)<)ZHGqf6?s*11ixF-EM{7ppo!gPpsZr>$_U2eOQ0-8H+^Mt;Fln;&SRzaioEZ53a$E2I#s_Vw za{0WE5}Dx#5`#Yp!?&g?{GSw#`PnR=3zajk^#2CUW|7mo-IANKyoaVjid4myIZw0*5Ns zLJYf=-qv7UIZS zdTTXvN9^IyyP=!~O=SI~x1IY=lWf`!5Dip%vn$yGc9gU{C)Txmd%bAGJc>T~?WkP0 zyO@fnaNm@ULO-i?ZwC3&KDIILsS%*WzxzB3y-b~l!+Fr_8&}P_cEP`9E`GP7?O!I- zNfCcz*CU4NjE1JwZ$|;G%t4Y_E=9OAsInb=Nk8p-r|XIO{3HoI3>ZV$}zuSpB7yWjc1I@3g9=5hRaIJ2{6<7g}rq+DW29LhJ#;KBy z!a?C;pJR-{?q{jmqg3Pkb+BK+$m;ZtbbEvBsI2Gijy6@A>+hYj>>Z&yEQ*z;f%Ugf zZ+P6yWOZ0lBrn%*TfObLA_J@wRcvXP5t2Hspv-hZEYoOdY2_VnX>=iGO`XIG0z!$9 z$TW%4fWw{6R(_#TeGrGSzaD3Ls>rWjTutm9$S-*biAp2a2$ar@i%UZ8L}08MNJ82v zIv@T$6#M7z2eyX)82yAoZq=Qi8J^&%@!J>JB=#;)E*O()XExfi8OxafyF6gTCv#S7 zS)5Q+72>|p5d0#}<}Q94!8>N7b9~?q0$nQw>I=($%i_HqN02{hdYQ2 zczXRV#}Ga7t#0Xq`!t8GnZKH1q?zL#&nqc2y~?b1J>EIY89v?q*v|4!PW1-uWeqpk z*s^O}qM41dU+^zrf;twJLCeE_! 
zI63?)|NM8n`=5WaoZLGyHC}w+eU{;^ppiPw|K*LGmEJL_B{79=3f`Srys z-}2Xz%;`YH;_AQHnM7Uu@IhGiV87hpNVfne7s2so&`rh${fQZ)d#VC+{&q_1?-27N zv3Ex0?U-ddiEB0!aesb3eZe}*EViJzP;#plb1Fy7^#G0B zRb2O?-bUwF`~h4M10I@mmhTO9fZ3H7p6dV1Ri0RddRufcT`n{`gFnS#vu=Qcx$aIe zgWoJC{s9Ex4@9|+CYaC5$%+yUCpWY%g3HGk79e(?Lf*uTMSm!$T+Uz3FD%p3icsMO zFr8P^(DWf_GE1uYW{n6h$WB~AjD7{zj7zU!J8MZvY&{h25ybHYd@56Wz|+|e`vY(1 z*3fJ~jFsOi0V19%(=S6v>L7MIxv96Q--KcaqTAEQ3E+5#;q<+2Ag=R_6VO@Jk_LDr zy_t1hzfUp>YcA}>tSbU=cDwNJ>0aFNKgtF9n25mIkPa9!zA$KfP+G#5+10Wvjb%*qkZ6goc=@Pe;k;kQu z7cQQ#UQuI6hF(15XtrC_NKIA9BkhF!50F4xyJ0=BS|1;6tm&t@Xi zV)8r3pUnCaOVIv(AT^!!#yyUgu2cc#UQSiK57W5G!xgT}bKoxcf@~lDjBlqUS7f>t z;8A0LKBJ48yf@Qll22{Iy%=yJsnqpEcJ(26K<*bncDYZT|0?&_Zn&}2&>7jV@K}lG zQI)jK^fT;-=E2ec4nm29of`96KPKB==1`?8XaPBm%({+km^?+@Mrj*Y+E`M?*a=q* zmU*HE0k6rm9nv5A2OjTjA#OhdZ+f!;fM;e&(MC4QN+baWQvVUwKjpGAyA@)iv=0ve zRqAip= zbfuBTi1#}^ieuZ0u}OSU(;PHa>9CO#%BZY5pjP)m#-MlRh-B4F<>o#kT4d^5CdpJD zn!j|#Q?RemD6{`GTu=1(vP2hwk=p&GbR#vUcAODt^pvaA@!q1(2f!SFA1y$48}$u& zf6}AS1V>^ACGd;a+NcANYmO{IrX`ngSz;CeB0k z1TQ9ykkwCgiwz5F(+klAM#w{Z$FtvW8xJE#LaX2>yQtGFJ@YpqjUa$IrFfm05SKdKW;hpr#4mFvfgbCN#MKC2*g-4_KIZdR9Q{l zEiq=j;w9jcj!6K4{sJ+5Bw2J3(9EE@$tXNqqX_^d!y6NThWm@t z@r$uZzGR;`)ibQ0mNSaeoRMOu<&)0YT!r)M2ecSP-a0mgRIP${J-72$&}IN15J^S? zf39)XA4O>SuGAt}asJIi({KDQBGteKcK$O}yqpX%k(Vb@${=Ab;b+6oK z&h?9Gw^y!JpL8XdBauu%AZo&M!=CHYGTfG3dt!NxPpsY`UohicxWqv2|I6vokD&u{ zi=Qlovxi|;y8*91GrNqAR9e;x$>M?*>)w)xt3Etm!7fCtGR$R>D_e+!r&chUeJrAVAKTgJcVW)HI zKVvwmEkT{Nx^eu^5AoW4)HYuQ%5e``?hc-MlsX0(lUQ)NxP2z-5t`5As$$ss`CE0w z801`pV~i)=u{ztB=d+EnLW;{5`tq;&U)F;soXsvfRwo_erBCAM5uF}ebQ5~8@d@A#B8@;_ncQG03XlCW4L zIa2zu8q}mCbG*cnhmo!MP+r@$fB)tcB>I78>~F*glu-yo#b^2^JOWaVI-9#?NO_V5 zX6yA1qBA$?;u7On9!gct6X$p5gWN`JT5vpDn%w;P;G#G3MIq0fpcf&P ziXiRW?TVqCyOS4<(2#$@AYIf2TXwPCEA6sFvC3!tYZVWn_lNez?s8sECDFYM-f=XjV{=vbR4PL} zFlBP$m$Fwcs}z2-aw1_1NBW5qh}=dXKSR|lo5RCC#m>;(2MyAsgIPNOKhgsIDC`I7 zO85J;#NeWn<9@TaIFpF$uyZ(#L|^OxIKpG9Yr|2eJn_n?xZ+%*cSRXxM4 zk>^ALeRQU)^hVRYq>_fKCPme%kZ)9$r(q+t!*~h9DL{qR=}xE)ljY!?MxzEtp_5W71nLMH`?6hz)QkCd zytAlz0AUcN^I6oBB_&%`?rRi|dd>&FlP^3RXC48C&B;i z7p&Gznn&KMf)rLm&k(i`@*7KbzDRPRa7|y*YH`$ zpUk&zHo2!n)~nQb~$*p}?yyN0v@T4>Q&YCvY8J6(HhdSA*nQbb+gN zMxYM)ZVfqiNJfpH#K@{_cJql5L-U)V+qB@zIq5FA3~SDbFAopu+*S8^XOQ6clkq~l z>-wRkhA-4rSXi4x72q$0>7b1G6Wv$RSN+$0bjE$QzX- znJA46NJSn0bKut$^h_FN_GH+8$$=lj`)wvUUjPl z_qrWAw@Z6_{I;%l(`6trXecO&{j!&;ScUx2cY-0yFu>~e%!I#eV+2MQfZjL~# zjFWL}0Ek}lfpbnXCnhPu>d#_1i|+}r?ymTEBO&24zvGb`x zRMmk?@s9aq2H_~|o=WRjv;j1n95G@%hF|A$y%f(RU zU5CLDXtm5>dCebYvMRS03vi;^SH$GJ74s7_$Z}1@FVk^_?+~%&v=3Cc%9xx2sNZs6%kbl z!v9Gg7GqW7toJ*anc@-ZopL>DP?L{`FBRJ{mhh7Jfy?9ILSARuyKB1!t8@Y>AoHd| z=c2qk#V#-ETEP^e)5pAD(~GyabrCpM76390+!{JR%lwA<)~CF=S^iuy7}Dtm5u5DT z^);=l(XSUNG^RMAePFenB+H>j%OFbGK0C>k8c=8Z>zNR8hBit1v468{jstuTX>gB1 zv1Z;F6rX-s$F5gckyZn3Jj7Bb5)8sxo=|=EnPcdjzL)0v#q(PweJ7guNp`K8hAvwA z&KZJFNEEtY)s-eCOqP|JGh3{n^%JO>&(sQTacXM~vH|Q9AWM4!#9E#lVBU&Kr9CeH z1iP`sz3)$NAk76tvBGO#r1hrFYf`ZR0Uq}6i(1X_um4Bv&kf>K}4%-&AX z$!Wl<|^jRHxzNQN9HdX3YJrZTaIN+T>^3~|D`rKzG3mPz3A+B ziQoR9*}N>o~34q7L9k=kA{-IAONb+~AK(E(T^VV3cXjZw)Nd zZ{{Z?s)To5bXVb~J<Yt7s`bPidQ=hM(L*kvsa)oUdbh8nFvQyp&o_2OWiv_q_~i>6%A091AAC0K z@&WFym4R_sjuBlSTS-VUxOQ)&t<_-n=CQ`CKuG-OuRZ?vgd1 zX$j;pT96j<0LF#-A=SR^6oX=~sOe830ni2zfnJ~7YKYsnLriLEU8|$0EOkL%VW@Nl z>_0*@r`<}H6BGSyk)DT&`f%4aI>NPdoMaOR!(2|AN3M%p(;5Zi@% zdi=1e`YEJ(_#N4H`h+ze*ClmMc{vz9Q?%wYw@?aNtU=U^q?bRQvzSQ*{5{l| z=?@DSsZ`6JI;Wa-(wbsrg1T3&(zBY8g5~K6&s)yomI&wW?qgW`YZ0=}U}8UG%2LHD1amE;i6I!F`U&Q3vmd zaM4YlOiy2Mo-7sSW@gQP%;@Ry=V4|j$P2QKEa9|VSo%fsS_lwGrOS0gHM;FePwsIj zufIrA280;&f&RHK1g_a5HFT9)QK9+h;iTO3mBmFNTNlOf5LNxzz1u*6QdZ!A3zT$D 
zv8oh)yL`ZUyTj@%)A1949Y*^O?&shPvW8_3vX+9#tsF?44Cv80Q$^vPaCz)o~rf-h?{#;!4p=;FhvO4GieDZur8Eo=(s`?4;VsmY(;gyI#_VIfE zljS@<7hn1@X;H}wZ|%88L7Re;cSUPJhG2R`0A&M&#Ifla9ERZ9xD_>OP`QSL&qQ94 z<=&`!$x0N3NFqQW|C40}vB!Rdv&1V`<}pd?9tbqVrSBwUm^BL_u!_%&W9%UtaT8RB z*{G{5O8GQSKqoA@~8g>3c(4ocB_3`H-pyd6(&XO{7fMJVHZ~5I+S(p z!?i_Yqg>pZDF}Lu!eQmI0aIGF9#ha=f&~rGecw!;Ji>GumHsSMy}vRH@O==r_ToS_ z&thG#my#pKZZG>~MBjg%Cn@cR6~F})Ai=eQZEpt34s}WPy~OG9T(|&{$&`$+kBxaS zttwbwHCBJKm~WierL_F2jw=s;cnsYvFQW*k!C~*K>2d+86J(Bp@5wE&*ZRrl>&@Z4 zVhUGO$tcF@;l==&W+5QWfxG(dWeh&)zF2_v#$OR*;~E6Op^|aGet%01y+W-BNP5_@ zpDo#Te_65ty2Uwj^6A}eTWSi;v4aID>Zt9v+_B4Z3b8wmev7*6VEfpi_dl<(B5PAK z{p{qEF^N50+N>EUKz#({>FjE>kL{jLEUH5~u-mV=wV$4m>oFYz%_uUnnR^;tXf93Yf-usEoH~CQ+83h?exXb2i8xy;=w zL*EwIsoTt#d)Af9xpI~=Gm>aavV0$Z;c1qXM~UzJTDw-Fq;-oo?IdJ5lkZ$-WVlf< ztFMSz?@%H=$^XjmV83oOa2CDokWcE8Sq@b+v;Jz8XT5F{e59=4k*XtFnxrsWt2;8% zbnm2%?!U}iF79@ox7Jb+P4L^cCEpaI44##*ru_2ozH*08?8SEIKw=hLx^R8U`W3ez zb64@OK+#|C3#FA8yR4vYPq!AdP{Zg*xX=&+Npf-uaAeZB_s@^!Ul*D!;C*JTV}ivdfw zTf7c_;ZubS7`PtrVEadX^kh~IS9Qi~{=Uzqe%(B$rSMhTPQqe!hh}7flaq(;I-V*q zZ$do-_btcBeLrZ~7gOPmLJM#F1rR_!9|uH-*au&U-g>h`Co{_T15W*=o?qkK-fdt9 z3e1Wf--!|-0`iW4o>y`3))`it;#B0I7{izN&7ozF)=!%5rTz=1zQ zebf@Eq>X3(aVCdFiRW?R&`*|qJGJ4?>B%pE9^v`A1DG775m%ja+W{AWk^L1I&DqXVA7O4__t&dTF|(_BX`O~DQrs7DB^Fq+opUu( z0=Jz3(i^i4hVKf^odKHOb`TaBnd5iN#PJxkEfeNeS0=^j?9&JI{?@n6JjAezXo(7= zdar<~<83bX-Q>mVj8?IYv)A;VVQsOS*ng;;hnsQB4BJw2Oau4@5L=_qWNMW$?@R8A zTbaDjk%xbUiw^i2@&N`PONeI~QPAb0ed-shAFl9LKZ2 zE?}t>t@;@GY0Gt!bX-92Xk(YIw0J$X22o|}kU)w6$WV{Dsj7ed(*F6=mWq1@OQ>Ez zZo=`gz;Lc&jG(%C+&t10du#hV5~%v1$B5}d{0b+xij2?vUk@@mj+VP+>Bg?y-+oZ) z2KgR89%)?a=ZO2*E9+L)A0TxxfX6$Oy39CDeAc`=9=U~Ts<%H~{962WG^fX5*008F z)m@GLz&an%J8X3CThdGXIK>u;biNN{V;c&1!G+w;cXaGc5Uz@~w28bQ$w3WMm^6S9 zbOqKen>6b^>(W!gT)1a%u&@e~RVqB|p=D^A#iaSAKYVUKo=nKJU&^(+(GX^i>NnjjWoXLx8VuQ z3iCU;!9V=zh=_t6U=rE2uNR)-6{yWsac~)XPoiT2go!g=czcsiJ`< zo|R8U^{u}IAF5GlSSKDKQZ{p5NuVKBP;@Bj;NWlpVt)q?t0!bDcsEGknWw0%^B`il z;T2Cv;6GrC{S7T?%%z9U7tv{qr^J6JRIfE)HwY5(p9*2^i7yAna#U%I)0k5?BwA&| zJ_1(V!V4>UWB7K#|Ahsf-=H?@nJGBNzkicbdS*zFgg1UfJ(Jt7SPzn3qW@_ z2UPvDUH*>`TAomUb_EAS0A@N+Al_#Fqek*5bkMJmL;#jf9V+N~KT3SUny=h$)+qJD zxqb$E-!ljkO6~nd`lCj@OmX$Fx8ieY?An^F9kN<%gWKUUe8fHC;sUus8ri!(d;iGu z!%R7;cbcsSmj};xWhbtx4hhY{9uUCE$Z6dkyIi&a($LwJ+ZvU(JE+zt^mrVK%^qka zhjgX|3PAy$22d-|%2_hD+=HcqwN+ zu>b^g=tCWUH=gAh&#;bBr;u2rY+3|3*7x!1vWpqa0A;>|;;XR(m zE;KGDqgSQ6t|=Q(sUX{BP<+108KyR%C=xdxW66*MrW(;fAHM@yC=|IEnFlY=-d$CA zfY5lfOCr?^2oWQPWBb7#GS9=T6`5)Ol8k^I-+P2qi`PKX`i2#ZoBJf4NK0L5rpPi{ zPAY#d`xL4<3qKb;?x{^NJ2@g}UE1(kWao+k+1y6V15Ddrod5q8UbQM_{S0p8b~lfH zp2nh2?LbX}4#QyC7CIJQBL`iH--v8Fb14p_N0iq<*ml zhOYqqQ(X1#1tNk=${~AbdXrAo)Nc_ju*`T#4oJbbeG3LtK}lVBTmCxoeUZIfj|fzr4QZvsjK#Aa%YP+Tn_23**us&8+V7$qGQ zpUSdM+z?hUtwO{0M>sl>o#)m9#qoiP6IsW}!eoMNAcqfz@D*Xnn>|eEFbV*$>TS9S80aBkC`LpLw3`X3`ve zznreG$}SFAeBxzNbhcI;VNp@oB#sn+1SIYNAEbo+6gpPX$qEEf!hwAk?a>uAfDABZ z6umRw7_uDm`&T)LhqF$-_#bdg#=7LDq9cO#uI;YprSy2i@!3_X!Oq?uBwP>jrlqyQ z-2;8?fYY>RvA#f7n?9I3>*O*A&p4h;Sm79)d4?lyMv&#D38()dh z9`&h2jPH&GNmP0v>+EJc4Ubxk*0=UUX`ep$t6%qj-vnD>%fix>`|HsY5}$JSwZciG z7GUCP!eQ0mkO6h!s@TYC4Se>TWwzAl>?iG6jLK+;h8+%i8SNjVHfJs0jF=0pxgmSM z0L$gE-TnLRlSvV%-u9Qklu%1@MX_tEI!!^7cW+1QFk$z`qpyV%$M8kD8jW^fx2Bj& z%bVD&)AibeEU0Ynuk{9qHD-nj@%rcpbg0FWgPMNY)qS2+i(Ne}=1uCYhTzG=Uug{O z$_dV{WS(nw=v_>qW!3l2aNo8&Kq@s7F_-IT98higYeoUq$m2Y;gtzaxCCiP%x4AeN zi2ByZITXt8l%$lO@-{5Q#oZmN>}L6y-^V8@4dDD1UNNyEPXnC-M<+*?dr^lyw zs!OL)!-HIa`{(RciU%(n#wF|!GdTJL*uM(#z4$)xcJMb3VsY*H|BX+7@h?~qdDshQ zlb`l8!NC(Z$l&WWbUf)l+Tj_Mqxb1x#3r<29g3Rhw-$eQI1m+;JvIV+V<kbkiOL?aCv2{{rGYzzC93QS(H 
z0yvF$poWtFdItPaqlNr0YlT#`0`1$norf?UmW#6S3v*)kO0_&H*U8AYZvNY zS9)WCT~SaZTX*CvI?pDG#9w)HpNfFKc$=civ0|-n1QSfHDaPdNbTIV5B+o5qFk|@4JvbxZ6ns)to&u^0d>kiP=B0z&nXPgKz1G*@v z1_XdsywoeK99h5P-IXRFoMRbC0O-#@@c<*sgk&Kzn}Ol8IP6ND&+ey!!Y_cj;wtOw z`$QSTu@*Exi^gJ8xBAyOSb0o2DHX{43a9SmdF`dPkJm|L|!R-zedF)%lTxfwHQ9yzeGBU#hp(mVnv990nyL>_K`l>!n1OPDwD#x`hX zY=|4%tjEf@W4k+h@8Ii-Y4_RWuhXtX}P{OcF_ z-}iA>ISUm$tXKO)fkk7UGLL4456b>cZfj)6>OsBWu6QL;eB|nZhH08^Y=Qq2!Xyj5 zO7*}XYui2Zr(b=$OKWYdvZ&3JYO-~(Ud?Zo^v}ObyOIfD&&ixGqL!YI-GI=2k^?jZ zVY4hp{i8#VhMN)I0klJ4R*CO5`G)T2C!~p+Lh;oN=N8YQ@E`QWva!R|0a_xaK2cQ- z#|_Evx|~%uA?3dc7)@Ds`J}vFhZU%!FdYz-Hi-IdT_Sbn?QsTm%f4RvZ_9OzU7^%z z>}q{dG`-CAoyq8-nG+{Nzj^U*=+n#{@|}9$QAnibl2YpBzx~8uOB9|gCy0j5 z==7%_!!h@ffr|JTcS3MAFQ(8a^MloZkNK9Vcdtn5A)HCA_-%%6D6mEl?RO!mfm;)rC7rpty zSDGL6OpIg9YjZY#=c4{9$pA)1#(t_2BFC}&QT(>WldTtVxMKsqW_fOEirQOih1uOX zh{@yvUx{A~Q}Pm{Vu#}H?$^{n6cyXTQd}CH@+8Dj-OGpl|rC`kta|A6gS>w@8#}kgab`#oYRsB zrK{%KFF!J1e`pYQ+|jM8aXj&C6hBoUWQe&55eX;yBGmLTF`9HH9$={|eJ&aNn!JYr ztxkDy)F#lxPqY4sx%14Y?hX3I>dmr~tO-ZfrEN+dcL8-kRPCCIqj-Nz*%tErJZ8ivrq=^ljXX^c%ZdvC-IB`Xi}& zj{EO%*z{sZw>7`;Nc*TJ-}aS!`!66}izeZcm8XNoU_TWjV>Gy45nu?I=5I{h8rTZ_ z3|>zR)gS&n9-4?24sJ;27f=vt)i(LPcgKt5ualwwbG}Rkc=vdF%>HqF?asjH@NpYc zm%ThOV6W6Btw-AriA8gwnl+Y^_d>{1-k}LuMQAqS4ZN4E{rv6`J@Z6-X_2xhAgfoV z7Bwq^rYOt59=q)sf+HDc`j>9y|3sD8EW8vjrg>suB88CgX?oT%_k(Nr7LWJ(9|&9E z32%oJQDkUZpe3u^6;Ob5{4WpezrUsR`Rn~Bhk4d$^(>p42I6nTPy~;3Qc|u-sT|q+EdvOIaofcED+)Q{U;M!R; z;{L^%P3XV(+vTQiS&8F=5w1W#YK;cNZGgr%hB;<}75|xaN{$|m-NqTi!K6$u$bDK!7Ib+w*ba86quyt$4W3iU6ii~}g zz~&h>kbt}HM#1#0!IJ)3)qJKWTQ>+%bQlf(2mcnDggw9a%lQ^+i%n7RRi^Ps)E{^( z59}oAXl_+%(|11kxBWJNd$oXkSXUQ_$2l5b%|u^QtJcb$HM$_Ws6x?^Z`T;V?C5&4 z)Pp+cC3S8q#U75`0K<7q?pMWB9{GFsFV58?Pv>BBH+KktX0T>mm`~tkU(OGWA9o1b znt^<5|B~+O(|-#ZC~CndL;XCkr}QuTfj@?@tz6E{vT$+XK!6D4`rP6Vybr9J*(#>B z#c(H!Q~g2x+Tso3e^=-tcf-57$%ge+fnVnspma+pxK2vqQPKKcFZ9BX4k(`>%g0FE zNuVT^y`o6^2n&-%p1dK8qGzacSjMtps$hDqA5Hx!aA_XpnCkH(4(O;wk_agK5Wudl z4Q`X=iI-%WiTn-QCu}Q}GuAH$%A{kwyc~hE@`xKb6tH9J&;9K1cd8U{up^hdZX=E6 z<0_yOY1g5@^97jy_V3&%44*Q2@h804B-v6-bw9E7VXym|$A-v{PXU7)FPRF(^-E7; z)M&Q6{EMNPR)PU+rhSul7_$91CXYBQM`ayEO!NE-!NT)p9AlC7(*=Cv^+SHSa9Js-{28ztIIpr$`jkjqz!6`ojrE2hd)!AjJ5PEPs-Uk_Hr9^VEae|3#7H@o=f%CWeFqY#Z z0CW8UqL2{VFB~ssI~#$FErB5u)U2_YaEf5qPraXm|CsZ%*t2~jA!X%b0ZC?+ZB!vo z^6DUdeZG(Msb2-(UfN?fEvm*PHd#U8xq*CYPoSVDQAcsx@^h0+I&Um@<4gop>v}oq z%BF~3dxQF?rGgJ9QTi0G#~3>M^yf~2(vAvfP~zy+4uy8#gc#?1%KnZBY26%%F5vas z{dOUYaQ92UQvLtDRWg*If5!qH@jki~byfhhv;JGRY(;5MmUdVgU@Kl!=3Wg+=?HIBBFmayI}guH>s9QMSdy{r}3qm>eG8s zMuI#djv2-ja(3tlm~pjJaQj!G0JaJPr^hBO&Xy$9i9aX_+tgb&f*-ouSXux1D-Qec zj;JTiKA_lRz2oOsV62G+{T;-@X!k z+Hx{Gc@bVxmypN>o^d74Js`kTSH6j8_=-+xsXgo1DV_BDO`72HV(|L458a;eySd9jIS#}T7R?r|X zH5_V#!X$b|)5bTi=QF*-}TyVl$oF&wPZohkE|pBXA|@4&#rdY4D$ zo|`~l>DAInVUAly4P7u{@1S4Ixcu6KY(?CLx}sN*2O9+m2d9Y~M1$L{^HhM~k$lWa z2Vn)#G{4l!V}`oFQ8r9G{;3$XHJsK=>QS}sW8yh)LJ-~cu}i|^aQbmL)xY! 
zDne$#`Dk1Ok?7P!KWFNuejUh8gIns6tguKCPH%{}|o4$0wthocsdDY^U2z--a;cbRNYidO?<2EAv*oZ4CvT^Yp3-Q4 zA}2(n@p0TXW~@CB)TFWT^>T&XHR!#L?zIf+(AzPnM$Wf$V;I*6na&qfbJoTmjejV6(I6bZYXRQ`o3U{r(a2 zJ2?|VowdiRgax&3xyiMv!p%%(MOWh4e**$_F>5B!P2PbdsknYR`Lq)gFz(#f`f#n4 z5sd4~K$=URCPoWnCp!Kw#@;$E%J%CPR|FA}P*kKFl$LG~R2r1-?ijjTN=3T6yQHOC zVrJ+bx&|1!8S32lJ@4=HJMZIr&hsw}!+l?|ueJAHYhCd!P90QCw9w(xLCfpV53R~o z7)@7!%BC?pXIpko8X-(!>G?bOX4iEed@4DUh3Ol05f?9sbTf7o8Idwoc2BM`&$-ZP z)Kz15C&wO_`ZK-YdpE8}x8#bttlYLG`&MTX-0=5+M=sG8@H#1?N*zxwN|G=wv24kH z={X(>Ppz#*#%UjaI&_Mb`pWd&22J6x5cFebGQHbb|_xR3@9J!fCot1>dwg zdM^78N_JOQG3Ec#UyCJ-JN7ef)fNYR%uHduj<(MTdJ(rCR!h{-$f8`%gtu(5^()Jq z&2&qhkZqi%*ixnM@ND-{faNZ!KrcTg_OJJHDK~Gx>Nm&7XRH1svAyOB&n7AW^tg(CR_a9T z5fJoT$8F?S#EOZ0e|)-NrG9wJVG~Ea=k6=42FQH2;JF{h*JMFn;ZNfDyD=U*a|E*_ zhbh-|)Wq+Mmai)+!wf3$_LB!gJz$gOK# zsOzZnR4Ij}YP_x|q&HKRu{HketdX%>zLbeiCa9pJzQ9-~R3|HwVb1s+gI?QK?;CO6 zppWfF#(2HV*FuaLj`8PsB^SrR7FThEmgHI5Z}q(6I4uiW0Y$K~{(IS#hMebr9ff zI>ApTO>^Mf9&CO&rX(6KAJtVq#^ngge$0k)LXH(LaN=v=_F2q`Oh=gkaZBr_ z^dZ&-Jd@{vPt13cJDWbnARH{Wv`91;CYWx$^&181@(OJvWNhf-P5KjqWBTqRop1*b z8Y&H{!W>-;>+T#}@f(-j4N1;!@2%5D^$hByat2FblPdc{L5C-=&zr=YS_KAgYI12H zLx@Zb7C3+h#r>p6^hH?=KO@&me{`=xzWuAs|ER0N7sK6C0fE6~c~x;*I*hbQe%r2X zj$CxnV8H=79^4f3#RIiBUTCa_%`o8r8C{~h)lKe%jipm#H6p-g6iJf&?Y6+HxTeL7 zopyxz2)y`>@U>e=*&MwDyW%DGgoQGwZY#qJ=l)S<=L?VO_h)Ta1Io&d!S>Hl4p;y> z-IcI-3D98u=hpYTk3xI3d;Z3>Hxdsntz)v}dt*i@Gc|jlI2dD?pTZp8iKI$hnUaW9+BM>MYGl4Wz-yovmuxnAL=RCYLW9vR`wQzvijGSKoaFt9kOl z)H`oHnDjel}!{MmjG!QF?@O#5L$iY*Gj!5?kyEB3*t2BMOAE%JF%{$jY<1cu5| z4$!@DFD;p=>ALmM(Y?{q@C(;t@w3zE7fn;G&3hFXY$V?aN--4bBJ^?GXG zZn8_GWCpf-4bO77N6;P*jZJcKfqismC;&*1ooEqmNLjyYaBxDFc`u(JFFo<5kmGp2`@db*^p_5TqC=g&`B)PiH6OS?4}s5s27ngZ zjg|#Tj#XIbIHy@1GZIkGSpfcGl`PwATav`s97oAMO$i9rFwkF?=RcRMgr^_fGat;d z-9;8Y7zQ@`jo2l(nb&XqlncXb{+f?x;?&sDds{7^}OdmX3n@$orkbz1((t5TBz#Tr@-H!b`oWyHoK z{F}TmAF2++C@0rg(Cpor`sRf{Z-U{|uP*mpZaQe^UHX7G`b!G&>LW@OT?(Zs@%aZV zwr{e(BIvpq(s>pY64ffpXOYW;xiIeY{crkRWcCb1Tl1KxkAt?yemmL(5py74xtXun z7gKz$AwQ7!txO%>>~Wa^$K-5ZT9JE_(iy{) z`g(zH3CDhc=$1~t+^ZK_Nb_`3pnmn<#q8%0YC6G6-_lREwA&X|Rjn-OR(w)7M~60| zSeaPB#MnN!WndgRc+W?!+S!=$g`+dE=LfHr|6`VPe%fWFOQe+wa0YQt)rLur87^H@67FbFVV4ruHS|9*~5-tMi=}OOEtp~YbTkKveM=gccYp?wVH59-K`6j!#s(sb_Ji{ z5v8whBT6ja9HmQsbxNtIhl)k2CGQvPf3@zvqWgV!RqJBa@V_uD*|QLwGLa{kF}?Y; zj<0c>Q=@KaOOT6GHi9%Rt+;s59X@y(Fh7^)wAv&4U{`iB4qmNOHhv`}Q3_lTI$=8D zUNg~t+b+FOl4-+jc2r)YgV(@6WHE8{6Ck_l6rUKEEZ%bMFx_ z{dOvzBBXs}t+CTo?IH3FN|u4e1N~{PR>EIstIur`MvuR){}^>yt_@jkw+t#|{2!bM z*P*{e286eh-(n6wc`2kf7U{R&u*kI={8FuEeDeYFJvPza&4MEtUoGYF7v~y~%{(=~ z2B3Gu*cEVv9N16475=6U*vr-nydlN1yiQ4R8d9bHa4Sy0=l2EjtloYAt1Q6mVL5Wl zizTXlbN~Znzjl*SXbMIP)-BaH+6^*0Wg0+5Pj}c?UCZ$r*s~-&wx%p|(^}%VSRM}1 z#u$F`P{0b1$^s(i3EP~-TOVOp)U!KQhvj`dRBom2QimOWdd(K+<*BcL21!Vh54!gh zi_w%+f$$MKWO%VpEnbWhbu3_}yWzR{;#yW9-rZN~Ab?tQpEKGG zJ9%(+KsZCKMZmNg0r>U8S?{-}3%bWJrl`+MKCgnQX={2MoxCoS{UZKEiEwnLR2)#h7AaZ{&yhE))+wYCLK=irX6)U~cOs zduK5}?^VzO-{GGXy$6(53nspG)4z9sWJuX4h0iVhOFL3lG}Mnyp+0cE)OCOL+oI%) z?n->e%Rmqg;H49?gq|HXQjG)M_76CMaT@T1gqND z^@xdj^&1_2*lvax-q(7uI){d9wYtGg=Wq}$^^su%!Gg~y6jgS2xO5i^BO5PoX*Dtw z;w1~|5e>F$zT)U2gti1^?i{@9IU9mpAiVWUP(wLxINils6i4|_#O{eCk8nm}w zQqj?BQGxu?af-g!*uo@cB(M;cn`wMI@uNfL0s>t^hpwSN>W7SpG^qu&g>pA|+f@(? 
zEl{OVS+BS>IycI6qPCt7xG9wB48JC;uhcqCM?}rU?cb0e<=YfJKT`WxtcI55jPw3B zg^Z9OcK+SbMb+XOEmk#f19jpuwJp0VOXm1&ZI;q^j;>>2uEfD-4Z2weS5J)NMlt7k z_VQo+@&P4Uc1hFCV#& zmLj9#HCaQ}EI-810|K|0V;L5xTXqw)*QIw2UqfE0wdU}+J}Bj#oMEwU2Kh`(+e^>` z%_RZ9lO>Xu2cK!1MvvBC+L=23wIKV?T!7lIZIrJE-SLtg-GOP0Ym(oLdm=~7W^A+8 z(JB~Ix3$#kDjXN7RbmY_QH7uf3FpA%9ogG6K{4-kRzY*lTkAQ4>NC5fbCgW9Bp6$?BI^VFTDDcOUhKY_!#FzN=GG<`*k&sX;MzU3C}J#AqB)str;%7b^eMlh*__LUAXzi zQX%T{W5f?_oNF;JgPRn66Zcx}(Yh_!M~F+7>-57^z;cTAqY87fL-Y!+Ec@hx6Bds_ zDm*32fu58fC~x7*QHSG|W;68SlLN4LFD-2D?dNzuh~y-H^CtIfOBtB8WV;- zaED+FUR^e>gE_E99Tkuq`@>BTqd+~q4l-do>%glOy!_NBZ!@7Ujxf6d&}LrXH6aiC zoXs~rAqL)g=?YNhx@{_yQG8?n&bFo?Xyc!X`DWfl%}<=iwoDT=Dbi=wQoGALO5nQB zeWnAL)8DKjAA`ACVw(x?&CCkEwnhqUkr57`p>Habz(M9ZLEeL#S6&_Cm?5jKV&wu3 zZK=WntXzX;g2Fw>x=V`zAft2B%_DX07BGcg#3XNCX+$L@QIgdg1YPCBQ#@~Kc`F_B zu6eE2hqJ>qYlZ3{2XY}8JG1uHvV88HI9|=WS(H5#NxzqCX5<7}+?(L4>1n!d7KQ_* zFRG3lOY6hRBKKOY8n(xM=SF3{?;fqs!o*Vg&c9>3$-Q6FO z?m_t-*;~osCJ}iTxn05&J8p9+UPGUWW@27!tL$vtw~wm z@BK+^F+~BhOpB(%YBbWDy82wxM#Ih|4bnV)tL`&=$?>y|FNaFFSnUD%WS<FOUY&NM9h`tILY=O7HHJGbqbu$J-i=GktEjn;BaP|sZz5pJG7BA zrB7IdhCxtVrWbDPU=u|%I?3}GrNOd0-OP818Oxi`kXw6;3il`GFb72xONy(^){MNA zrcJa9wS#v;kHhtcz40N2Bk^aY;0|*J<8DatYTJcsDUrt5OSnw@S^150$V(fg#G}2! z2g0$XbfmHEr4zU7=WJtnq@ke0+T#&6`*UM1%KC`Um>qQ-D5SQb`k8Xd zl(%?^&~qdV)s<)}Fdp@P2^J(P+lwQ~FMaSpHr>B7yK={MHJz6&6J%0F57)R{>k8EI z^5H7bA0)7^c-RIduB-is)q2dtOEA$M-%03LFaAKh&f@2s5Y{&Y*<(`A#Mm^-z44!w z{3>zN593~}=YYov3(l7hxXaF3j=rz9Xn$ST!IJinBfFf56EffQUOD-t26w0r(weuE zOoR-55^XfFnuI{(=V~(1%9~xFj^HwMj;_6L&CtW+7L(0!-&Q>u?FUPcb7g#daIV8+ zS`PN5!1nJL;n#;N`Nm7f#I3Z&^hZ}n!i54cF(sc{B+=f&?>#af*pKU|ZbOR>mRmJ3 ziPV_(rKfHe?}MA{E5)!k>htAkTBAzg#bKf>-c$I?snJSPif?g z84+leGtV$mz5U5u;2fD|!>yx5i8%4BVc;b+aGNYLKU^%va+ZBC5pTY@O;?sm@Dp@3 z@U*%1+C(*5pt1FXPubJO)sf%wTSr4$R13LFX?5j{HBZJkC_;T|L2;g&!_i1|(cj_E zr+3WKG!I%xKhUmRZPd7@=@w5uDmKtTKs-=@7);&N=4F?r9~)-F6%P8B;#EERM+rDH z-k<7%of@YD(e_4!=JPd6F}={XmS4=7e_@g3F;>ZVNU_bIVJ)1Qoaj*9ufKCnKj&Qx zYcRt)G-(wcDrffYc-vofvUfuz7RlmLHgR> znN2`saPA6+h;9NokwCC(rLgsu$e8O90J%$l`>TGJj za(&?)pkVqKQJ#8TO8Br|QAH`ihYC~chl87UE{Ag})@<974j#yT_*glvH1%P6Qjai; zgF%~oMwmP|S&Z&3GH7S^G8njqJ&Tu&g}GHm$o&G|q@U`CB!-!GUgx`3D_=i(d~5+*hL~mB1a9FTAZiT@rA|Z2{MI>JK;j zhl7@{!%oN&ws-d;(m}W1iJO1ITwGf(uf+rpSN+mAIIW_ux=(Z?vr^bvxeWQ~zps0A z(m*-K&F{|Fnl;lGeLG+wP$Oh`_;dL}UUuuLhj-FsRF!p4b20`-i(MajPCwBQL|Ayb zV+1Id5Mv~K>TbS2&y|?*zL*?}g|4|au1FI0`>S*IKnumiCK%qF>EB|Q7%>FqxL3|7 z+)4Bbu1UsNG8|wh)GV)?9bO|TuDh@I``|h;0vcS)?e714Z(O7vZS<;>u&1uo&RkOI z3rA1*+nZBKEQ0H`(}%oDd$)q}$wBXhdiX3i(;Adxi4b670m<%_`QIqEtQ6b?)s z>QGnVc@-8G;;LJ3O9`%y<$$kG)r}+%c1-G!)g?#<&-(ahF$UwBPrm&`rTE23h zjC^csK@!vL{zhSfrz&pW(WnK99Kw0rv3F3{JOVQn&)-qjb=A2!s99z7O}clFRkPgl z8Rx`eiuiUZZOzsznR2nq0VjF0to$6}$s?#$ev-_Wv*83Y6|GMP24;cq3BUbfJZiND+ODPqv zBU3cL*$aNpi^d>s0elqeWjj-M`nj;~fp(M5i7WVn+yTaFy@(S;{E0vY#?diqnZcd( z@Gp4tzyFN&4H^Vp(`J0f2XX4-1Ri1ibtIO>+&}*le+OtHC%YN`1EBs|P@#Am!L4@sb^)ZLsqyBBujj{bkvOZzK zGrG!)DJs0Jl(O=&>zu2N*h^zu*S5omdKzhJrf$CDu1fzm3ia8}{yS2#y?QyuC)-3E zi`1^4`iFWmn!I<>VET6el`+{Z(7Mr@EY1cS!DoQq2Wvbr+7`h*Ko3;Mi=G#O_&LHzm}l#aiMwjTX(6GZVnAULJ(e zkX2$ak`R!@rSF`pPP-nfdQ&8|;LbfDIqbT*joNEHcQMdO{||kUl8z%~*M?N+DeL>u z^RYUdUObQFwN_m|7C8UaLO8d%Z32cN5rZ~il6eQM@LBQ7;aH0afcXr2lFMtDt`DK` zf!?-=6EY?p9zo3!gJW})!AO<*+X}_|VjMk#&BZ=yEr*Vw*%4#o4w%&J(D!G&48SSK zMAVY?E>=#I!;7D{?0)9$l~SnZ($dOd_d@D1^0HPk;Br!1$%tT#m#ZftxB-yjq3V@Uu2y!5GEYY4DVUx24}PZa61{KY;-}-sD%Q(v!$K z$YTDrC>j@faGhdF6yf6u+?R&n>?oS8F6khz}@fNV0s&W#G6g2mJHk_~*|Oq)^uVeQ34pLqWHh$)l}2b2-+n`7ryrmcrzd z;QeYp={4mY(^8P`5cCFrVE$FLY^$eg-FU-wDyW4biDk1zr28g51BHasF8yNYgvCiS z6DIC-*$grfT6%5>5@i?8g;~wXNSGE-B&~}%T{>YZ$!*D1b!U5iM&GF*IEQncC~0EO 
z+Gxu%p(v{QE_kLlgbOkNH!=L<;U(DoJ`+TyHzDPLDn$#B+V$^NX>#5hJzMY+)m8Ga ziEgr8ep_$ehyH~$B`vOSg*k%Vy5~!6rs(a9q8nu!30;uMyH8;?vC4{E@a{7+!HE7~fH=Wsck2}t8@ zL!e8a-_iPl*qh;(uSiH@>2R*p9z65eaea+9gsdXDrNr!ylm0BHKD1IfQe`|`Exl^n zIgnxfMfnbJwih*BUOx0#^|RI;$@uLIc5bInZbUS)~HZ zpd0zaK`v5BoxkVGqtR*F_MF12^gxaKuu!lJblY;_)7LyXUmoY+X;FVC3y>l4YpC_U zs57_!cI+uUE7;loQGzemiZ>h zLAq*9Nl|x*D^exlj0%X*ufzA=`jkwo6O5hQ!t<&sSE)jBsaej+paTOEvAHkYD$h!; zJq*|nkH0K>B_en7DKA%gXg63Q!8;7n*7;}sea;tswMR&=cdp$6_$~EIKW1;!&=gD# zrV{6ECfYjZY6eIfZB15wAF-#{LElmlDqZNpwz?g0B4p|aA_DBP|Nd*9XF><>lFCWACp)zS@{A9Nm$n_6%R z@HAPP7{W8S_0Ij{d0TeGbOeLD5RYRV$2YeIxPM1v} zmyKE(ZoCOIuA@x-BmF5PcOCu-;^SP9V~F@$vM20I=_Z=$3zoi}^L>9E7DT5l7P*;jB{kac-7=eKdjg)2LNpL+S92`<%&^iFcoc)o8MT z5V181OY;1z#R-CW4w6L4wEnj7J$IGP(J$aLRuFW>aKy`9aj%o1uixEDAnx#oo87Jg zcsXHn6?g?BnF19=VUf6BLjeY7g`LUYA+Xj{)^?Cw=bR{k?Mh_u>I|BTg!4z8p-vNg z7D@^Bf&rozorE0cGX&vUtsC0!H08kQG)GeLlCt2;4C@m5@Ai3p9c)K70qx)c_Vop} zQgG8qE@6N5ROM$=-DH7ZN0$1*W?@eA!;_T4`_wtpe9gn&1zVPI- zpBdqzb|ydP+o8Elx$I1IxBF#iaVw=okh7p<9l#$tEqPEL`~kIq zc=+h1F?q1BnBrko607Xs5|`Va20~bd)w{P;9M&{gad4S3JMZ~xoJ*o=jP}vEOGcTyT0CL!Blgz+i z3J(`#)c5g{BO8pDPa3pV1nMH~XhqMgyw5g01u(ei^8U?@Hf>7yQ(5k>UtH?OK8NX?R&oTw;W!!aYKk&_dsFiUKowrnMj_ec} zG5cr&OJvB2NW?-zHx?>wq^~4*-han;|AByQ&+#`1wgKQjajOy&ug=jOo5Hjp$+Kka z0CJpA_f;u2T_At+=FNSJ`KfXCp*QG=I`mSq;$NU8Z9{Hn@)P}e z5|`+pYb)p(_ABqfI1=HPYl1 z=;Z5RIhY|()jI9NMzOm|L>!ras($8h0GY-0aGIEfIxr-bivy|qveei0>_(PedEIp! zD<1w!vp|tqw4!C92ekhJp#vXLe3xNC<r?8PSDuHUqgf1#kgB-ae1zM+VfqI3PGF zA`VPy{)+ceUzqKfKhZFyw`V(IL=<|8Wz?P&a5#=>*q0P)(G|W~x^{JzHI6l2C{Ooz z7bITlpUkGfZVcUaio1UDbNwu)@HaZZJ2bV(V^l#B;M%m zMrR_82}h73$m@dYB5bv~%H=KJ%N}PTk8^y30cq}jOgis!{ZH+kBH%+4DIJr{oln4w zR~4+Wmse`R+CHUkm%>b@e@hwz7$wNu`?E|N%N;gLphs!-X_l0;z}04p$zw*e9XCNX zP|ZO@pMLk8sRhqXUpSp&fs5(*8Iz&(1JxyDy3^Cb0ykv~J775e`A+f^ezp;X)uO)wf1s`~N3l+bM9Nf04&~zR`zQlshxs>snU8{9(=ILBa z9GP<9dRfVD=+=>1R|ynwu*_A?*V*rAA>f01I*sJlVozmUXMG1r%$_<7JY-lyON>fi zyQzWxXSyAvs{DufXBOC58F8$5>2UTae=IzG6$%822odFs)a`%e!|eC#vFkD)pFfqa zNHrS0qpyql6Z`iFXSFfKKmg9Xr^cPPu@L8=M*mw!0XS_bA6K8wpql3a9`?s}FrCKs zN^@_O3N0<~MB`+U*xY({Bc07|zoPk>x+>=t%{u?N0!DSlP4kI%QGUy`$+q_h+bq3n zbXY3_VglY5bJG7%}L@`JRsFg z>V#&B+l?L_*OLRTw6o!>VHm5`m^f*Qm0k8CG9 zEZ4XjTMf-6X@HECRUK7MXCqeJg--rlwMsU^MDF<)4x9-mu)HqcUcZ=kQ^jOudJ}HP z;fzPA|HgtFQ2AqmRuZ!iFyPC8<9=`;&gNN&E#sM!GloA+w(Phz8c$_z+T2tVOG;wY zz!!H`F0MG>*FZl1NQgJM*genuzq|lA;Tol4YqF1)S_L(U>|PsdC!g@p)DvKvGO}f} zDn527@#oeRTADvgrH!bnhw)Ln0tLgKBbSk(oj;r0O?_fAS}#p3>B^$RDuq6+B}pL`WNmhZSn`u_ zQ287?S}txaReo6IzfOUmZa37FAz_m3~UaVK}%J7c;#gmKcC z-X*mHXSHU>NXviVX2S05*Bv!aM$XSK^XUbU-L;_0q@8dAjwdF0jR^+lv|wJW4Tq54 zV`Jq?wiF-YATTopEJy^f0S0#v%42)X=3h{k|uNp?=KN2P43C zZ?1~w4YwH`u_pP!Up$2jm-?{>-gV{}C^c#!-K6QX+Z^rjQk4z9+7INrfUt!34RTmb z>V>Me%`A0{{ zYK~?EOtL}i@9mMv?s*- zU+lO4{Sk5yO4=jwvhVw@*y8v6HXb#JHP#?M0sgL9{!$qTzmataAQ*rtqU{Du-c@~nsa958 zh)DS%WshF0&Kw~H1Ql{A6#vfk@HGX4H$%c^)@iXzz&vy!N6uE~O<K!Eh9^37CKxxdhKF*1h2Wru*t-9;pY<= zDy2yduZK63sp_U>QAT>x}$YY>; zkotEB3T%ac{G!e1SwWIah(6OJnP+|;))NQp$?io!fb;u`HbS-QhyC|CB-cm#UI{kR zxUqT*c&eLE8+7bcjyO;YjtfWRWSMz)lu;4T)wb>$Z<1`srTl2h7;Uv0kgY^O)-z64 z7$%g;hxY~sY&7qCYg?T*#n<}X>NEgUR0%ZB*C?Ff`ghx93N$a5mK|rS^d}+iQqG{} z);4)*ymcZZyKt5-RU0H}f>6b~x2d@vV|1}ARRq-VC)x)G3~uIi4h=n$R!4X*Dvdtw z!ub7X=~Ybe3NFx#^ZP=2W{i&_RSGSuaDtwUXwN#wud4YN<<#@Lw})B_iq#?poZjva zIsc?K8_K;OU-6nYtJV~_Kccd900%)1t8ja1q#3NzVhVLsugh^Qsb|oYwnkB#doto^ z+;N{zasn|NAzN^~7*TaZdk>=#SWI?kyOV{=G(*P%7##R{|4v0=gK0IKtZcJ@g_!Vi zkD)7};st1CSrn|={9fT7dagCjh(g?p?q*UIwM6J-xBs5w|M0yPPbFj}qiNPGjmAq5 zM$QM6ZwAZ97Ktx$6Q&@btxz@|HuA_1DkOSjF)uq+{!MHRvl;buIoka<6e 
zp4;ev(76o5T%4DyH=TQc1Fb;vo3+MUueJyxskwA-jBHkTVlg<5DzAc=0^l$P__20T z9?v~(T9!9hma1-{y{x-uP9_B@O_a}f=j_aUiDbv#notww`Vpl~h-^cjLBeFcT>Fqs zt#iY5J@?%(Vu2zU7oDb-yYR4R8$^^y{1F_FSoakqfAo7UhQxm#H7A4X%X3!Wc~Z>h z<$Vh-zqaR2! zl_(s`-zvVdun?tVJ4^emQtOvD_8Ye$3o4{Qj^LJ6HC?1yDj?rm>x4bubuuP z%8yz(i~02@(PIrkMRB$julrEq<4jw1#JHd*y3uQlrZtOyhO^-Y32p;BV6(W$rM2Gp z5$U;6B)+4Iqdi8RlGjv4kpuKrTGx>-NABM?;*j>Sc;n8FQw4blz0f1cft3`*t=c8N z9~ioAh`{yeGcT5_{bG_u)n0r;VgMf>J01YVkTd^Ehy9)SlgvmEF;`%o$M4lP@9;W| zm35{N;pSt$A?nby75;=y_0$8ezO|StrRzy_7H(EA!(haLZXg8Vs7Ko-O3OZq)R!66U*D6V#Wn{ba&*hDi%i8B7bfjau%IiP zSc9sn`+!GFfUN`>G}nBK%zIFa>B`eT*)C7Ys`(9o_@=w!+Y|V*w*rs*Qn{JJs+*#$ zQHGA4-cod_SNzLcN{Yg>ufTLC3)4Qc*ys(t*G|sPrklfFUd}s>jGSXNZ6p^xvWLL+~nTV4Jllyr>dGqV#TjLWmThWO2h9lLvOd&|t zW8c;gUSSx$dnE$buh6U6bnt{Mt4x*)s?jR4n#?v1=&MbM%g%2)#$q=~T%z{JoXncaC*Oa9Cp258@3NGRA;vJGm*cRmW%JkxtT z>Z3)9%0{WwP2tm#CWF04rKT;hvZ^|k*;!faIyD}yF5wT-pI(mOhRw>MfjMYcXm9q; z>u*+AH^*F4n_f@+>MK~4vT2fp1Z5FT=Z~~zDEiww9dg_zW=SyT4FI;+wX!lVVzPv_322ccEVg`Gd@Hl>cKaA|hWXuiPuY&I2<&=kDsm={KF5EOg5 zyO06Cf;ro2hR=Mnl6I_#F zCZZmCY>ZLwsHMKCjt2R8ENsak39tCp(+OGFGwELQg5wJh3a1m@)dw*|G`it>4lVpR zsXqidZkB{yFe@Tv^2m2_X@cE-m&&f5{iz1KXlKUiSQu66yx6y7*uj#BSZjK7I~^2D zK%`o0%Ied0OA4aqd;{jJeu)H@IM3F~$|%2DyIH-~wEAsP!Ol%}`{qU9(_YoN53MGy zEx(^`M2bar#^`)o!-}Hg>=ZlJ@L6XLk4+PBH3qT&6$&B+WtCoj@=(bZ<&BG5&W zKRkUqB@T30Xm#xf42w>py1MmOV|yM%Tv1>X*#>For`#IMT;X`+QQPYE?uAy7PVPau zGF*9%PRz}5_GUHMiFiDR?77feKqKit)f^?Fn{bj+J=3qB(Mfv#elTWx2?zp}6 z8Y8}-|9^@6&x1^#O0tUPeS$kTW1m_qhs8zT(;C^iR^xEbtHXmaVB3pPjWzkYZ&7u} zKC0I_$6h~nob_@+ zfzxfY2PfaCh*XY^YMCWOV#gtS#vAX+SjXKAZHB5`(RWx9UL<&Y`;6Y`w}a$M#ct(R zZ@k%)%K!y|8#=Y(lL+!`F(6oUbnWis!b)Q&cbaxq0wAi;ksA6!BNBR^+mqL$NG%_4^94au^zk`ZVQ)%(~vg?6BCc} z2`pi6Mee9Ro{~vT%(|Y3`f8W9x}b8T8ndymUf$72x%*ZvouvAsiocoJy>gz<`l<$d zG4G|=N?O=YoUHHdUTL+$gF}`*7iZNByda@ zw%&!;O&P{J!x_XwIJ$7jNqqgaq6wW{yr*j$6yxJOE8Al^WyNe;C z90UUG48ewYP_W~x1c98mq$e}UQy7qt^K7}-0J04nuO=1*Sa~QKCJxzsh%wo#>jxn1 z-ij%>aLx7j8FN~Ax?Hs9)%On3CCzZ(ugs&RI7$^Mv^DZn=H#fc*pFiBuj;F$0a}Kf zCQJUw#B`-gppst%x%o`0$=Se@W6#C*HT(pO_1*LcqJ(Pd9v#Lll4SKw@aA#urER52 zE7B((8mJt#DJpUUl>uePLGzojk)FD*a)$2}?^BFp+?CS;G42N93^dEZn9QLBaeS{Y zyozwCc7JaHaJS8^0M{&_&5{^Ug`o88u#wIUW0f<|sm5n;ukJ*!G4lGDgK+_(eN1~_ z{T3>RbE2*#Hb%_Veq1aYPE2s>)+!gB0afelx2>RMi#>FcF<#X5(Q=b`5;5J49njLW zp;hD}gSiH9SX&>#=|km{{--b2boXUuFV9|(FXBBOmb0vy{f80uXN7?XCCnD@{5{LZ zsc7;bkuX8BiuXjw(?hXmHa#oeW9RQLgd_aenqQmDxTsxG$htnntIK(#&}_(JWwatO z$kPKRJ|(VfOG25s_s<_=xY9=Q^F1bkSWRD-y2Uyx_LY3a}?dhZ>9I1+< z7G9{4?Pe8+QA$6?^}w8_j*{wCOMk5(wh`M?l`*%MS26eIjRS^v6yTUS3VP@_f80KM_z54yREpunW6a)%2>!tzud3>9 zRKLF|pi$nMfqXqL&A@{)ZNv6$YQ*g+RmM6d>p>?+PSKEFzP+lTXJz3(PeiBAlwDBS zA|CR|G*wQ|nNbWGWG%MIRP5PcM1CcF-7&}RMsOM(ewFYT5alWjm(4pbIK&S3_RhRLdgK(8*_G(cn>V>#gGtfHG?I-zW>~kmh57D!-rVHVkSJzx za&!z9e>CCmY8>*+%_~)Xm3KB$c*##Ysk`s_v7Me}-fz6uuP@2{Qk_ry2a!UVwc#^E z?^*kbc$t19>)(m~fByC8b8kJ(RC}Jo{?A@;Bse~;{~$t-sh?dcruFE}K-Ok;>@$CY zIfQ1FloHL7m9Vy$8p0I}yc}DKgb*+C;!?mh&uy|ky;49VDDcdh@LqcnG0 zx9Pr(>D{|`Ke5=0+;#w9yVU)4URhIES%NGJ!Ldtky*)WOyE*srlgiNP2GQzP$0o6u z^3?ztM%Rhtde05#&DsH8+xAo`wD5lR+S;1p?BWF*!JaN|DWuoC=a=R7OwffU-HkWRD1?THVog_SU`_y4<-?q&OPx@}fev7WvJ!uGXAlIO?Os2>Wl#b}T zwx_21y3W0Em5`SH$nUlY)T=r(V6R zBR{Dl@mp8%KlSy${R(pf;lh(sJu>z=G$PH_^(l}6+pJRxMXiaasN-gNKK>Fow7_%a z#nO3!ryJZ_Xo2R1UuV_Eq>r znb1!ZilXO{qN~9kV-!i@0EuZ978c$b8t3oPy%;BG7v<~g8$NvrQ&&|b+Rq7EKF}MT zv~=(^8HEfT>o2)_l^;ZH)6>&$($HUj;BWpgUi53-k()b<=O6%r1RQPAI*G1&*iwo6=zey{1s9$b(E&t2chMp?BjRqi<+RFde!d>4-L+z0b*r7UY74>t z9<-LUxTNGWsM21|(#WHxIKK5P%;VB4EF}$1Q%R3LyUV~cVJ+hA?d{SY&#kXU{hq=U zsv|x$`d^6}v=?mwkp8w=yRTPj({IZEN^u8Z6i(F)`Pt5;;jeR9c^IZgz)h;iC2MoD 
z2qs3qN{Q+h77>A@8AV$Sh%bZY!2Xh#)KCrA1hCvenoqpu@lI+lwhiXj`EM*`mZW^oekXHjiB<- zm-W-OhS{w}ay?pfqZR6%N3Hw@hm98sVrP16Y3R|p<9q#f)~{SxMC1p|+99(2##E_j zhrrWfE*!QAPoy|Awqu#v_s^tSut+lDwi|kjWTC%I5r9~(?Yw;qH@HRNn1vk`71ee7 zu@B)Nuhb`3iB@LYoqoa>F{->!+33gQsd?^Pc2RFfXXj+)Fg8~guqLUaHZYp6R-eCo zd2W?kE}dJu=T_TYXD)>l%n=Ult&YJT$&e2a`jtMJ1v{r2k{Nv`R>=85ORPATlq zcdRaI)p%^1oEzJ|oMW>G;~HkVr=oIyKcIW^M84=BRq|Ab_aK>Is#bLQKX%97-nHJU znq$KI?~Dd&KN(qPM>t&(isyd8tRNBuna8ALkLI5cYB8!)5Av+6){?ik9-890pCM7J zw>jDOQuy2v4GoRkJ!)DO(e(@+XGnOj3JUf(bEg~)?&G>; zHbA7UtvvxcpNhruNR+GR-egSxLN=;X5uzs@#>n~d<;xnOjOAAmcb~5!AqKYPlhi`X zRjbxL*AsA#Cg!0wDZ#NG&RY%rPePFcj&Jgm6ua&}iRXzv~M;xmD`#&xQ=&n)IigF;!w`z+crn%U0= z2fYfUAT0+weJQ&RBwv@bw6q=pD*BF_Tg9R{|AZ{xNRzXCq2t{AoOPMDrlxtnqS+WJ z3WfSK8EwFI`SRt`xOg6;CmxgR1}h=gj0#&?T2$`cdmzG+UsO}05AeaWI@8e8P-Y@R zQO2-4x;0$-(Wyy(wr$KK?qg2km8<6A)5D-q)T6X!W05xJ;^N1TsmOTau@3=cphwLv z=@yauT$?}#en()ktOVbqTMWNrF z!}etpu(|s-FO(bG@6ZHoJd9H;dYLb^v!S_WS7IJkybW=>pV{)Da@yx*srI~rwwKLG zjxSPud-geDT5Fxb2QK(M>F%PPS43gq_42h2Ih##QeSI!kTG|e@#imJ@bHFdNq5!UP zLs6Czqet1ab0pk%iExZ{{*h}6{?>29T&=9K20#Z?2MR^GA2bnsjCrJ@4jeDaTI1KB zQn)&%ToVa;2*hZzcF=D@%LLi_x}8Nk`@h?BneCR%nkA!Cp2`~i42FFT|TE+ICgPs3+?iN3BZ-jS~)So(H z_}-a(GI?F5h72HFt29l)Ues}$UGx0VrP32_l}#DLj%c9;)Se6dlwXGpfy~`f+AR&g znxYb~S1mD0mB9_GC!a2g4)v7p6z|=grz!3Wsg~OTApT7x9=Ul#RY~m9)Np0MG-t}dhL9y%b-)Vb$@9~ zOJ}|S!LhZxy62DM^$lb;IwzZ=hpEFHeB47#nxd*}z{vx->%6>4?d4jdt>k7^!p-_` zACB6SgDxNU(Pm?gKWbCkFae>(#^%Mk?E``R2S4CmJ-_AswfWWKnBGeVpL}-AoTs!$ z0%3Z|vH9koO7Jk@`!i=V)8uSSvx{aO<_gI=L`V39n)hTG3?A@K--)Y9{3$B!?x&_S9uU3K$dCK0+f}dgUHE=1Vs}Wa1wHS&Q+8CFd;?} zcT!JcePOzy`UtG0HwaTcF>jm161%REtyy3SPSxz^TJ7G5b&8*wqf^eDoS2C5lG#$s zELi(boc}dp4Y(AZcstIB`_EMH*rix+k~WlKPM5x2pXFd2OZ!Hn=XbQqoI!Bqp2KU; z#m&138u|>Kky^+9%qQdit&WSMeP)gDiZ8p#`wM>k@_|m_Q%(ZY*Y#2eE&8-1=T5!N z$vQ$u5ZZH;eskIg!RXbaz~#_8Q&zHn!|$5o!J5Ov33fIX9R_+pp9w?brF!p-j4S;% zv2{wak4WyRtA7Fgc&6pEE<>&|sjhpBHy%P6ACDZ?s&HgI9~kvaSv0TB@q8Gp~U z&dDr`nS}*iT~*c2(QmG)`ko+pa+$O0yR}{Bi)c-s);!xqATRw#b->)LTlA8g^OGBU z@u#wq;!-9xAU*wjKRWqIOh6PZU@`e0f6vOm=ivFXAq^jaD-=sR|tDi`-p zP4^mBcQPQ#?-{&+t$tLie1qM28wTYdyz=;ssA&S%oz0n$SFY6#&7Alo^k7x++eB3s z7FKRk8FCseQL&T!VvRri?;!$`3x+gNZzVQo;_0b&Z^@Pju|JN^O8eFti{=T%(1vjoN8JVx#@y|(cGp-G`}?$#V{gmX)*KtJedaOTTpGM?@`bV~CzJ`I|LD=9S2;N?kB&10Tpn3@ z%ptheXO;IEbctPgdi2C|bdgyTINiZIIXU?}(qz-7>NX5kUA5k=^DWG4S3=)y_VVo9 zT%jm2!!hP?6@utW7(b>je>q39U@QqJxAeTbQv&x);xMYG@MV(gAR84K4Ht*|)~KIL zr4vE-pLoFgoS=6;TS|-kPxO*`mMEi@Jx^d%D@EU*1W9ior0aHzPa>i5#&^{5P@`_& zYW3|B1Swz?d&Q6!WvH+!2KTfZ=?6nAm&Em7*HnIfhgD?4bq=25`OW0~uTKT-OZ(?X>-`r0A6?%8PIdqOpOZo%rP2~2 zBYW>rDr8f}F|&`o_c{?3A+q;Q93y0pmc0*-P2m{F-rN6uLQl{4|NO42s|(lRe9q^6 zkJtUWU-$hf56iHcl3TqyTy9`&+{};}cZaoR@6$S^>!rs@NlCSv&gvG^pfkSg5IwaB zCZo{VH_p=e;9ZCS(;i!}LvTrJkL5ht9i>1#Z|(fRr)y+1e?l{cQTfos;AJBqpX z5%!g<^+%ML>{l+*mFKd3vf!a&vrdtoU!n;rw0Qjf6oDZhA;Y3blYDd+*Si~5&;gK3 zmDj(Z%a7}2XSlM36Y{3}wv#~l{qo&AFv#Y?%I+Ex+_BTjl&q|a=KFCLn`f^oev3lA0FINwIs5WgrI?o1 zuouyFj)~E=BJm*s2LL@KPzMMNOyTjR{2k6n%&^FhxykkKAD*0u6hqP%wH{$Abrmw* z2POFJMP13{zl|6^arv4mdy47@vQsJU45y;#Sa+L!x5 zI;#~FGsB6~-yyM$HN7I4ONy~G?2DgP)vRnjQxABi_Tn+`$gWC3WosKO(B8)=CFMM} z)P7BuU{cX~8jrYdgHCxWh+J?jF{@G!!h&dOiRSe98A$tzeZaQonQ9|QBvHkQz>|~O`+i#d&{V~Cl`wH%YSSSJCOOu#Wtxadf-Ft>Ik8NMf^jwre z>+*2V>^a#eG(glU8tX_A>oYRm=q-L}6(<*@%I??KXIaju2L1V~AvBQcMU8diTSPr; zL93dI@ic=*vB$?5QVH3}Qn)jiQd-0lZ$?JQe}C65MQv23N(IBsV^(A~>SVFAUorQB zT3Eok=x~f}u^qLXm>;cwDR7~9W-*vW@Ip4FY+<{9QLaPZ(q>W3&~w7!qd(nu4g{55 z2#cMaokjhlL6_~x$N)e8ewNh%vx0`9&%%#}1a1P%0|>p(Q_bz|k&R(=gFyu8tD({? 
z@6NN7Qoc_Pe65g_-?8Hm4oXT6r-dPWRihgaERDoDRB7Zfj}k7gK6pdJFKuVH>QQdD zYDHohK7N%+jjbE;L;X)I{Ri>ADi$-!Au|yE|I0`$6qKsPeLfSABBbKR^BhY1G~mcs zfFFt19%2rXwOW-*Dj51+x~;k4TS1`%tv6NYKN9nJe}+?oEMGNJl&+<{>}%I^mpW*b zh;_Dlo}bv5sO~R5aEw&}wBhyr-@1+-ZmIeA#9x4LN4`0TQ`T7ZH_d)t^v66D@dF*A z2B$j~yZ1j6^Y!R;P+HW;VdR;1;$s*cYKMt^WI`SxzFk9Fcy0^T8EIy12!B@t!&B=Mcxl&7Dg?Yjra% zCFMcTE&e;)+}v2o^@+H+_(L;w*%)rKh&TWlX>Ip>R9HT-TAYLI1hM&1i5!!`?66in`_eIl0u#5SP` zj)!nWPy2>q#WX7PGeB&s8H7?Aw}E{1G;kOBkI4Gra{PUFVP%x(0Al?=4>A4J{g{Go z1LprYMR?l9C;_W9)+I^aIfBnKU7sFgX|l1Xky#wuI@>8+nIv@Sx&7>^bvMK8UfoAJ znpO6*m9|teoKbDHCO41kJaEJw9J{@;CMq2(~%?eXMu--XWRWZSey` z1tC7%vPl_JK7mvaZgE9p!sFt@mr7xR&GCbHjq>KGgSS>+o$SCNBcfIA{Fybchrz0* ztpp~UqFhu-qDxl!WmaG5~<|XDYc&0$fCNH$?J6~EW zM|3T$nYBifrEDU!mL*7wq_4`YWd0*szftHm#cp5y1Q^W^6?WTb_~L_?jzGvlH8ea* z&<6m8yOGE8C{q(D6%SvI2Z+rY5(yxj^)P6;is(9NWgh@_Ur#@gq@T!sTxm-yrR;v3 zuq-*q@qTot-5&md*ZG&W-a2aP_ecq1Z>X3l0pQP}!E~xVf5O>6L6RiJ8x{L`z;xPK z$GR6k+j8I-Y3ZIK0O?*CA9+jcf-8Qcz!&Ao8IDnT*Q(E{vep?4e?bN8A012aCP1xS z=o?zzf!YxE%n=9h!ee}4%M}M#4-u?yi1)Yx`=opz5Ubk#)Xb8a)Z6eFBP%I5*CGOBN zA#<6uD*@Sg!(e`(!?r+ndM2`Z?JocN(~3BU3eGOFTyKIp!pQafmOjBUA}|_Z>F~#97x#C{OR*;g@a8M7 z$)yBXy7+Gs`fWF2UgrX%MRxmUWIv$g5D|}R7VpX+mHa{T=XZX<*U`=X_CC+7X8Rk@ z;pk+H7=GBN!iN|cL<6tn zy*|79H*YQYhutu!v+Qh`b7<*|njMSpdt2rpo(XN7y zxUb5>EG<9B=?lo9dveqZC&Ci<(g<(zv^bi}XA%(ZHR~wd+0C8`4H^~!T1Z=@EHAkr zv17e&kT#%MH~?~V0n^ODbD8_t*vElbX8t@KQ4i1b1Fpw3`L>f!%D`xkNBi=ttNBp| zB1V41IC0Szmaj69!Kr{Cu4hO#OL2%fNB%pzD1;YVqPd zvdd5Acd9-;Gt*smMXH!Zq4|AbXPeS`3kW(d$CAiGxRf!=Xw#E=1OAk3=|WEtVR!eU zSADxWhcPC`a4M}F59NOxbaKyEj$5B4JqLp@^Oy`L7fWJj5C>nPV%PfaNftiaUFdNw zRH@*2BG`JPqS&%`C!%+==&A{M_WA>aAyk}UwC6spHD{b2*{$p5@%IO|ED$=VoeY&# z2`^b_t3zCF&)i^ECE|4T1lNbe9a*uD&s=_dq?WAMPW{<#Cd?qIBM%ZT6Ur2#%5dZi zzBC=(2Gw%|WJLR+*Ex;0K3<2RLEEEU71Z90>+9VU0m>b$oAhqG=rvs#x zD(;%uuo`Is=5c3nu*s?PZC>y9^sd5rAHo^$$DHH3b2p?E|LAy?lqt1XG5*k+jCDB# zuIW9<85~hK@7h6PMQ~z|#tW`}>r!B8_-P+O;?LS?u+{o28~-T?J#fp#xgyUxx|0O{ zSbPN0;ydPR;MI`vdm_&9`H!M9wnb7i7MY+*_r<>Xs9*v*gjG-+t~=(-7w&wzA@NVB z09H*ib|`ETG){zjS42grl8BE5*mtyZnE+pa+-x(#N4iXu&|yN4v5_ zDltCXJH`7C7j?26Ug*U3Xpu(1X8Bc4-x{z&9cbLD!*vYqK1=F-U8=E5%8^IK>@&VWUwH!QHB70EFvLKlq&-lOo03+$y-ZxFl@~ z8uy41m#4NH$Np9I3gkVUVnE(zekq`d_))1u#t8%fVhuR#o8~ z{4P{lqCO;&?C@k3?e>at$EGwT%qD)U79lw~`qy}E*YRNge zmOz``qhdo*KqDPN2k$Cxg6u7yeUf@@QrG9Iv&E|%2-rI$DEe0y)fsU|EY zCdN78lTO?Eh}&W#*g`@_%h!$Q>g%H~8L|y9AF1p|DCb+2)ZVnJq>`1Dwb^WFZVs~o zGj$xWuC&s`SkgP*V&=G9*SZ#3ucipR&YfSgI;V)BSZs6^))*4Vj1BhDa5K_6@}0B! 
zV)9979`3E6-i}cMgHx$CGYpKnBE+}XD$092H_Nb&LmBcFaNrqBKF#U>#iY1XVq0Xv z)eVj?w&3O7`)XhOkbgLlE!=x#frW%fE}Rjz^dPZFUW;PXorg$Y!svV|(8c@Xepz>P zA>0H=nQ)0P%u&HbNp+?=t73rsN1N$`sNM3Q=sX7e3zIS5vj<BqO|1a)*6*WG-4A5OOmkqSNI)-a7gr41C@2_K)$Sh+WW zOQ{M*9J=~VXz(6+&E4<(+yZU5+6O9_Gtg9K+Bl@IcMN5#$r~-}G=dJH59>^b9;mFE zv0txs^Tl}Uie3ts1%Z=oT)q5few{RUiWMei6Rc#6QTee@sbXg$felU9Y9YXoSHPHv zJg6Ejm4WHQyR7ZQnpv%}yv@HIV9zEg9Eg`5V(U!l2&Q*AIXO|92UrvfW;$x3td-tW zK(LH37}k+$K|Es*{?_hwV3&nJTp@LZeRRojTiB(TnwV$^p%ORf%YV+H_Z=T3zQDBf zXifWFy}H<76i~`{cv4mYYkvjdx3qYo)7~@PJFs713COK%l(LRa;s^!m2mf(_Kxqn3 z5NpkwdE#VRrC}!Co&5%{mPa`@v$wTEXa%m3G?mm?@Hs>-68)k6q4}a*2+dQ_-xr z*Q`k3>TBb3V#0 zy6Yn-R#Z*6y{28WdKJ3LTh?>CUHyA0VbcRK&W71PB3dZcW7t|GrHkc+R51+qukwAg z2tEVL#6ovq%o^~ZAY%10+w{HdcCUysyQwCiPMJQxRTzKSXli4Aa1zKmL%|{tUw+PF zzdX!rEjl8Q4t^(^4BrPp-b>v4Y3XNJr!TJTs7gdf$izZOuJ&L$M-cs`s=iOoZ z+x_XOEMd!>!C;G5}tRPuN?yr;xoZd>S5OLp4RS&$L7;T;oz$ zR;%}dtL8>ugN+RC)!BP!r)o>G)jBv4^f0TZGI<0fTJWipA_4lS;^R9CZol*KpZ_Ge zaXhHf2nuGO$xQc@zPVU6@XyQcTx1(njq+YVU^dUnZOX*+b`{$6B*fGb2a)R3`kKt@ zbgylTd`c6Pp~~;~O3F%uTzZn5EaJp%+YtGacep z?6g-U)C=wzmWS}yJaf65^lffe85&UJshQ2rA*GZNBOL3ou#juEzTtwxxQCw#1tY-*kVR#cdqcF|M|0m%V2{eGi_0ReuukTPX$&R z1#qS-ub=fqU5rX5*d*W&wls~t12eR;GA1bny}I1lKdT0Ktw!cn-DlDxzvCvIy=q5J zL`0;dsd=mKu2tpo<3yg!v%H!^YRFjut|GZ!ga%!B_!rJm9nWpU?5wH*GA1=jFrzWs zgZ8psn^r8FjL?{<+Pu^5K3ZrTymzo!6WNok+Pqo4Da=4mza;u7Zx>sMEsxiOy;o=r zX^{D70&3q}Vcbe0TK>}2K3WzXz3}5&XJqzzU?rrto3$i>UMmJ9N>Dz?%^*5U&~se zwi^3qa8HmpzM|k5~%6yA1D2=?^2BQ$|@Tq-&DcX7-&cOZM`bA4`NcD`H)T zmmEsTB)0I-t!rzocUv~)>Yv<4Q(o5*8t-FwI-5|Nbl+!UB24}&>4>V5(y#1{)*)Uh zRc2=9YF%kkOW1>3W@cuS@lGS}PUFc$lnCCXrWS8%ZuWTYV|3xhtGl%L_MqUY-hCI> z^Ojq@-`@uD;X_EWxc?IvEGzGZ2I)Cd!p+6e#R~5GQ5BBSEsndtyvWg>K)P{D$XP)q zA4c=TtID7lNSpH)w`0S@C#l(J{fWuXHGcPv8v$9Xy8cJgB3aGWOCY{cbMPPb1Qz|o z!J&A*65D)F@EVzDRaSbIR5qS(ZsE+lv~qUE%%P<{;%CH-_}ZFf|LllUF(KTRSfxj+MvDIS2l3FwUel5H zZ@N@2NAS# zafd3YtHW-ll!3iFQGO;d!ZDOk(iqdPdWbg!c%U753c|p|*7U>7OeUE`mMmKB#?fa@ zyUZp^zpsEn6oS+3k(8Bd&e3$j`SXzREYt$0;ydnUg$lW@Ks6@$uv{T`kyZ?2@*a4z-sa6dJ?0+g*I2;uBi{?MH73UX}O{u|Qgcy$%oD zuHB-Cy<$lj#Tuy&s(r$TS)AIaa^bmX8T@BsUiM>_{$D_`CrGS6EoTW=8b0uVW*2EE z_ssAoi_-&0_5@pibfq>H|9^G}uK5;e2ibB|$46~LA*z@vp_AMR?>iTmcZ&scdyzaX z%XsNe>g-E++oQ+KhAruE3H0b5a`txRoX|Ajt#TkgsDr{7rgRu$U--gSXd#U|A3j+f zqq0}2z!=N$yNbRfp9gS#30G6K-9}y^V`c%yxvBrmGN|W4&XMJc1=r?3|K_@Iw#ozS zXL#{hV({AB+unUtPF~)o=!6Kis#1d7NP#8l-5Ty$cxx$87$#HePPKWc&eiz2-SIcf zOu0;{P{=x#TP=IBM5Z^o80us)|H}CaEtJ3>RzI&2?>Wypy|02O^RyU}mBk6x?lpr> znQ3KW#Z;#s0tyUgMa(YxpMyU6s#fe$lt5mkKwkh&{lzr?kc6|Ej7QPmhK!z>@v<_; zag!KHqd>cG;gJCy;id1|DfjQ|XS4^I3mg4a0EWDP(8?fvTNm9uEb_8fO>3KhryEPR zuEc1Q-}>jAuE7zHLC(bqv$;OGEKu>un}z*xBvX$Z%8Cb1qYFLKwB12 z6c%1-@pN(4FXp>}R$yQQiEYZ$WY?xV3gF<9T;(F$wECI=tm1bq&X$jh-@UiAoQtg3 zEgq;b`|dv;zmPuTT_EQ(9W1Gv-sZ~d8%_IgbrCw{#}+}=$jYY4`n7(Yx&x*^nBq_PNTa2FIDq2v+h^4?6kY?j3+7lx3cmC_B}Kbpsf7WAOc2zSX0PfcnuUqK0GV4=6{GspRQrLkVNt8P4j1%$Gm z@CQ}C62$`hK|I@e?-{`%Bm#sgbnztA{9UCeDb|uYD~Q?IzGiehAMUQ2z;C{9{5>H}pG36DWvx~4%s-vPr-CQgo7YEHitgmY4ZHBQddIU6M z3Qx4rG}X0Y7rHg$w=Q&gkN!m7BWb(Y1I&CWfdk!&cN-;OKBgt08dzJ4jlg5VXw8$F z=$~V!4C(sK5ofNQu{~+sg)BjU_G)LY6C;Zu?d~%sA=?IXPnp_ z-jc5a$EMqH2%%M2HG_)w9BRDA5Z?~K^z*6HKuY}h8}~$VMOyef3B3ek8e85*i;B`F8uJeU@E+b~kn6B$r5Mo%b%ZhW-_A=& zTdIswilF4$zDiNouF~+|{@6u%Kc?JH-geIO)It|#p43*rb!QF-|I6J!U21R-62$Ts zW2<%w+d<%$(G~NKrBdrpI*(;aip8JTCLGT~ajCHD`ohW)vGFvIj60iorgmLV| zlojBlL0zC$U-0EDR)hq`2w9zmZ2aFx#uDOU^j-)jx}dLPaMrNLE5(%Wy!@+~E2^)R ztgd8Xw8l&S5@~_MNpO#^rP7+MfXa95toAX-US^CY`5qajmwDPQ3|*j3j!t))-Qs@U z_S(LbA4OMGO*R8*3COAy(y~Z;iBo?$*q-FMrnQEXrooQxGt*dSbVf{KyCvwQ94m6Y8L`x4 zil)nCcxW{2aH!h-6pAqm!d%^^ECkX}wM?RmZ3-S(yK#1X&AAhA>M 
z?^oGyVvd*hYV)_)~tI7G=aEIX$=?kIPZkZS`u# z@Ii-$Z_r@A3x>ELrlxxD*T~R~UnwyD%QgS`^o(o{+AhV10Ss=GG9g#)3c=sK^# zlf2IfKCi#Yy~~z64BD$lCa!wStryt|F5x;Sq}A!mMaldU4uWYf#$7nPJms_h7Ep4_ zx9J^y|KysGdBf_8=!{H~+?Is(5%O*{HVm@(1hExpm_G-||%~_1?-4 zl}$2wt8yGZ*srkNj{-{Od`&RtPK6Kz6j!vUS9RWo%=GMRC>m|VUP8;pDJWQx$(S<; z)~eg|BsJmU<;8-<#A~No6C@gZsZ`A3 zJCL;&5AXp$a2Q}sM=Ob(?@FgDP2Z)d=^ZphlnvJpttE;@C+yfkTc>!zrKKf_8J*K6 zjr`}+{s}4o2eZSMuTTdKPsvR~o?X!)B^@JEc^?;?Tu zSgF?;nOgZV;yRMF>ylq>-kiIPHjwmmEcdiZ%!LnY^V=3l6I0Acc4kkWu&%Yt@Lw0} zrC#^$R4tY{ocl)x8md*xBDv~ORqD~`37-Q)Qtmk6v}qh_%eZFUJf!9`Izk6{EJM7Q zSl6K58z=m%-n}L46e4*1i2>O9dJz$ukihcj(W8s4^U-UdZf)A%-1I~yCWv5r9B>0N zwIY*uQrD`#&10sgr`0Am_|;BkRxg23?$)sjQkgu5vAy*^PRIdyNXd-Ly8Nj-_hRjP zwGoyF1v5{y4iO8(c@58hK0QDpX$hw~gO;6-3m{NR zBh>N+EsV$G4b1J65Nlm{COni6k~;x`k@=oD4*Rll}3r` zFf_Hkmoki2a)8DR?QLyn@86=y5PN+V7Nh!cm(0VkwEi$^_fOC|b#?2twws^IYQ4G( zRIrH0)I%8~{~P4N#enG9FJZ9OGngmJ^hqgB?CnvegQMEX`#fZReka32bU({{phyaA zN23QbqAYRgo1xgO1132d7ltd`oUE}iB`pM77-Y;$4oz&_h0(8sfd-|O)2>iXqBUgK zLBi~krK&mp?<0RgiTX$#=m_EZR@;?~GDUQ|YC0pY&A+IPFE=V0%@fHsE4`8i8~Ld( zUh9yD949NT+tiYP@O=5n4u9zyOxdb`Yv?|u11>;G@%5pMN5%08m_N!Z_@IlGL;sWD zX&?csB+xl1r|~)Ea}~vmrkCqHS@n}vZ(I%yi>A9i6W!I>&%!Km({MMrWWvNa+wA6a z-fFqyeI|1TlJ@!mAt!N$=o$57LVcMx*dy%3i)fzp!pU=jjnT@SA~^%(25dNnj(1y= zM=e=A_8CLo9o)ZWs%Ys~^w~*j{~htZdRDdc?E@EFcow?q)_m%L3Aw2NQZ zx`Z$*il7-@P&WRC-rfzkrrBo4uJ+CJwFZ;GLXYDME@tarL$^jKTI*cw;)nr#NJc5i zX)9>-7Zv^|;n0NCcDqmuYg<7q=%q?*zm7F>r0E){RmjUXheW3eD-}-u%tt@s*qU3W zhcqczmsbN<&xUZX-7@^YE$21>y-<&@sw>o|3aFOQWh2n^WpMmJm!6r zMjluY5i$z_ldkG7U%q5q7m&B!g;r@VCWD=Yn^Gn1-*%3jx`P#hYx20g`hMhAX|E`Z zIna||<(1gj?Rey^FO{_ngGFDu)x3hMo}l(5#v03mDY^y)i-=36Gxx-pjrW&(BOCT!5EzRe&rTk)vR+>5)Avu z7b%9@YVw@9SFwjdxqhbQg7Ojkp4G|4sFtd*Ic6Ac+Fm?W5BoHkTenG`e%Nj3@d#vUeD{*mzj4qF zYJkjDtL!gWU*MK_D&m2bWN2ud=B5`!`nqLEPcHx9FzkNrvYsegwZku|1%Od>1Q^>? z6)vLJpe1Y%Yu&l^n?(l2UWC~4tMMUdjKeWQ^Y6qV29`9)lxAcdHNdJN*wy3h4Gqo@ z-B_jvZ|+{ri`dxMaMP>dU_1_MVxj~jNi(#?q7AJ>#p7{^3*U}B=Z)Od9t){x4h?b+lU)TDJkKL3QFsEoNahH#}402@=2#Y%tH(nbWNK zpC9Kq^XO9IKYjKrS|QTEtE;Q+)%bcYw9IxA&6g0Yd`yRU3xc>;K7ONgR?dRW>%x*D zg4gWr-i2j*jvtCiS`tG)`DT4^Xg_rM#vKei-!sL|cYBe1#a*r37DYt91k_{I8Ew?J zb~BG;C%f+w?kzwD@(}kD5H;D9)U_R7#lLPeaKr>ixPDu)u-0nl>5b&6uj7;oG9gB{-hUJQvEhRV3d;!U>VT3p zr@h-w@k$zV_JgGk5BvAmUMB6pD2(Kp*y=M`Zoz4h2Y*!S?b?@i{ z#W%wOJ`>XL@cP%TD*|XC#D>?c?Jp19){VyToKg=KM;%_qkf@gNezzgSFp^}~H|NBm zoLsQY`hF}$wILC*{}+q&PXHk~4X@035G}OA)g11~YaS_TvG>-nW`@3%f`!yi?wB+D z{ci0nh!685g6=c^x7g4JIkM#6{& z8HH6u9htP`LwG19T=aEmJG?Y=LoTnfEL&ZUT*JU+yFYry_SY*R7dxkb$tV+n4Bu;M zur_uNdw9->J6{Y^y^Wj7iqvmPV14NB1a zwL7dM!(wX%DMYGtbMe^o}?HqL8rpe4>bH#NKkTTUu&B$W>_? 
zfq#O>*Gql*sf|36NG2YKr^!Gk>$Ad%SVj$ij7J!&Q&9#*lAm*rv{{LEP+NavFy^9={4Nse;_l$=HZIfI|@(#nvxg*A9B(zoFXDt$7f_`#; z<%F*n`|^?ew`>PYFxExZZCCglm7-lpr#qV8uRx><+sg?(3cTMbJ6Ke74UF84bt>s2 zN@hKCrqWfmMy*1cXAws1>aYlFKv!iuJ#ovtE_ICQwBVr^>(}}3$Djl=<*ENui*a)4 zwWqGARXQ)Hy?F5=a(!i&`Jre^4d=u= z|AO2g^@OVHqND7%=IRCJ>qUj^kPBpKh}37c9CpmCWqNaED+9Y`%)2wzAZ0OYowGYD z<*D*cL`5R403jz(-g5H>t*R7~_cT-E&6M_HA(&}4AWeyW^&Bexq8>XG5*X;e*E(yJ zAm6Cw%<==R*xZve0okt%4Sxr0`)$`yFJ=5lP#;qpc3Q*Rr}m)EvEQLowh3=iEl9{x zGS~CM`{YsHC|`%t?d#4Nkl}Az$u6G^z2<5x<*pl20OfJ6(SCLCVkCqj3~pVi z^^L=8>$@9A8m01+)j}FpL9H4Ur>5ixV*Z?CYWr{7JvoBEf40w>L=|%$i9{YsIO!eU zA&D%RU3Wdy2amwj&=etR{woEMWYr?5d<~BZ*^nf;Wa$kj)kX`XpcsgaKQ z{DTHKTifLwg~CMkaxzkH*w<^auZ;KK`K%1Iao?ym9^qqkZUjJmX3}^&L4*2Doyn;V z>qg1LrNsP-H5&_aVJ5emP7Rof1KOJ-hp~$>FY?esF($$5Ha_wZ<9PBBLPU&Q!DgA| z(PK^V{v{2AyV^F@N%W5iS#L2bH_4%6^uO-yWim z-OP@HaL&B1-{59ibncfW+O?HQivo|-d21goqLHG` zGwc_-n3`Hc3VcrMO!e$0b3A?zXSU8S6PGzg^nWGmzpduPlEm}~OkSaR*Vn5=yR_EL zbc3pS_7H$xEWB_`*Z+RuYx?~Ayv6r?-x77PAKoy~%=+BgUCttz=!GnFwGQ(Vf3-Q< zZNn&j>!x|%;J_9cVhRtFM)bQzbBA9zJjcIR*79#3KFI}e8nSDgCMi)J53=>Xu0T$c z>GT&2fW+WSks#n2w1?QR$u_hY6m|XTu5SO)MZb8dnV*!ER9T#1kz(YYsOLgJy&vE@ zf-NWm1!+I5ya>x%dTC;0Xdu|u(%OX5?Q*{kRm=V99B$hm$l4TC?qs-L!8q9y&mFWX zPHx@wLEUzJW|T-^ue-Bwa-_yHTVCgR8zui@+7BiE4Iv^N=2;D(fz}J$LkV6ejTkt=>rQ~;VrXDkK%$Te?C7CaK3sIcLI~sQs#X$2H&}m^`J^YWhb#*1JB(Pf ze0nHOD9SU*dBeapS3e{aSx$?;jLh`VXuH9y^Nrq%wqmw~U)E-hH$eroZk9x}I3Kr0 z$hU2U&f8_Rr!Dxc71;wD(NeRg|v>!yq6U}BEn228%FzMRWTEm3%fmyq$=q}R2c z{gySRWfO&8L!_*mFS8#PN6GR0jG>L|>6Wd~2^F>2LCjh9-Jbtc9W0yu=+aD?L9qny zz_5FQf$_x7SD3u$!HW-{6Y`#y7Sg#tXcPLW=Q8(J=kcBctMGXcz)x;RTySE zG_J+I*(@#vi~}=;aaD4Ja(JpGOcFmBvmE-Ww6Ex6@INo`Gk&u-oRM`Sg%YP_fNL#mjA*yh%I zA&)XgD9Pk@$PSGLcV8c<4O)u4X?y+6s6m!&4~mvA6QHGvfonu7IhMQYFSNT;#lt-; z$u~Fc8WET`Q|00-5<9m`OvJhNH{;TAvLjo!QUY&teW{7Yb`Qj!LddnWUF=i~=Wl!q z_+;p;7%?AZEkGA30iGiEE`e70PMleBO%^f6qOa3{JKXUv7e-U-;9(PAEX2`)<#0BnX2zM7#N%GkzDyAK(4`m6+1`I6n7`zBA{lD-NH*M4%LFH&Bae zlaW+LbFWNcr#v8{#3(78{)3(4W0>@*es^vcGN zHQoCO;&-g=Wy(f@B7#F%pU051`SY_TP6_UYBf)%3t&*%NGZ^dq% z0rO!8g3QboQn%z`F8A_K6j!>iN7b~ z&z`Z{Pg1u=*q`epqfo8f*ol)a=Tij6*)_zk)sy@_dpywP@GSq!!Tw5%l#kBbBY(l7 z)32!SGFgnr+w?PHpM%~jYBK5#`&5In)ytLL=WoNda8_!Y<`^km{+VN zsz_lrC@ayq-ARhxeW4`VU|88SM-XYEut>R`7> zPs}P@^o`$WKpMA%Y?|*3IC!*=3V9qD8B1o1c-;)Lmvl2u@#XTVjeV>AqiOm-j{hP7SZ+DtFR)yAQ8s8l%?E6Wrtq38sQZKWW)Dj3K)Tw*$p z|9psBpe|>V+P?s=cyteEw1;>+r~HCRy=AssfaB_~O{uFMT)*WEjJo@g!cuNh=D!Q= ze+Ou+VZ`Cua;zFaqt|a{JJ{b2=XDges%T*pL^YJrd3M>hj7R z+qE~_cD;)JQm!|!aqttGWyM`%x0AGna&c>;nER~PQby%(^4X!NDPx$m!ExQvM3}~c zT!Mx*TM`Hr!b$zxLzcxerB(bKu03tvo{}wB;#t(96TDzG1AEhmAZV|A-WHtB<>f4i ziNY5ocvt!10<5_~qO*p#nL;kpu=k{=9zs*(EMUBK>63&3m$t%%BTZh&Qwleina z#m%|jER;kVDiy7kI39`9wQiDhR-S9l5h`mDsa*NuH5b+%uXWphzPvh}r4mGrW_L9$4oV<<7)1ohn)^z21zr6v0<|X@DEyCWR zoOqgz9E4*Y;=CmFn{51k(Iqq!CxAa1)|DLmk^q0$GwIogv*@R!XX89_sMbcANPCYt z!S2-DbaCw*L?oIAL#LG?LMLMD{?6V0!Ry8Gp1#=63749uUK1>2ciTlz>8bc7h#y*Q z^k-0o2@)4KnhD0Z%H3SmSmolNxR5Gs=RUDt;|c?7;!H>a(CPjCJo9tzrhO%Sj%B@u<$m3yJ7pNhi7*Vs-r z4dC`)IFjW44xy4-IIH#Eoi)}2{Zkrk;_lSwS+8vJx)?J4yw394=RE5NQBuKu^ZRLNVzqf< z4R{+x&3BAQyLiGc6P|tW;DYYyx+}bm5pJr52J707QxEy|4T2xrExIXl>+c7NqE<(Q z!SikwH?`h!t&Z$gCEB=%@yJAJUfa3A;x+jxL0XVZmD%TJQRx8wGUX`3UxmppM`5jHJlt^5%WCx`C9d(#7v4@x2Q7@D;cZx zXi^v;2{(fdd~Q?(DNcC{T??l8QV$8bvsLnG&GG7#Lcp`P@|84Iq4z#TD$dU!#-D|0 zpPpBfzmXJq&v)2O=8ef4=iFE03%_hX7?PboC405<5{@?i)c~WMl>Y1IX$iLYZf?k4 zp)h*hOWxh3Jk>JqsFjp+w}g)56j5IMuID_x-}b%%SW>W7_tRL7@X7uzW#ErzjGrRA zR#Gq~mK&i2{CE;Vq`@&Ego4<6yu<=Gz4`-50IMsohO(b;BNx<1!@ry6OfofOecGtL zIAHxDY@~*%nHKutO?Fd&@$(Ae@Im`3OQz88kq_YGK7p_C(+9~6It&ZL=&7I{(f8>C 
z?q>Wy#@;fls&#uG-XJQ7w3Ku!jr1agMR!Semvnb`2}pN$cS=hyTDm)>`#*7?*q(Ft z?|rXpepnw^I7a_Re< zBZH3{D$}KLs{MD|cCn|)14^7n9CWG46KUmb`tG2TcM`c*+uw=(*ASW;B)P4X5#f#d zeVS6{PA5yM{rxQ5v#HhANspLHFf9TqV`iZO8d^s~}4D(n7gIT`_h=+-H$>sDp zr}cIuA*m(u_o}|!?oMQxbrr{tImCni%3R_6`ITEBcD*j^pjZ~pyCM1UXmg*l$u=kZ zGUar9Q@lT%+8VAinJ_jYAnxR^^>dV+q+&lj-15RO3j{Vg82#LmP)|Gd`UBSU|MTH3 zN3P#1ZKB<*cwj>A8GufFmX(#oS_V{VxP3GyeEmlEcXv32`Mf?#8NDLLPWv?kBG6uD z6B-q{dC6uF9}>0D3ng&$Vxg;y*tMVvZ-2f+zkxQ;jn4X-I?m^Xm8B2xF74H%>8+jm zDy!|;AOC>i{hyDPGqREzA(DbzQs54@GXJ!UiGfvfG0fH~<=8{!-M|hIR&6Q!^9Ja= zgJp3gz^Hb7i53KI;r6V}8#9Av5Ax@GZ0*SI$Md(C9;rIS6VQ@lxfc0J_;VMqc%Z5Q zhVE=Pln$4S3nXg7#&GW1RTR-+nGm0KQgldgLfvSRP-;R%+hQr~k?8%pK9%?C`#Er!Q1xox2-8VZKUCCvlzz2kkJQRw6KY_1N?wXEPRT#%OPLa!>rU zEFagoNU@qoEc#A=x&S}vO4TjEX2to|Ualeqzf6-vS!5GN;Gj1Zw`e#4pT~)Yno1=5 zr4pM0MYT!VlKWh=9r0=dsHu=MHg(a--%qCHwk|(#Kts0h;W~~X0p(jYfeR&Ln~Mm! zx=La;`nzh2tXYhYGg8I>UQ9Ua)6d?_bFYANmN|@c5<%(Z(j*o_5Zb-_$m-Mz!{4|+ihyLFt_aAoHjs$a9 zzZRZ+e0-eOKs7Bj=Hlc^ocVo&1p~s=N;p5@MuPX7XI(|M$Eo+xc*JS;Y@qwF5N8)u zjql&_sJAz9J6Sj0U*T`Mkq}83-!Rfr5=l_ql5Sl#wEy`cZ(!CDuwFnv!Q=mY=)+)1 z1@a)pgCQ}D7!{l(C>Zl=nejn{EY-FI?bOh(tzy>Hc)O`)T#O?C8h|Jh@h9BeduNwId{?N2v;c!O{; zDOopA6(<8$ijiBgEl`s$@1Gc1?9m+=e{8*G=SPtc`utMWe)Lk~RuS~$^BsMscZ{-p zDh-nLS&24H!WIXsX$6Z?SIkhL5r~}qhr6eH>x$UfQrQFiz19Lne#KtAF6_mV&$sJB zzf8O_wgi{L$*KyJYJ{pS{4Ayj5O^ualo>^w{{EAk$Hlo>(f7x=Dl9{zzIA6s6G~-I zSwp)6pNYk-#=s=nt$I32PG9`i4-wohL$9rcz&4x_%{_wtkO0N?Q9O9Y{i2pyn%=ux zhuB##tZqG>#hf=b+*t~Ts2Op};xOL+^{R5;&*-AlooMRW?eM_d`@02Yir$gevw88= zcthTC@UDc#?{qwgYYVGNp-y>{h*Pico5YN}i}fM|m=1d7zv2lkwU+pn(n?j7UF=-Q zm%)*oXj@2XZ0;65U#m!s#uHm|kt&!YPP_<3RmN3WWaM@iz{kFW{lVJF@|)fK9Xi8+ zydr^WQl1k+n_TB~>hvGOy7jxj9LcLK`H1KDPdNIo#}gAyt-s+@eY{0Y&{YfWcyZ7z z)a@B&QL4EsX4_!JfK6;?Yl>;drplseg0zjj5G|h@9VT!@UOtphET^0zy{&MG8+Gn~ zo8K>+#DwvBeodu3Ds(@&yqHVF^Z}%2Qp&NnNFS;?=T1p>3(7 zJ61b{YU8!6jx48rh1q!cg$97YbiNN(S{-j9$YFTc3O`417(c4L!?e3dV=!|!{x0Fq zaQH%tra_XHj74lNzfs#~%3}S^H;q7}w=&)YfR(93W9(ram`=be4G#zB^wD3UQ*Ui< zcT<_-+?scm6&y`U^26X6>i3xFh9(PUrcb`mrSC8M8NxQ-wj>Q6FeESFn=iy98p0_Y zn>Q&ksV-)VYPCFgEgBIa?dO=o%|z8K?m}l#2s!UQMCZ$pk();Qe3WdzQ$=$XZskfE^NRK=TKJoX^JwWRUG8v2^h zSF#4iTMLXcA|4A5&*}u2dZQhXYR7e{Kbz?3D~G(NVVQe3sLwO?WuVUt0X0jm8Ud$C z(&*mO`-z*#qtT;Ayf#D``tXcFd;z!LX40c6m`y|VD6r%@Ar%E=P8g#YTZXwY5yoH5 zJ5V{P8&x}L=gc6b?j9=z^S;eq4rORCRL0Ly-eY7;JTdC+hD`WLPf@?@Hd@h7;qZpG z{n=o7Kxy5jlfD&k0fxemI&CR|hR63o?DHRZX%?&zGG;N}bX5uGTuup_>Ah3f=q=*V z8_6Zvi3XjNc=T9($^fQFZj;b`ww)o?MLG+_z z#7FQ|IyA-=R4LjSE23#^9ei#iFI`&<;`6_yV4;NZ+)r8SD~H znkj5fWY8>&mJmA6jE!2!sLrSVHYhF8O-(DBek^_8IA_Esd^$Ep+PINvE9dr#KVeHZ zDRJ9FvncDR#q|rmx}sEsg?Vn`jmBG+Ya7R{%-58n5iFB=R3E*n$?b1>wPvtGfTUvr z!sEVBSQU6`_sy|bKgh${j++d;$HS4Fw()~TxhQ2&0N?qK|9|#n%u7hh$*BgmX!anU zdhY1ze%K3)^C=<#rm`AfCKA2*Eg$gqYmakGzkz2>839*MmRcSrO5fi2wp4MX-2G&v zI3F%--7WsIhuYxE3t&{~R!W;qR0!(P1~iI<|DUf@1*6t9`@`&0*Z+|Q!H{|hJTIzV zEBx%3RIHmdizT#Gdc(T&wJ<=bUGRQBwVUx1WGcFhjitL44LX*6AY~%8%z2|-)5y_w z+tu%-Dv*1pl%}{41kDsMiiHg1*7t#TaSrV_?TbElLV#u30jah{2aZ{-E3~r<9$`-S_Ob!_zYbn2xOdXvGg_SFi`8_YIQZ>YugGv1d;LuXe~aUM|vq z&#BTie34%u)9fdU|KuSo$Fr z%qFj|T^xP|8L-*mN#zo}%UV%@k6}J})y4z&n4JAR&4w;5McmJL_jVeQ=^VmA;rzNG z^tfWi5nLL@T&u-3HGK@Qhd0brw+R5%fWR}GQ%leDbNLvy$pybc%1B;+r!q)fJ6(BP zE{QeEjamD*==9f2fiNC+-sPx;VEv>tzL25HbJMtJ@`g?d*xzC!$Nb_9{x?p~O)lMc zeRPtJ)Ns9T@!}w&N>j8#eO}6OZqP4J(Pmz@$v!*@$)=kQmnvmn%)_*vbE7NZ7e)I6 z3IXaIk2g`!^8Yeq0xuxto$c_Zs(FSfBd%EDQl1iCdzhh&eHtram^+fS9&fvgYoyTA z{xJqWPV7YHyUH7Y!j0ajTLq`<2zkKkn}8|cWevu?HV;a(6?*X5|cSZofZ5>}Jf z^lqU_A`Qg#IvQqns-x0Agmj>H%Am5L$3fN-q_SQPBArZFtfo{VptO zE0{}3jBmni4{t2wL5*Qga(1pd;ce48=nt$X_6kD|XhAlEx)8Bg^lZT@LIW&2gnDvR 
z%~Z4^QRAYe=?R{1bRCHp7>)3TRv<%+e(%uOhLh^^`h>lFas%X#LMd?1JQ4$+akrKv zTyW4wGT0(2ZP(o6u~hVrXsGj6DZd-RmS6z{MhOhh87=eN1Pc1$TInCTn4UK(HdJD zy>dWIK1Xu7s55evkC!mBlYMEAPP|{h=u{WUjDb_2*~#UDar2|`Md8LJHVMu$^WFLd zMpFG+Tp3+<3|q};K{HRV@TwT~@D5T^Ff$QfDf4->Aq8Z7Vt!t|`f>DV?1L-o^rt2_gk>o3{B zztV#L`T#@e1+3f^OYOL7DR@heR_5YrYkRg*V>*4awSFWo@#g-C0sZq4cYsba{NpCX z`P}!@&car6+3A$f;zBfqbx3&*_HE%uY4+Q)^~y!U&puuRpZ_EV&%LAtxC;Ldx1ptn z1~Ba%*(VxbLhpE?;fZ?Yy%g%qyeec2WEFX>i{@zBalZp9$)Zuo@|Ghe1ZDDIR%4ua z%eO&E24rvI`*dj6Z7<(rOdp32j_w>r&q`)>WY#>uPb$ZN6giktKu&9c5T7?S?~_3< z1)U2TSAh8kC;j=yM*Sj`xJ&2T=%(LETuJ$=K_NkbSyCDg&nWd#^!_wOO0QjzE{X^V zk*J|fQV~j-**>m@ZK_p?iA5W^dsP_S?A^}oB#ordzk%59 zyGijAxLxmL*;ebfLzs;Uh)9 z@TI-KqcJIBr?=AACdPEplr#d?=iy~Ybs`ER=E}-+8#mX2%ftKT%239zXft=>RYpYkP4RPj9ptw70u~VL3>p&ZU^UHJ7@d}z>x%pqw z?GG3=azahYe#}od6pr=D7zqE(?Q!9J{rrN`rE+y!$}X=#qxMCU&X+)DvG@{2LUUZu z{jZ!66SmsQtPZKaR~^fzO1c3;*_+Ur@i}GTA**>oyRK|j?OS*(WuE$dr~TaGOme3P z_-b5Xv9}q*4)iy{dh(_Nw9z0P4<+>FZ!F=j@8k;BItiVe9c_Dt;t zWm?be5@i>3>)K%Cedphtn3}OuJFp!eXC5sy$3{bm@8C^&W>~!z{we};Lnm)2W0GKR zBC*iBBW}^c(xaXim0r4@5=Fda@@yb?O$8OQ{w<38z$?&zVJM>X%A>mS3^5osY@(+F z?g{PXJ}wi6C?bY$KiZcJ+8i8VMcF1Nm0+~YPjBC~*Z|aUqN=~!3U-A3M+Iqz2PMZO zi!dHyW@@35)UJT4eZMYS9LDCvmG7+u#m5;a(H5$$^5iJZlT)N zZe4lg3IR*1y=sR#c**_F1g{2r@FS?XU&`BeiyYGv$E#ej7I&{UNNM+&n(9PN*rwql zB8tud{wVx&4Y>@>&S#xjXDxA3GOA?{hQiF#UsF+>ts>MS3Tu1o0YJj9tw9TJWE6 z^d~M|#qmNrY=x{MxK%5VONYW6U)bzNhxOk0ShYLTexOivYO=R# zfe0@@Ww%~`5G{vkzJdCH{`%S9+72(T055oSbL%_*@I-CZ147MsJ0ty=*sWAsRv3UF zna$Y=v++BJgJP5rVk12iutic-pq5#7vK%}o4*n#(1=wHc`#eYjJ<=(6(oQ;k?DG0_b39Sr!6PSo+-NJ^X#Iod+9LJ~qb2QK!$tlO6MlwCmty zsrZ#|IA-;8PJ`bXW(&x<+Riwhij>_d{V|v@Il+NBgBu|c5|KWyHw_M=vOd>cQqRLY z!abUcRz&HzJy#7oQ?pANFb3P6*4+0X=?qI*9XN;?3%*9r+j~CV^n&`Vh!tl61A?1o zMzQ4-^Za9&nT9*3J}DT>PrdwsSDC`P`QmL0#qo%O_om_eHX)hyob|Jl=`bNsjj=(t zAoi<1Zosd(e4H61%_;U>b81s;l&ZDkMkJ?h7VLmz)Y9Q@(FC)&xABXIds0)tPFpI64 zS*HtGN=YgL9?t&hu5J0G^vy||!wd(>7QrN`W-twxUB4qR&O)hO$D!-9p2c=WuDWD@|=x@@_Br+z3me!-N`zeI*ny-oE7%3q}>0%H^odW{V;X z=zk$pT)n0j(QjVVmdp2e=`8dJ+tNI3ccRiOsZULJXq;7D{dlOA+Lp5Xl0^WuftRw635qa3-jFx6m@u3YW^I5>ZpbvPPpl2q5e z=se;+kxDr=7Ar3=UO19bx0U643TAKie>0yyp+?*Wx^BTf$(7rqPZR4Z5gk76Wt!t^ zQ19I&#SPS7l@KrZ(00n`Qzx%AcbyLxMdWKeo$HKUtPSx^| zQg>J{X|3Otl;0q+{uqv02TCmEpwLBbi)EvTt0mT-`q}Kv^J_(E*w(M|9{}2@^1dT( zk-Gm|^Ijlqw96^<2zAzBD(ma!21ipcWToQGTVka5@~XYOA+tLfhJ9mFoiCvCDxyrZ zNsyWxnyzit#dP&@+RPqXS8i*$X6p07jXG0!XR$?(5!?+w1FG|vMKx^LCiO;Op9J9t zGA%*z$}i8Uhg6(ORUBJebpxGW2*nm$Fm3zM6#blNz>6ja2N?U|-1-adSlYDdGL{{- z_3OlpD7cVsQZ(1%?@fhMvHrMQN<`5nsxBs#NwS#H$x*y)p!kw6de`%D%~hWpl~g|L zz4Y!g^OV5A4dgPY6m?FyHh)|5xQfW!;o*xiMNBm|ewqiWhR52|*CQX(_*q}LEb%Ds zOS_ZHe@#FZIgz;;K`N{*yx~!SAEV5Nn5(cM+mQ{?fLZO)5lAiYgD&`S)Q@js>AGe7}kDI#_QPz z{9(BCGkCS#K>$_Wi6IU`0*R(rE_NC-xDcE@lqGTf5;!?{;&iY5;q^USoTxt?ZSS)m z=n`ysn2j6faVt_qRB!uTtf?WNP)SDQ!afTGP)4RKPT&U%{?1r1%B$$iGc0a{(lNNh zD_%7UKp{hA5KH+!;$&@BPuo>M$4;KyZ++%*WX3tRQ+?p5|4E0G%o;z%oO`!{cg4{i zW!-EK2YboxM&%32TJ>T|Kd&0v;b13@x%atJUbm1mX9)YA*)1GO@%b=ahX1hk&BB~P zDf?6y?C4Ev+lwB_!RCTg>|99`fBC$y|{(G zO?IHsFw+9G*AsBj55dMaD;Orqe6#&KE8qnr(sIH_Ym5V#5WD0Z^X$!3P;j!J#u)|$^(;+HxUNLxf_>b06{4XEhN6%vzWOq_ee+py z9;Hh-&=2>~lF~1_jN5?rwaruZh*m<~vQUC+1oK2|Q<3va_a<5BbRo_3XmRh+D};5; zvQ;?EG_y2p9i`mpPQp<8D*&20mQ39OyX3eQtkzQRY?q?(&XO~tsSF{7KRY$ZRGBPp zI5R#(IVrYai{egman67xWvBi@o2sR=!wM9=rW%Qw1K+Uh>eVR8QgK``%%^-}t8A4C zoV|sJ6|{(Itlfp`>*scchC2dOBWJJ2{|NDN9-qCXp0#cG9;MPoHO#AyRyz1iwE_qa z@N!uQC|GbkZ^y4;(i$^<;a*SY3LU~EH=l9x?fGw- z&C_`-zwp}M-#5KhS>j|(Y@GKMU}`%CCh~CaS@Sdf2bNn3wyUx-1*dPgq}XX?e#Y*l zW*PrrNerUYEonRl`P5MSm z^%M-*kn;_o25>eD&Y^XbaBU0>UgMeITN<%j)o}{lyG-;$hrN{CBcPw>ViTp9dr|OC 
z{hn1_!JYHK^uvR?T0$OvNT9&f#7^%AD+9Tv=|$E|JYL&+C!!%)?r>0%DM4m#skVCH zT&sjo(>9<_o8JOTIjCFLvq7>wX%7rGZ&Hq@V-R#>AN-jRbS7%vB=$^QBa`O=P#!Dd zOSw19o-wFv788)vTD^tRzBhB-V}4slb%fW7S8%%jboS$CXA-qnl_q-{JkIS!AL%?D?rpTESd#nR>*XDTMJxAu9I(}@E8-4?sj0f5Q?-#;V!%D_Qv&E#zOD#w8rk-VsehL6xS z)B_x{VfOCywlM}33u>qx8g}Au%BW`@Y+oGj4>jus@dG%NqrvDk%7PkWF1hlX-yW|UqbAmkh$TZ#$`r~mwE-FUg^lBc8e z=ZW*SlX?LQHDtJb7R)}(AJAh!ihe6D{P0Nf@PhFm6zUHdY zOKbf`#sKw{CHmOl~x4NM36dH##P63gE|*Z%=aO?^DoIo1rX3rHN*S)W@R zqF*@GdPQk;Vi$9qy)oy0$ka~GgVJRZG!F!ce~rcs;&VF*KkCdlEIh#lpR9gc z^tT_SjOEoW?pU)vT)$!Sr*?*tJ^hea42zuA)|^o?;=q@fa^JI>QvpRO%=SEBVS~1% zK{ER^ETCz7*ZO{`Wk3vd$1jmZPFGzeS(u23sDGwsm=M);=+nzUC`5~{Z51EcT5&ndPuqTO2MpQIdqTR~KC=gfzP|()q z@fPi5WI}*R&M+!!_vPWv?6;S{{(6}Jn%dPBEsZ1mr@jSsrtu0l_wkq*9l~`b!TN8< z>t!seg+Ag^_R{7Ny6*f89@lC0+()go2up}~`Z0O{+K*uJPfS9DIa_MxfceKY`|m;G zAMt~*9+sLm@|45Nu!<2=+ExC;IilT*#pPTh@Du|Tg7S;^lxo8P>~&EPB9jAC+e7d{ zHHDU7=wuupP*93;XOoi6g?LX6qgw;Fe1_viB*xQXwT`Wtkf0$jpe%3WzNimme}~B) zW_r*q9w(SPTR5#zD5B3uh6V7t35MKb4@{XH z#+NfT-QTWr%Qt`n%f1M3vEm(kvJ*hxpZ;;!xQsf6i;*Ko7mwREDqRzqVGEhs;~zu!=HC!$ejaecxOo*+YcK&d58o&>0h(|IoCdgXzl@Hxy%vI$3)n95-vc z6%;ZN@NrOoRAwgVyW>&XJQjo_(%eEmkL4{S0T@LuczfC?08U0bqCJD*x0p~b zwd_%ETpPW~uIBJQP78gkc&lOnSN`Hz?6dQ6e@{u_Qy4k=%QsQRKJaMIf4LNhK{It-fcyCyy0C%V8{3T9%dWt|Jk1@f$`p=~HA0Z2fTF7QF zujIC5on1fkB60u|hA}YIK3d>8u@LsMD)^uhfm0$4M9q5S5fJWQKzVsZad*m)gY&R@ z>Tx1*Q&@J$;d|A9`mk_?86;?EA=S&s7?1sHgpx!zOm$h;X96Z*kKR1_M1~QJ$D0g_ z-lF4IPlnD-B&}Zdh=G3}RYQxU6O8H9d$xk&&`{Mo6O z?|dr{$OifiGKR*f>kF!?uV5+lE7ViWKM-ZVw~nnqR-M#t)^h#kS{yc&-;C7p8i$tVtIg@P!mfOAO;!4TP%rq0%!}am;u?4m!JD)*> z%-3WQHgySZv4#>DdkdN}GrhUb3#1@h+?VW?O2=;La>DsAzsxMY7Ur;VuTDQ&`SOcbyoHlM| zQu`NI;cbWUl15{I{Y}95J-3JPj&0$%vxPi)60)=TPQj_5VjqJ5jI%#z`^d#zV##^Keb!YC-x?S( zIMh7J28;6a+RH>xDAL78SYrkS{fZ&_0IP889ezC;sV|;1x2P;v_UKajL25zKUGxoE zcVU`eLtRFk1!B#S`Hfi;N`cwl_#@u>dm0ib_9->L=g3||LQSuQs$2~+wFfI#!6iy} z0NQ{$k3I@w;Gh={=}s&vK9oYu!ycH|0*0bXyndrqV~2=UW@l&W_?yAB8^YM~d5`Vb zr=9QLcKsA+o(Ho=4Ct$UwMzIVkeYpk2FQ>~`5(*W)_mZfLcF*C(@KtfU|II++M4-l zEVYI}Mn(n;lnBa~=AGOmv(>MysiV)3IC+7tiJ0VMGqkZ724NJLq2y%T#?xVrg38LV z>CW`>a{5MKKzBiYeqRsRSD+iL$Hc%8^_5yV6Hf>jkuM9jaXrSy#%2PxZ=R_CBXt1i zV6&L#Bck$f(k?}jbHD@%9bjv94zfh>zhRoECmOVfnxU;bM1=86QU&+s;L68^PJ8n# zCsq7RhlYO<4{=oKB|qH{P`^9S?@%kHjiyht08h!(f~4j=Wva(AkCHXSk&khX&-guc;WP%yj1JxV2V$(2l^EK_g`Bp zs_St(yC`Naae4>|kKxV`?I6)j?j4yj2(~sGJ}NT#+3`%2;xt*&2QQ&Lbg+nk%O7d9 z{DM6)luUVLKR#+kVrpLl{XzmnwYix*(Vc2{q!9I8YBCn@SF&08rEZnSc_yK2*M$gR zQ12RT*Yu3=nxQX1W!*sMeaFb=55gK^Y{q0c#Te6}sun2YfOBa-)!Y_Hy(o=M;g_JU@YTkI$5EULyGAX=M$b;MNE(S_}) zzVB`hA1215l1%9|i4PbsgY%6Ac_m?YQ`75o6HnTo5_(!`Xf7@;f5s$M3;fB+>%)vx zI5;?Y6%`YQK z64=Er!f?*GLdkt&jXM(mclPONgw8*+NgOC;)m-Oyt_ZA6R+F@J`Wg}n|7YRg@(U=Six^Jgcw%qj)+iuN9KM-P6mGIMZ)r? 
z(tywLak=8LNw@+9Rx1;Wh6%`W{lq|_ombPn#68+-n0;cCP=1a}{0B6W*bD`y6G=ZN zs?*gN^PZj_BM`i?iePVySq~l@#QM5Z&uh%8txPk3`U+=ZI#hj#jyMhkyxvl*?9z(s zf*vW@Xl|f!-nn>gqTHXl$8<)XnC^Ma#q04s>P)Bpp!~A9#`s!e?`x%jqqoF0?=02i zXR5hMOA#^Ng4{tZK)6jm8kMrW7Sk>ZPb_1Xim%TFX5g>Eo{Oo}0ShNJV zYW0GUztH&ppr38JQxglK+4tW*s__J+W;@pzmeU6w>|iwl~Ftt0i&(AYvz zKWlyU!FV|Q@G@!n`8t)Y)ycVhyh?5|FQ0ty7%sia()j2o zGBED>ebYNCsw^P$tW+ZB7MJ+ZE+FLO)KGxI&dwf_nkrdR(&_?VT3kF4o>Nf~1p&Dv9dI!6W1{&4%l!NZ(%tuw&=IzLlWdaZVl0!d=2-nXyb|b@@h`n1qWbk{U;bVj zo$2l^kfPWq(F37FkMHoQE;C5}5uyGzrU3`XXDhIQWsDocf$@XpD`WpEWL=%=o2$N= zn}E<>Bc_*EEn>V~hL=odkUMWj{nZUJ5z!|UoA}W~q7J%*NsC<)4efFdpwXR}=iU{Sz=0-)r$SV(|ba$n4Mn z>8!R`;#Miw`qlvI4eRR@PULc_r5T27h1~}SzZy5*97+-b#v3!@9f_s~1_9G?O_Jml z6;1EW&Cv+Xa$j-V^jts7mp{q>M`gDcd`i2z!$NGY%~`A^;z=iR(R8VL2t};sHay(v zU+*?x-#}Z(&UdRV(lZbS-4_S3`Tl7);zVAf@Aq-k|JED;xbbYin@6V}%)+dmQ+Uip za!!r64lS-O8m=cm@n_mJzWCYupJc^0h0Fl z!%fPYa{-==w(OxVKKv6i@mn8;|8`S|#%b+?%rp3uRKPirko(tL40 zTNo&gO??v``|jLC;M>EB%&c9Zp)7~GUo}Ko#npEhK@%ATc1oA4oi7m-5^=F`TU@Gn z^;>A=<>#d!eK%BVI@|Fa`kN$g-2+iJ_P!A28gNeWyjsAj1hk8k17Nr$d(lNbEjxR{ z&!1q{f#how+%XqrWo*BIy|%2m8q+xYbYN|n?e4^T^Uy$Zd@g5soM#mO032~vXn=@r z#?~zOoEEzbjlTMw60}P|7m3%p1g_a4T`(rR(8foN-_drt-4tlfYJ=o3b{y$)6fI9iyeV``}=xg0ix-xSSx_4G#-?KM=xOSr2-q}qX&$k2P zPz)1^wXx(9I(>aQ_q6-Zpa*F5oFN57=41icBaJJIvr`03*`D?lZYU3Y(@UX(&+nmY zRVACnr{BV=F3O(}fo3YB!OLTEO$9$c0)2I(xj*cGrZMT@P4%>6H^clySajLAMEr$( zcOTn@g&BkVFbuN=D=e=g&Dp9db>4G|(}dTT$voEOnMnFjgYB~$()z6@R^x$mv^sC4 zSvoJPFVF2T^&>o$;>1?X0UY(+X#G06X?~-zOAKr3&Q|!9o zFPWXTpMNujJAT5HCgnZ>?&`jmc-I~2i39IxhxcEVHpdLe(qdXg(ebI&?{EAB#Zfdk z{`C6PQ|>!Zqt;*FC!vR}rkA`L9gSS0w%(;T)U-k0)_CC+j8i_eyn03YO@Sri^l|a*_<2{9+h!cVjkbs*5QzUeya%^;85daH<)+*a=pjaKO4#ZJxOshS4^~&a~am@zxb$k6t7*)T|*WtO1Uer)$Z)$ zXUDB!1a1Rt=guTlsL{hjPi3qFVw1Fme1hvnNN1 zJPp-Mljo@58)kIV)W#2?M5+2fnk2M{Zxsem(Gaqj}AVi7+U>Wdq7Z{SoH!lf%XuX z*&Ay4B?jT^_tp!cHbsLWJQUf=r6`TKmE~BQz)&HkoUOzg?v_Y{)a~cVHTlQLiLuxm ztsFZFFs%2d)3@2B&kFf`DdA-&ds%qM`VH5HL~SQ_;^!w<3`X4ZB!>gvS|DRphDHS% zYXCi+;&sKOLHJALpbuL^!ooIdD3EK*p~R+u8dD{T${iJ{z*G}xK-%AMtL41BJE9N@ z2Si2EwyO25SAV7~fPnC%c#y$-wJFT6ZBo;6xB zY-OWKowd)1WkYC4U^r;@3Pe!m;mfMA{*#85IFkJ{F@9t0fcMF(rs)8uTbU-eEQxFE zDQsURJKJTzykJ4l0FqD4%m_C$?3zXd2M537=1z5XacKeZp!C>w>%C?)9L@%&a@UUy z$`?bFOiWF`>oBxFtaSyUoef#IM*|ax-;t5|>FDa3&(F@y6;xG?PbeY(y08U)sPV8r zG@r4Pkdk&Oj^nT2U8E7XO(OmL4^&U4AVeaLv9W5Td$ZwMm>l4D9p{;ht+i{zL9Mk@V3_PlDDaN)tU|l~2tt32^P8jn7VWgOzoF~dR!n`yFgeQ$ zl1_@ENv)WD*R{#B;5J78SsB9Y&rtf}s%lAQ4;Hw)__BihG*AFzKIW8fkpV>=xZm1;wOnm> zuQ6tRQb3VF27ulOjH)np*6jIp;gV&2)g@RwBWrb|OW_cAguD|rzDbMcTRi~?ODM4C zV+!(sSL|{4S6h;F@*8WkKHN(u{qd^->7T$P1vmc<4p)GQEYCF^E*hcs9WBpVW%R&? 
zTBtaM_fByM=g^TcEwdVTU-#t2ye~Lxw@wXA$oFW=i>Rggty=*_LX|)IG1f8}lE>gt z<$)9-a^71P$l`1wXN5c9I9f=zW>7^pFfP(E_YiUidN^H<|FBTep{3bFMq!?|LA;t{ zz{4#1$ZGSWYoo~i0OPcl5k;bu{5%NbM3MzipYa0H66Rr;pPUzY9;UFUs0IyogYGia zm%Ik`IWMq9Di>(o=;#d3E}=f_-8v9 z+6m2}rBSiT{y?P)3FpM5SNN~^09by!=RBAMNk%Rq`3QwbDBc6EkDbK{cInajBo=7) z4JCHk(rmz8`6)qb-HHc%71dw=^g*)Lw^s81t)8NmX@)0aC`U4AOMbN z2xz{3$HHgFGjqMdVwc9&mNkPaQy-gCN7W2|U!lwr)%OFt@HU2=_1vM&0?+NMa{!0} zsVif%U)f>Iu=ENJ7$N|296Z{jh#MPK{ZPSiICsc3>vcxwc|2X#U zwL1@LpCqS;jf6NDcR}r(&}}1eX*Bpt()+~Ap~S!=fL>{TxM{H|ccMU0QNv99z=8Oy zP0Eu5FwXClW*8-UIgRmw(v%`Rbje%shMIC|y0K!n6e=$Z=6IDVcGYD@UU67S^-31& z>dMP4?J055T!#5FJ?{q^tu+iuOuL2I4kNvIGpAqr`O>1y40P10zu<`&Figda)BIFC zT{`1h9nz!r_S+9F607ca81IV*86VX41*eU4zVPx@J9IhvoGY=aLf4=H zL|S4>V^YrVME>Nja$VtM;+A2 zzwT82ZFkGOybh_d_-Bf%$PwyGD1FNlrVV#KVqQm$G6onQhspqks7xm#_zem$-Ex>_ zo9B-0l1R_~maus3@DtJ*lmCFvW>304^sh@+xUD`Ex8Qmj6#%sG_q%uSlWb?+#a8cJ z!S`cX0p(ya6d_*Zz5UlHTRoV?5c5-R(H!j4ou{@)yOi?h1{A~TKlk;S7+d=7t*-mF z>D@LOIE*JitDI5lC2h9{fLzq}`H_gr_B$&Aoo`wcC;%ued}&*4hjhTv5{Afhw9HOm zs2vw{$?fm&D+}CLl7Vx|r48VQzN-lXH<&ZK`?=1a8HqsW^B1Yc$h4G@pR{?&*~Fd6 zsO^}JV?fWaFh8;kQ}=-8N1#SBx=~c$YlVaE1}4PB-M0{jWpucaH9sdW9PH{8T8y^3 zMjWjadiA!k`0SHtxJcTvO>NcrI!}s1)~~W+SHWd18y;m zZ+pBGDd(oQ8(@zY{ka>n&zjlNbTyxHjTK8wh4n(e(1*GNAK=jlk2)ZEl?CF=P|$)NRG z6m1>3OS7G4F=xE=kjuy`_YqIWQdLI@=C`rk=@nf@TWuUn63LCgrROwIL+KhGeWV~? zt0ln>%aICW8aU2sinYPj zYvA!J`z0yhp2u*2B7Ory_2oY}(!bR+&qp}w)C`b^jXQPI?naaA<6L;-IUs6dLsqei z5d4_PK0hVGT(_TpOsMD&TYt%{1iF+(wNdInVQew!J{&rDoN}B^MHw1z6weL49wE1W z#}~k=$SpioA@kJS%uTDS`w3$}oEW8TJV=|(9u))R*2aBtSz)htRC13X^f=7(46{mW zS0Gn5IDK;?r|w5^!HX9k%kW*35z`7-0=?&!V5q0G_2Z(nWvcah6l#e-W_^OMxvZP< zRnR3tedi|-nng09V-X&NRQ_%+wI?ima^{yvl!l6>|HEgthg(H0CVxNgLxey(U+m`x z4wIPhish6%avevYM@Jt1NaHzhcAT|Mlmb|2lhl%e0kpOJtQeW=^@ZgN;wcz12?^ymv7O<4RpZR z9q@ma(C+)tgT#xKuAC6}bmGc{uJvzA65he5l~Q2>7by} zd+$h*CZO~p9i;af5D)~B-g|E%p@m+8AicK)Lhl`oo{y;$Objky^_d(? zYUO?pi6tsG;h$upQF;5Mh z%fY(7TSj*HEFE;@|MT6hVRwrl=CT>1$0NNI$$CL|=yVoALo>}U8YPrG%QvUs=8bn5 z?^vFq*%O}WiuAQNg$C8p0JKSqFD6&b?K8W3>$rb?w||VETR1Uj9=yiG1CP<)|It++ zE{S+#!0%T1u^6I^F}zVsy__#F8ky#O>$X#DFR@7M8U%>q_FL5l0P^!a+xOvJaC zw%;idk*DvUifF}AnG@es!(W)+@FAho@aTu{S+1`LZA#HjxPnZd1O|P29o>yat^d} z##r#8n!MwL(=Z*$%nsbJpRV6eU!%}vCP&e+j|mbuo4|(;;X@5r+>K_!+IVj z#kGPtFIwlFDvw2a-!GOd?MvPF#RF0nX@+BU$37u|Jz@qQjo;p@;XhGMbNQgLW_MhF zX`%`K@)Wzr0rG1iV#2{?`sl4}H$3gYAisdrF&cRzMh6 z9WQ))pG8V>?UapU<$lomAUjocO59#~YIDI{-$|1v! 
z2FVClcrVBd5RC=c0C*#!+UdS07tyUaOrR&VZ;u28%z(Lg zm&%??zedgSm&X%dDBQj|9&CJXzl8{ra$pb z1TX^FzK77i=T=+OQD_ZR!iH)7?ll7QE24g*2QS{K<`WLeEwnZ;&gfZD1;JaMr+t9W ziTd`w+UQB0xkVx{u;^QMh&jtQB(7epFGxp66D!+JWpNQFW&z|q9eVglT!$lFyX00aK7OQZRvn zPK5A50}Vs>#R+D;A?8;3tV4wiFLO#M7iYX};cL#XPIn`V!}s)4<*4eI;KPBSfApEg z7y=88#jtdQZf}V!gL+w98^bE$y?g`fq13*Wa3ll6 zo_1ZWk)I>EY2R(GTM5XG5Kv3z{vQ5?%C>K6XcSwIQh1b$TK9@LizD5Nq^`Qti)C6- zjSgl11!}>={75t5wQ;oWluM}M?^7y*VfK@jk#U6?;ndsL_w;_*TPlPNb6c!f5;K2F5`TAoi{^{cc8E+LJKW~NMjCe%y4+%ejKfH}m zOiu4{Yme2runl~I>+Pr3b88plq!(K06n{8B=oG@=5PAiiZH%~&x>6YbpST^kdX5+X zAbvFPUIR@Q#NK>&LaQd-%(M&S_#q{xDpFtg`N{K*oj9~_e)Q4nmvW;9OyEI@F~Ma7 zNI`9Q?@`$B+{pzM_sQ6A1h2BtEP6t#n?8loq4Oq%_`VFUD*XtgVkR-Q?20^t)nlhy zvsE2S3r|Y24xbCFq4BPhcc|{DVJpmI9-vD|P>rSj?48mTi1HX>O4fJ!nBv=>kdZ;y zZ$ZK@M!#bbyzweRRq=0LydFDNniH1#23l+m` zR|(Tehoi}2^P$pN>rC5Q3qqYzS>AH-IH}F%D)~89BbPyKdSfGW2QqG@XYXzhy1_r)|ms4-s;EnOVm-o3p@|A!v4{B?5oZH}J)8?+nniwU9 zhevF^CwKMTrhu2Q?%3m0w}N`MaBnD-tanU(Kq;Sl&mGK91XL@QJ0*r!-yP~xK)Vsr z2EWKII}owxFrt;%SkhRXgk>eU|NI{i2y8HICb359gT99cD;+X*7n46c5>KdH3Z1G^ z?BLjx1m|Pl?F%MSG_m4Z}TLtUe*D}3WhzV4V;Nmxq#^SSD*vSVT8+{}PrxXm^i zY>i530*h*3k>1+jqa6E~(JfM^GLTlx5qk1uWZbGNpAx`xtXzwYrlt={G4;+pt*`8J1SSnj@l>}^92qm`8v z#3~>q%KA)6#B9129~bvZH92Jk9$;?vlt5cYM;>VU$l?NElGng&&ius0M*tm38`>^$ zEnK@ebwE^_N66Q81iTV*UhPY&_NMK7&Mzn+U~#zGKL#&2M|OYNSXh9VbQ5Wp8)2=i zt|Fwk@@%KSt24T|xTqK#(*a{7!>Vm(UeC|Vte32 zYh`KCb=v4R8O3+}Y{x{u_18|#Ty_7THOn;vf=PMva^}jwkN4$QoWG)?7Cg*lL(Aji zcPI*L)Tb8VRAhrl_jXEdg*4?7?HaByYH^ZxNm3m!M)dZV-!bg}@x=VoV@zhh@25b2 z>y!p%6=S%A&DYYj1|SMeI}}w~FpTN}I}(M)ig;&{N3ceX^c=d)PSSl~k*E_5L%NPY_xp?(^npy}5q(Gwb9J)aZ*<`Dd7Rq=52ovlq%-%XyRmSh{|w{fJ(QA`|# zm1kJ{M_p;08xbi=qk;_eXB~WcM_L9Bo?IaJR|F3WIv-)z9ET5O@MQ3Ol@y!5XR%9} zPsfaYoBVhSmdQiO&hW@0R#%EiBQEca(bvo9G_uva;nY;k^DUT}fIFgU5WHmk+=yl_ z)SQ0gi&l5634AUr)ywma{}r$W1f_H~e^jhD8jev5a*e=|aVq)KZQ z-)G@5lkpu(vy8Jq$_e#;2#Qvf+E;+6;OKU_Ch(?G%LfCUG zr+n)Z*VNS1MoGKA4)jJbr3mWJEjTxmDsk2?U|!ym2AV%>m_n z2~H?#51LVJH>c!#u|#Dfk@n4X_8X~w&9oz4$Z26P zi%-aY-Z_O(Qr2qR#&4_d4XEzlcd!*X;B-MF;W0RGcBg70(m}^sDf4-tM#d?F^O1yD z(N0I3KDXvW3L-yEWkpOuDv#wTC!@6RGnVF4qw`(2rkqJ?+T{A?FQK6uaA>^f3&a26 z-2Q>|y(#^kAVOw9rJEndSgYHLwJM0wdP1l>W&k+@+bMHEKa=^!^umSY{HuX~Z>GzH zzMOMI({la=1N`N}11%cKEhfbu0_~&p%cex9B?reYT0@)a5!lvE{6w(Dshn?T-v5~^F`nTfD|KRdf